A novel gradient foster shared-representation convolutional network optimization for multi-modalities
Multimedia Tools and Applications ( IF 3.6 ) Pub Date : 2021-04-29 , DOI: 10.1007/s11042-021-10774-7
Arifa Javid Shikalgar , Shefali Sonavane

Multi-modal data has driven significant growth since its entrance into the field of deep learning, where a Convolutional Neural Network (CNN) can be supplied with sufficient training data to develop a representative layered image encoding. However, the multi-modality approach in CNNs degrades performance through slow convergence of the variance, along with high dimensionality, heterogeneity, and non-convex optimization problems. To address these issues, a novel Gradient Foster Shared-representation Convolutional Network (GFSCN) framework is proposed, which improves and optimizes performance in terms of accuracy and dimensionality reduction. First, the framework incorporates multiple weighted de-noising autoencoders to solve the heterogeneity problem and reduce the dimensionality of the data by transforming features into a shared representation. The work then integrates an enhanced stochastic variance-reduced ascent approach. This approach mitigates the non-convex optimization problem by combining two gradients computed over mini-batches, which lowers the loss function and thereby achieves faster convergence even on larger datasets. Thus, the proposed framework achieves better performance: the highest accuracy with faster convergence and reduced variance.
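The paper's exact GFSCN update rule is not reproduced here. As a minimal sketch of the variance-reduction idea the abstract describes, combining two mini-batch gradients with a full-gradient anchor in the style of stochastic variance-reduced gradient methods, consider a toy least-squares objective (all names and hyperparameters below are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

# Toy least-squares objective f(w) = (1/n) * sum_i 0.5 * (x_i.w - y_i)^2,
# minimized with a variance-reduced loop: a full gradient "anchor" is
# combined with two mini-batch gradients to cut the variance of each step.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

def grad(w, idx):
    """Mean gradient of the objective over the samples in idx."""
    Xi, yi = X[idx], y[idx]
    return Xi.T @ (Xi @ w - yi) / len(idx)

w = np.zeros(d)
lr, batch = 0.05, 10
for epoch in range(30):
    w_anchor = w.copy()
    full_grad = grad(w_anchor, np.arange(n))  # full gradient at the anchor
    for _ in range(n // batch):
        idx = rng.choice(n, size=batch, replace=False)
        # Variance-reduced direction: mini-batch gradient at the current
        # iterate, corrected by the same mini-batch evaluated at the
        # anchor, plus the full anchor gradient.
        g = grad(w, idx) - grad(w_anchor, idx) + full_grad
        w -= lr * g

loss = 0.5 * np.mean((X @ w - y) ** 2)
```

Because the correction term has zero mean and shrinks as the iterate approaches the anchor, the update's variance vanishes near the optimum, which is what permits the faster convergence on large datasets that the abstract claims.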




Updated: 2021-04-29