Multideep Feature Fusion Algorithm for Clothing Style Recognition
Wireless Communications and Mobile Computing Pub Date : 2021-04-17 , DOI: 10.1155/2021/5577393
Yuhua Li 1 , Zhiqiang He 1 , Sunan Wang 2 , Zicheng Wang 1 , Wanwei Huang 1
To improve the recognition accuracy of clothing style and fully exploit the advantages of deep learning in extracting deep semantic features, from the global to the local features of clothing images, this paper combines target detection technology with a deep residual network (ResNet) to extract comprehensive clothing features, so that feature extraction focuses on the clothing itself. On this basis, we propose a multideep feature fusion algorithm for clothing image style recognition. First, an improved target detection model extracts the global area, main part, and part areas of the clothing that make up the image, weakening the influence of the background and other interference factors. Then, the three regions are fed separately into an improved ResNet, trained beforehand, for feature extraction. The ResNet model is improved by optimizing the convolution layers in the residual block and adjusting the order of the batch normalization layer and the activation layer. Finally, multicategory fused features are obtained by combining the features of the clothing image from the global area, the main part, and the part areas. Experimental results show that the proposed algorithm eliminates the influence of interference factors, makes the recognition process focus on the clothing itself, substantially improves the accuracy of clothing style recognition, and outperforms traditional methods based on deep residual networks.
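The fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `extract_features` function is a hypothetical stand-in for one branch of the improved ResNet, and the feature dimension and region sizes are assumed for the example. The essential idea shown is that each detected region (global area, main part, part area) yields its own deep feature vector, and the vectors are concatenated into one fused descriptor for style classification.

```python
import numpy as np

FEAT_DIM = 512  # assumed per-branch feature length (not specified in the abstract)

def extract_features(region: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for one improved-ResNet branch:
    a fixed random projection of the flattened region to FEAT_DIM."""
    rng = np.random.default_rng(0)  # fixed "weights" so the sketch is deterministic
    w = rng.standard_normal((region.size, FEAT_DIM))
    return region.ravel() @ w

def fuse(global_area: np.ndarray, main_part: np.ndarray, part_area: np.ndarray) -> np.ndarray:
    """Concatenate per-region deep features into one multideep fused descriptor."""
    feats = [extract_features(r) for r in (global_area, main_part, part_area)]
    return np.concatenate(feats)

# Three cropped regions produced by the detection stage (dummy data, assumed sizes).
g = np.zeros((32, 32))
m = np.zeros((16, 16))
p = np.zeros((8, 8))
fused = fuse(g, m, p)
print(fused.shape)  # one vector of 3 branches x FEAT_DIM features
```

Concatenation is the simplest fusion choice; the key property it preserves is that background-free region features, rather than the raw whole image, drive the final classifier.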

Updated: 2021-04-18