J-LDFR: joint low-level and deep neural network feature representations for pedestrian gender classification
Neural Computing and Applications (IF 4.5), Pub Date: 2020-05-19, DOI: 10.1007/s00521-020-05015-1
Muhammad Fayyaz , Mussarat Yasmin , Muhammad Sharif , Mudassar Raza

Appearance-based gender classification is a key area in pedestrian analysis, with many useful applications such as visual surveillance, demographic statistics prediction, population prediction, and human–computer interaction. For pedestrian gender classification, traditional and deep convolutional neural network (CNN) approaches have been employed individually. However, they face issues such as weakly discriminative feature representations, low classification accuracy, and small sample sizes for model learning. To address these issues, this article proposes a framework that combines both traditional and deep CNN approaches for gender classification. To realize it, HOG- and LOMO-assisted low-level features are extracted to handle rotation, viewpoint, and illumination variations in the images. Simultaneously, VGG19- and ResNet101-based standard deep CNN architectures are employed to acquire deep features that are robust against pose variations. To avoid ambiguous and unnecessary feature representations, entropy-controlled features are selected from both the low-level and deep feature representations, which reduces the dimensionality of the computed features. By merging the selected low-level features with the deep features, a robust joint feature representation is obtained. Extensive experiments are conducted on the PETA and MIT datasets, and the results suggest that integrating low-level and deep feature representations improves performance compared with using either feature representation individually. The proposed framework achieves an AU-ROC of 96% and an accuracy of 89.3% on the PETA dataset, and an AU-ROC of 86% and an accuracy of 82% on the MIT dataset. The experimental outcomes show that the proposed J-LDFR framework outperforms existing gender classification methods.
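To make the pipeline concrete, below is a minimal Python sketch of the J-LDFR idea, not the authors' implementation: hand-crafted HOG features are fused with deep VGG19 features, the highest-entropy dimensions are retained from each representation, and a linear SVM is trained on the joint vector. LOMO and ResNet101 are omitted for brevity; the histogram-based entropy rule, the keep=512 budget, and all function names are illustrative assumptions made for this sketch.

    # Minimal sketch of joint low-level + deep feature representation.
    # Assumptions: HOG stands in for the full low-level branch (LOMO omitted),
    # VGG19 fc7 activations stand in for the deep branch (ResNet101 omitted).
    import numpy as np
    import torch
    from torchvision import models, transforms
    from skimage.feature import hog
    from skimage.transform import resize
    from sklearn.svm import SVC

    vgg19 = models.vgg19(pretrained=True).eval()
    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Resize((224, 224)),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def low_level_features(img):
        # HOG over a fixed-size grayscale crop; tolerant to illumination changes.
        gray = resize(img.mean(axis=2), (128, 64))
        return hog(gray, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))

    def deep_features(img):
        # 4096-D activations from VGG19's penultimate fully connected layer.
        x = preprocess(img).unsqueeze(0)
        with torch.no_grad():
            maps = vgg19.features(x)
            flat = torch.flatten(vgg19.avgpool(maps), 1)
            return vgg19.classifier[:5](flat).squeeze(0).numpy()

    def entropy_select(X, keep):
        # Keep the `keep` dimensions with the highest histogram entropy,
        # discarding near-constant, uninformative dimensions (assumed rule).
        ent = np.empty(X.shape[1])
        for j in range(X.shape[1]):
            counts, _ = np.histogram(X[:, j], bins=32)
            p = counts / counts.sum()
            p = p[p > 0]
            ent[j] = -(p * np.log2(p)).sum()
        return np.argsort(ent)[-keep:]

    def fit_joint_classifier(images, labels):
        # images: list of HxWx3 uint8 pedestrian crops; labels: binary gender.
        low = np.stack([low_level_features(im) for im in images])
        deep = np.stack([deep_features(im) for im in images])
        keep_low = entropy_select(low, keep=512)
        keep_deep = entropy_select(deep, keep=512)
        joint = np.hstack([low[:, keep_low], deep[:, keep_deep]])
        clf = SVC(kernel='linear').fit(joint, labels)
        return clf, keep_low, keep_deep

Concatenating after per-representation selection keeps the joint vector compact while preserving complementary low-level and deep cues, which is the intuition behind the fusion gains reported above.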

Updated: 2020-05-19