Group-pair deep feature learning for multi-view 3D model retrieval
Applied Intelligence (IF 3.4) Pub Date: 2021-06-02, DOI: 10.1007/s10489-021-02471-7
Xiuxiu Chen , Li Liu , Long Zhang , Huaxiang Zhang , Lili Meng , Dongmei Liu

This paper employs Convolutional Neural Networks with a pooling module to extract view descriptors of 3D models, and proposes a Group-Pair Deep Feature Learning method for multi-view 3D model retrieval. In this method, the view descriptor is learned by a supervised autoencoder and a multi-label discriminator to further mine the latent features and category features of the 3D model. To enhance the discriminative capability of model features, we introduce a Margin Center Loss that minimizes the intra-class distance and maximizes the inter-class distance. Experimental results on the ModelNet10 and ModelNet40 datasets demonstrate that the proposed method significantly outperforms state-of-the-art methods.
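The abstract describes the Margin Center Loss only at a high level: pull features toward their class centers while pushing different class centers at least a margin apart. A minimal NumPy sketch of one plausible formulation is shown below; the function name `margin_center_loss`, the squared-distance terms, and the pairwise hinge on center separation are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def margin_center_loss(features, labels, centers, margin=1.0):
    """Illustrative sketch (not the paper's exact loss): intra-class pull
    plus a margin-based inter-class push between class centers.

    features: (N, D) array of learned model features
    labels:   (N,) integer class labels indexing into `centers`
    centers:  (C, D) array of per-class feature centers
    margin:   minimum desired distance between any two class centers
    """
    # Intra-class term: mean squared distance of each feature
    # to the center of its own class (to be minimized).
    intra = np.sum((features - centers[labels]) ** 2, axis=1).mean()

    # Inter-class term: hinge penalty for every pair of class
    # centers that lies closer than the margin.
    inter, pairs = 0.0, 0
    num_classes = len(centers)
    for i in range(num_classes):
        for j in range(i + 1, num_classes):
            dist = np.linalg.norm(centers[i] - centers[j])
            inter += max(0.0, margin - dist) ** 2
            pairs += 1
    inter /= max(pairs, 1)

    return intra + inter
```

Under this formulation the loss is zero exactly when every feature sits on its class center and all centers are at least `margin` apart; perturbing a feature away from its center, or pushing two centers inside the margin, raises the loss.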




Updated: 2021-06-02