Exploring contextual information for view-wised 3D model retrieval
Multimedia Tools and Applications (IF 3.6) Pub Date: 2020-05-29, DOI: 10.1007/s11042-020-08967-7
Wenhui Li , Yuting Su , Zhenlan Zhao , Tong Hao , Yangyang Li

Recently, with the rapid development and wide application of digital technologies, 3D model retrieval has become increasingly important in the graphics community. In this task, how to effectively represent a 3D model and how to robustly measure the similarity between pairs of models are two crucial problems. Most previous work focused on how to effectively use visual features to represent 3D models and how to measure similarity with visual information alone. However, visual features cannot represent 3D models well because of variations in pose and illumination. To address this problem, we propose a novel framework that uses both visual and contextual information to construct rank graphs and fuses these two graphs to enhance the similarity measure. When fusing visual and contextual information, we define four strategies for measuring the similarity among models according to the relation between the query model and the gallery models. Extensive experimental results demonstrate the superiority of the proposed method compared with the state of the art.
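The core idea of combining a visual rank graph with a contextual one can be sketched in a few lines. The sketch below is a simplified illustration, not the authors' implementation: it assumes generic feature vectors, builds a k-nearest-neighbor rank graph from cosine similarity, derives contextual similarity as the Jaccard overlap of neighbor sets, and fuses the two with a single weighted sum in place of the paper's four query-dependent strategies. All function names and the `alpha` parameter are hypothetical.

```python
import numpy as np

def rank_graph(sim, k):
    """Build a k-NN rank graph: edge i->j iff j is among i's top-k neighbors."""
    n = sim.shape[0]
    graph = np.zeros((n, n), dtype=bool)
    for i in range(n):
        order = np.argsort(-sim[i])          # descending similarity
        order = order[order != i][:k]        # drop self, keep top-k
        graph[i, order] = True
    return graph

def contextual_similarity(graph):
    """Contextual similarity: Jaccard overlap of two models' neighbor sets."""
    n = graph.shape[0]
    ctx = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            inter = np.logical_and(graph[i], graph[j]).sum()
            union = np.logical_or(graph[i], graph[j]).sum()
            ctx[i, j] = inter / union if union else 0.0
    return ctx

def fused_similarity(features, k=2, alpha=0.5):
    """Fuse visual (cosine) and contextual (neighbor-overlap) similarity."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    visual = feats @ feats.T                 # cosine similarity matrix
    ctx = contextual_similarity(rank_graph(visual, k))
    return alpha * visual + (1 - alpha) * ctx
```

The intuition is that two models are similar not only when their views look alike (`visual`) but also when they are retrieved alongside the same neighbors (`ctx`); the fusion weight `alpha` trades off the two signals.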




Updated: 2020-05-29