LPI: learn postures for interactions
Machine Vision and Applications ( IF 3.3 ) Pub Date : 2021-09-02 , DOI: 10.1007/s00138-021-01235-0
Muhammad Raees, Sehat Ullah

To a great extent, the immersion of a virtual environment (VE) depends on the naturalness of the interface it provides for interaction. Because people commonly use gestures during communication, interaction based on hand postures enhances the realism of a VE. However, the choice of hand postures for interaction varies from person to person, and generalizing a specific posture to a particular interaction requires considerable computation, which in turn diminishes the intuitiveness of a 3D interface. By applying machine learning in the domain of virtual reality (VR), this paper presents an open, posture-based approach to 3D interaction. The technique is user-independent and relies neither on the size and color of the hand nor on the distance between the camera and the posing position. The system works in two phases: in the first phase, hand postures are learnt, and in the second, the known postures are used to perform interactions. With an ordinary camera, a scanned image is partitioned into equal-sized, non-overlapping tiles. Four lightweight features, based on a binary histogram and invariant moments, are calculated for each tile of a posture image. A support vector machine (SVM) classifier is trained on the posture-specific knowledge accumulated across the tiles. When a user poses any known posture, the system extracts the tile information to detect that particular hand posture; upon successful recognition, the corresponding interaction is activated in the designed VE. The proposed system is implemented in a case-study application: vision-based open-posture interaction built with the OpenCV and OpenGL libraries. The system is assessed in three separate evaluation sessions, whose results confirm the efficacy of the approach for various VR applications.
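The per-tile feature extraction described above can be sketched in plain NumPy. The abstract does not specify which four features are used, so the choice below (foreground ratio, background ratio, and the first two Hu moment invariants, which are translation- and scale-invariant) is an assumption for illustration; the tile grid size and helper names are likewise hypothetical.

```python
import numpy as np

def hu_invariants(tile):
    """First two Hu moment invariants of a binary tile.

    Computed from normalized central moments, so they are
    invariant to translation and scale of the hand region.
    """
    ys, xs = np.nonzero(tile)
    m00 = len(xs)                      # zeroth moment: foreground pixel count
    if m00 == 0:
        return 0.0, 0.0                # empty tile carries no shape information
    x = xs - xs.mean()                 # central (translation-invariant) coordinates
    y = ys - ys.mean()

    def eta(p, q):
        # normalized central moment: mu_pq / m00^((p+q)/2 + 1)
        return (x ** p * y ** q).sum() / m00 ** ((p + q) / 2 + 1)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

def tile_features(binary_img, rows, cols):
    """Partition a binary posture image into equal-sized, non-overlapping
    tiles and return four lightweight features per tile, concatenated
    into one vector suitable for training an SVM classifier."""
    H, W = binary_img.shape
    th, tw = H // rows, W // cols
    feats = []
    for r in range(rows):
        for c in range(cols):
            tile = binary_img[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            fg = tile.sum() / tile.size       # binary-histogram feature: foreground ratio
            bg = 1.0 - fg                     # complementary background ratio
            h1, h2 = hu_invariants(tile)      # invariant-moment features
            feats.extend([fg, bg, h1, h2])
    return np.array(feats)
```

A feature vector produced this way (length `rows * cols * 4`) would then be fed, one vector per training image, to an SVM such as `cv2.ml.SVM` or scikit-learn's `SVC` in the learning phase, and to its `predict` method in the interaction phase.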




Updated: 2021-09-04