Gesture recognition using a bioinspired learning architecture that integrates visual data with somatosensory data from stretchable sensors
Nature Electronics (IF 33.7) Pub Date: 2020-06-08, DOI: 10.1038/s41928-020-0422-z
Ming Wang, Zheng Yan, Ting Wang, Pingqiang Cai, Siyu Gao, Yi Zeng, Changjin Wan, Hong Wang, Liang Pan, Jiancan Yu, Shaowu Pan, Ke He, Jie Lu, Xiaodong Chen

Gesture recognition using machine-learning methods is valuable in the development of advanced cybernetics, robotics and healthcare systems, and typically relies on images or videos. To improve recognition accuracy, such visual data can be combined with data from other sensors, but this approach, which is termed data fusion, is limited by the quality of the sensor data and the incompatibility of the datasets. Here, we report a bioinspired data fusion architecture that can perform human gesture recognition by integrating visual data with somatosensory data from skin-like stretchable strain sensors made from single-walled carbon nanotubes. The learning architecture uses a convolutional neural network for visual processing and then implements a sparse neural network for sensor data fusion and recognition at the feature level. Our approach can achieve a recognition accuracy of 100% and maintain recognition accuracy in non-ideal conditions where images are noisy and under- or over-exposed. We also show that our architecture can be used for robot navigation via hand gestures, with an error of 1.7% under normal illumination and 3.3% in the dark.
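The key idea in the abstract is feature-level fusion: visual features extracted by a convolutional network are combined with somatosensory features from the strain sensors before classification, rather than fusing raw data or merging separate per-modality decisions. The sketch below is purely illustrative and is not the paper's actual model: the CNN branch is stubbed by coarse pooling, the sparse fusion network is replaced by a plain linear classifier, and all function names (`extract_visual_features`, `fuse_features`, `classify`) are hypothetical.

```python
def extract_visual_features(image, n_features=8):
    # Stand-in for the CNN visual branch: the paper uses a convolutional
    # network; here we just coarsely pool the flattened image.
    flat = [p for row in image for p in row]
    step = max(1, len(flat) // n_features)
    return [sum(flat[i:i + step]) / step
            for i in range(0, len(flat), step)][:n_features]

def fuse_features(visual, somatosensory):
    # Feature-level fusion: concatenate the two modality vectors so a
    # single downstream network sees both at once.
    return visual + somatosensory

def classify(fused, weights, biases):
    # Linear classifier stand-in for the sparse fusion/recognition
    # network; returns the index of the highest-scoring gesture class.
    scores = [sum(w * f for w, f in zip(ws, fused)) + b
              for ws, b in zip(weights, biases)]
    return scores.index(max(scores))

# Example: an 8-dim visual vector fused with 5 strain-sensor readings
# (one per finger) yields a 13-dim input to the classifier.
image = [[0.0] * 4 for _ in range(4)]
strain = [0.1, 0.2, 0.3, 0.4, 0.5]
fused = fuse_features(extract_visual_features(image), strain)
label = classify(fused, [[0.0] * 13, [1.0] * 13], [0.0, 0.0])
```

Because the somatosensory channel is independent of illumination, a fused classifier of this shape can still separate gestures when the visual features degrade (noisy or under-/over-exposed images), which is the robustness the paper reports.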




Updated: 2020-06-08