From perception to action: using observed actions to learn gestures
User Modeling and User-Adapted Interaction (IF 3.0) | Pub Date: 2020-08-24 | DOI: 10.1007/s11257-020-09275-3
Wolfgang Fuhl

Pervasive computing environments deliver a multitude of possibilities for human–computer interaction. Modern technologies, such as gesture control and speech recognition, allow different devices to be controlled without additional hardware. A drawback of these concepts is that gestures and commands need to be learned. We propose a system that is able to learn actions by observing the user. To accomplish this, we use a camera and deep learning algorithms in a self-supervised fashion. The user can either train the system directly by showing gesture examples and performing an action, or let the system learn by itself. To evaluate the system, five experiments are carried out. In the first experiment, initial detectors are trained and used to evaluate our training procedure. The following three experiments evaluate the adaptation of our system and its applicability to new environments. In the last experiment, the online adaptation is evaluated, and adaptation times and intervals are reported.
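The core idea of learning actions by observation can be illustrated with a minimal sketch: whenever a gesture is detected together with an action the user performs, the pairing is recorded, and the most frequently co-occurring action is triggered the next time that gesture is recognized. This is a hypothetical simplification, not the paper's deep-learning pipeline; the class name, gesture labels, and action labels below are all illustrative assumptions.

```python
from collections import Counter, defaultdict
from typing import Optional

class GestureActionLearner:
    """Hypothetical sketch: learn gesture-to-action mappings by observation.

    Each observation pairs a detected gesture label (e.g. output of a
    camera-based detector) with the action the user performed. On recall,
    the action most frequently observed with that gesture is returned.
    """

    def __init__(self) -> None:
        # counts[gesture][action] = how often the pair was observed
        self.counts: defaultdict[str, Counter] = defaultdict(Counter)

    def observe(self, gesture: str, action: str) -> None:
        """Record one observed (gesture, action) pair."""
        self.counts[gesture][action] += 1

    def predict(self, gesture: str) -> Optional[str]:
        """Return the most likely action for a gesture, or None if unseen."""
        if gesture not in self.counts:
            return None
        return self.counts[gesture].most_common(1)[0][0]

# Illustrative usage: the system observes the user, then adapts.
learner = GestureActionLearner()
learner.observe("wave", "lights_on")
learner.observe("wave", "lights_on")
learner.observe("wave", "lights_off")   # noisy observation
learner.observe("fist", "music_pause")
print(learner.predict("wave"))  # majority vote over observations
print(learner.predict("fist"))
```

In the paper's setting, the gesture labels would come from learned detectors rather than being given, and the mapping would be refined online as more observations arrive; this sketch only captures the observe-then-act loop.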
