Arabic Sign Language Recognition and Generating Arabic Speech Using Convolutional Neural Network
Wireless Communications and Mobile Computing, Pub Date: 2020-05-23, DOI: 10.1155/2020/3685614
M. M. Kamruzzaman

Sign language encompasses the movement of the arms and hands as a means of communication for people with hearing disabilities. An automated sign recognition system requires two main courses of action: the detection of particular features and the classification of particular input data. In the past, many approaches for classifying and detecting sign languages have been put forward to improve system performance. However, recent progress in the field of computer vision has steered research towards the further exploration of hand sign/gesture recognition with the aid of deep neural networks. Arabic sign language has witnessed unprecedented research activity aimed at recognizing hand signs and gestures using deep learning models. This paper proposes a vision-based system that applies a CNN to recognize Arabic hand sign-based letters and translate them into Arabic speech. The proposed system automatically detects hand sign letters and speaks the result in Arabic using a deep learning model. The system recognizes Arabic hand sign-based letters with 90% accuracy, which makes it highly dependable. The accuracy can be further improved by using more advanced hand-gesture-recognition devices such as Leap Motion or the Xbox Kinect. After the Arabic hand sign-based letters are recognized, the outcome is fed into a text-to-speech engine, which produces Arabic audio as output.
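
As a rough illustration of the pipeline described above, the following sketch wires a small image-classification CNN to a text-to-speech call. The layer sizes, the 64x64 grayscale input, the 28-class letter set, the index-to-letter mapping, and the use of gTTS as the speech engine are assumptions made for illustration only; the paper does not specify its exact architecture or speech engine.

```python
# Minimal sketch of the abstract's pipeline: a small CNN classifies a
# hand-sign image into an Arabic letter, then a text-to-speech call
# produces Arabic audio. Architecture details and gTTS are assumptions.
import numpy as np
from tensorflow.keras import layers, models
from gtts import gTTS

NUM_LETTERS = 28  # assumed number of Arabic hand-sign letter classes

def build_cnn(input_shape=(64, 64, 1), num_classes=NUM_LETTERS):
    """Small CNN classifier for hand-sign letter images."""
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical mapping from class index to Arabic letter (first few shown).
INDEX_TO_LETTER = {0: "ا", 1: "ب", 2: "ت", 3: "ث"}

def sign_to_speech(model, image, out_path="letter.mp3"):
    """Classify one (64, 64, 1) hand-sign image and speak the letter in Arabic."""
    probs = model.predict(image[np.newaxis, ...], verbose=0)[0]
    letter = INDEX_TO_LETTER.get(int(np.argmax(probs)), "?")
    gTTS(text=letter, lang="ar").save(out_path)  # text-to-speech step
    return letter, out_path
```

In practice the classifier would first be trained on a labeled dataset of hand-sign images; recognized letters could also be buffered and concatenated so that the speech engine reads out whole words rather than single letters.
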
