Modelling and Recognition of the Linguistic Components in American Sign Language.
Image and Vision Computing (IF 4.2), Pub Date: 2009-02-26, DOI: 10.1016/j.imavis.2009.02.005
Liya Ding, Aleix M. Martinez

The manual signs in sign languages are generated and interpreted using three basic building blocks: handshape, motion, and place of articulation. When combined, these three components (together with palm orientation) uniquely determine the meaning of the manual sign. This means that pattern recognition techniques that employ only a subset of these components are inadequate for interpreting the sign or for building automatic recognizers of the language. In this paper, we define an algorithm to model these three basic components from a single video sequence of two-dimensional pictures of a sign. The recognition results for these three components are then combined to determine the class of the signs in the videos. Experiments are performed on a database of (isolated) American Sign Language (ASL) signs. The results demonstrate that, using semi-automatic detection, all three components can be reliably recovered from two-dimensional video sequences, allowing for an accurate representation and recognition of the signs.
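The abstract says the recognitions of the three components are combined to determine the sign class, but does not specify the fusion rule. Below is a minimal sketch of one plausible combination scheme: a product of per-component class likelihoods (a naive-Bayes-style fusion under a conditional-independence assumption). The score values, the function name combine_components, and the independence assumption are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative per-component scores over three candidate sign classes.
# The numbers are made up for this sketch; the paper does not publish
# its component recognizers' outputs.
handshape_scores = np.array([0.70, 0.20, 0.10])  # P(class | handshape)
motion_scores = np.array([0.50, 0.40, 0.10])     # P(class | motion)
place_scores = np.array([0.60, 0.10, 0.30])      # P(class | place of articulation)

def combine_components(*component_scores):
    """Fuse per-component recognitions into a single class posterior.

    Multiplies the component likelihoods (naive-Bayes-style fusion),
    assuming the components are conditionally independent given the
    sign class; the paper's exact fusion rule may differ.
    """
    fused = np.ones_like(component_scores[0])
    for scores in component_scores:
        fused = fused * scores
    return fused / fused.sum()  # renormalise to a distribution over classes

posterior = combine_components(handshape_scores, motion_scores, place_scores)
print("fused class posterior:", posterior)
print("recognised class index:", int(np.argmax(posterior)))
```

A product fusion of this kind only assigns a high posterior to a sign that scores well on all three components simultaneously, which is consistent with the abstract's claim that any subset of the components is insufficient to determine the sign.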



Updated: 2009-02-26