Modality and contextual salience in co-sign vs. co-speech gesture
Theoretical Linguistics (IF 0.6) Pub Date: 2018-11-27, DOI: 10.1515/tl-2018-0014
Diane Brentari

Schlenker has done an excellent job of combining a number of different strands of recent work on sign languages in order to make a larger case that: (i) sign languages make logical form visible, and (ii) this logical visibility is made possible via iconicity. These two hypotheses are intertwined in ways that focus on the fundamental question concerning the boundary between language and gesture, in both signed and spoken languages. Schlenker focuses on a specific kind of gesture in sign languages; his examples are chosen to engage with gradient, iconic forms that might be considered co-sign gesture and that might have an obvious parallel with co-speech gesture. We can, therefore, ask how co-sign and co-speech gesture might differ from each other. Do sign languages have an advantage in their range of co-sign (vs. co-speech) gestures? In this commentary I will suggest some ways that there might indeed be a modality effect in gesture. I will assume from the start that messages in both signed and spoken languages pair linguistic and gestural form. Goldin-Meadow and Brentari (2017) argue that instead of comparing sign vs. speech, a more fruitful comparison would be sign+gesture vs. speech+gesture. But do the gestural elements of signed languages have the same status as those of spoken languages? Schlenker does not take a stand on this issue and confines the examples in his target article to sign languages; I raise issues here that pertain to this next step in the work. It may be that once co-speech gesture is included in the analysis of spoken languages, the differences between the semantic resources available in signed and spoken language disappear, or it may be that differences remain.
A further question is whether we should limit our analyses to quintessentially iconic, manual gestures, or whether the scope of analysis should include a broader range of forms—for example, prosody—or even some elements of the broader context. Context enrichment (Bott and Chemla 2016) is a term that can apply to all
