What's right about the neural organization of sign language? A perspective on recent neuroimaging results
Trends in Cognitive Sciences (IF 19.9), Pub Date: 1998-12-01, DOI: 10.1016/s1364-6613(98)01263-7
G. Hickok, U. Bellugi, E.S. Klima

To summarize the argument so far, our main points are (1) that the vast majority of behavioral, neuropsychological, and functional imaging data support the hypothesis that the left hemisphere is dominant for lexical and grammatical aspects of sign language perception and production, (2) that because of potential design confounds, the Neville et al. study does not present any serious challenge to existing claims concerning the lateralization of sign language, and (3) that there is evidence from both lesion and functional imaging data suggesting that the within-hemisphere organization of signed and spoken language is in many respects, though not in all respects, the same.

One difference, which has been overlooked thus far, between the brain regions activated in the processing of ASL stimuli and those activated in the processing of auditorily presented spoken language stimuli concerns the supratemporal plane: the dorsal aspect of the temporal lobe, which includes the transverse temporal (or Heschl's) gyrus and the planum temporale. This region is uniformly activated in hearing subjects listening to spoken language [17,18,24], but it was not activated in deaf subjects watching ASL sentences in the Neville et al. study, nor was it activated in an fMRI study of single-sign perception in a native deaf signer [25].

One potential explanation is that supratemporal plane structures are involved in processing non-linguistic auditory information [26]: because these are not language processing systems, perception of ASL would not be expected to activate these areas; speech stimuli, on the other hand, would produce activation in the supratemporal plane as a result of some type of acoustic response. Another possibility, however, is that the supratemporal plane contains systems directly and critically involved in the perception of speech (that is, in extracting linguistic information from an auditory signal), as some authors have suggested (Ref. [27] and D. Poeppel, PhD thesis, MIT, 1995). This hypothesis could explain the presence of supratemporal activation in auditory language perception and its absence in sign language perception. It also predicts that there should be some processing system outside the canonical language areas involved in the extraction of sign information from the visual input.
On this view, there are both modality-dependent and modality-independent components to the neural organization of language perception. Modality-dependent components are those involved in extracting linguistic information from the sensory input; modality-independent components are those involved in operating on higher-level linguistic representations. On the available data, it is possible that supratemporal plane structures are part of a modality-dependent system involved in speech perception, whereas lateral temporal lobe structures are part of a modality-independent system involved in higher-level linguistic operations.

But all of this discussion has not really answered the question posed at the outset: what is driving the neural organization of language? We do not yet know for sure. In fact, the data reviewed above render this problem a bit more puzzling (and thus perhaps more interesting). What we do know is that modality-specific factors are not the whole story. Save for the possibility of speech perception, the neural organization of language appears to be largely independent of the modalities through which it is perceived and produced. Notice, however, that this conclusion rules out the most intuitive and probably the oldest answer to the above question, namely that language systems are really just dynamically organized subsystems of the particular sensory and motor channels through which language is used. Instead, the answer will have to be couched in terms that generalize over modality. Whether such an account will ultimately appeal to genetically constrained, domain-specific regional specializations, to some complex interaction of domain-general processing biases, or to both remains to be seen. Provocative issues indeed.
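As a reading aid, the following is a minimal, purely illustrative sketch of the functional decomposition described above: modality-dependent front ends that extract linguistic information from a sensory signal, feeding a single modality-independent core that operates on higher-level representations. All class names, placeholder data structures, and mappings to brain regions in the comments are assumptions made for exposition; nothing in the sketch models actual neural computation.

# Illustrative sketch only: modality-dependent front ends feed a
# shared, modality-independent linguistic core. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class LinguisticRepresentation:
    # Higher-level content on which modality-independent systems operate.
    lexical_items: list
    grammatical_structure: str


class SpeechFrontEnd:
    # Modality-dependent stage: the hypothesized supratemporal-plane role
    # of extracting linguistic information from an auditory signal.
    def extract(self, acoustic_signal):
        # Placeholder; actual phonological analysis is not modeled.
        return LinguisticRepresentation(["<word>"], "<syntax>")


class SignFrontEnd:
    # Modality-dependent stage: the predicted system outside canonical
    # language areas that extracts sign information from the visual input.
    def extract(self, visual_signal):
        return LinguisticRepresentation(["<sign>"], "<syntax>")


class ModalityIndependentCore:
    # Hypothesized lateral-temporal role: operates on higher-level
    # linguistic representations regardless of how they arrived.
    def process(self, representation):
        return "processed {} lexical items".format(len(representation.lexical_items))


core = ModalityIndependentCore()                        # one shared core...
print(core.process(SpeechFrontEnd().extract("audio")))  # ...served by two
print(core.process(SignFrontEnd().extract("video")))    # different front ends

The point of the sketch is simply that, on this hypothesis, swapping the input modality changes only the front end; the downstream linguistic machinery is the same object in both cases.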

References
17. Binder, J.R. et al. (1994) Functional magnetic resonance imaging of human auditory cortex. Ann. Neurol. 35, 662–672
18. Millen, S.J., Haughton, V.M. and Yetkin, Z. (1995) Functional magnetic resonance imaging of the central auditory pathway following speech and pure-tone stimuli. Laryngoscope 105, 1305–1310
24. Hickok, G. et al. (1997) Functional MR imaging of auditorily presented words: a single-item presentation paradigm. Brain Lang. 58, 197–201
25. Hickok, G. et al. (1997) Sensory mapping in a congenitally deaf subject: MEG and fMRI studies of cross-modal non-plasticity. Hum. Brain Mapp. 5, 437–444
26. Binder, J.R. et al. (1996) Function of the left planum temporale in auditory and linguistic processing. Brain 119, 1239–1247
27. Näätänen, R. et al. (1997) Language-specific phoneme representations revealed by electric and magnetic brain responses. Nature 385, 432–434