Intersensory matching of faces and voices in infancy predicts language outcomes in young children.
Developmental Psychology (IF 3.1) Pub Date: 2022-04-21, DOI: 10.1037/dev0001375
Elizabeth V. Edgar, James Torrence Todd, Lorraine E. Bahrick

Parent language input is a well-established predictor of child language development. Multisensory attention skills (MASks; intersensory matching, shifting, and sustaining attention to audiovisual speech) are also known to be foundations for language development. However, due to a lack of appropriate measures, individual differences in these skills have received little research focus. A newly established measure, the Multisensory Attention Assessment Protocol (MAAP), allows researchers to examine predictive relations between early MASks and later outcomes. We hypothesized that, along with parent language input, multisensory attention to social events (faces and voices) in infancy would predict later language outcomes. We collected data from 97 children (predominantly White and Hispanic, 48 males) participating in an ongoing longitudinal study assessing 12-, 18-, and 24-month MASks (MAAP) and parent language input (quality, quantity), and 18- and 24-month language outcomes (child speech production, vocabulary size). Results revealed that 12-month intersensory matching (but not maintaining or shifting attention) of faces and voices in the presence of a distractor was a strong predictor of language. It predicted a variety of 18- and 24-month child language outcomes (expressive vocabulary, child speech production), even when holding traditional predictors constant: parent language input and SES (maternal education: 52% bachelor's degree or higher). Further, at each age, parent language input predicted just one outcome, expressive vocabulary, and SES predicted child speech production. These novel findings reveal that infant intersensory matching of faces and voices in the presence of a distractor can predict which children might benefit most from parent language input and show better language outcomes. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
