Comprehension Models of Audiovisual Discourse Processing
Human Communication Research (IF 4.4) Pub Date: 2017-02-16, DOI: 10.1111/hcre.12107
Courtney Anderegg, Fashina Aladé, David R. Ewoldsen, Zheng Wang

Comprehension is integral to enjoyment of media narratives, yet our understanding of how viewers create the situation models that underlie comprehension is limited. This study utilizes two models of comprehension that had previously been tested with factual texts/videos to predict viewers' recall of entertainment media. Across five television/film clips, the landscape model explained at least 29% of the variance in recall. A dual coding version that assumed separate verbal and visual representations of the story significantly improved the model fit in four of the clips, accounting for an additional 15–29% of the variance. The dimensions of the event-indexing model (time, space, protagonist, causality, and intentionality) significantly moderated the relationship between the dual coding model and participant recall in all clips.
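As a purely illustrative aid, the following is a minimal, hypothetical Python sketch (not the authors' analysis or data) of the kind of hierarchical regression and moderation test the abstract describes: a baseline landscape-model predictor, an added dual coding predictor whose increase in R² corresponds to the "additional variance" explained, and an interaction with an event-indexing dimension standing in for the reported moderation. All variable names and values below are simulated placeholders.

# Hypothetical sketch: nested model comparison and a moderation test,
# loosely mirroring the analyses summarized in the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Simulated stand-ins; the study itself used landscape-model and dual coding
# activation values plus event-indexing dimension scores per clip.
df = pd.DataFrame({
    "landscape": rng.normal(size=n),     # landscape-model activation
    "dual_coding": rng.normal(size=n),   # separate verbal/visual representation
    "event_index": rng.normal(size=n),   # event-indexing dimension score
})
df["recall"] = (
    0.5 * df["landscape"]
    + 0.4 * df["dual_coding"]
    + 0.3 * df["dual_coding"] * df["event_index"]
    + rng.normal(scale=1.0, size=n)
)

# Step 1: landscape model alone ("variance in recall explained").
m1 = smf.ols("recall ~ landscape", data=df).fit()

# Step 2: add dual coding; the gain in R^2 is the "additional variance".
m2 = smf.ols("recall ~ landscape + dual_coding", data=df).fit()

# Step 3: moderation -- interaction of dual coding with an
# event-indexing dimension, beyond the main effects.
m3 = smf.ols("recall ~ landscape + dual_coding * event_index", data=df).fit()

print(f"R^2 landscape only: {m1.rsquared:.3f}")
print(f"R^2 with dual coding: {m2.rsquared:.3f} (delta {m2.rsquared - m1.rsquared:.3f})")
print(f"Interaction coefficient: {m3.params['dual_coding:event_index']:.3f}")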

Updated: 2017-02-16