How much is a cow like a meow? A novel database of human judgements of audiovisual semantic relatedness
Attention, Perception, & Psychophysics (IF 1.7). Pub Date: 2022-04-21. DOI: 10.3758/s13414-022-02488-1
Kira Wegner-Clemens 1, Sarah Shomstein 1, George L. Malcolm 2

Semantic information about objects, events, and scenes influences how humans perceive, interact with, and navigate the world. The semantic information associated with any object or event can be highly complex and frequently draws on multiple sensory modalities, which makes it difficult to quantify. Past studies have primarily relied either on a simplified binary classification of semantic relatedness based on category, or on algorithmic values derived from text corpora rather than from human perceptual experience and judgement. With the aim of further accelerating research into multisensory semantics, we created a constrained audiovisual stimulus set and derived similarity ratings between items within three categories (animals, instruments, household items). A set of 140 participants provided similarity judgements between sounds and images. Participants either heard a sound (e.g., a meow) and judged which of two pictures of objects (e.g., a picture of a dog and a duck) it was more similar to, or saw a picture (e.g., a picture of a duck) and selected which of two sounds (e.g., a bark or a meow) it was more similar to. These judgements were then used to calculate similarity values for any given cross-modal pair. An additional 140 participants provided word judgements, which were used to calculate the similarity of word-word pairs. The derived and reported similarity judgements reflect a range of semantic similarities across the three categories and items, and highlight similarities and differences in similarity judgements across modalities. We make the derived similarity values available to the research community in a database format, to be used as a measure of semantic relatedness in cognitive psychology experiments, enabling more robust studies of semantics in audiovisual environments.
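The abstract describes deriving pairwise similarity values from two-alternative forced-choice (2AFC) trials. The sketch below shows one way such choice data could be aggregated into cross-modal similarity scores; the trial format, field names, and the win-proportion rule are illustrative assumptions for clarity, not the authors' published pipeline.

# Hypothetical sketch: turning 2AFC judgements into similarity scores.
# Assumes each trial records a probe (e.g., a sound), two candidate items
# (e.g., two pictures), and the item chosen as more similar to the probe.
from collections import defaultdict

trials = [
    {"probe": "meow", "options": ("dog", "cat"), "choice": "cat"},
    {"probe": "meow", "options": ("cow", "cat"), "choice": "cat"},
    {"probe": "meow", "options": ("cow", "duck"), "choice": "cow"},
    # ... many more trials, pooled across participants
]

wins = defaultdict(int)         # times (probe, item) was chosen over the alternative
appearances = defaultdict(int)  # times (probe, item) was offered as an option

for t in trials:
    for item in t["options"]:
        appearances[(t["probe"], item)] += 1
    wins[(t["probe"], t["choice"])] += 1

# Similarity estimate for each (probe, item) pair: the proportion of trials
# in which the item was preferred whenever it appeared alongside a competitor.
similarity = {pair: wins[pair] / appearances[pair] for pair in appearances}

for (probe, item), score in sorted(similarity.items(), key=lambda kv: -kv[1]):
    print(f"sim({probe!r}, {item!r}) = {score:.2f}")

With enough trials per pair, these proportions span a graded range of relatedness (rather than a binary same/different category label), which is the kind of continuous measure the database is intended to provide.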




Updated: 2022-04-24