Short Research Article

Visual Gender Cues Guide Crossmodal Selective Attending to a Gender-Congruent Voice During Dichotic Listening

Published Online: https://doi.org/10.1027/1618-3169/a000496

Abstract. Visual input of a face appears to influence the ability to selectively attend to one voice over another simultaneous voice. We examined this crossmodal effect, specifically the role face gender may play in selective attention to simultaneous male and female voices. Using a within-subjects design, participants were presented with a dynamic male face, a dynamic female face, or a fixation cross, each paired with a dichotic audio stream of a male and a female voice reciting different lists of concrete nouns. In Experiment 1a, the female voice was played in the right ear and the male voice in the left ear. In Experiment 1b, both voices were played in both ears, with differences in volume mimicking the interaural intensity difference between disparately localized voices in naturalistic situations. Free recall of words spoken by the two voices immediately after stimulus presentation served as a proxy measure of attention. In both experiments, crossmodal congruity of face gender enhanced recall of words spoken by the same-gender voice. This effect indicates that crossmodal interaction between voices and faces guides auditory attention. The results contribute to our understanding of how humans use the crossmodal relationship between voices and faces to direct attention in social interactions such as the cocktail party scenario.
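To illustrate the Experiment 1b manipulation, the minimal Python sketch below builds a stereo signal in which two simultaneous sounds differ in interaural intensity. This is not the authors' stimulus code: the 10 dB level difference, the sine-tone stand-ins for the two voices, and all names are illustrative assumptions.

```python
# Sketch: two simultaneous "voices" mixed into a stereo stream with an
# interaural level difference (ILD), as opposed to routing each voice
# exclusively to one ear (as in Experiment 1a).
import numpy as np

FS = 44_100        # sample rate (Hz)
DUR = 2.0          # duration (s)
ILD_DB = 10.0      # assumed interaural level difference in dB

t = np.linspace(0, DUR, int(FS * DUR), endpoint=False)
voice_f = np.sin(2 * np.pi * 220 * t)  # stand-in for the female voice
voice_m = np.sin(2 * np.pi * 110 * t)  # stand-in for the male voice

atten = 10 ** (-ILD_DB / 20)           # dB -> linear amplitude ratio

# Each voice is present in both ears but louder in one, mimicking the
# interaural intensity difference of naturally lateralized talkers.
left = voice_m + atten * voice_f
right = voice_f + atten * voice_m
stereo = np.stack([left, right], axis=1)
stereo /= np.abs(stereo).max()         # normalize to avoid clipping
```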
