Analysis of conversational listening skills toward agent-based social skills training

  • Original Paper
  • Published:
Journal on Multimodal User Interfaces

Abstract

Listening skills are critical for human communication. Social skills training (SST), conducted by human trainers, is a well-established method for acquiring appropriate social interaction skills. Previous work automated part of this process by developing a dialogue system that teaches speaking skills through interaction with a computer agent. Although that work focused on speaking skills, the SST framework also covers other skills, such as listening, asking questions, and expressing discomfort. In this paper, we extend our automated social skills training to assess users' listening skills during conversations with computer agents. We prepared two scenarios, Listening 1 and Listening 2, which assume small talk and job training, respectively. A female agent told the participants a recent story and explained how to make a telephone call, and the participants listened. We recorded 27 Japanese graduate students interacting with the agent, and two external expert raters assessed their listening skills. We manually extracted features related to the participants' eye fixations and behavioral cues and confirmed that a simple linear regression with selected features predicted listening skills with a correlation coefficient above 0.50 in both scenarios. The numbers of nods and of backchannels within utterances contribute most to the predictions: using these two features alone predicted listening skills with a correlation coefficient above 0.43. Since these two features are easy for users to understand, we plan to integrate them into our automated social skills training framework.
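The two-feature prediction described above can be sketched as an ordinary least-squares fit of expert ratings on nod and backchannel counts. This is a minimal illustration, not the paper's pipeline: the data below is synthetic, and the feature scales and coefficients are assumptions.

```python
import numpy as np

# Hypothetical sketch: predict an expert listening-skill rating from two
# behavioral counts (nods and backchannels), as in the two-feature model
# mentioned in the abstract. All data here is synthetic.
rng = np.random.default_rng(0)
n = 27  # the study recorded 27 participants
nods = rng.integers(0, 30, n)
backchannels = rng.integers(0, 40, n)
# Synthetic ratings loosely correlated with both counts plus noise
# (the weights 0.08 / 0.05 are arbitrary, chosen only for illustration)
ratings = 0.08 * nods + 0.05 * backchannels + rng.normal(0, 0.5, n)

# Ordinary least squares with a bias term
X = np.column_stack([nods, backchannels, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
predicted = X @ coef

# Pearson correlation between predicted and rated skill, the metric
# reported in the abstract
r = np.corrcoef(predicted, ratings)[0, 1]
print(f"correlation between predicted and rated skill: r = {r:.2f}")
```

In the study itself the features were extracted manually from recorded interactions and evaluated against two expert raters; the sketch only shows the regression-plus-correlation step.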




Acknowledgements

Funding was provided by Core Research for Evolutional Science and Technology (Grant No. JPMJCR19A5) and Japan Society for the Promotion of Science (Grant Nos. JP17H06101 and JP18K11437).

Author information


Corresponding author

Correspondence to Hiroki Tanaka.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Tanaka, H., Iwasaka, H., Negoro, H. et al. Analysis of conversational listening skills toward agent-based social skills training. J Multimodal User Interfaces 14, 73–82 (2020). https://doi.org/10.1007/s12193-019-00313-y

