
Toward a conversational model for counsel robots: how different question types elicit different linguistic behaviors

  • Original Research Paper
  • Published:
Intelligent Service Robotics

Abstract

In recent years, robots have increasingly taken on the role of counselor or conversational partner in everyday dialogues and interactions with humans. For successful human–robot communication, it is important to identify conversational strategies that can influence the responses of the human client. The purpose of the present study is to examine linguistic behaviors in human–human conversation, using chat data, in order to provide a model for effective conversation in human–robot interaction. We analyzed conversational data by categorizing utterances into question types, namely Wh-questions and "yes" or "no" (YN) questions, and their corresponding linguistic behaviors (self-disclosure elicitation, self-disclosure, simple "yes" or "no" answers, and acknowledgment). We also compared the utterance length of clients across question types. In terms of linguistic behaviors, the results reveal that Wh-questions elicited significantly higher rates of self-disclosure elicitation and acknowledgment than YN-questions. Among the Wh-subtypes, how-questions promoted more linguistic behaviors, such as self-disclosure elicitation, self-disclosure, and acknowledgment, than the other Wh-subtypes. Conversely, YN-questions generated significantly higher rates of simple "yes" or "no" answers than Wh-questions. In addition, Wh-questions elicited longer utterances than YN-questions. We suggest that the question types used by a robot counselor should be chosen deliberately to elicit varied linguistic behaviors and longer utterances from human clients. Our research contributes efficient conversation strategies for robot utterances that accommodate humans' linguistic behaviors.
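For illustration, the following is a minimal, hypothetical Python sketch of the kind of analysis summarized above: a simple keyword heuristic labels each counselor question as a Wh-subtype or a YN-question, and the length of the client's reply is then compared per question type. The heuristic, the toy dialogue pairs, and the function names are assumptions made for demonstration only; they are not the authors' coding scheme or data.

```python
# Hypothetical sketch: split counselor questions into Wh-subtypes vs. YN-questions
# and compare client reply length per type. Illustrative assumptions only.
from collections import Counter

WH_WORDS = ("what", "why", "how", "where", "when", "who", "which")

def question_type(utterance: str) -> str:
    """Label a question by its first word: a Wh-subtype (e.g. 'how') or 'YN'."""
    first_word = utterance.strip().lower().split()[0]
    return first_word if first_word in WH_WORDS else "YN"

def utterance_length(utterance: str) -> int:
    """Rough length measure: number of whitespace-separated tokens."""
    return len(utterance.split())

# Toy (counselor question, client reply) pairs, for illustration only.
pairs = [
    ("How was your day at work?", "It was stressful because of a deadline."),
    ("Did you sleep well?", "Yes."),
    ("What did you do this weekend?", "I went hiking with my sister."),
]

question_counts = Counter(question_type(q) for q, _ in pairs)
mean_reply_length = {
    qtype: sum(utterance_length(reply) for q, reply in pairs
               if question_type(q) == qtype) / count
    for qtype, count in question_counts.items()
}
print(question_counts)     # e.g. Counter({'how': 1, 'YN': 1, 'what': 1})
print(mean_reply_length)   # mean client reply length (tokens) per question type
```

In the study itself, the linguistic behaviors of the client's reply (self-disclosure elicitation, self-disclosure, simple "yes" or "no" answers, acknowledgment) were coded from the conversational data; the sketch only illustrates the question-type split and the utterance-length comparison.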


Data availability

The authors provided the raw data as supplementary material in an Excel file.


Funding

This work was supported by the Technology Innovation Program-Industrialized Technology Innovation Project (10077553, Development of Social Robot Intelligence for Social Human–Robot Interaction of Service Robots) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea).

Author information

Contributions

Jee Eun Sung contributed to conceptualization, writing (review and editing), and funding acquisition; Jee Eun Sung and Yoonseob Lim contributed to methodology; Sujin Choi and Hanna Lee contributed to formal analysis, investigation, and writing (original draft preparation); Jee Eun Sung, Yoonseob Lim, and Jongsuk Choi contributed to supervision.

Corresponding author

Correspondence to Jee Eun Sung.

Ethics declarations

Conflicts of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.

Ethics approval

The Institutional Review Board at the Korea Institute of Science and Technology approved all the experiments used in this study (IRB number: 2018-019).

Consent for publication

All authors agreed with the content and gave consent to submit.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Choi, S., Lee, H., Lim, Y. et al. Toward a conversational model for counsel robots: how different question types elicit different linguistic behaviors. Intel Serv Robotics 14, 373–385 (2021). https://doi.org/10.1007/s11370-021-00375-6
