
Automation trust increases under high-workload multitasking scenarios involving risk

  • Original Article
  • Published in: Cognition, Technology & Work

Abstract

Trust is a critical construct that influences human–automation interaction in multitasking workspaces involving imperfect automation. Karpinsky et al. (Appl Ergon 70:194–201, 2018) investigated whether trust affects operators’ attention allocation in high-load scenarios using the Multi-Attribute Task Battery II (MATB-II). Their results suggested that high task load reduces trust in imperfect automation, which in turn reduces the visual attention allocated to the monitoring task aided by the automation; participants also reported lower trust in the high-load conditions. However, it is possible that participants in the high-load conditions distrusted the system because their poor task performance carried no expressly adverse consequences (i.e., no risk). The current study aimed to replicate and extend Karpinsky et al. (2018) by asking forty participants to concurrently perform a tracking task and a system-monitoring task in the MATB-II with or without risk. The automated aid supporting the system-monitoring task was 70% reliable. The study employed a 2 × 2 split-plot design with task load (easy vs. difficult), manipulated via the magnitude of errors in the tracking task, as a within-participant factor and risk (high vs. low) as a between-participant factor. Participants in the high-risk group were instructed that poor performance would require repeating the experiment, whereas participants in the low-risk group received no such instruction. Results showed that trust was comparable between the high- and low-load conditions, but high risk elevated trust in the high-load condition. This implies that operators display greater trust when a multitasking environment demands greater attention and they perceive a risk of expressly adverse consequences, regardless of the true reliability of the automated system.
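
The design described above lends itself to a small illustrative simulation. The sketch below (Python; the trial count, condition labels, and seeds are assumptions for illustration and are not the authors' materials or code) draws a block of system-monitoring trials in which a 70%-reliable aid either reports the true system state or errs, and tallies the aid's accuracy in each cell of the 2 × 2 (risk × load) split-plot layout.

```python
import random

# Illustrative sketch (not the authors' code): a 70%-reliable automated aid
# for a MATB-II-style system-monitoring task. On each trial the aid either
# reports the true system state or errs with probability 0.30.

AID_RELIABILITY = 0.70   # proportion of trials on which the aid is correct
N_TRIALS = 100           # trials per condition (assumed value for illustration)

def simulate_aid(n_trials, reliability, seed=0):
    """Return (true_states, aid_signals) for one block of monitoring trials."""
    rng = random.Random(seed)
    true_states = [rng.choice(["normal", "malfunction"]) for _ in range(n_trials)]
    aid_signals = []
    for state in true_states:
        if rng.random() < reliability:
            aid_signals.append(state)  # aid reports the state correctly
        else:
            # aid errs: reports the opposite state (a miss or a false alarm)
            aid_signals.append("malfunction" if state == "normal" else "normal")
    return true_states, aid_signals

# 2 x 2 split-plot structure: risk varies between participants,
# task load varies within participants.
conditions = [(risk, load)
              for risk in ("low-risk", "high-risk")   # between-participant factor
              for load in ("easy", "difficult")]      # within-participant factor

for i, (risk, load) in enumerate(conditions):
    truth, signals = simulate_aid(N_TRIALS, AID_RELIABILITY, seed=i)
    accuracy = sum(t == s for t, s in zip(truth, signals)) / N_TRIALS
    print(f"{risk:>9} / {load:<9} aid accuracy ~ {accuracy:.2f}")
```

Note that only the tracking-task error magnitude differed between the easy and difficult load conditions; the aid's reliability was fixed at 70% throughout, so any difference in reported trust reflects operators' perception of the automation rather than a change in its actual performance.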


References

  • Bailey NR, Scerbo MW (2007) Automation-induced complacency for monitoring highly reliable systems: the role of task complexity, system experience, and operator trust. Theor Issues Ergon Sci 8:321–348

  • Bainbridge L (1983) Ironies of automation. Automatica 19:775–779

  • Billings CE (1997) Aviation automation: the search for a human-centered approach. Lawrence Erlbaum Associates Publishers, Mahwah

  • Bliss JP, Dunn MC (2000) Behavioral implications of alarm mistrust as a function of task workload. Ergonomics 43:1283–1300

  • Bliss JP, Gilson RD, Deaton JE (1995) Human probability matching behaviour in response to alarms of varying reliability. Ergonomics 38:2300–2312

  • Chancey ET, Bliss JP, Yamani Y, Handley HAH (2017) Trust and the compliance-reliance paradigm: the effects of risk, error bias, and reliability on trust and dependence. Hum Factors 59:333–345

  • Chen JYC, Terrence PI (2009) Effects of imperfect automation and individual differences on concurrent performance of military and robotics tasks in a simulated multitasking environment. Ergonomics 52:907–920

  • Comstock JR, Arnegard RJ (1992) The multi-attribute task battery for human operator workload and strategic behavior research. NASA Langley Research Center, Hampton

  • Corritore CL, Kracher B, Wiedenbeck S (2003) On-line trust: concepts, evolving themes, a model. Int J Hum Comput Stud 58(6):737–758

  • de Vries P, Midden C, Bouwhuis D (2003) The effects of errors on system trust, self-confidence, and the allocation of control in route planning. Int J Hum Comput Stud 58(6):719–735

  • Golding JF (1998) Motion sickness susceptibility questionnaire revised and its relationship to other forms of sickness. Brain Res Bull 47:507–516

  • Gopher D (1993) The skill of attentional control: acquisition and execution of attention strategies. In: Meyer DE, Kornblum S (eds) Attention and performance XIV. MIT Press, Cambridge, pp 299–322

  • Green CS, Bavelier D (2003) Action video game modifies visual selective attention. Nature 423(6939):534

  • Hancock PA, Warm JS (1989) A dynamic model of stress and sustained attention. Hum Factors 31:519–537

  • Hart SG, Staveland LE (1988) Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. Adv Psychol 52:139–183

  • Hoff KA, Bashir M (2015) Trust in automation: integrating empirical evidence on factors that influence trust. Hum Factors 57(3):407–434

  • Hoogendoorn R, van Arem B, Hoogendoorn S (2014) Automated driving, traffic flow efficiency, and human factors: literature review. Transp Res Rec 2442:113–120

  • Horrey WJ, Wickens CD, Consalus KP (2006) Modeling drivers’ visual attention allocation while interacting with in-vehicle technologies. J Exp Psychol Appl 12:67–78

  • Jeffreys H (1961) Theory of probability, 3rd edn. Oxford University Press, New York

  • Kahneman D (1973) Attention and effort. Prentice-Hall, Englewood Cliffs

  • Karpinsky ND, Chancey ET, Palmer DB, Yamani Y (2018) Automation trust and attention allocation in multitasking workspace. Appl Ergon 70:194–201

  • Kennedy RS, Lane NE, Berbaum KS, Lilienthal MG (1993) Simulator sickness questionnaire: an enhanced method for quantifying simulator sickness. Int J Aviat Psychol 3:203–220

  • Lee JD, See KA (2004) Trust in automation: designing for appropriate reliance. Hum Factors 46:50–80

  • Lewandowsky S, Mundy M, Tan G (2000) The dynamics of trust: comparing humans to automation. J Exp Psychol Appl 6(2):104

  • Li H, Wickens CD, Sarter N, Sebok A (2014) Stages and levels of automation in support of space teleoperations. Hum Factors 56:1050–1061

  • Luhmann N (1979) Trust and power: two works. Wiley, Hoboken

  • Luhmann N (1988) Familiarity, confidence, trust: Problems and alternatives. In: Gambetta D (ed) Trust: making and breaking cooperative relations. Basil Blackwell, New York, pp 94–108

  • Lyons JB, Stokes CK (2012) Human–human reliance in the context of automation. Hum Factors 54:112–121

  • Mayer RC, Davis JH, Schoorman FD (1995) An integrative model of organizational trust. Acad Manag Rev 20:709–734

  • Metzger U, Parasuraman R (2001) The role of the air traffic controller in future air traffic management: an empirical study of active control versus passive monitoring. Hum Factors 43:519–528

  • Meyer J (2001) Effects of warning validity and proximity on responses to warnings. Hum Factors 43:563–572

  • Molloy R, Parasuraman R (1996) Monitoring an automated system for a single failure: vigilance and task complexity effects. Hum Factors 38:311–322

  • Parasuraman R, Riley V (1997) Humans and automation: use, misuse, disuse, abuse. Hum Factors 39:230–253

  • Parasuraman R, Mouloua M, Molloy R, Hilburn B (1996) Monitoring of automated systems. In: Parasuraman R, Mouloua M (eds) Automation and human performance: theory and applications. Erlbaum, Hillsdale, NJ, pp 91–115

  • Parasuraman R, Sheridan TB, Wickens CD (2000) A model for types and levels of human interaction with automation. IEEE Trans Syst Man Cybern Part A Syst Hum 30:286–297

  • Rice S (2009) Examining single- and multiple-process theories of trust in automation. J Gen Psychol 136:303–319

  • Riley V (1994) A theory of operator reliance on automation. In: Mouloua M, Parasuraman R (eds) Human performance in automated systems: current research and trends. Erlbaum, Hillsdale, pp 8–14

  • Rouder JN, Morey RD (2012) Default Bayes factors for model selection in regression. Multivar Behav Res 47:877–903

  • Santiago-Espada Y, Myer RR, Latorella KA, Comstock JR (2011) The multi-attribute task battery II (MATB-II) software for human performance and workload research: a user’s guide (NASA/TM-2011-217164). National Aeronautics and Space Administration, Langley Research Center, Hampton

  • Sheridan TB (1970) On how often the supervisor should sample. IEEE Trans Syst Sci Cybern 6:140–145

  • Sheridan TB (2019a) Extending three existing models to analysis of trust in automation: signal detection, statistical parameter estimation, and model-based control. Hum Factors. https://doi.org/10.1177/0018720819829951

  • Sheridan TB (2019b) Individual differences in attributes of trust in automation: measurement and application to system design. Front Psychol 10:1117

  • Simon M, Houghton SM, Aquino K (1999) Cognitive biases, risk perception, and venture formation: how individuals decide to start companies. J Bus Ventur 15:113–134

  • Sitkin SB, Pablo AM (1992) Reconceptualizing the determinants of risk behavior. Acad Manag Rev 17:9–38

  • Sorkin RD, Woods DD (1985) Systems with human monitors: a signal detection analysis. Hum Comput Interact 1:49–75

  • Tsang PS, Wilson G (1997) Mental workload. In: Salvendy G (ed) Handbook of human factors and ergonomics, 2nd edn. Wiley, New York, pp 243–268

  • Vanderhaegen F (2017) Towards increased systems resilience: new challenges based on dissonance control for human reliability in Cyber-Physical & Human Systems. Annu Rev Control 44:316–322

  • Warm JS, Parasuraman R, Matthews G (2008) Vigilance requires hard mental work and is stressful. Hum Factors 50:433–441

  • Wickens CD, Dixon SR (2007) The benefits of imperfect diagnostic automation: a synthesis of the literature. Theor Issues Ergon Sci 8:201–212

  • Wickens CD, Hollands JG, Banbury S, Parasuraman R (2013) Engineering psychology and human performance, 4th edn. Pearson, Boston

  • Yamani Y, Horrey WJ (2018) A theoretical model of human–automation interaction grounded in resource allocation policy during automated driving. Int J Hum Factors Ergon 5:225–239

  • Young MS, Brookhuis KA, Wickens CD, Hancock PA (2015) State of science: mental workload in ergonomics. Ergonomics 58:1–17


Author information


Correspondence to Yusuke Yamani.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Sato, T., Yamani, Y., Liechty, M. et al. Automation trust increases under high-workload multitasking scenarios involving risk. Cogn Tech Work 22, 399–407 (2020). https://doi.org/10.1007/s10111-019-00580-5

