A cognitive decomposition to empirically study human performance in control room environments

https://doi.org/10.1016/j.ijhcs.2020.102438

Highlights

  • Cognitive tasks required of control room operators were decomposed.

  • Bloom's taxonomy was used as a tool for classification of task complexity.

  • Participants performed a control room simulation.

  • Objective and subjective measures of human workload were used to validate the breakdown.

  • Classifications successfully captured variability in operator cognitive workload.

Abstract

Monitoring tasks in control room environments require operators to perform various mental and physical subtasks in series and simultaneously over long periods of time with minimal error. These tasks vary in cognitive complexity, ranging from low-level sensory processing to high-level decision making. Cognitive load, a measure of the effort required by working memory, can serve as an indicator of tasks that may carry a higher risk of error. Task decomposition models for cognitive complexity can be combined with objective and subjective measures of workload to measure human performance in response to control room stimuli. In this study, we demonstrate the effectiveness of a cognitive task analysis approach to structuring the design of experiments for evaluating human performance in simulated control room use activities. Participants completed monitoring tasks of varying cognitive complexity in a simulated unmanned aerial vehicle (UAV) control room. Performance measures taken during the study were used to validate the breakdown of task complexity and to identify potential sources of human error in workstation monitoring tasks. These findings can be linked to design specifications for workstation optimization. Results indicated that the task breakdown appropriately represented the use-case scenario and that the classification model adequately captured differences in cognitive workload experienced by participants. This research has broad implications for complex system design validation, providing a structure for achieving the cognitive depth needed to evaluate human performance and mitigate design risk.

Introduction

Robotics and automation are increasingly being used to take over tasks once allocated to humans in domains such as military (Kidwell et al., 2012), energy (Le Blanc et al., 2001; Lin et al., 2010), healthcare (Kobayashi et al., 2013; Morelli et al., 2016), and transportation (Chen and Barnes, 2014). The automation of complex systems presents great opportunities for increasing system performance and alleviating human physical and cognitive burden (Bindewald et al., 2014). While automation may relieve humans of the more arduous and repetitive system functions, new supervisory roles for humans can emerge primarily related to monitoring and controlling unmanned automated systems (Feigh and Pritchett, 2014). These new roles present new sources of workload that can influence operator performance, potentially resulting in performance inefficiencies or risks to safety. Examples where these challenges present themselves are found in various forms of control rooms across domains: hospital telemetry rooms (Kohani et al., 2014), control room environments for nuclear power plants (Lin et al., 2010), and unmanned aerial vehicle (UAV) control (Cummings et al., 2013; Donmez et al., 2010).

Cognitive tasks can vary widely in complexity and may impose varying levels of stress and workload on a human operator. In a control room environment, a human operator is responsible for various cognitive tasks including visual scanning of data, decision-making regarding this data, and implementation of decisions (Schumacher et al., 2011; Yang et al., 2011). Cognitive load, a multidimensional construct used to represent the effort required by human working memory, can serve as an indicator of a task's complexity (Haapalainen et al., 2010). Cognitive load can be measured directly through self-reporting and measurement of task performance (e.g., timing, accuracy), as well as indirectly using neurophysiological measurements such as heart rate and pupillary response (Chen and Epps, 2013; Shakouri et al., 2018).
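To make the distinction between direct and indirect measures concrete, the following is a minimal sketch (in Python) of how per-task workload summaries could be assembled from such measurements. The field names, standardization, and equal weighting are illustrative assumptions, not the measures or analysis used in this study.

```python
# Minimal sketch only: combining direct (self-report, timing, accuracy) and
# indirect (heart rate, pupil diameter) workload indicators into a per-task
# summary. Field names, standardization, and equal weighting are assumptions
# for illustration, not the measures or analysis used in this study.
from dataclasses import dataclass
from statistics import mean, pstdev


@dataclass
class TaskSample:
    task: str             # e.g. a task type such as "Detect" or "Analyze"
    self_report: float    # subjective difficulty rating (assumed 1-7 scale)
    reaction_time_s: float
    accuracy: float       # proportion correct, 0-1
    heart_rate_bpm: float
    pupil_diameter_mm: float


def zscores(values):
    """Standardize raw measurements so channels with different units are comparable."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma if sigma else 0.0 for v in values]


def workload_summary(samples):
    """Average the standardized indicators for each task type (equal weights assumed)."""
    rt = zscores([s.reaction_time_s for s in samples])
    err = zscores([1.0 - s.accuracy for s in samples])
    hr = zscores([s.heart_rate_bpm for s in samples])
    pupil = zscores([s.pupil_diameter_mm for s in samples])
    rating = zscores([s.self_report for s in samples])

    per_task = {}
    for i, s in enumerate(samples):
        score = mean([rt[i], err[i], hr[i], pupil[i], rating[i]])
        per_task.setdefault(s.task, []).append(score)
    return {task: mean(scores) for task, scores in per_task.items()}
```

Standardizing each channel before averaging keeps measures recorded in different units (seconds, beats per minute, millimeters) on a common scale.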

The monitoring and decision-making tasks in control room environments can be decomposed into cognitive and physical subtasks. These tasks include scanning for and interpreting salient information, which can involve the physical movement of looking around the workstation as well as the detection and processing of sensory information. This information must then be used to make decisions about the system being monitored, which requires higher-level cognitive processing to evaluate the information with respect to the goals of the system. The decision must then be enacted, requiring physical movement as well as an understanding of how the decision is to be carried out. Each of these subtasks can influence operators uniquely, and a formalized method for classifying them provides a basis for evaluation.

The process of decomposing a task into cognitive subprocesses to examine the underlying mental framework and knowledge required to complete it is known as Cognitive Task Analysis (CTA) (Clark et al., 2008). In control room environments, these cognitive subtasks must be performed in series and simultaneously over long periods of time with minimal human error. Examining these subtasks could aid in optimizing workstation environments for maximum human performance. Each subtask may require interaction with different elements of the workstation, and the design specifications that define these elements may significantly influence the performance of each subtask. Thus, by identifying each cognitive subtask in depth, a designer can more precisely identify the design elements that may need to be adjusted to alter overall operator performance.
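As a concrete illustration of this kind of decomposition, the sketch below represents a generic monitoring task as an ordered list of subtasks, each linked to the workstation design elements it depends on. The subtask and element names are hypothetical and are not drawn from the CTA developed in this study.

```python
# Minimal sketch: a CTA-style decomposition of a generic monitoring task, with
# each cognitive or psychomotor subtask linked to the workstation design
# elements it depends on. All names here are hypothetical illustrations, not
# the decomposition developed in this study.
from dataclasses import dataclass, field


@dataclass
class Subtask:
    name: str
    kind: str                                  # "cognitive" or "psychomotor"
    design_elements: list = field(default_factory=list)


monitoring_task = [
    Subtask("Scan displays for salient cues", "psychomotor",
            ["screen layout", "viewing angles"]),
    Subtask("Detect a change in system state", "cognitive",
            ["alert salience", "color coding"]),
    Subtask("Interpret the data against system goals", "cognitive",
            ["data grouping", "labeling"]),
    Subtask("Decide on a corrective action", "cognitive",
            ["decision-support cues"]),
    Subtask("Enact the decision via an input device", "psychomotor",
            ["control placement", "input mapping"]),
]


def elements_for(subtask_name, decomposition=monitoring_task):
    """Return the design elements a given subtask depends on."""
    for subtask in decomposition:
        if subtask.name == subtask_name:
            return subtask.design_elements
    raise KeyError(subtask_name)
```

Linking each subtask to its design elements in this way is what lets performance differences observed at the subtask level point back to specific, adjustable features of the workstation.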

The control room serves as the communication interface between the human operator and the autonomous system. This study seeks to identify subtasks related to monitoring an autonomous system in this environment that may result in relatively higher levels of cognitive load. Various models of cognition exist which seek to classify cognitive activity into levels of complexity. These models can be used to identify tasks that may be at significant risk for human error, and thus require additional efforts to manage cognitive load. In safety-critical systems, mitigating risk of human error means mitigating risk of adverse health and safety outcomes. UAVs are commonly used in operations whose outcomes can have a significant impact on human livelihood and are at significant risk for spurring unintended and destructive consequences. It is the responsibility of system designers and other stakeholders tied to these systems to ensure UAV operators can maintain efficient and effective communication with the UAV system to minimize this risk.

In this study, cognitive task analysis (CTA) is used to decompose the tasks related to monitoring in a control room environment, and Bloom's cognitive taxonomy (Krathwohl, 2002) and Harrow's psychomotor taxonomy (Harrow, 1972) are used to classify each task hierarchically. A virtual multi-screen control room environment was built in a CAVE virtual reality simulator. Participants (n = 35) were recruited to perform control room monitoring simulations in the environment, and both subjective and objective measures of cognitive load were collected. These measures were used to validate the task breakdown and the classification of task complexity performed beforehand. Current CTA methods rarely include structured taxonomic classification of cognitive task complexity and are not appropriate for modeling cognitive complexity at the desired depth; those that do have not been validated with both direct and indirect measures of human performance. This research into cognitive classification and task workload provides a transferable basis for evaluating the risk of poor human performance during monitoring of autonomous systems. The results of this study validate the use of CTA for achieving the level of cognitive depth necessary to evaluate differences in human performance in these contexts.
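The sketch below illustrates the kind of classification structure this implies: the four task types named later in the paper (Detect, Respond, Identify, Analyze) tagged with the coarse low/high complexity grouping reported in the Discussion, alongside the level names of the two taxonomies. Any finer level-to-task assignment is omitted here because it is not given in these snippets.

```python
# Illustrative only: coarse complexity grouping reported in the Discussion
# (Detect/Respond = low, Identify/Analyze = high) alongside the level names
# of the two taxonomies used for classification. Mapping individual tasks to
# specific Bloom or Harrow levels is not attempted here.
BLOOM_LEVELS = ["Remember", "Understand", "Apply",
                "Analyze", "Evaluate", "Create"]            # Krathwohl (2002)
HARROW_LEVELS = ["Reflex movements", "Basic fundamental movements",
                 "Perceptual abilities", "Physical abilities",
                 "Skilled movements", "Non-discursive communication"]  # Harrow (1972)

TASK_COMPLEXITY = {        # low vs. high cognitive complexity (Discussion)
    "Detect": "low",
    "Respond": "low",
    "Identify": "high",
    "Analyze": "high",
}


def hypothesized_order(tasks=TASK_COMPLEXITY):
    """Tasks expected to elicit higher workload listed first."""
    return sorted(tasks, key=lambda t: tasks[t] != "high")
```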

Section snippets

Background

The decomposition of system tasks for the identification of sources of error and human workload has been addressed previously in literature. In this section, cognitive task analysis (CTA), taxonomical structures for describing tasks, cognitive workload, and control room environments are all discussed.

Methods

The objective of this study was to identify how task complexity influences human performance in a control room environment. A UAV monitoring simulation was developed as the representative control room environment. Utilizing the taxonomies discussed in the literature review, a method was developed to integrate the cognitive and psychomotor hierarchies into a CTA for monitoring and decision-making control room tasks. Participants were recruited to complete 4 simulation trials. Several measures of

Results

After the removal of 12 participants during preprocessing due to data quality issues, data from 35 participants remained for analysis. Twenty-nine of the participants were 18–25 years old; the remaining six were 26–35 years old. Each simulation trial was 5 min long, so total simulation time for each participant was 20 min. Combined with time spent instructing participants, donning experimental equipment, and taking surveys, the experiment took approximately 45–55 min per participant. Table 6 shows the

Discussion

The results of this study demonstrated that Bloom's taxonomy can be a valid tool for modeling cognitive complexity when paired with a CTA. It was expected that tasks classified as high cognitive complexity (Identify and Analyze) would elicit higher levels of cognitive workload than those classified as low complexity (Detect and Respond). Survey results indicated this to be the case. Both Detect and Respond showed significantly higher odds of eliciting a low difficulty response from
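The snippets shown here do not state which statistical model produced these odds estimates, so the following is only a hedged sketch of one way such a comparison could be made: a proportional-odds (ordinal logit) model of the survey difficulty ratings by task type (assumes pandas and statsmodels ≥ 0.12; column names and rating levels are hypothetical). Exponentiating the fitted task coefficients would then give odds ratios relative to the reference task.

```python
# Hedged sketch only: an ordinal (proportional-odds) logit relating survey
# difficulty ratings to task type. The paper's actual statistical model is not
# shown in these snippets; column names and rating levels are hypothetical.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel


def fit_difficulty_model(responses: pd.DataFrame):
    """responses: one row per rating, with columns
    'task' (e.g. Detect/Respond/Identify/Analyze) and
    'difficulty' (ordered: low < medium < high)."""
    responses = responses.copy()
    responses["difficulty"] = pd.Categorical(
        responses["difficulty"],
        categories=["low", "medium", "high"], ordered=True)
    # Dummy-code task type; OrderedModel estimates thresholds, so no intercept.
    exog = pd.get_dummies(responses["task"], drop_first=True).astype(float)
    model = OrderedModel(responses["difficulty"], exog, distr="logit")
    return model.fit(method="bfgs", disp=False)
```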

Conclusions

This study successfully demonstrated the suitability of CTA combined with Bloom's taxonomy for describing the cognitive complexity of monitoring tasks in a control room environment. A scenario was developed that simulated conditions commonly found in control rooms. This scenario underwent cognitive task decomposition, and tasks were classified based on Bloom's and Harrow's taxonomies. Participants were required to complete the simulation in which periodic decisions were required given salient

Funding

This work was supported by the Naval Air Systems Command [Grant #N004211820001] and the Nuclear Regulatory Commission [Grant #NRCHQ6017G0022].

CRediT authorship contribution statement

Benjamin M. Knisely: Methodology, Formal analysis, Writing - review & editing, Writing - original draft, Investigation, Data curation. Janell S. Joyner: Methodology, Software, Writing - original draft, Investigation. Anthony M. Rutkowski: Writing - original draft, Formal analysis. Matthew Wong: Software, Writing - original draft. Samuel Barksdale: Software, Writing - original draft, Data curation. Hayden Hotham: Software, Writing - original draft. Kush Kharod: Writing - review & editing, Data

Declaration of Competing Interest

We wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.

Acknowledgements

We would like to thank the United States Naval Air Systems Command Education Research Grant and the Nuclear Regulatory Commission Faculty Development Grant for their funding. We would also like to thank the Naval Air Systems Command Human Integration and Performance Division for their input, as well as University of Maryland undergraduate research assistants who assisted with simulation development and data collection.

References (76)

  • H. Saitwal et al., Assessing performance of an Electronic Health Record (EHR) using cognitive task analysis, Int. J. Med. Inf. (2010)

  • S. Schumacher et al., Job requirements for control room jobs in nuclear power plants, Saf. Sci. (2011)

  • M. Shakouri et al., Analysis of the sensitivity of heart rate variability and subjective workload measures in a driving simulator: the case of highway work zones, Int. J. Ind. Ergon. (2018)

  • B.P. Smith et al., The accuracy of subjective measures for assessing fatigue related decrements in multi-stressor environments, Saf. Sci. (2016)

  • C.-W. Yang et al., Operators' signal-detection performance in video display unit monitoring tasks of the main control room, Saf. Sci. (2011)

  • R. Abbasi-Kesbi et al., Technique to estimate human reaction time based on visual perception, Healthc. Technol. Lett. (2017)

  • B.S. Bloom, Taxonomy of Educational Objectives: the Classification of Educational Goals (1956)

  • P.R.B.d Campos et al., Proposal of a new taxonomy of the psychomotor domain for to the engineering laboratory

  • C.V. Chan et al., A framework for characterizing eHealth literacy demands and barriers, J. Med. Internet Res. (2011)

  • J.Y.C. Chen et al., Human–agent teaming for multirobot control: a review of human factors issues, IEEE Trans. Hum.-Mach. Syst. (2014)

  • R. Clark et al., Cognitive task analysis, Handb. Res. Educ. Commun. Technol. (2008)

  • L.-E. Cox-Fuenzalida, Effect of workload history on task performance, Hum. Fact. (2007)

  • M.L. Cummings et al., Boredom and distraction in multiple unmanned vehicle supervisory control, Interact. Comput. (2013)

  • Department of Transportation Federal Aviation Administration, 2009. Alarms and Alerts in the Technical Operations...

  • R.D. Dias et al., Systematic review of measurement tools to assess surgeons' intraoperative cognitive workload, BJS (2018)

  • B. Donmez et al., Modeling workload impact in multiple unmanned vehicle supervisory control, IEEE Trans. Syst. Man Cybern. - Part Syst. Hum. (2010)

  • J.C. Fackler et al., Critical care physician cognitive task analysis: an exploratory study, Crit. Care (2009)

  • K.M. Feigh et al., Requirements for effective function allocation: a critical review, J. Cogn. Eng. Decis. Mak. (2014)

  • F. Flemisch et al., Towards a dynamic balance between humans and automation: authority, ability, responsibility and control in shared and cooperative control situations, Cogn. Technol. Work (2012)

  • C.K. Foroughi et al., Pupil size as a measure of within-task learning, Psychophysiology (2017)

  • E. Haapalainen et al., Psycho-physiological measures for assessing cognitive load

  • A.J. Harrow, A Taxonomy of the Psychomotor Domain: a Guide for Developing Behavioral Objectives (1972)

  • Y.-C. Huang et al., The comparison of different sensory outputs on the driving overtake alarm system

  • E. İşbilir et al., Towards a multimodal model of cognitive workload through synchronous optical brain imaging and eye tracking measures, Front. Hum. Neurosci. (2019)

  • A. Jain et al., A comparative study of visual and auditory reaction times on the basis of gender and physical activity levels of medical first year students, Int. J. Appl. Basic Med. Res. (2015)

  • R.J. Jansen et al., Hysteresis in mental workload and task performance: the influence of demand transitions and task prioritization, Hum. Fact. (2016)

  • D.B. Kaber, Issues in human–automation interaction modeling: presumptive aspects of frameworks of types and levels of automation, J. Cogn. Eng. Decis. Mak. (2018)

  • P. Kenneth et al., Relationship between alertness, performance, and body temperature in humans, Am. J. Physiol. - Regul. Integr. Comp. Physiol. (2002)