Computers & Security

Volume 98, November 2020, 102020

When believing in technology leads to poor cyber security: Development of a trust in technical controls scale

https://doi.org/10.1016/j.cose.2020.102020

Abstract

While technical controls can reduce vulnerabilities to cyber threats, no technology provides absolute protection, and we hypothesised that people may act less securely if they place unwarranted trust in these automated systems. This paper describes the development of a Trust in Technical Controls Scale (TTCS) that measures people's faith in four of these technical controls. In an online study (N = 607), Australian employees demonstrated a greater degree of trust in firewalls and antivirus software than they did in spam filters and social media privacy settings. Lower scores on the four-item TTCS were related to better information security awareness (ISA) and higher scores on tests of cognitive abilities such as non-verbal IQ and cognitive reflection. The TTCS predicted an individual's ability to detect a phishing email to a similar degree as other factors such as ISA, non-verbal IQ and cognitive impulsivity. However, unlike ISA, the scale did not predict the strength of passwords people constructed. Results suggest that the TTCS is a useful complement to ISA in understanding and predicting certain cyber security behaviours.

Introduction

One of the most common factors contributing to incidents that undermine the security of an organisation's information system is the behaviour of employees (Alavi and Mouratidis, 2016; Gyunka and Christiana, 2017; Proctor and Chen, 2015). It follows that relying on technological security solutions alone will never adequately protect our systems (Furnell et al., 2006, 2019; Vroom and von Solms, 2004). It is important that we understand the human factors that are associated with good security behaviours so that we can not only better predict vulnerabilities but also design training and education programs to address these underlying factors. A number of variables have previously been proposed as factors that may predict good cyber security behaviour in the workplace, such as: office environment (Pattinson et al., 2018), cyber security culture (Da Veiga and Eloff, 2010; Wiley, McCormac and Calic, 2020), resilience and work stress (McCormac et al., 2018), cognitive and cultural biases (Butavicius et al., 2017; Tsohou, Karyda and Kokolakis, 2015), learning styles (Pattinson et al., 2019), personality and risk-taking propensity (McCormac et al., 2017b) and information security awareness (Parsons et al., 2017; Siponen, 2000).

In this paper, we explore a novel factor influencing cyber security behaviours, namely, people's trust in technological security solutions. These technical controls, such as firewalls and spam filters, do not provide absolute protection against all cyber threats (Furnell et al., 2019; Proofpoint, 2019; Yadron, 2014). However, if an individual incorrectly believes that this technology does provide such absolute protection, they may, as a consequence, engage in risky behaviours. Even if people know what policy states they should do, they may still engage in non-malicious, unsafe behaviour because they believe, incorrectly, that their actions will have no consequences because of the technical security measures in place. For example, an employee may choose to click on a link in a potentially suspicious but interesting email because they believe that their actions will be safeguarded by their organisation's cyber security controls. Understanding how technical controls work, and how they can fail, is a type of knowledge. However, it is a more specific type of knowledge that is not incorporated into definitions or measures of general information security awareness in the literature (Bulgurcu, Cavusoglu and Benbasat, 2010; Kruger and Kearney, 2006; Parsons et al., 2017; Siponen, 2000). While information security policies tell people what to do, an understanding of the fallibility of technical safeguards relates to why people should behave in a certain way and, as such, this type of knowledge is not included in extant information security awareness measures.

Our approach to examining this concept was as follows. In the next section, we provide a review of the relevant literature on trust and cyber security behaviours and formulate a set of research questions. We then present a new measure of people's faith in technological security measures, known as the Trust in Technical Controls Scale (TTCS), and use this measure to test our research questions empirically. Specifically, we tested this measure in an online study to examine the level of trust in different safeguards (i.e., spam filters, firewalls, social media privacy settings and antivirus software), looked at how it relates to established tests of information security awareness and cognitive processing, and then tested how well it can predict cyber security behaviours such as phishing email detection and password creation. Finally, we discuss the findings of this study and their practical implications, indicate directions for future research and summarise the conclusions.

Section snippets

Literature review

In our review of the literature, we found no previous research that had directly measured people's trust in technical cyber controls. However, we did find evidence of the role trust plays in influencing people's behaviour when using a digital device. In short, this review suggests that while there are inconclusive results as to the role of the personality characteristic of trust in predicting cyber security behaviours, there is evidence that trust in a specific platform may predict our actions.

The current study

While the literature review provides some a priori justification of the role that trust in technical controls may have in influencing poor cyber security behaviours, we found no evidence of a specific instrument to measure this trust in the literature. In this paper, we describe the development and validation of a Trust in Technical Controls Scale (TTCS) that examines the extent to which people over-trust technical cyber security safeguards, including firewalls, spam filters, social media privacy settings and antivirus software.

Methodology

Participants were asked to complete an online survey, administered through Qualtrics, a Web-based survey platform. Ethics approval was granted by the Human Research Ethics Subcommittee of the School of Psychology at the University of Adelaide. The survey included demographic questions, the Trust in Technical Controls Scale (TTCS), the Human Aspects of Information Security Questionnaire (HAIS-Q), Cognitive Reflection Tests (CRT and CRT2), a measure of abstract reasoning (MRT), a phishing test and a password construction task.

TTCS items

Initial processing of the values involved reverse scoring items 1 and 4. The raw responses to these two items were transformed so that high scores were recoded as low scores and vice versa (e.g., a score of ‘1’ was recoded as ‘5’ to indicate trust in the safeguard). After this reverse scoring, values for all four items were summed to form an overall “Trust in Technical Controls Scale” score in the range of 4 (lowest trust) to 20 (highest trust). Overall, there was a trend towards low rather than high trust in these technical controls.
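As a concrete illustration, this scoring procedure can be expressed in a few lines of code. The following is a minimal sketch in Python, assuming the standard 1–5 Likert response format implied by the ‘1’-to-‘5’ recoding example above; the function and variable names are illustrative rather than taken from the paper.

def score_ttcs(responses):
    """Compute a total TTCS score from four Likert responses (1-5).

    responses: list of four ints [item1, item2, item3, item4].
    Items 1 and 4 are reverse-scored (1 -> 5, 2 -> 4, ..., 5 -> 1)
    so that a higher total always indicates greater trust.
    """
    if len(responses) != 4 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("Expected four responses on a 1-5 scale")
    reverse_keyed = {0, 3}  # zero-based indices of items 1 and 4
    adjusted = [(6 - r) if i in reverse_keyed else r
                for i, r in enumerate(responses)]
    return sum(adjusted)  # range: 4 (lowest trust) to 20 (highest trust)

# Example: strong agreement with the trust-keyed items and strong
# disagreement with the reverse-keyed items yields the maximum score.
assert score_ttcs([1, 5, 5, 1]) == 20
assert score_ttcs([5, 1, 1, 5]) == 4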

Discussion

Our results suggested that participants generally displayed a bias towards distrusting, rather than trusting, cyber security technical safeguards (RQ1). In addition, participants were almost four times less likely to trust the efficacy of social media privacy settings and spam filters than firewalls and antivirus software (RQ2). The reasons behind these different levels of trust are worthy of further investigation. While firewalls, spam filters and antivirus software generally do not demand much

Conclusions

This study provided evidence for the soundness and utility of the Trust in Technical Controls Scale (TTCS). The scale demonstrated good internal reliability, construct validity and criterion validity. Our findings demonstrated that participants were more likely to express a general distrust of technical controls. The items of the TTCS all appeared to relate to the same underlying concept, and low scores on the scale predicted better information security awareness (ISA) and better security behaviour, such as phishing email detection.
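The internal reliability statistic referred to here is presumably Cronbach's coefficient alpha (the paper cites Cortina, 1993, on this measure). For reference, for a k-item scale with individual item variances \sigma^2_i and total-score variance \sigma^2_X, alpha is computed as

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_i}{\sigma^2_X}\right)

so for the four-item TTCS, k = 4. By convention, values above approximately 0.7 are interpreted as acceptable internal reliability.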

Author contribution statement

M.B. conceived of the presented concept, the research hypotheses, as well as the original items. K.P., A.M., M.B., M.P. and D.C. collaborated on refining the items of the tool. All authors contributed to the overall experimental design. M.B., K.P. and M.L. performed the statistical analyses. M.B. took the lead in writing the manuscript. All authors provided critical feedback and helped shape the research, analysis and manuscript.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgement

The authors would like to acknowledge the valuable suggestions of Josh Green, Justin Fidock, Andrew Reeves and Carla Morelli in the preparation of this manuscript. This research was funded by Defence Science and Technology, Department of Defence, Australia.

References (70)

  • R.E. Petty et al.

    The elaboration likelihood model of persuasion

    Adv. Exp. Soc. Psychol.

    (1986)
  • K. Parsons et al.

    The human aspects of information security questionnaire (HAIS-Q): two further validation studies

    Comput. Security

    (2017)
  • O. Stavrova et al.

    Belief in scientific–technological progress and life satisfaction: The role of personal control

    Pers. Individ. Differ.

    (2016)
  • A. Tsohou et al.

    Analyzing the role of cognitive and cultural biases in the internalization of information security policies: recommendations for information security awareness programs

    Comput. Security

    (2015)
  • R. Alavi et al.

    An information security risk-driven investment model for analysing human factors

    Inf. Comput. Security

    (2016)
  • M. Bada et al.

    Cyber security awareness campaigns: Why do they fail to change behaviour?

  • J.M. Blythe et al.

    Unpacking security policy compliance: The motivators and barriers of employees’ security behaviors

  • N. Bos et al.

    Effects of four computer-mediated communications channels on trust development

  • B. Bulgurcu et al.

    Information security policy compliance: an empirical study of rationality-based beliefs and information security awareness

    MIS Q.

    (2010)
  • M. Butavicius et al.

    Understanding Susceptibility to Phishing Emails: Assessing the Impact of Individual Differences and Culture

  • D. Calic et al.

    Self-disclosing on Facebook can be risky: Examining the role of trust and social capital

  • D. Calic et al.

    Naïve and Accidental Behaviours that Compromise Information Security: What the Experts Think

  • R.B. Cattell

    Abilities: Their structure, growth, and action

    (1971)
  • S. Chen et al.

    The heuristic-systematic model in its broader context

  • J.M. Cortina

    What is coefficient alpha? An examination of theory and applications

    J. Appl. Psychol.

    (1993)
  • L.L. Couch et al.

    The assessment of trust orientation

    J. Pers. Assess.

    (1996)
  • T. Devos et al.

    Conflicts among human values and trust in institutions

    Br. J. Soc. Psychol.

    (2002)
  • P. Dourish et al.

    Security in the wild: User strategies for managing security as an everyday, practical problem

    Pers. Ubiquitous Comput.

    (2004)
  • S. Dymond et al.

    Safe from harm: learned, instructed and symbolic generalization pathways of human threat-avoidance

    PLoS One

    (2012)
  • S. Egelman et al.

    Scaling the security wall: Developing a security behavior intentions scale (sebis)

  • S. Egelman et al.

    Does my password go up to eleven?: The impact of password meters on password selection

  • L.R. Fabrigar et al.

    Evaluating the use of exploratory factor analysis in psychological research

    Psychol. Methods

    (1999)
  • S. Frederick

    Cognitive reflection and decision making

    J. Econ. Perspect.

    (2005)
  • J.P. Friesen et al.

    Seeking structure in social organization: Compensatory control and the psychological advantages of hierarchy

    J. Pers. Soc. Psychol.

    (2014)
  • E. Girden

    ANOVA: Repeated measures

    (1992)
Cited by (13)

    • Why people keep falling for phishing scams: The effects of time pressure and deception cues on the detection of phishing emails

      2022, Computers and Security
      Citation Excerpt:

      Previous literature suggests that successful phishing email detection may be linked to a bias towards System 2 decision making processes rather than System 1 decision making processes. Several studies have shown that increased impulsivity in decision-making, as measured by Frederick's (2005) Cognitive Reflection Test (CRT), is associated with poorer phishing email detection (Butavicius et al., 2016; Butavicius et al., 2020; Parsons et al., 2013) but not with spear phishing detection (Butavicius et al., 2016). Generally speaking, both attentional and motor impulsivity have been linked with risky cyber security behaviours such as clicking on links in emails from an unknown source (Hadlington, 2017).

    • Exploring the factors that influence the cybersecurity behaviors of young adults

      2022, Computers in Human Behavior
      Citation Excerpt:

      Eventually, these can manifest as cyberattacks and hacking activities, including sending malware, phishing, and social engineering (Farooq et al., 2015; Ricci et al., 2019; Shaik & Shaik, 2014). There is near consensus in some of the current research that understanding users' cybersecurity behaviors (CSB) is important for identifying the measures and factors that can help reduce cyber threats (Alshamrani et al., 2019; Butavicius et al., 2020; Cain et al., 2018; Gratian et al., 2018; Hassandoust & Techatassanasoontorn, 2018; Iriqat et al., 2019; Shappie et al., 2020; Thompson et al., 2017; Tsai et al., 2016; Zwilling et al., 2020). CSB refers to the extent to which an individual practices cybersecurity measures to avoid or attenuate cyber threats that they are vulnerable to (e.g., use of protection software, limited browsing of unsafe websites, carefulness, data backup) (Hong & Furnell, 2021).

    • The effect of automation trust tendency, system reliability and feedback on users’ phishing detection

      2022, Applied Ergonomics
      Citation Excerpt:

      An “inappropriate” level of trust in detection tools leads to negative financial and privacy consequences to users. For example, an employee may click on a potentially suspicious but interesting link in an email because he or she believes that his or her organization's network security controls will protect his or her actions (Butavicius et al., 2020), which will lead to a subsequent loss of trust and use of the network (Li et al., 2014; Reeder et al., 2018). Therefore, it is necessary to study the influence of automation trust.
