Introduction

Engineers in hazardous industries regularly face choices that trade off expenditure against potential impacts on others. These are typically framed as issues of risk management. Despite literally dealing with matters of life and death, considerations of safety and reliability are largely mute when it comes to ethics and morality (Roeser, 2006; Vanem, 2012). The question of why accidents should be prevented is seemingly taken for granted within the extensive literature on accident causation and prevention (Ratilainen et al., 2016). When safety is seen as needing a justification, the literature leans on either the business case for safety (Fang et al., 2004; Veltri et al., 2013) or compliance with regulations (Nielsen & Parker, 2012). Yet the accident literature contains many cases where professionals, including engineers, knew of the potential for failure and chose to proceed anyway. The Ford Pinto case is frequently taken as a landmark narrative of corporate amorality: the car went into production despite the known potential to kill occupants in the event of a collision (Birsch, 1994). More recently, the Boeing 737 MAX aircraft continued to be rolled out to airlines despite knowledge of significant design flaws (Herkert et al., 2020).

The ethical dimensions of engineering work are distinguished by the collective nature of decision-making, the complexity of causal chains, and the fact that decisions are typically taken under uncertainty (Doorn & van de Poel, 2012). Following Jonas (1984), these characteristics of engineering ethics have been discussed principally in terms of their implications for how we conceptualize and assign responsibility in cases of failure, rather than through a concept of ethics informed by traditional philosophy. In what is commonly referred to as the “many hands” problem (Thompson, 1980; van de Poel et al., 2012), decisions in the design and operation of complex sociotechnical systems are distributed across many actors, and so it can be difficult, if not impossible, to assign responsibility in cases of failure. Scholars also describe responsibility as temporally situated, acknowledging that engineers can act to avoid blame (so-called backward-looking responsibility), or take decisions that serve the long-term integrity of an asset (so-called forward-looking responsibility) (van de Poel, 2011), also articulated as “preventative ethics” (Harris, 2008) or “virtue-responsibility” (Kermisch, 2012).

While important, these debates center on a limited definition of what ethics means. In this article, we define engineering ethics as a form of professional ethics (Lynch & Kline, 2000), in which we can consider both the outcomes of technological development and the practice of technological development itself (Swierstra & Jelsma, 2006). As part of a broader STS tradition of examining the doing of scientific work, this directs attention towards “the complexities of engineering practice that shape decisions on a daily basis” (Lynch & Kline, 2000). Some scholars have suggested that “business as usual” domains of technological development are free from ethical consideration (Grunwald, 2000); others, however, have demonstrated that, even in low-level design cases, codes and standards do not completely prescribe a course of action, and so designers are not relieved of the need for ethical reflection (van de Poel & van Gorp, 2006). Davis (1991) argued that an engineering code of ethics would set out “the rules of the game” between practitioners, resolving many of the pressures and coordination challenges they face. Coeckelbergh (2006, p. 237) called for the fostering of “moral imagination” among engineers, in order to “discern the moral relevance of design problems, to create new design options, and to envisage the possible outcomes of their designs”. Such a moral imagination is by no means trivial, and to a large degree runs counter to the engineering disposition created in the course of engineering education and professional life, in which “the way they see themselves is generally less expressed in terms of their relationship with people than with (material) artifacts” (Coeckelbergh, 2006).

Very few studies have addressed what practicing engineers make of their ethical responsibilities. One notable exception examined university engineers’ consideration of ethics with respect to responsibility, finding that they tended to “play down” their social responsibilities, emphasizing the commercial constraints on technological development and the impossibility of assessing the consequences of technologies in advance (Swierstra & Jelsma, 2006). We contribute to this line of inquiry, focusing on how engineers express, or mute, ethical matters in their work in both design and operations. The gas pipeline engineers we interviewed principally advocated for outcomes in technical language, and so we explore the function of this technical language with respect to ethics. While ethical language is mostly absent from engineers’ accounts, an ethical (or moral) imagination (Coeckelbergh, 2006) largely underpins their work. However, muteness on ethics can obscure the nature of the risk, leading to inconsistencies in ethical concepts across the profession, and to problems where decision-makers are not working from the same tacit ethical concepts. Next, we introduce the scholarship on moral language, muteness and cognition to situate our analysis.

Moral Language, Muteness and Cognition

Ordinarily, people self-regulate their behavior in line with their own ethical standards. However, there can be a disconnect between what an individual may think of as the right thing to do and what they do when presented with a scenario in their workplace. In what has been termed a linguistic turn in management (Werhane, 2018), scholars are increasingly recognizing how language, as connected to cognition, serves to support or suppress ethical action. Discursive practices in medicine, such as referring to patients by their ailment or bed number, may not have malicious intent but nevertheless fail to treat patients as whole persons and thus dehumanize them (Stollznow, 2008). Such linguistic dehumanization may be part of a coping mechanism that allows medical practitioners to undertake their role. In military and policing domains, euphemisms such as “collateral damage” and “friendly fire” perform a similar function.

Scholars of business contexts observe a common absence of moral language in organizations, and an absence of moral content in decision-making scripts. This is significant because “[l]anguage affects what people see, how they see it, and the social categories and descriptors they use to interpret their reality. It shapes what people notice and ignore and what they believe is and is not important” (Ferraro et al., 2005). Issue framing influences interpretation by priming certain cognitive categories which are then used to interpret the information (Butterfield et al., 2000). In describing this absence, Bird & Waters (1989) adopted the term “moral muteness”, i.e. a reluctance among technical and management professionals to describe their decisions in moral terms, even when there are clear moral dimensions. Such moral muteness is illustrated in the Ford Pinto case, where the word “problem” was forbidden and the euphemistic label “condition” was used instead (Gioia, 1992). Unemotional or euphemistic language reflects “muted or underdeveloped ethical prototypes”, which can follow from a lack of experience with ethical questions, or from a dominance of experience in which ethical questions are not treated in ethical terms (Jordan, 2009).

Professional ethics do not necessarily guard against the doing of great harm. Doctors, lawyers, engineers, and accountants contributed to the Holocaust perpetrated by the Nazis, acting consistently with the norms of professionalism and technical rationality, obeying procedures and the scientific method to the point of dehumanization and murder. In Adams’s view, administrative evil is “deeply woven into the identity of professions in public life” (Adams, 2011). It is characterized by the tendency to mask evil; the diffusion of individual responsibility across modern, complex organizational structures; and cultures of technical rationality. Professionals going about the normal course of their work may take decisions in ignorance, without recognizing the consequences, or seemingly for the greater good, and so evil acts are mostly committed unknowingly. In such contexts, individuals are rarely confronted with decisions that are black and white, and so often “a series of small, usually ambiguous choices are made, and the weight of serial commitments and of habit drives out ethical considerations over time” (Adams, 2011). Such technical rationality and diffusion of individual responsibility in complex organizational structures are observable in hazardous industry.

The concept of moral disengagement (Bandura, 1999) has been adopted to explain cases where workers violate their own moral standards in the course of their work “without feeling obliged to any kind of reparation” (Petitta et al., 2017). The small number of studies that have attended to this phenomenon point to its role in neutralizing responsibility for transgressions at corporate levels (Bandura et al., 2000; White et al., 2009), and in accident underreporting at lower levels of the organizational hierarchy (Petitta et al., 2017). Violations are typically attended to at the level of individual behaviors, pointing to mechanisms such as moral justification, advantageous comparison, euphemistic labelling, displacement and diffusion of responsibility, dehumanization of others who have been harmed, and attributing blame to others (Petitta et al., 2017). However, scholars have also demonstrated the organizational context of such violations, suggesting that measures such as an organizational priority on honesty over increasing revenue would go some way to guarding against lying (Barsky, 2011). In a safety context, as many as 80% of workplace accidents may go unreported due to production pressures and job insecurity (Probst & Graso, 2013).

A consistent observation across these research traditions is that euphemistic language or “muteness” on ethical matters is not innocuous, but systematically supports, at best, blindness to ethical matters and, at worst, their blatant disregard. This risk is particularly heightened within modern, complex organizations underpinned by technical rationality, where individual decisions that have an ethical dimension can be taken without raising alarm bells. The following takes up the case of ethical language among engineers to (1) identify how engineers express, or mute, ethical matters in their work, and (2) assess the degree to which we should consider moral muteness in engineering as reflective of administrative evil.

Methods

Research Design

The findings in this article are drawn from a qualitative study of the holistic attributes that help gas pipeline engineers to make good decisions in conditions of uncertainty (Hayes et al., 2021). We investigated these holistic attributes via semi-structured interviews. Towards the end of each interview, we asked interviewees: Do you see an ethical dimension to engineering decision-making? What professional qualities do you think help drive the right choices? Can you give me an example? In some cases, interviewees raised the ethical dimension of engineering decision-making themselves earlier in the conversation. Our analysis focuses on these observations about engineering ethics.

We conducted the interviews between August and October 2020. We used Skype due to travel restrictions and social distancing requirements during the COVID-19 pandemic. While synchronous video-based interviews do not raise significant concerns in terms of rapport or authenticity (Sullivan, 2012), all interviews were conducted by two members of the research team to counteract some of the potential for a loss of meaning and to manage the risk of any failures of the technology (such as a loss of connection by one of the interviewers). This work was approved by the relevant university human research ethics committees.

Participants

Recruitment for the interviews targeted practicing and recently retired pipeline engineers in technical roles (rather than managerial or field-based personnel). Most interviewees had spent their professional career working in the downstream gas sector in Australia, although some had international experience. Specific duties included developing asset integrity management plans, running inspection activities and managing repairs, designing new facilities and modifications to existing facilities, supervising construction of new facilities and modifications, undertaking risk assessments, assessing engineering work done by others, and supervising other engineers. Participants’ engineering disciplines spanned mechanical, materials, process, chemical, civil, structural, and electrical and instrumentation, based on their initial qualifications as described to us in interview. All interviewees had undertaken accredited engineering degrees. Practicing engineers in Australia are not licensed in most states, and ongoing professional development is largely voluntary, although these requirements are changing.

Interview participants were recruited in two ways: by direct invitation to pipeline engineers known to the research team as a result of a decade of previous research with this group, and via email invitation from a mailing list provided by industry partners. Employing organizations are all Australian-based, although some of the operating companies are wholly owned subsidiaries of overseas companies and some consulting firms offer their services internationally. This resulted in a total of 41 interviews. No population data are available regarding gender, discipline, or employing organization for the target group as a whole, but the interviewee group is broadly representative based on our general knowledge of this group. Details are shown in Table 1.

Table 1 Interviewee demographic details

All interview recordings and transcripts were assigned a project code and cannot be attributed to specific individuals. In the Findings, we identify individual interviewees by their project code.

Data Analysis

With the consent of participants, discussions were audio-recorded and transcribed by a professional transcription company for analysis. For the purposes of this article, the authors worked together to analyze the transcripts using thematic analysis (Ezzy, 2013), informed by our reading of the literatures on engineering ethics and on moral muteness and cognition. We investigated how pipeline engineers conceptualize ethics in engineering work; the organizational context of their ethical considerations; and their use of language to address ethical matters.

Findings

Engineering Ethics as Connected to Public Safety

Many engineers were emphatic that engineering has an ethical dimension: “Of course, yes” (I26). Another said: “Without a doubt” (I29). For others, the ethical obligations of engineers were important, but often “implicit” (I01). Most interviewees connected engineering ethics to public safety. In the words of one, “Safety is all about protecting people, and that’s fundamentally an ethical issue” (I05). Another said that engineering is ethical if we are talking about “consequences that affect the general public” (I30). And another: “We need to be selfless and work for the public not for our personal benefit or even our client” (I08). Some interviewees emphasized the significance of this component of engineering ethics, saying that it was “core” to doing engineering work, as opposed to a side consideration: “Our core business is taking public health and safety seriously … so I guess it aligns very much with the ethical side of things” (I25). Another said: “I think as an engineer our role is to make sure things are safe and that’s fundamental. At the end of the day I just think that’s what we do. I don’t think that’s a side issue” (I15). Some interviewees expanded on what this commitment to protect public safety might mean in practice. They described challenges they might face with a decision involving the potential for a pipeline failure in a built-up area. When making such decisions, they needed to maintain an awareness of the bigger picture, supported by various tests, such as an engineer putting themselves in the shoes of the people who could be affected by a decision.

I think it’s just having that appreciation for the bigger picture … would you think it’s better to run a pipeline through the verge of a school, or maybe a more appropriate location across the road? And to do that you’ve got to now [drill horizontally under the road] twice to get across the road and back again, and okay, there’s a bit of cost there but have we made the right decision to do that? Well, not the right decision, but is that a more ethical decision, on the assumption that that’s going to change the measurement length [impact distance in the event of a pipeline rupture and fire] overlay onto that school, for instance. So I think for me, it’s just trying to put myself in the shoes of the party that is potentially most disadvantaged by the decisions that we make, and think if that was me, would I be accepting of this decision or not? (I24)

Some engineers felt an additional ethical burden in such cases, as the residents of these areas were not necessarily aware of the risks, or not consulted in development processes.

I think we have a responsibility to make the right decision really. There’s a lot of people [involved in decision-making], whether it’s within an organization or, for instance with this suburb development, the one party that isn’t in that room are the residents. They’re not represented … so we as engineers are the ones that are making decisions on behalf of parties that are ultimately going to be affected. (I38)

One engineer described taking on a kind of collective, humanistic ethics, emphasizing the connection between his work, his family and society in general. Using such non-technical language to describe engineering work is unusual, as we will discuss later.

When you design something, and even when … there’s a small percentage that what you’ve designed led to such a catastrophic failure, then the human part of it that kicks in. You say, as a human, “I shouldn’t do something that can end so badly.” Because I might be a victim of somebody else’s bad design, bad choices. My family might become somebody else’s victim of that. We’re all part of the same society, it’s not me, you, my family; it’s like a spiderweb. We’re all connected. You think that you are bending a rule somewhere or breaking the rule somewhere, but you don’t understand that the whole system is working together. (I39)

In rare cases, engineers referred more graphically to the potentially disastrous consequences of decisions, in terms of loss of life and damage.

If something went wrong, if we had a pipeline rupture, or something, in a city and people were injured or killed, and houses were burnt, that sort of stuff, that’s a disaster … as an engineer, you’ve got a responsibility to the community. If you’re building a bridge you need to design the bridge so that it won’t fall down, if you’re looking after a pipeline, you need to maintain the pipeline, so it won’t blow up. (I17)

It’s quite obvious to the majority of engineers who work in the types of industries we’re in that a poor decision on their part could lead to a serious problem—deaths, destruction, fires … the reason you take on being an engineer, in a way, is often because you want to do good for society, and you want to build things and make them move, and deliver services to people, and have everything done in a very safe way. And it fundamentally undermines the values that you’ve got in pursuing that job to make unethical decisions. I don’t know that there’s a strong moral code like the Hippocratic oath or something like that which binds you to that, and when you become an engineer you’re anointed with this great scheme of ethics that comes with such a noble profession. (I01)

These interviewees positioned engineering ethics in the context of doing social good through the provision of infrastructure, resources, or services without harm. Interviewee I01 connects engineering ethics to the Hippocratic oath in the medical profession, but notes that, unlike the way he imagines medicine, in engineering ethics is implicit rather than openly discussed. Many interviewees referred to engineering ethics using the language of doing the “right thing”, or the imperative to take reasonable action in line with both professional and community senses of morality.

While many interviewees spoke of the ethical dimension to public safety decision-making, some extended ethics to include other types of undesirable consequences, such as harm to workers:

Absolutely I think that there’s an ethical—we have a responsibility to protect the people who work on, around, and when they’re building our assets. I think that probably overrides all the other aspects of what we do ... It’s just the right thing to do. (I19)

And to environmental protection:

Instead of just “holding paramount the safety of the public”, that obligation has been extended in recent years to include responsibility for ecological issues, and I strongly agree. (I07)

A small number of engineers could not make the connection between their professional role and ethical considerations with respect to public safety. We will discuss these cases in the final section of our findings.

Raising Ethical Matters in Technical Language

Most interviewees addressed ethical matters, particularly those pertaining to public safety, without adopting a language of ethics, i.e. without being specific about the ultimate consequences of failures in terms of deaths or public damage. Some engineers noted difficulties in speaking about ethics. One engineer explained: “You always find discussion around ethics quite challenging, to be honest. I mean in terms of defining it” (I30). Another was reluctant to answer our question on whether engineering had an ethical component, explaining: “Engineers aren’t supposed to be flowery and warm” (I32). Others noted a general absence of discussion of ethics in their workplaces. One interviewee reflected: “It is probably a hard one to talk about, and I don’t think we really have—I can’t remember conversations with engineering colleagues about ethics and, ‘Are we doing the right thing here?’” (I17). Another said: “It’s [ethics] not one that comes up in conversation a lot” (I33).

This is not to say that pipeline engineers do not grapple with issues that have ethical implications, or that the uncommon use of ethical language signals widespread unethical practices. Instead, there is evidence that the engineers address ethical matters using technical language. Any scenario in which the pipeline fails is undesirable and requires preventative measures to be put in place. While there can be significant uncertainty in the details, the types of threats to pipeline integrity that lead to failure are predictable and subject to engineering control. Practitioners work with various decision-making tools, such as standards and risk matrices, and they collect data on the state of the system, such as through inspection and condition monitoring. Appreciating the current level of risk requires a strong focus on these data and tools, in order to predict the likelihood of failure, the potential consequences of that failure, and what engineers and their organizations should do to prevent failure. At the point of assessing and advocating for a course of action, a language of ethics tells practitioners neither what they need to do nor how to advocate for this action at higher levels.

Many engineers described this decision-making context and their role in it in terms of the advice that they would give to address a fault. Rather than appealing to the ethical compass of senior managers in their organization (that is, those within one or two reporting levels of the top of the organization, where individuals hold significant financial delegations and hence decision-making authority), they seek to convince through technical “facts”. In the words of one interviewee: “If the client’s not aligned with you, you just try and objectify everything as much as possible, so they can just see it” (I26). Technical analysis and advice offer the most accuracy about the nature of the risk and render visible the “right” course of action. Another interviewee explained this through the various failure scenarios that could result from a damaged or degraded pipeline, and how these shape his use of language.

I very much use terminology of not alarmist type terminology. But just probably pure facts … when we talk about the blow up or the rupture type scenario, it’s very hard to—from a data perspective predict what will be a rupture versus what will be a leak … I use terminology like it’s highly likely to leak at this location by this certain date if you don’t repair it. Or I’ll use terminology like I predict with tool tolerances … that it may leak by this date … I might use terminology like it’s long enough to rupture. (I35)

This interviewee is making a deliberate distinction between a pipeline leak (which might result in a very small loss of material and may not have any consequences for external people or property) and a pipeline rupture (which in engineering terms is a massive failure that may result in a fire and deaths some hundreds of meters away from the pipeline). He continued:

I think that if you use that alarmist terminology it’s a bit like the boy who cried wolf. If you constantly say that something’s going to blow up, then you lose a lot of credibility. As opposed to just informing people of there’s a potential for a rupture … Because a lot of your communication is around data that’s got tolerances and data that’s got inaccuracies … I think that’s enough to get things done anyway. (I35)

In his view, the uncertainty in predicting how the pipeline might fail is such that he is more likely to get approval to take action if he focuses on the evidence that the pipeline might fail, rather than on the consequences if it does. Taking the scenario right through to the point of deaths of members of the public introduces so many uncertainties that it opens him up to having the entire chain of events contested, to the point where no action might be taken.
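
The kind of reasoning behind this carefully qualified language can be illustrated with a simple sketch. The following is our own minimal illustration, not drawn from the interviews, from AS 2885, or from any operator’s procedure: it extrapolates linear growth of a corrosion defect and shows how inspection tool tolerance widens the window of predicted failure dates, which is one reason the engineer quoted above speaks of what “may leak by this date” rather than of what will rupture.

```python
# Minimal, hypothetical sketch of defect-growth extrapolation.
# All values (leak threshold, growth rate, tolerance) are illustrative only.

def years_to_leak(depth_fraction, growth_per_year, leak_fraction=0.8):
    """Years until a corrosion defect reaches the wall-depth fraction
    treated here, purely for illustration, as the leak threshold."""
    return (leak_fraction - depth_fraction) / growth_per_year

reported = 0.50   # in-line inspection reports a defect at 50% of wall thickness
tolerance = 0.10  # but the tool is only accurate to +/-10% of wall thickness
growth = 0.02     # assumed corrosion growth, fraction of wall per year

earliest = years_to_leak(reported + tolerance, growth)  # defect may be deeper
latest = years_to_leak(reported - tolerance, growth)    # or shallower

print(f"Predicted to reach leak depth in {earliest:.0f} to {latest:.0f} years")
# -> Predicted to reach leak depth in 10 to 20 years
```

Even in this toy version, the defensible claim spans a decade, and whether the defect would fail as a leak or as a rupture depends on further criteria (such as defect length) that carry tolerances of their own.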

Another interviewee similarly contrasted technical and alarmist approaches, noting that the latter introduces risks to credibility and, again, that such an approach is typically not needed to get a risk addressed.

I have seen some people perhaps overstate the issues with a particular hazard or risk, overstated the consequences and they have lost credibility when other people in the room have sort of challenged them and sort of swayed opinion and I think, you know, then perhaps that individual is then sort of seen as a bit of an alarmist and maybe some of their views aren’t taken as seriously after that point … most people I think are—in my experience, they’ve been willing to listen to people as long as it’s a reasonable argument, you know, and based on knowledge and experience. (I40)

Many engineers linked their use of technical language and communication forms to professionalism. An individual may have their own personal sense of what would be an ethical course of action in a given scenario, but their professional judgments need to be communicated in (technical) report form. In this professional way of speaking, deaths and destroyed houses are framed as categories of consequences in a risk assessment tool, rather than as potential outcomes impacting real people.

When I speak about it [an issue that has the potential to impact on public safety] it’s generally in a report format. When you do the risk assessment there’s always—you’re talking about how much, if you allow this what’s the risk … I think it’s just being an engineer, it’s easy to think of it as risk, it’s second nature. Part of the risk analysis will include all those, like loss of public life, environmental—it will include them, but it will be in terms of risk … Unfortunately, being a technical person, yes we cannot explicitly rely on the morals. Maybe personally, for example, you’re talking about people might blow up, and maybe on a personal basis that’s what I think what would happen. But … I do need to write it in a certain way … you’ve got to write it in a professional manner. (I41)

Engineers as Holding the Technical Line of Defense

Technical decision-making is the product of teams of engineers as well as other disciplines and business areas. In making decisions about the design or maintenance of an asset, as often as not, interviewees indicated that everyone would be in agreement. Their professional values and the values of their company are essentially aligned, and so there is little conflict that would require debate over ethical matters. One commented that, in their organization, “safety is number one”, meaning that they have not been faced with “too many ethical dilemmas” (I09). Another observed that individuals seek out employers that share their values in order to avoid ethical dilemmas (I19).

However, some of our respondents identified how ethical considerations might need to be balanced against other organizational objectives, typically related to the cost implications of risk management.

Safety decisions, I get caught up a bit more in issues around who pays for protection where there’s new developments around pipelines and things like that. (I18)

You put a case forward, you know what the right answer is … I think it’s when you’re getting pressure from above about certain [targets] that you need to meet, that you might end up compromising on some of those decisions, or you might get overridden based on short-term needs of the Board … there comes a point where you have to decide whether you’re comfortable working in an organization or not, depending on the alignment of your and their values. (I26)

Reflective of the focus on technical language, some engineers had a strong sense that their role in advocating for a specific outcome that they saw as having an ethical dimension was limited to technical matters.

I think the treatment is the treatment. If there’s a particular risk that drives a particular treatment in your business, then that should be consistent until there’s maybe an information change or something that would drive a different outcome. (I24)

Ethically right is technically right, it’s the same thing. (I28)

For purely technical things, I guess if you see something, you probably have a responsibility. (I21)

This focus on technical advice served as “the line of defense” against organizational decision-making that could risk violating ethical obligations. Critically, the job of finally deciding on actions that balance public safety and other organizational requirements regarding budget and so on is seen as the job of senior management, not of technical engineers.

I won’t change my decision-making because it’s hard to get to or expensive. So, I find that that’s something that as an integrity engineer I’m always very cognizant of. I’m just not shying away because something might be hard to get to or expensive or blow budgets or all that sort of stuff. I think communicating the facts is very important. And not trying to convince yourself out of it because others might not like the decision or the information that you’re putting forward. So, from an ethical perspective I’ll always put forward what is the right thing to do, regardless of whether that’s the expensive option or not. And I’ll let other senior management within companies either accept certain risks of delaying certain locations. Or, trying to do it a different way or certain things like that. I think as my role I see it’s the line of defense, the integrity engineer is the line of defense and needs to be pretty clear from that end and let the ambiguity be managed at a higher level. (I35)

One interviewee explained that “doing the right thing”, holding this technical line of defense, can be difficult, particularly in a consulting role, as it can come at the cost of repeat business: “Doing the right thing prevents people from sticking up their hand and saying, ‘No, just a minute, I don’t think that’s good engineering’. Because too often there’s pressure for them to just shut up and get on with the job, because if we spend time re-doing the engineering, we’re going to be late, and it’s going to cost more.” (I07)

Limits of the Technical when Making Ethical Choices

While most interviewees were able to connect their decisions to ethical considerations, there were a handful who could not make this connection. When questioned, some engineers positioned ethics in terms of sustainability: “Certainly some ethical questions around, ‘How do we actually prosper in a low carbon world?’”. Another positioned it as a matter of commercial probity: “You have to then recommend against maybe giving the outsourcing to some of your friends” (I02). A third framed it in terms of guarding against modern slavery: “All our supply and the ethics behind supply chain things has become very prevalent” (I36). A minority of interviewees disputed the role of professional ethics entirely, suggesting safe outcomes are a function of systems and power in organizations. In the words of one, “I personally think it’s not a matter of ethics, but it’s the matter of implementation, and setting a[n organizational] hierarchy” (I10). Others who were aware of the ethical dimension of their own work felt that colleagues sometimes did not share this view and could focus on implementation and compliance at the cost of seeing the bigger picture: “Sometimes [people] even rely on standards, ‘It says that in AS2885 so therefore, it’s got to be safe,’ without actually applying, or without looking at the big picture, without considering what they’re actually trying to protect at the end of the day, so public safety” (I34).

In many, if not most, cases we have the sense that the engineers hold ethical considerations as an unspoken set of guiding principles that underpin their work. The trouble with limited explicit discussion of ethics is that we cannot be sure that all decision-makers have a shared appreciation of the nature of the risk, and so of the importance of a given course of action. One senior engineer (I01) described the lower profile that technical integrity and its potential public safety implications can have in organizations, in contrast to the attention paid to worker safety. In his view, the legislative framework holds the managing director and board personally liable for worker health and safety, and unsafe practices are widely publicized when cases are brought before the courts. Far fewer cases of engineering and technical failure have occurred in Australia, and in such cases penalties are much lower and do not carry the same potential personal consequences for senior management. In terms of organizational hierarchy, he told us that in his organization, unlike workplace safety, there is no asset safety manager at the highest level, and so senior management awareness is lower. This interviewee also observed an issue with metrics and rewards.

We have individuals who take personal responsibility for bits of kit that they regard as their responsibility, but that’s based on them almost volunteering to be the custodian of them rather than something that’s enforced via management structures. And enforced via reward systems … Those things are all applauded at a technical level, but they’re not understood by our management because we don’t track them, we don’t report them, and we don’t make the same degree of hullabaloo about them as we do health and safety awards. (I01)

The profile of action on public safety matters is a matter of language, too. Engineers adopt technical language when conducting risk assessments in order to discuss the details of causation and prevention, and to categorize and classify risks for the purposes of comparison. Another key risk parameter used in pipeline design is the extent of the impact of the worst-case pipeline failure. Technically, this is defined as the radius at which a full-bore rupture of the pipeline would result in serious burns and death, but in the relevant Australian Standard it is euphemistically called the “measurement length”. The way in which this is used in conversation is shown in the quote above, where an engineer described a decision about a pipeline near a school. Despite this euphemistic language being used in formal communication, pipeline engineers are well aware of the term’s meaning and in private sometimes use more graphic metaphors and black humor (Hayes, 2015).

Despite this technical and even euphemistic language, some interviewees see the unspoken link to ethics in risk assessment, as this interviewee describes:

We do have a great framework for some of those decisions which involve ethical considerations, we definitely have the process in place, but how to drive it is still done by the individual teams, and they do rely on their own collective experience and view of the world. (I12)

As this interviewee also highlighted, ultimately decisions are driven by judgment. The following interviewee contested the adequacy of these broadly adopted practices, on the grounds that the nature of the risk is obscured, particularly for non-technical decision-makers under greater pressure with respect to business objectives.

I absolutely understand that you do need to temper what you say in some circumstances, but when you are in a risk assessment and it’s the key conversation that is going to be a make or break as to whether or not a risk control is implemented, I absolutely think the full picture should be shared … you can’t expect [a senior manager] to make appropriate decisions unless they are given all of the information … in really high consequence stuff, I think sometimes you have to … put the likelihood aside and talk about the consequence of it because I have seen previously … you will make it a medium by tweaking the likelihood and that can mean that some risks don’t actually go further up the tree and I think sometimes they have to. (I38)

Risk assessment tools are designed to identify probability and consequences, and depending on the outcome of this exercise (which involves judgment and grey areas) the gravest risks can be left unspoken, or at least unheard by the most senior decision-makers. In such cases, there is less justification for the avoidance of ethical language, since, as described above, it is these very people who are ultimately called upon to balance ethical considerations against other business priorities of cost and schedule. Without full awareness of the potential consequences being faced, they are in no position to make the best choice.
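
The mechanism this interviewee describes can be made concrete with a toy risk matrix. The sketch below is entirely hypothetical (a generic 5×5 matrix with invented categories and thresholds, not the tool used by any interviewee’s organization): holding the consequence fixed at the most catastrophic category, shifting the likelihood rating down is enough to produce a “medium” that never goes “further up the tree”.

```python
# Toy 5x5 risk matrix; categories, scores and escalation thresholds are
# invented for illustration and do not reflect any real standard or company.
LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost certain"]
CONSEQUENCE = ["minor", "moderate", "major", "severe", "catastrophic"]

def rating(likelihood, consequence):
    """Map a likelihood x consequence pair to a qualitative rating."""
    score = (LIKELIHOOD.index(likelihood) + 1) * (CONSEQUENCE.index(consequence) + 1)
    if score >= 15:
        return "extreme"  # escalated to senior management in this sketch
    if score >= 8:
        return "high"     # reviewed above the engineering team
    if score >= 4:
        return "medium"   # managed within the team
    return "low"

# A catastrophic scenario judged "possible" goes up the tree...
print(rating("possible", "catastrophic"))  # -> extreme
# ...but "tweaking the likelihood" down to "rare" makes it a medium,
# and it never reaches the most senior decision-makers.
print(rating("rare", "catastrophic"))      # -> medium
```

Because the consequence entry is identical in both calls, nothing in the recorded rating itself signals that lives are at stake; that information survives only if someone states it in plain language.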

Discussion

Most engineers had an ethical or moral imagination, insofar as they acknowledged an ethical component to engineering practice, chiefly with respect to protecting public safety. Their articulations reflect conceptualizations of ethics as part of professional practice in a day-to-day sense (Coeckelbergh, 2006). They focused on the everyday decisions that they make, how these relate to outcomes that could impact on public safety as well as the safety of workers and environmental protection, and what this means for their design decisions and integrity advice. They described working through challenges in a way that reflects a “new” approach rather than slavish adherence to codes. This reflects a focus on getting the best outcomes for communities that will ultimately live with the technology, rather than focusing narrowly on compliance. Some went so far as to describe a kind of humanistic ethics, in which they were motivated to do what was “right”, putting themselves and their families in the position of the community that they served. In only one case did an engineer claim that, in effect, their work was “business as usual” (Grunwald, 2000) and so ethical matters did not arise.

This focus on an ethics of practice differs in important ways from how engineering ethics has been treated in conceptual debates. Often, engineering ethics is examined as a question of how we conceptualize and assign responsibility (van de Poel et al., 2012; Kermisch, 2012). These considerations of responsibility manifest in two ways. First, ethics as responsibility is a matter of how we might assign blame in cases of failure. In our conversations with engineers, there was a striking absence of discussion of ethics in terms of blame and liability. Elsewhere, we have directly examined engineering liability, where we also found limited connection between blame avoidance and ethics (Maslen et al., 2020). Second, ethics as responsibility has been addressed in the literature as forward-looking (van de Poel, 2011) and as “preventative” (Harris, 2008). These considerations of ethics had more in common with the ideas articulated by the engineers we interviewed, as in the humanistic ethics just described.

Our interviewees typically considered their role in ethical action as a technical matter, reflected in their dominant use of technical language to discuss questions with an ethical dimension. How should we interpret the use of technical language in this context? Adams (2011) would argue that the focus on the processes and language of technical rationality, and the un-naming of ethics, are two of the pillars of administrative evil (the third being the operation of modern, complex organizations). Using this lens, the adoption of technical language is a problem. It takes the ethical language out of what are ethical decisions, and so people can lose sight of the potential consequences. Similarly, the idea of “moral muteness” (Bird & Waters, 1989) suggests that the choice not to speak about the ethics of decisions fundamentally affects the cognition of actors (Jordan, 2009).

However, it is important to keep in sight the function of technical language and cognition in maintaining the safe design and operation of an asset. Technical language supports thinking through the nature of the risk and what can be done in response. The vocabulary of ethics provides no assistance with this. In cases where there is the potential for contestation over the “right” course of action, technical artifacts act as a “prop” (Goffman, 1959) to reason through different framings of risk (Maslen & Ransan-Cooper, 2017). Reflective of this, engineers who recognized the potential for conflict over ethical matters also saw themselves as holding the technical line of defense. They needed to speak up on what they identified as “purely technical matters”, even if this was difficult, because they knew that their arguments would be scrutinized. In this way, the technical, at least in part, becomes a framework for engineers to conceptualize and express ethical matters, even if not in ethical language. That is, technical language extends “talk” on ethics to a significant degree.

A technical approach to ethical matters is closely linked to engineers’ professional identities. They saw the technical focus as not only the way that they needed to get things done, but also what it means to be an engineer. They are technical professionals who operate principally in the domain of material artifacts (Coeckelbergh, 2006), and so they need to limit their expression to this domain. To do otherwise is largely unnecessary in their view and also introduces professional risk to them as individuals, due to being seen as an alarmist for stepping outside the ethical scripts given to them within their organizations and profession.

While the engineers mostly articulated the outcomes that they were seeking to avoid, they could be “mute” when it came to considering the most disastrous consequences. The action-oriented technical focus on how safety is to be achieved is a core professional achievement, but it comes with a potential downside. With its justification unstated, safety can become fungible in organizations under financial pressure, and so subject to normal business case considerations, i.e. treated as a cost-benefit matter rather than a question of right or wrong. This is particularly problematic for the prevention of rare events, where there may not be a business case for protecting human life (Hopkins, 2015). The engineers do not talk about loss of life often, though encouragingly some said that, in exceptional circumstances, they might need to in order for the people ultimately responsible for a decision (probably non-technical) to fully appreciate the nature of the risk that they may be taking.

The limits of technical language are most clear where euphemistic language is adopted to discuss matters that have consequences for human life, as in the term “measurement length” to describe the impact distance in the event of a pipeline rupture and fire. This use of language may be a practical strategy for undertaking engineering work in the face of death and destruction, akin to the use of euphemistic language among medical practitioners, the military, and the police (Stollznow, 2008). Neat risk-related categories allow engineers to focus on their work, the smaller tasks for which they are specifically responsible within complex, distributed, and uncertain decision-making contexts (Hayes, 2015), rather than being overwhelmed by the potential implications of the choices they must make.

To address the risks of euphemism and the often implicit treatment of engineering ethics, we suggest some areas for work. On euphemism, there have been some moves among senior engineers in the industry to express consequences in plain language, rather than via terms like “measurement length” adopted in the standard. We see such changes as productive for the treatment of high consequence scenarios. Revision of the Standard along these lines would formalize such a change in language throughout the profession. We also suggest that continuing professional education could help engineers explicitly reflect on their ethical obligations. With this in mind, we are currently developing and testing case-based learning approaches to foster consideration of ethics, alongside other professional competencies, among practitioners (Hayes & Maslen, 2020; Maslen & Hayes, 2020a, b). We suggest that narrative-based learning is particularly strong in developing an ethical or moral imagination (Hayes & Maslen, 2015).