This essay elaborates three key claims made in my 2018 book, Exceptional Technologies. The claims are that philosophy of technology stands to benefit from: (1) a renewed sense of the transcendental as an approach to argument or method; (2) attention to artefacts and practices that show up as paradoxical exceptions to our received sense of what constitutes a ‘technology’; (3) experimentation with different pictures of method. I explore these claims through different examples and emphases than those that feature in the book, also taking into account criticisms I have (gratefully) received.

Forwarding these claims does not, I hold, amount to placing philosophy of technology at a ‘crossroads’. I take exception to that picture because I think it cedes too much to a picture of ‘Technology’ (with a ‘capital T’) as a kind of road down which we are travelling (Smith 2018: 1–5). The key claim developed in this essay, then, is this: pervasive underlying pictures of a ‘road’, ‘path’, ‘track’ or ‘way’ on which various ‘turns’ can be made, or on which ‘crossroads’ appear, while ostensibly trivial, can in fact be deeply limiting for the types of philosophy of technology we envision and practise. This essay takes exception to those pictures and argues for experimentation with different ones. The aim in doing so is to emphasise philosophy of technology’s status, not ‘on’, ‘at’, or ‘as’ a crossroads, but as an exciting and multidimensional problem space, to be explored and experimented with on many different levels and in many different directions at once.

[T]o make objection to, find fault with, disapprove; also (chiefly with at), to take offence at. Formerly sometimes without preposition, to take (an) exception: to make (an) objection, to object or complain (that). (OED 2019).

This is how the OED defines ‘taking exception’. I have called this essay ‘Taking Exception’ and I played on the notion twice in the previous paragraph. There are at least three dangers here. First, word play: do I have in mind a notion of ‘taking exception’ that is mere rhetorical play on the title of Exceptional Technologies? Second, seeming contrarian: is the act of ‘taking exception’ invariably a destructive or anarchic one, emblematic of a brittle or querulous character? Third, ingratitude: the seminar on which this collection of papers is based, held at Radboud University in Nijmegen in 2018, was called ‘philosophy of technology at the crossroads again’—isn’t it a bit rude to take exception to the key image involved here?

The way out of these dangers is to emphasise that ‘taking exception’ is usually a transitive act: ‘taking exception to’ some direct object or process. The title of this essay is somewhat misleading, then, because it gives the impression of a total form of ‘taking exception’, to anything and everything. Such a totalising form might be logically possible (aside from how desirable or liveable it might be). It is not the kind of ‘exception taking’ this essay seeks to valorise, however. This is because the aim of this essay is to examine and perform the act of ‘taking exception’ as a critical gesture that is always localised and determinate: when we take exception to something, we are taking exception to something specific, for specific purposes. Far from being an instance of contrariness or ingratitude, then, ‘taking exception (to something)’ can instead be read as an act of situated and focused critical attentiveness towards a direct object or process.

The direct objects of this essay are pictures of ‘Technology’ as a ‘road’, ‘path’, ‘track’ or ‘way’ down which we are travelling. To the extent that they valorise these pictures, other approaches to philosophy of technology will be indirect objects of critique. In Exceptional Technologies, I made these points by leaping off from the picture of a road to roam relatively widely through the recent history of philosophy and technological case studies. Here, I will try to do it through examples that themselves make ‘roads’, ‘paths’, ‘tracks’ or ‘ways’ central.

Part one argues for ‘trivialising the transcendental’. In philosophy in general and philosophy of technology in particular, ‘transcendental’ is often a dirty word (Achterhuis 2001: 3; Verbeek 2005: 7; Brey 2016: 129). Part one takes exception to this and argues that a sense of the transcendental is not merely relatively common throughout the history of philosophy, but that it is a process of making sense of things that enables acts of philosophical ‘exception taking’ par excellence. As mentioned above, ‘taking exception’ is, at its best, a transitive act. At its very best, however, it is not merely transitive, but transcendental. This is because it involves taking exception, not merely to something considered ‘given’ once and for all, but to the conditions of possibility under which it is given. To make this case, I compare the sense of the transcendental argued for in Exceptional Technologies and the sense of the transcendental as a ‘logic of design’ argued for in Luciano Floridi’s The Logic of Information.

Part two begins by outlining the concept of ‘exceptional technologies’. ‘Exceptional technologies’ are: (1) artefacts and practices that show up as limit cases for our received pictures of what constitutes a technology; (2) that themselves ‘take exception’ to the limits of our pictures and (3) that challenge us, by virtue of this, to reflect on the constitutive conditions involved in technologies in thoroughgoing ways (Smith 2018: 5). I focus on the case of autonomous vehicles to develop these points. The famous ‘Trolley Problem’ is often the picture through which philosophers are asked to approach this example (Bogost 2018a, b). I take exception to this, arguing in favour of a subversive use of Google Street View as a way of gaining better purchase on logical, epistemological and ontological issues obscured by the trolley problem.

Part three concludes with a focus on the opening passage of Heidegger’s famous ‘Question Concerning Technology’ essay. Heidegger asserts that philosophical questioning ‘builds a way’ (1977: 3). Through a focus on a particular limit case (global warming and the Anthropocene), I argue that this picture is untenable. I argue that philosophy of technology should instead be considered as a multidimensional problem space where, contra Heidegger, it might be better to focus ‘our attention on [apparently] isolated sentences and topics’ (1977: 3).

1 Trivialising the Transcendental

Exceptional Technologies argues for a renewed sense of the transcendental as an approach to argument or method. In the history of philosophy, ‘transcendental’ is a famously loaded term, to the point where some authors think it might simply have become overdetermined and unhelpful.Footnote 1 Most famously, the term is associated with Kant’s ‘transcendental idealism’, as developed in the Critique of Pure Reason. ‘Transcendental’, in this sense, is meant to describe not what is ‘beyond’ or ‘above’ experience, but the set of formal conditions that necessarily and universally obtain prior to experience and that make experience and knowledge possible: the set of formal conditions that experience actualises (Vitali-Rosati 2012: 24–25).Footnote 2 In some contemporary contexts, including philosophy of technology, however, the term is also sometimes confusingly used as a synonym for that which is ‘transcendent’ or ‘out of this world’ (see, for instance, Hallward 2006: 74–75).

Exceptional Technologies argues for a sense of the transcendental that is not reducible to these senses. Instead, it argues for a ‘meta-philosophical’ sense of how it has developed as a theme in the history of philosophy.Footnote 3 Transcendental philosophy, in this sense, is not reducible to Kant’s ‘transcendental idealism’. Instead, it is philosophy that addresses the following question in different ways: ‘given X, what are the conditions for the possibility of X?’ Following from this, approaches ranging from those of Kant, Hegel and Marx, through to more contemporary approaches from thinkers including Foucault, Derrida, Deleuze and Malabou can, I argue, be characterised as ‘transcendental’ to varying degrees, and are to be distinguished as such by how far they go in problematising the key terms at work in this question differently (‘given’, ‘X’ (that is: ‘objecthood’), ‘conditions’, ‘possibility’).

It is the spirit and example of Kant’s philosophy that counts for more than the letter of its doctrine on this account (on this, see Kant 2004: 44). The spirit of Kant’s philosophy consists in commitment to critical inquiry into conditions of possibility.Footnote 4 The letter consists, at the very least, in the series of often quite illuminating mistakes and anachronisms that transcendental idealism makes about the priority of a particular account of the human mind over the world as it appears, and in the paradoxes and inconsistencies that Kant’s ‘architectonic’ approach commits him to (most notoriously: the doctrine of ‘things in themselves’).

The issue of Kant’s example is more complex. According to a received picture, Kant is not an experimental or dynamic thinker, open to fallibilistically or pragmatically revising his philosophy (in terms of new ‘givens’ or ‘conditions’, for instance). Instead, he appears as the thinker of static anthropocentrism par excellence, whose transcendental idealism seeks to make human intelligence the measure of all things and for whom reality cannot be thought outside of how it appears in correlation with the forms and categories of the human mind (Meillassoux 2006). This received image is not what I take to be exemplary about Kant. Instead, the example I take concerns how he opens up philosophical problems concerning the ‘given’, ‘objecthood’, ‘conditions’ and ‘possibility’ that point beyond him and that he cannot shut down or control (see Deleuze 2004: 171). On this account, the palpable anthropocentrism of Kant’s transcendental idealism in fact covers over and shuts down a deeper opening up of transcendental inquiry into conditions of possibility that ought to be read as his real example and legacy. Given certain received ways of reading Kant, this reading might appear paradoxical. This does not mean it is unwarranted.Footnote 5

In philosophy of technology since the ‘empirical turn’ of the late 1990s and early 2000s, ‘transcendental’ has typically been viewed as a dirty word, indicative of the pitfalls of so-called ‘classical’ philosophy of technology. This move typically involves criticising ‘classical’ philosophers for reifying ‘Technology’ into something sublime and otherworldly, with figures such as Hannah Arendt, Günther Anders, Karl Jaspers and, above all, Martin Heidegger, cited as exemplars of the ‘classical’ approach (Achterhuis 2001: 3; Verbeek 2005: 7; Brey 2016: 129). Problematically, however, any notion of an empirical turn away from the classical approach seems to involve a parallel gesture of reification towards ‘classical’ philosophers. This is because it involves reifying the ‘Transcendental’ into a sublime and otherworldly realm to which these philosophers are supposedly committed. In contrast, I hold that our sense of the transcendental should be de-reified: ‘transcendental’ is not a noun connoting a realm of entities ‘beyond’ or ‘above’ the empirical; it is a process of making sense that should be read adjectivally and that can be focused in empirically acute ways (see Smith 2018: 11–33).

The ‘transcendental’, on this account, is not something to be ‘sublimed’ (see Wittgenstein 2009: 46–47). On the contrary, it should be trivialised as a relatively common theme in the history of philosophy.Footnote 6 It occurs, in some measure, wherever critical exception is taken to how a question or issue is framed or ‘given’ (whether in philosophy or elsewhere) and it is a process of making sense of things that makes different pictures possible. A sense of the transcendental is, then, what enables acts of philosophical ‘exception taking’ par excellence. This gesture, I take it, is manifested in Kant’s philosophy in ways that have had enduring effects for the history of philosophy and is what constitutes one of his real and enduring examples.Footnote 7

*

Let me close this part with a reflection on how this approach compares with Luciano Floridi’s elaboration of the transcendental as a ‘logic of design’. In his recent book The Logic of Information, Floridi writes:

Imagine that one is interested in understanding the genesis of a system, how it came to be, or what brought it about. This is the purpose orienting the choice of LoAs [Levels of Abstraction]. One then focuses on modelling the conditions of possibility of a system under investigation. This is clearly a past‐oriented approach, consistent with causal, genetic, or genealogical forms of reasoning that lead to a particular modelling or conceptualisation of reality, with the identification of necessary and (perhaps) jointly sufficient conditions, with investigations about what must have been the case for something else to be the case. The approach goes hand in hand with an interest in the natural sciences, or any discipline that tries to mimic their logic. One looks at a system and tries to understand what brought it about…. [T]his conceptual logic finds its roots in Kant’s transcendental logic. (Floridi 2019: 190).

Floridi also states that we should aim to be ‘….more, not less Kantian than Kant’ (2019: 191) and that transcendental logic is ‘implicitly used by many disciplines’ (2019: 188). He also identifies transcendental logic at work in the writings of thinkers as varied as C.I. Lewis, Husserl, Foucault, Carnap and Wittgenstein (2019: 188–205). Each of these points is, I think, consistent with the approach I outlined above. Notwithstanding this, however, Floridi does go on to identify an important deficiency of transcendental logic:

Carnap, analytic philosophers, and structuralists and deconstructionists of all kinds seem to forget that structures [of systems] are not only unveiled, investigated, known, discovered, reconstructed, or deconstructed. They are also, and perhaps today above all, built, constructed, engineered, in a word: designed. What does this mean? (2019: 198, original emphasis).

For Floridi, the meaning is very clear: transcendental logic, insofar as it is regressively directed towards past conditions of possibility, fails, along with Hegelian dialectical logic, to address the future-oriented ‘conditions of feasibility’ of a system. The upshot, for Floridi, is that a ‘logic of design is a third conceptual logic that is still missing and needs to be developed’ (2019: 188). Instead of being regressively focused on past conditions of actual systems, this logic would be focused on the future requirements of possible (or virtual) ones.Footnote 8

This, I think, is an acute insight and one that is integral to Floridi’s interesting account of philosophy as ‘the ultimate form of conceptual design’ (2019: xi, original emphasis). But Floridi’s own response (2019: 204) appears promissory at best:

[W]hat interesting systems one may design… is a matter of talent, intuition, hard work, opportunity, good fortune, free thinking, and imagination, and many other variables that are hard to pin down. As Donald Schön correctly put it, the designer (or indeed the logician and the philosopher and anyone who creatively designs solutions) is:

like a chess master who develops a feeling for the constraints and potentials [affordances] of certain configurations of pieces on the board.

Can’t we do considerably better than vague invocations of ‘chess mastery’ here? The kinds of variables Floridi mentions may indeed be ‘hard to pin down’. Notwithstanding this, however, can’t a very particular design task be focused on the variables themselves? Namely: the task of designing the kinds of ‘interesting systems’ that might be relatively better at training, eliciting and honing them. To pursue Floridi’s (imperfect) analogy with chess: what are the types of ‘interesting systems’ (that is: challenges, paradoxes, problems or puzzles) that can help chess players gain mastery of the rules?Footnote 9

Floridi states that ‘….designing is not an empirical kind of experimenting (contrary to widespread methodological claims) but more an independent epistemic praxis through which one can acquire genuine ab anteriori knowledge’ (2019: 193). By ‘ab anteriori’, he means ‘weakly a priori’ (2019: 172), in the sense of a revisable a priori (2019: 193). In what ways, however, might designing be a matter of both? That is: in what ways can it be both a matter of an empirical kind of experimenting and a relatively ‘independent epistemic praxis’ involving insights into logical conditions (whether ‘transcendental’, ‘dialectical’, or ‘feasibility’-related)?

Reconsider the chess analogy. Particular types of design (including philosophy as ‘conceptual design’) often involve elements of empirical experimentation. These are like the pieces we move around the board. Floridi’s claim, however, is that, just like chess, types of design also involve rules that act as constraining conditions for the praxis and into which we can have independent insight. When Floridi states that designing is ‘more an independent epistemic praxis’ (my emphasis), he seems to be implying that these rules, while always revisable, are more fundamental for determining what a particular game is and how it should be played.

In the case of chess, this seems logical: chess can conceivably be played with all sorts of entities—from material tokens like rocks and coins, to bounces of a ball, to abstract numbers—and so the nature of the game seems to be more fundamentally constituted by its rules (Kenny 1976: 74–76). But experience shows that chess is not the only possible game, nor the only game worth playing. On the contrary, we are required to play many different kinds of games, according to many different types of rules. Indeed, we are often required to play these games at once, sometimes without knowing whether we are more ‘pawn’ or ‘player’ (see Wittgenstein 2009: 36).
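To make the rules-versus-tokens point concrete, here is a minimal, purely illustrative sketch (in Python; the names Game, toy_rule and so on are my own inventions, not drawn from Kenny or Floridi). The same rule set can be instantiated with rocks, coins or bare integers, which is the sense in which the rules, rather than the pieces, seem to constitute the game:

```python
# Purely illustrative sketch: the 'game' is identified with its rules,
# while the tokens used to play it are interchangeable. All names here
# (Game, toy_rule, place, move) are invented for illustration.

class Game:
    """A game defined entirely by its rules, not by its pieces."""

    def __init__(self, rules):
        self.rules = rules           # a function: (state, move) -> bool
        self.state = {}              # squares mapped to whatever tokens are used

    def place(self, square, token):
        self.state[square] = token   # the token can be anything: a rock, a coin, a number

    def move(self, src, dst):
        if not self.rules(self.state, (src, dst)):
            raise ValueError("illegal move")
        self.state[dst] = self.state.pop(src)


def toy_rule(state, move):
    # Stand-in rule: a move is legal only from an occupied square to an
    # unoccupied one. Real chess rules would go here instead.
    src, dst = move
    return src in state and dst not in state


# The 'same game' played with a rock, a coin, or a bare integer:
for token in ("rock", "coin", 1):
    g = Game(toy_rule)
    g.place("a1", token)
    g.move("a1", "a2")
    print(g.state)                   # {'a2': 'rock'}, {'a2': 'coin'}, {'a2': 1}
```

The sketch is deliberately trivial: swapping the tokens changes nothing about which moves count as legal, while changing the rule function changes the game entirely.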

The point here is not to deny Floridi’s claim that designing can be an ‘independent epistemic praxis’: games have rules and it is (logically) possible to learn the rules without experimenting with the game. Instead, it is to render moot the priority he assigns to the independence of designing as an ‘epistemic praxis’ (the intentional emphasis of his ‘more’) and to unfold the relation between ‘empirical experimentation’ and ‘epistemic praxis’ (the unintentional relation that his ‘more’ implies). This is not to deny that games have rules that might be learned independent of playing them. It is to say that, where a plurality of (actual and possible) games are at play, experience and experimentation are often required to address questions like this: Which particular game is being played here? How can its particular rules be observed or revised? Is the game worth playing? How can we stop playing it? How can new games be invented? In what ways does such invention occur out of old games?

The really important feature of Floridi’s approach to the logic of the transcendental (but also to dialectical and ‘feasibility’ logics of design) is, I think, his focus on systems, not experience. This formal approach carries the advantage of making it clear that transcendental logic is not reducible to a form of Kantian ‘transcendental idealism’ centred on the subject and the criteria it brings to experience.Footnote 10 What is missing from Floridi’s approach, however, is a sense of how a focus on particular kinds of ‘interesting system’ could help experimentally develop a future-oriented sense of ‘conditions of feasibility’: that is, a focus on how designing as a relatively ‘independent epistemic praxis’ can be focused on empirical experimentation with particular types of ‘interesting systems’ (independent of any concern for whether it must be so focused).

Going well beyond chess analogies, some relevant questions to explore in this sense would be: What makes a system ‘interesting’? What happens when we take as a focus not merely past actual systems that work, but also, for instance, merely imagined, failed and impossible ones? Under what conditions can these be ‘interesting’ and for what purposes? What are the ‘interesting systems’ that not merely allow us to ‘gain mastery’, but that force us to revise the rules of the games we take ourselves to be playing? And: what are the ‘interesting systems’ that render any sense of ‘mastery’ moot?

2 Exceptional Technologies (as ‘Interesting Systems’)

The contention I want to develop in this part is that what I have called ‘exceptional technologies’ can be viewed as such ‘interesting systems’. This is not to say that exceptional technologies are necessary or sufficient for arriving at a future-oriented logic focused on conditions of feasibility. It is to say that they are the kinds of fortuitous contingencies that might help develop the kinds of ‘hard to pin down’ variables that Floridi mentions above, as part of a broader set of competencies and methods.

I say ‘fortuitous contingencies’ here because exceptional technologies are as much produced and found as ‘designed’. On the one hand, exceptional technologies are the kinds of artefacts and processes that are produced by systems when, for example, these systems remain ‘merely conceptual’ or ‘imagined’, are badly designed, don’t work, or when they work in unexpected ways. ‘Exceptional technologies’ are, in short, the very particular types of artefacts and processes that are produced and found when our systems take exception to the logic of our apparently ‘well designed’ pictures concerning how these systems ought to work. On the other hand, I hold that it is logical to try to design a new concept of ‘exceptional technologies’ to describe such artefacts and processes. This is to avoid writing them off as so much ‘triviality’, ‘redundancy’, ‘noise’, or ‘waste’ and to avoid reducing them to preconceptions we may harbour regarding well-established concepts with which ‘exceptional technologies’ bear a family resemblance, such as ‘thought experiments’, ‘science fiction’, ‘outliers’, or ‘artworks’ (see Smith 2018: 130–131).Footnote 11

In Exceptional Technologies, I make these points by arguing that philosophy of technology, especially since the ‘empirical turn’, has tended to rely on a relatively unclarified common sense of what constitutes a ‘technology’. In the book, I argue that this is a corollary of the fact that ‘transcendental’ has, since the empirical turn, generally been used as a pejorative term: where wide-ranging enquiry into the conditions that constitute technologies is blocked, we are primed to fall back on preconceptions. As discussed above, the typical claim made here is that so-called ‘classical’ philosophers of technology focused on a reified sense of ‘Technology’ and that they avoided empirical engagements with ‘technologies themselves’ (Achterhuis 2001; Verbeek 2005). While there is merit to this claim in particular cases, Exceptional Technologies argues that it is problematic in two main respects: first, it tends to repeat the gesture it condemns by reifying the ‘Transcendental’ itself; second, it esteems a sense of ‘technologies themselves’ that tends towards positivism and presentism, as if our sense of what constitutes ‘a technology’ should just be obvious.

I argued above that one way to address the first of these issues is to de-reify the transcendental and to view it adjectivally, as a method or process. This also allows us to address the second issue, which is the more crucial one facing us here. This is because, understood in a dynamic way, a sense of the transcendental opens up the possibility of critical reflection on the conditions that constitute our sense of ‘technology’ in fine-grained ways across different situations. In Exceptional Technologies, this leads to the claim that, rather than focusing on case studies that align with our common sense of what technologies are (a smartphone, the Internet, AI, or nanotechnology, for instance), we can learn just as much (and sometimes much more) from case studies of ‘exceptional technologies’ that show up as paradoxical. I develop this in the book through case studies of merely imagined, failed and impossible technologies (respectively: Vannevar Bush’s ‘memex’, Francis Galton’s ‘composite photography’ and Arthur Ganson’s work of kinetic sculpture, ‘Machine with Concrete’).

A key critical question that has been posed to me recently is this: ‘how common are exceptional technologies?’Footnote 12 The answer, I think, is that, ontologically, every technology is ‘exceptional’ insofar as it is something singular that is irreducible to a mere token of a ‘type’. Trivially put, exceptional technologies are ontologically unexceptional and can always be produced and found, by or for any given technological system. Epistemologically, however, certain technologies show up as more exceptional or ‘interesting’ than others under certain circumstances (for instance: Bush’s memex as an exception to the situation of networked digital computing). In what remains of this part, I want to develop this point through an example that differs from those in the book: autonomous vehicles.

*

When we are asked to think about autonomous vehicles as a philosophical issue, the case often arrives framed in terms of a famous thought experiment: the ‘Trolley Problem’. I recently had direct experience of this.

In March 2019, I was asked to appear on a drivetime chatshow for BBC Radio Scotland. The topic was autonomous vehicles and, while I was flattered, my feelings were ambivalent. Vanity said ‘go for it!’ Then circumspection and scepticism kicked in. I was being asked to do this at very short notice and part of me said ‘someone else must have dropped out and certain keywords have been Googled.’ I didn’t take up the invite. I felt regret at first, followed by relief. After the initial invite, I had learned that I was being asked to speak in the context of something very specific—a 293-page Law Commission report called ‘Automated Vehicles: A Joint Preliminary Consultation Paper’ (Scottish Law Commission 2018). The report had been popularised by a national newspaper, and this coverage was the direct catalyst for the chatshow (Naysmith 2019).

What was I taking exception to in not appearing? It wasn’t really the short notice (this could have been overcome at the expense of some nerves). Nor was it being second choice (I might have been completely wrong about this and being asked at all was a privilege). Nor, I think, was I being too conceited or ‘choosy’.

The problem was that, on learning of the specific context, I couldn’t work out how to take exception to how the topic had been framed. I had been ready to take up the offer, but the way the topic was being framed brought me up short. What made me feel uneasy was the way both the Law report and the newspaper story made use of the Trolley Problem. The problem was that it was not being used merely as one example among others. It was being used as synonymous with ‘philosophical’ perspectives on autonomous vehicles. I felt there was something wrong with this, but, without further research, didn’t know how to constructively make the point.

But what is so exceptionable about the Trolley Problem?Footnote 13 Consider how Ian Bogost describes both the canonical version of it and its import for autonomous vehicles:

You know the drill by now: A runaway trolley is careening down a track. There are five workers ahead, sure to be killed if the trolley reaches them. You can throw a lever to switch the trolley to a neighbouring track, but there’s a worker on that one as well who would likewise be doomed. Do you hit the switch and kill one person, or do nothing and kill five? …. In addition to its primary role as a philosophical exercise, the trolley problem has been used as a tool in psychology – and more recently, it has become the standard for asking moral questions about self‐driving cars (Bogost 2018b).

What is problematic about the Trolley Problem is not (just) the crude either/or at its heart: although they will rarely be as lurid as this one (either five people or one person), disjunctive choices often have to be made (as any student of Frege or Kierkegaard knows). Nor is it (just) the fact that recourse to this problem has in some ways become ‘automatic’ when discussing philosophical issues concerning autonomous vehicles (a kind of ‘drill’). Instead, there are two related problems that Bogost helpfully draws out: (1) the fact that the either/or is between two ‘tracks’ that we are locked into (‘careening down’); (2) the fact that, in becoming ‘the standard for asking moral questions about self-driving cars’, the Trolley Problem has locked us into a picture that occludes other important philosophical questions concerning autonomous vehicles (it ‘railroads’ them off the track, if you will).

Bogost’s article does a particularly good job of addressing the second problem here, arguing that a crudely consequentialist calculus derived from the Trolley Problem has tended to trump a more nuanced approach to virtue ethics that initially inspired it (2018b).Footnote 14 The real problem, however, might be deeper still: recourse to the Trolley Problem has not merely occluded alternative ethical and moral questions; it has also tended to occlude whole other species of philosophical questions and issues—most obviously, logical, epistemological and ontological ones.

The point here is not that we can or ought to be pursuing an ‘amoral’, ‘unethical’ or ‘value-neutral’ approach to autonomous vehicles. As Bogost rightly puts it, we ought ‘…to consider the more complex moral situations in which these apparatuses operate’ (Bogost 2018b). The point, however, is that doing this might mean being prepared, not merely to ask alternative moral and ethical questions, but also to ask ones concerning other areas of philosophy.

The headline of the newspaper article I mentioned above, for instance, was ‘Driverless Cars “Will Decide Who to Hit in an Accident”’ (Naysmith 2019). This developed a key concern of the Law report (2018: 176–179). Although sensationalised for the newspaper headline, the ethical issues at play here are very much part of the debate surrounding autonomous vehicles. The point, however, is that they are only a part and they are very much downstream of logical, epistemological and ontological issues concerning, for instance, sensor reliability, GPS dependency and the soundness of inferences made by neural networks in image recognition (Broussard 2018; Samudzi 2019).

Crudely put, the point here is this: the logic of ethical action typically presumes recognition as a condition of possibility, and recognition is a type of epistemic insight requiring a theory (an ‘ontology’) of the types of beings that are being framed as ‘known’. In the case of the Trolley Problem, it is presumed that the beings down either track are recognised as living humans. Some of the most pressing problems arising in the case of autonomous vehicles occur, however, at the kinds of epistemic and ontological levels that are at stake here, and the logic of autonomous vehicle development dictates that they cannot be left to implicit presumption. Viewed in this way, ethical and moral questions concerning whether autonomous vehicles might be programmed or hacked to decide who to hit are at best partial and sometimes downright distracting. What they distract from are more fundamental epistemic and ontological issues concerned less with decision than recognition. Put simply, autonomous vehicles can also hit entities they didn’t decide to hit at all, because they didn’t recognise them, and this is not just a technical issue, but a deeply philosophical one (Broussard 2018; Bogost 2018a).
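To locate where these epistemic and ontological issues sit relative to the moral ones, consider a minimal, hypothetical sketch of a perception-then-planning loop (the class names and the 0.7 threshold are illustrative assumptions, not drawn from any actual autonomous vehicle system). Anything the detector scores below its confidence threshold never reaches the ‘decision’ stage at all, so a collision with it is a failure of recognition rather than a choice between outcomes:

```python
# Hypothetical sketch only: a toy perception -> planning loop, not any real
# autonomous vehicle system. The labels and the 0.7 threshold are invented.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "vehicle", "unknown"
    confidence: float   # detector's score in [0, 1]

CONFIDENCE_THRESHOLD = 0.7  # assumed cut-off, purely for illustration

def recognised(detections):
    """Only detections above the threshold count as 'known' entities."""
    return [d for d in detections if d.confidence >= CONFIDENCE_THRESHOLD]

def plan(detections):
    """The 'ethical' layer only ever deliberates over recognised entities."""
    known = recognised(detections)
    if any(d.label == "pedestrian" for d in known):
        return "brake"
    return "proceed"

# A pedestrian scored at 0.4 never reaches the planning stage: the vehicle
# 'proceeds' not because it decided to hit anything, but because, at this
# level of abstraction, there was nothing there to decide about.
frame = [Detection("pedestrian", 0.4), Detection("vehicle", 0.9)]
print(plan(frame))      # -> "proceed"
```

Trolley-style deliberation, if it arises at all, arises only downstream of the recognised step; the point of the sketch is simply to show how much philosophical weight that prior, epistemic stage carries.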

Engaging these kinds of issues requires a shift away from a view of the Trolley Problem as a ‘standard’.Footnote 15 This is something that Bogost’s article addresses at the moral level by proposing Thomas Nagel’s theory of ‘moral luck’ as a more suitable precedent for engaging the case of autonomous vehicles (2018b). But what about the logical, epistemic and ontological issues? Are there alternative ways of focusing these?

When I declined the radio invitation, it was precisely a sense of such an alternative that I lacked. While I might have been able to take exception to the way the Trolley Problem was being used, I wouldn’t have been able to propose a constructive alternative at short notice. On reflection, however, there is an alternative I would like to explore: Google Street View.

Street View’s Policy page states:

We have developed cutting‐edge face and license plate blurring technology that is designed to blur identifiable faces and license plates within Google‐contributed imagery in Street View. If you see that your face or license plate requires additional blurring, or if you would like us to blur your entire house, car, or body, submit a request using the ‘Report a problem’ tool (Google 2019).

The point to be explored here is that comparable logical, epistemic and ontological conditions obtain in the cases of Google Street View and autonomous vehicle development. In both cases, there are specific types of entities that need to be recognised. These include faces, license plates and houses in the case of Street View. In the case of autonomous vehicles, there are requirements to recognise both these entities and more complicated sets: not merely human faces, but all kinds of human and animal body parts and the aggregates they form (whether individuals or crowds); not merely license plates, but a whole range of other vehicles, landmarks, threats and obstacles; not merely the margins of the road where the houses start, but the dimensions of the whole road at key moments (especially its centre). In both cases, analogous ‘cutting-edge’ technology is deployed to meet these challenges: from facial-recognition algorithms developed through neural networks that draw on CAPTCHA data (‘How many images contain a bus?’), to Mechanical Turk-style data sourced from human reviewers (Hancock et al. 2018), to users complaining about how their ‘face, license plate, house, car, or body’ features on the platform.

This last point, however, turns out to be a key pressure point where our analogy breaks down: if algorithms fail to recognise a human being in the case of Street View and the human finds out about it, they can submit a request using Google’s ‘Report a problem’ tool; if cameras attached to an autonomous vehicle fail to facilitate recognition at a crucial moment, however, the human might only find out about it when the vehicle collides with them.

Consider the reference to ‘additional blurring’ in Google’s Policy section. This is vague enough to cover instances where insufficient blurring of a face or license plate has occurred. But it also covers instances where no blurring at all has occurred.Footnote 16 In other words, it is an oblique way of admitting that the ‘cutting-edge’ technology employed in face and license plate blurring is fallible, with degrees of fallibility.

What makes Street View a useful way of focusing on some of the logical, epistemological and ontological issues implicated in autonomous vehicle development, given this admission, is the play between analogy and disanalogy the comparison implies. Analogous technology is employed in both cases and so, we may infer, are analogous issues concerning the types and degrees of fallibility affecting this technology. Beyond this, however, there are important disanalogies to be considered.

Any user of Street View can access the platform to look for entities or situations that should be blurred but are not. It turns out they are quite plentiful.Footnote 17 Consider, further, that users of Street View can play this game in order to find their own ‘face, license plate, house, car, or body’ and, as outlined above, request blurring through ‘Report a problem’. Prima facie, it seems either impossible or irrelevant to seek to test or learn about the systems involved in autonomous vehicle development in any comparable way: it seems impossible because of a lack of public access to the platforms implicated and because the kind of agency implied in the ‘Report a problem’ function has been delegated away from a wide set of users to a small set of developers and algorithmsFootnote 18; and it seems irrelevant because autonomous vehicles don’t seek to build a total and lasting picture of their environment—they build a picture of part of it at a time and move on, deleting data on parts moved through (Broussard 2018).

In other words, it seems impossible or irrelevant to try to play such games because the platforms involved in autonomous vehicle development are much more ‘blackboxed’ and realtime than in Street View. But it is also vital that such points of apparent disanalogy are not railroaded from view. This is because they indicate quite precisely what more democratic forms of technological development might do differently. To put the matter crudely: perhaps what ought to occur in the present case is for some key disanalogies to become analogies. Perhaps, for instance, we need to work towards a form of autonomous vehicle development that could incorporate a ‘Report a problem’ function on the model of Street View, not merely for developers and algorithms working for private companies, but for all the other users of public roads. Suppose such a thing turned out to be possible: it would then expose the senses of impossibility and irrelevance discussed at the end of the previous paragraph as matters of economic and political contingency, not logical necessity.
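To make the proposal tangible, here is an entirely hypothetical sketch of what a public ‘Report a problem’ channel for autonomous vehicle misrecognition might look like, loosely modelled on Street View’s tool (every name and field in it is invented; nothing of the sort exists on any platform I am aware of):

```python
# Entirely hypothetical sketch of a public 'Report a problem' channel for
# autonomous vehicle misrecognition, loosely modelled on Street View's tool.
# No such interface exists; every name and field here is invented.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MisrecognitionReport:
    location: str                 # e.g. a junction name or coordinates
    reported_entity: str          # what the reporter says went unrecognised
    description: str              # free-text account of the incident
    timestamp: datetime = field(default_factory=datetime.now)

class PublicReportLog:
    """A shared log that any road user, not just developers, can write to."""

    def __init__(self):
        self.reports = []

    def report_a_problem(self, report):
        self.reports.append(report)
        return len(self.reports)  # a ticket number of sorts

    def open_reports(self):
        return list(self.reports)

# Usage: a cyclist reports that a bicycle at the kerb apparently went unregistered.
log = PublicReportLog()
ticket = log.report_a_problem(MisrecognitionReport(
    location="High Street / Canal Road junction",
    reported_entity="bicycle",
    description="Vehicle did not appear to register a bicycle at the kerb.",
))
print(ticket, len(log.open_reports()))   # -> 1 1
```

The only design choice the sketch gestures at is the redistribution of ‘Report a problem’ agency from a small set of developers and algorithms to the wider set of road users; whether the relevant perception data would persist long enough to be reportable is, of course, precisely one of the disanalogies noted above.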

What can be taken from this consideration? At the very least, there are good reasons to be uneasy with how the Trolley Problem has been elevated into a philosophical ‘standard’ for engaging autonomous vehicles. We need to take exception to this standardisation, by pointing out what is problematic with its underlying logic and by exploring different ways of approaching the case. Logically, for instance, we need to resist the fallacy of mistaking the part for the whole: what became the canonical version of the ‘Trolley Problem’ was only part of the initial context inspiring it (Bogost 2018b); more fundamentally still, however, moral and ethical questions and issues are only part of a broader set of philosophical questions and issues.

I have tried to take both these points into account in proposing Street View as an alternative. This means that it has not been proposed as exclusive or definitive. Google Street View ought not to be a new ‘standard’ or ‘transcendental’ for approaching the case of autonomous vehicles. Instead, it ought to be read as one example among (potentially) many others that ‘takes exception’ to how philosophical issues surrounding autonomous vehicles have been framed. The point, in this sense, is to provide another way of making sense of the case, where relevant analogies and disanalogies can be learned and tested, and through which the logic of standardisation can itself be resisted.

3 ‘Isolated Sentences and Topics’

In what follows we shall be questioning concerning technology. Questioning builds a way. We would be advised, therefore, above all to pay heed to the way and not to fix our attention on isolated sentences and topics. The way is one of thinking. All ways of thinking, more or less perceptibly, lead through language in a manner that is extraordinary (Heidegger 1977: 3, my emphasis).

These are the opening words of Heidegger’s ‘Question Concerning Technology’. The first several times I read the essay, however, I scarcely noticed them at all. Perhaps this is because it takes time to read weighty authors like Heidegger. Or perhaps it had to do with how this particular piece was framed—not as one essay among others, but as a ‘founding document’ of philosophy of technology, by a deeply divisive figure (Scharff and Dusek 2014: 247).

The first words I was on the lookout for, then, were Heidegger’s famous exhortation to ‘prepare a free relationship’ with technology, which occur in the very next sentence. The words that occur before them, however, and that I skipped over multiple times, strike me as much more important now.

Heidegger asserts that ‘[q]uestioning builds a way’ and that ‘we would be advised … to pay heed to the way’. I want to conclude this essay by suggesting that we don’t follow this advice.

This could easily be confused with one further ‘turn’ on a road or a crude dialectical permutation premised on directly contradicting Heidegger. As I have argued throughout this essay, however, we need to ‘take exception’, not merely to Heidegger’s advice, but to the picture of method underpinning it. In Heidegger’s case, the picture is of a way that is built as one thinks. This is a subtle variation on pictures of ‘roads’, ‘paths’, ‘tracks’ and ‘ways’. It remains one nevertheless.

So how might thinking about technology take place at all, ‘off road’? The last sentence of the above extract may in fact imply a better piece of advice: ‘all ways of thinking, more or less perceptibly, lead through language in a manner that is extraordinary’ (my emphasis).

The point is that we cannot afford to take our habitual pictures, language and concepts to be fully adequate to reality (or ‘Being’ or the ‘event of Being’, as Heidegger might prefer). We also need to find ways of being open to how the world (or ‘reality’, or ‘Being’) questions these pictures. That is, we ought to be prepared to be ‘led through’ our habitual pictures, language and concepts in quite a literal sense: by entities and processes that take exception to them.

Consider the following scenario from Timothy Morton:

You are walking out of the supermarket. As you approach your car, a stranger calls out, ‘Hey! Funny weather today!’ With a due sense of caution – is she a global warming denier or not? – you reply yes. There is a slight hesitation. Is it because she is thinking of saying something about global warming? In any case, the hesitation induced you to think of it. Congratulations: you are living proof that you have entered the time of hyperobjects. Why? You can no longer have a routine conversation about the weather with a stranger (2013: 99).

Far from being too weighty, like Heidegger, Morton’s words might appear off-putting in another way: as too jocular or frivolous. Just as we were ‘led through’ Heidegger’s words above, however, from an explicit piece of advice to an implicit one, Morton’s words do imply several ‘extraordinary’ points.

Heidegger advised that we steer clear of ‘isolated sentences and topics’. What Morton’s scenario demonstrates, however, is that what show up as apparently ‘isolated sentences and topics’ depends on the picture or ‘Level of Abstraction’ at which a problem is engaged.Footnote 19 According to a picture provided by common and good sense, his scenario involves a stranger exclaiming on an everyday topic, perhaps with the aim of building rapport and a shared sense of ‘lifeworld’. According to the flipped picture he describes, however, the stranger might be trolling and the exclamation might be a kind of shibboleth: a test to see where his protagonist sits on an ideological spectrum. Beyond this, however, a third picture is at work, at the Level of Abstraction at which Morton most urgently wants us to operate: the picture of global warming as a ‘hyperobject’ we are literally living inside and that acts as a fundamental condition on even our most ‘routine’ conversations.

The lineaments of this third picture are both the most pressing and difficult to make out. They are the most pressing because global warming and the Anthropocene pose real and objective existential threats to the futures of human and nonhuman forms of life. They are the most difficult, however, because they appear as hyperreal and hyperobjective affronts that take exception to the limits of the other two pictures sketched out in Morton’s scenario.

This is because both the other pictures make human agency central. According to the first picture (common and good sense), we should all expect to get along and strangers (or at least certain types of strangers) should be met with ‘on the level’. According to the second picture, we are living in the flipside of this: a paranoiac world where no one is trusted and where every gesture is over-interpreted. In the first picture, no one is to be taken exception to. In the second, everyone is on a hair-trigger, ready to take exception to everything. The point, however, is that a wholly different kind of ‘exception taking’ is taking place in the third picture: one that literally takes exception to our conventional pictures of the limits of human thought and action and that demands new and searching questions concerning our ‘givens’, ‘conditions’ and ‘possibilities’.

The characters in Morton’s scenario meet in a parking lot. According to the first two pictures discussed above, their possibilities for thought and action seem not merely limited but exclusive, like so many turns on a road (or crossroads) where every turn towards is also a turn away from something (or someone) else: they can, for instance, get back into their separate cars and go about their business (polluting further); or they can go into the supermarket (consuming further); or they can have some nice small-talk (or, indeed, a ferocious confrontation).

The problem is that these possibilities show up as deeply partial at the Level of Abstraction required by the third picture. This is because this picture requires us to think and act according to different limits and according to a logic where every turn, whether ostensibly ‘towards’ or ‘away’, has implications in multiple different directions at once.

How are these two strangers meant to get at the range of complex issues implied by the parking lot, their cars, the supermarket and one another, at once? It might be objected that the parking lot simply ‘isn’t the time or place’, or that such questions are meaningless because they just ramify the problem. But both these objections ignore a real possibility: that both of these strangers might be led through the language of the other towards something ‘extraordinary’—the parking lot as it already is. That is: a complex and multidimensional problem space where many different complex implications are at stake and where all kinds of apparently ‘isolated sentences and topics’ take place and are to be made sense of. Where we are stuck on pictures of ‘roads’, ‘paths’, ‘tracks’, or ‘ways’, the parking lot is void space off the beaten track. Where exception is taken to such pictures, it can show up for what it already is: replete with potential for thought and action.