Let us jump back to 1603, the year of the death of Queen Elizabeth I. William Shakespeare was at the peak of his creative genius as a tragic dramatist, having just produced Hamlet and on the verge of writing Othello, King Lear, and Macbeth. James VI of Scotland ascended the throne as James I of England and Ireland, and would sponsor the “King James” translation of the Bible. It was a period of unparalleled English aesthetic achievement.

England was not isolated in this cultural flourishing. In the Netherlands it was the middle of the Eighty Years' War and beginning of the Dutch Golden Age; Rembrandt van Rijn would be born six years later. In Spain, Cervantes was nearing the end of his life; in France, Molière was approaching the beginning of his.

Having served in minor positions under Elizabeth, Francis Bacon (1858), age 42, was elevated by the new monarch; he was immediately knighted and in fifteen years became Lord Chancellor. To that point his life had spanned not just the greatest period of English language creativity to date but simmering religious strife on the island and, on the continent, from the Peasants' War (which began in Germany in 1524) forward, one religious conflict after another, all predicated on some fine point of doctrinal disagreement. Catholics were killing Protestants, Protestants were killing Catholics, Catholics were killing Catholics, and Protestants were killing Protestants. The internecine warfare would produce at least 10 million deaths across a population considerably under 100 million, before it was moderated in 1648 by the Peace of Westphalia. At Westphalia the exhausted parties agreed that enough was enough, accepted the equal legitimacy of different Christian traditions across national borders, and recognized the sovereignty of nations to make their own choices.

It was in this Janus-faced context of creativity and fanaticism that Lord Chancellor Bacon in 1620 (during a period in which Thomas Hobbes was serving as his secretary) published Novum Organum and sought to turn minds from mutual slaughter over theological nit-picking toward a grander and more beneficial ethical vision.

Printing, gunpowder and the compass: These three have changed the whole face and state of things throughout the world; the first in literature, the second in warfare, the third in navigation; whence have followed innumerable changes, in so much that no empire, no sect, no star seems to have exerted greater power and influence in human affairs than these mechanical discoveries. (Novum Organum I, 129, adapted Spedding trans.)

Over the next two hundred years this nascent pro-technology ethics—which strove to shift attention away from conquering others to save their eternal souls and toward a collective conquest of nature for the material benefit of our mortal lives—blossomed into the Enlightenment technological project. Following in the footsteps of Niccolò Machiavelli, Bacon’s paradoxical effort to raise the mind to lower standards in human affairs succeeded beyond his wildest expectations.

Bacon’s observation and associated argument, with its effort to redirect moral energy away from reflection mired in small-scale confessional religious casuistry toward large-scale concern for this-worldly human power and wealth, became the distinctive foundation for the ethics of modern technology. It served as the core justification for both the Royal Society (founded in 1660) and the Institution of Civil Engineers (from 1818).

The Royal Society can be read as a Baconian response to a post-Baconian outbreak of theological political fanaticism within England that led to the beheading of King Charles I and animated two decades of civil war. When the crown was restored to Charles II in 1660, among his first acts was to formally charter the Royal Society to “encourage philosophical studies, especially those which by actual experiments attempt … to shape out a new philosophy”. The goal was “to extend not only the boundaries of the [British] Empire, but also the very arts and sciences” by promoting “the sciences of natural things and of useful arts” so that they “may shine conspicuously amongst our people.” “At length”, proclaimed the King, “the whole world [should] recognize us … as the universal lover and patron of every kind of truth.” Truth in biblical religion was in the process of being supplemented with (eventually to be superseded by) truth in natural philosophy and its utilities for power and improvement.

According to Thomas Sprat’s early History of the Royal Society, its inspiration was “one great Man, who had the true Imagination of the whole extent of this Enterprise, as it is now set on foot; and that is, the Lord Bacon.” Sprat would have preferred, he wrote, “there should have been no other Preface to the History of the Royal Society” than Bacon’s own works (Sprat 1667, p. 35).

Shortly after its founding, the Institution of Civil Engineers (ICE) sought to emulate the Royal Society. When in 1828 it applied for a royal charter, King George IV requested a definition of this new thing called “engineering.” For precisely what was he to grant royal approval? ICE President Thomas Telford, the most famous engineer of the realm, tasked a younger colleague, Thomas Tredgold, with drafting a summary statement. The resultant short essay, "Description of a Civil [meaning non-military] Engineer," opened as follows:

Civil Engineering is the art of directing the great Sources of Power in Nature for the use and convenience of man; being that practical application of the most important principles of natural Philosophy [that is, science] which has in a considerable degree realized the anticipations of [Francis] Bacon, and changed the aspect and state of affairs in the whole world [note the echo of Bacon’s claim about printing, gunpowder, and the compass]. The most important object of Civil Engineering is to improve the means of production and of traffic in States, both for external and internal Trade. (See Mitcham 2020, p. 368)

The first sentence of this short white paper, with its appeal to the value of human “use and convenience” (a principle grounded in the moral theory of David Hume), was then incorporated into the Royal Charter—and has ever since served in some form as the standard definition of English-speaking engineering. Transforming material culture by means of quantitative productivity in physical goods and trade is what marks engineering off from, for instance, scientific knowledge production and the architectural design of domestic or civic space. The ICE constituted a social institutionalization of Bacon’s lowering of the standards in the name of this-worldly achievement.

Over the course of the next century this grand but simple ethical vision—which became the core morality of the Industrial Revolution—was progressively subject to Romantic and socialist challenges: what have been called the cultural and political criticisms of science, engineering, and technology. The cultural criticism is succinctly illustrated by the poet William Blake’s petition: “May God us keep From Single vision & Newton’s sleep” (Letter to Thomas Butt, 22 November 1802). Reality is greater than what is revealed by modern science. As he also wrote in “Mock On, Mock On, Voltaire, Rousseau” (from Blake’s 1804 Notebook):

The Atoms of Democritus
And Newton’s Particles of Light
Are sands upon the Red Sea shore,
Where Israel’s tents do shine so bright.

This is more than a strictly epistemological criticism. The domination of scientific reason deforms culture and thereby human achievement.

A political criticism emerged in association with the elaboration of Hume’s moral theory into the doctrine of utilitarianism. Although Napoleon’s defeat in 1815 ushered in a long nineteenth-century peace between European states, civil strife continued domestically: over the condition of the industrialized working class and about the distribution of goods mass-produced by industrially engineered technology. In England, philosophers such as Jeremy Bentham and John Stuart Mill were more than professors of ethics. Utilitarian theory developed as a way to reform law and the state. Mill himself served for a short period as a Member of Parliament. The stop-and-go democratization of the techno-lifeworld over the course of the 1800s and into the twentieth century was repeatedly galvanized by what many academic philosophers today might well term big, sloppy ideas. Indeed, in an effort to escape such sloppiness, the turn of the century witnessed attempts in the English-speaking world to professionalize ethics with an increasingly narrow and restrained focus.

G.E. Moore’s Principia Ethica (1903) can serve as a case in point. Its opening words, in the analytical table of contents, read as follows:

In order to define Ethics, we must discover what is both common and peculiar to all undoubted ethical judgements; ... this is not that they are concerned with human conduct, but that they are concerned with a certain predicate “good,” and its converse “bad,” which may be applied both to conduct and to other things. (Moore 1903, p. xiii)

Compare that with the opening, more than a century earlier, of Jeremy Bentham’s An Introduction to the Principles of Morals and Legislation (1781):

Nature has placed mankind under the governance of two sovereign masters, pain and pleasure.... The principle of utility recognizes this subjection, and assumes it for the foundation of that system, the object of which is to rear the fabric of felicity by the hands of reason and of law. (chapter 1, paragraph 1)

Moore is concerned about “the difficulties and disagreements [in ethics], of which its history is full” and seeks to respond with conceptual clarifications. Bentham, having studied the law, is disgusted and wants to reform it. Bentham ushered in more than a century of ethically stimulated social reform: He argued for making prisons more humane, expanding the democratic franchise, free education, safer working conditions, guaranteed employment, a minimum wage, sickness benefits, and retirement insurance. He worked to pass child labor and public health laws. He collaborated with the utopian socialist Robert Owen to defend the social experiment at New Lanark. He opposed not just slavery, well before its abolition in Britain in 1833, but the whole of colonialism.

Bentham’s meliorist ethics of technology was radicalized by the soon-to-be resident alien Karl Marx, whose declaration that the purpose of philosophy “is not just to interpret the world but to change it” is often quoted. In rallying the working class to revolutionary, global action, Marx and Friedrich Engels recognized the need to simplify their interpretation of the science-technology-society relationship in order to cast it in big-picture terms: The Communist Manifesto (1848). The American pragmatist John Dewey (1927) also sought large-scale reforms in education and government precisely to address the disharmonies and injustices introduced by capitalist technology, but never presented his argument in pamphlet format. Dewey just kept writing and writing, saying the same thing over and over again, with little rhetorical flair—and having no more than marginal influence. Although feted as America’s greatest philosopher, he never had the impact of a Ralph Waldo Emerson or Henry David Thoreau.

Both Marx and Dewey argued that modern technology introduced into the human lifeworld challenges that called for political as well as conceptual and personal responses. For Marx, power needed to shift from the minority capitalist class, which, like any class, would always view the use of technology through the lens of its own self-interest, into the hands of a majority class whose interests were more expansive and thus more just. Justice required that industrial production be governed not just by those who profited from it but by all those affected by technology, especially those suffering its negative impacts. Marx and later Marxists found it difficult to imagine this taking place without social disruption—but, in their efforts at revolution, sponsored sufferings of unimaginable proportions. Humans could be mobilized to slaughter others in the name of future this-worldly benefits as well as eternal salvation.

For Dewey, meliorism was preferable to revolution. The separations or mediations that technological power introduces into human perception and action call for the gradual transformation of society into a kind of democratically guided technocracy.

Take the quotidian case of drinking water. For thousands of years people had usually been able to identify water that was safe to drink on the basis of non-instrumented perception: If water from a stream appeared clear, smelled OK, and a small sample tasted good, it was usually safe to drink. If it did not meet these criteria, one could always look for another source. But in industrial cities, where water is often contaminated by chemicals that cannot be seen, smelled, or tasted, and there are no alternatives to the tap, governments must establish technical agencies to provide and monitor water systems. The process of socially constructing water systems took place gradually over a hundred-year period. This was not a construction process that could work well by revolution. Additionally, since we are all dependent on engineers and scientists, we need to develop sufficient technoscientific literacy to appreciate their work, provide secondary-level oversight, and understand the need to pay the taxes necessary to support the infrastructure and its operation, management, and maintenance. This process of cultivating the relevant technoscientific literacy begins in primary school, when we are taken on field trips to visit water treatment plants and meet their managers.

In the mid-twentieth century this democratic socialist ethics of science, engineering, and technology in the form of regulatory and meliorist programs was undermined on three interrelated fronts.

First, we became increasingly aware of the unintended consequences of technological actions: even actions undertaken by virtuous experts, with the right intentions, that produced some good consequences often also had negative side effects or second-order consequences. We needed a new kind of agency to do technology assessment—which paradoxically and unintentionally introduced the potential for public mistrust of the ability of experts to accurately predict the results of their work.

Second, we became aware that socialism was not enough. There was a non-human environment to consider. This gave rise to the second-wave environmental movement and the establishment of environmental protection agencies. Here the work of public-spirited scientists such as Rachel Carson (1962) served as a major catalyst. But Carson was also charged with masking her personal appreciation for the environment in scientific claims that had negative health and economic consequences for others.

Indeed, third, we became aware that the managers of technical agencies—including technology assessment and environmental protection agencies—tended to form a “new class” that often promoted its own self-interests. This became the big neoliberal argument for outsourcing government regulation and replacing planned orders with emergent ones arising through market forces and other libertarian forms of interaction: in Friedrich von Hayek’s (1969) phrase, “the result of human action but not of human design.” Spontaneous order was to be preferred over consciously designed order on both economic and moral grounds. Recent rhetoric associated with the concept of the Anthropocene as an emergent new global order in climate can be strangely comforting to the neoliberal mind.

In an effort to negotiate these three overlapping challenges, and in association with increasing academic professionalization, the ethics of technology retreated to prioritizing small problems over big ones. In the last quarter of the twentieth century, the ethics of technology moved to abandon any broad claims to talk about Technology (with a capital T) in favor of a much more narrowed focus. The ethics of technology became environmental ethics, biomedical ethics, computer ethics, information ethics, engineering ethics, research ethics, nanoethics, neuroethics, and more. In each of these technological regionalizations there were further micro-issues of risk, safety, privacy, participation, and more.

There were some good reasons for this. Some cultural and political criticisms of technology had indeed become exaggerated artifacts of rhetoric, which critics castigated as substantialist or essentialist theories lacking any practical purchase on the real world of engineering and its technologies. Cultural elites and technical experts have struggled with growing gaps between the few and the many, not just in economic terms of the rich versus the poor. The perennial philosophical inclination to return “back to the things themselves” sponsored the birth of case studies and more than one empirical turn (Kroes and Meijers 2000; Achterhuis 2001). Risk-benefit analyses were safer than talk of big dangers. The frustrating inability to make big changes has probably also been a factor, suggesting there might be more hope for small ones—even smaller than those for which a proponent of piecemeal social engineering such as Dewey had argued.

And yet big problems remain—and are getting bigger. In the ethics of science, engineering, and technology we must contend with a paradox of the impotence of small efforts. Rationally, we would expect small efforts to be more likely to succeed than big ones, when in fact this is not always the case. Sometimes it is big efforts and big ideas that get results—and not just rhetorical ones, which are what academic philosophers often pursue in their own efforts to stand out in a crowded academic marketplace. The paradox echoes an economic cliché: Large corporations easily become mired in small ideas that produce little real innovation. They become focused on branding. Small start-ups regularly replace established corporations, precisely because of their new big ideas.

Recall that it was big ideas such as those popularized from Martin Heidegger’s “The Question Concerning Technology” (1954) and Jacques Ellul’s The Technological Society (1954, 1964) or Herbert Marcuse’s One-Dimensional Man (1964) and Lewis Mumford’s The Myth of the Machine (1967/1970) that strengthened protests against nuclear weapons and the Vietnam War. In the ethics and politics complex, more nuanced ideas and arguments are often weak in their consequences; simplistic ideas may sometimes get results.

My own argument here is a big one, lacks many nuances, and might well be described as philosophically sloppy. Nevertheless, I would defend it as deserving consideration, especially as we contemplate a future mutation of the human condition involving:

  • Continuing nuclear proliferations;

  • Population growth and consumption intensification, which together gobble up resources and flood the planet with consumer goods and wastes;

  • Progressive biodiversity losses coupled with genetic engineering and the nanoscale design of materials, processes, and products;

  • Geological scale transformations of the atmosphere, oceans, and landscape;

  • An infosphere awash in artificial intelligence and big data; and

  • The emergence of DIY abilities in regard to chemical, biological, and informational weapons

— to offer no more than six random samples.

The clock on the cover of the Bulletin of the Atomic Scientists now (January 23, 2020) stands at 100 seconds to midnight, closer than it has been since before the first US-USSR suspension of atmospheric nuclear testing in 1959, in fact closer than at any point since its creation in 1947. The bottleneck of possibility through which we must pass into the future seems only and ever to narrow. In the words of the Bulletin’s press release:

Humanity continues to face two simultaneous existential dangers—nuclear war and climate change—that are compounded by a threat multiplier, cyber-enabled information warfare, that undercuts society’s ability to respond. The international security situation is dire, not just because these threats exist, but because world leaders have allowed the international political infrastructure for managing them to erode.

And this was before the outbreak of the COVID-19 pandemic. Is it possible that we are not listening to the Cassandras among us, such as Günther Anders (1982) and Jean-Pierre Dupuy (2013), out of fear regarding the changes we would be called upon to contemplate? Are we not letting a lesser fear trump a greater one?

To conclude: The ethics of technology has a big-picture historical heritage that deserves to be recognized if not recovered. The ethical stance that in the seventeenth century sponsored the rise of modern technology did not shy from making bold claims in ways that engaged political power and contributed to world-historical transformations. Such was equally the case in the nineteenth century, when industrial technology presented the social order with injustices small and large; classic socialism included an ethics of technology writ large, as did mid-twentieth-century environmentalism. Now in the early twenty-first century science, engineering, and technology are bringing forth an unprecedented, multidimensional mutation of the human condition. Faced with an iceberg of issues, we must not let ourselves be accused of being content with rearranging deck chairs on the Titanic. Given the icy mass below the surface, it may well be that we can do little more than admit frankly that we are in uncharted waters, that catastrophe awaits us. But we should be looking over the edge into the darkness rather than keeping company with the stewards of a fraught if not doomed voyage. Lucidity with regard to our ignorance and its dangers is preferable to averting our eyes. This is the case no matter what happens. On a ship bound for the abyss, looking into the abyss may still bring some small measure of enlightening consolation from philosophy.

If we don't think big, it will be left to non-philosophers to do so. Philosophers must unite to throw off their chains and reclaim not just their own interests but those of their non-philosopher companions in the terrestrial cosmopolis.