Sociology has so far had little to say about the era of “tech” (formerly “high-tech”)—that is, about an age dominated by such “technoscientific” developments as robotics, artificial intelligence, machine learning, bioengineering, and the like. Perhaps this is in part a result of the age structure of the discipline, which does not lend itself to a high-profile focus by established scholars on the social impact of newfangled technologies that they do not necessarily understand well. Younger people who grew up with these technologies—so-called “digital natives,” although there is of course much variation in uptake even among them—are more likely to be engaged and comfortable with them than older digital non-natives. Perhaps, however, the lack of attention is a product of doubts about the true nature and significance of the social transformation being ushered in by the new technologies: have we not seen this sort of thing before? Is it really such a big deal, given that capitalism has always fostered (and indeed depended on) technological change and “creative destruction”? Perhaps, finally, the reason is that technological development has never received a great deal of attention in the discipline, and the new technologies are thus thought to be nothing to get terribly excited about. Needless to say, these proclivities do not position the discipline well to respond to the tidal wave of social change currently underway as a result of recent technological developments.

Harvard professor and former Fortune magazine writer Daniel Bell was a notable exception to this technological agnosticism. In the watershed year 1973, Bell wrote in The Coming of Post-Industrial Society about the massive shift that was already then underway in advanced economies from manufacturing to services, from factories to offices, from blue collars to white collars, from machine power to brain power, from the production of goods to the processing of information. Bell forecast a trend in the economies of the richer parts of the world away from industrial processes towards intellectual processes and noted that the fastest-growing “class” in the social structure was not, as Marxism had it, the working class, but rather the professional and technical class. Since the production of goods and the production of knowledge were fundamentally different from one another, Bell argued, society would be utterly transformed as these trends unfolded. Paid employment would become increasingly open to women because they would no longer be precluded from participation by the physical requirements of much industrial labor. “Meritocracy,” the supposed distribution of social rewards on the basis of one’s achievements rather than on one’s ascribed status, would become the dominant ethos, he argued, superseding the “old money” pattern of bountiful inheritance (for those lucky enough to enjoy it). All of this, Bell concluded, would constitute “a completely new and unparalleled state of affairs” (Bell 1976 [1973], p. xvii) in world history.

Bell’s “venture in social forecasting” has proven remarkably prescient, and it is in his shadow that sociologists who write about these issues must toil. C. Wright Mills had also intuited the general trend when he wrote White Collar in 1951, but Mills was perhaps too early to diagnose properly where things were headed. Mills was part of a larger group of social scientists who were struck by the relative marginalization of the industrial working class and the advance of scientific and technical occupations and outlooks in early and mid-twentieth century capitalist development (Mills 1956 [1951]; Michels 1968; Lederer 1912). Meanwhile, Norbert Wiener, whose ideas presaged the development of artificial intelligence, was already writing about “cybernetics” in the late 1940s (Wiener 1948). Manuel Castells would be among the first sociologists to write about the social implications of new electronic technologies in the 1990s; his three-volume work The Information Age was monumental in scope but never had an impact equivalent to its heft, at least in the United States (Castells 1996–1998).

Writing nearly 50 years ago, Bell foresaw in substantial part the direction that developments were taking, and the consequences of the digitalization of modern life that he first diagnosed continue to unfold in the present. The world that these technologies have helped create might well be said to amount to a new “Axial Age,” succeeding both the age dominated by human and animal power and the industrial age, with its extraordinary contributions to human welfare and longevity as well as to ecological precarity (Torpey 2017). But the implications of technological transformation remain unclear at best. Contemporary discussion suggests both that we may face a long period of stark and pervasive inequality and that we might attain the realm of freedom that Marx envisioned as the foundation of communist society. Utopia and dystopia appear to stand cheek by jowl. How shall we make sense of this new epoch?

In what follows, I propose to examine several aspects of the tech era that seem most characteristic of the age and to discuss how sociologists might think about and study them. These concerns will govern the way I, too, think about the changes being wrought (if only in part) by novel electronic and biological technologies. While the new technologies are effecting major changes in everyday life and in the economy, we should be careful not to fall into the trap of “technological determinism,” whereby technology forces us to take certain paths in human life and social organization. Technologies are human creations and are thus subject to human choices, even if they may create circumstances that are not unproblematically under human control (such as climate change). The historian Melvin Kranzberg has made famous the insightful dictum, “Technology is neither good nor bad; nor is it neutral” (Kranzberg 1986). As the idea of “path dependency” suggests, once we choose a certain course of action, we may foreclose a return to a previous point where we might “start over.” There really may be no going back from the technological standpoint we have now attained (although such reversals have certainly happened in history). It should be emphasized that there are opportunities as well as hazards that need to be examined when assessing the tech era; while there are major causes for concern about the social consequences of the new technologies, they also promise to improve human life in extraordinary ways and perhaps even to usher in a more attractive, desirable form of social life. Science fiction is in certain respects a good guide to thinking about the changes that the new technologies will bring; after all, many of those who have created the new world of “high-tech” got their ideas from science fiction as kids (Geraci 2010, p. 4). Contrary to appearances, however, science fiction is not just about gadgets like Dick Tracy’s then-fantastic wrist-radio; it creates a world, whether a utopia or a dystopia. The creators of these technologies have simply had the chance to realize their vision in a most profound way: their inventions are revolutionizing social life today.

The aspects of the tech era that I want to examine include the following. First and foremost is the power of the big tech companies, which have increasingly come to dominate the economic landscape, and their creators, owners, and investors. Where did these firms come from? How did they come to control the industries in which they predominate? Who are their leaders? What political and social views do their owner-creators hold? What role did government investment play in the development and commercialization of these technologies? How do governments regulate them? As one-time “start-ups” have become multibillion-dollar enterprises with thousands of employees, what sorts of labor conflicts have emerged? All this might be thought of as the “production” side of tech.

On the “consumption” side lies the transformation of everyday life as a result of the spread of the new technologies. This includes the widespread availability of portable computing power that would have been almost unimaginable to most people only 50 years ago. It involves smart phones, smart houses, smart watches, smart everything. Using algorithms designed by software developers and data from your previous purchases, these devices may predict your shopping needs and order the things you normally buy before you even wonder what you might want or need. Voice-activated personal assistants such as Alexa and Siri will look up phone numbers, find directions, and make dinner reservations. The “internet of things” links devices and appliances to the internet and revises their settings in accordance with an analysis of the data so acquired. Soon, it is said, driverless vehicles will cover the roads, reducing the risks associated with human error and intoxication that yield some 40,000 deaths in the United States every year. At the same time, our connectedness is the fundamental source of profit to those who create online products; our every keystroke is electronically harvested, bundled, and sold to third parties seeking to target advertising more efficiently and effectively. This new techno-economic configuration has appropriately been dubbed “surveillance capitalism.”

Here we begin to encounter the dark side of digital technologies. Concerns about privacy have gradually grown louder as people have become more aware of the extent to which their once-private lives are now available for scrutiny by others. Users of these digital technologies have also become more concerned about the ways in which their purveyors turn users’ activities into a source of information about them that can, among other things, be used to nudge them to stay glued to their platforms. We now have, in other words, an “attention economy” (Simon 1971, pp. 40–41) that is the chief source of revenue for the platforms in question. The fixation on the platforms and their devices, meanwhile, has led to concern about the nature and quality of social ties in the face of these new technologies. Worry arises about the extent to which social media distract from, and undermine, a more intellectually serious and factually grounded public sphere. In turn, the question of how to regulate the intrusions of social media and internet-based platforms has grown more urgent. Under what circumstances can such measures be successful, given the enormous power of the tech companies—to which Denmark recently felt it necessary to send an ambassador (Satariano 2019)?

A potentially even darker side of digital technologies emerges from public discussions about artificial intelligence, the central tool driving so much contemporary technological change. One prevalent concern is that work will disappear, as robots and other automated processes replace truck drivers, cashiers, and janitors, not to mention surgeons, journalists, and professors. Will the new technologies annihilate whole occupational categories, or—a more optimistic view—will they simply eliminate drudgery and free people to become many-sided creative beings? The impact on warfare is another noteworthy aspect of the new technologies. Soldiers surely find appealing the idea that robots might face enemy fire in their stead. But is it morally acceptable to conduct warfare without risking one’s life, perhaps with no human being in the chain of decision-making regarding whether to kill another human being? These issues also arise in the realm of policing, where facial recognition technology has become widely used as a law enforcement tool. The producers of these technologies have sometimes faced opposition from their employees as a result; IBM and Amazon recently announced that they would not sell theirs to the police, at least for the time being. How reliable are these technologies and what risks do they pose to civil liberties?

Finally, the digital technologies of the contemporary era offer much fodder for those of a utopian as well as of a dystopian disposition. Some foresee a world stripped of meaningful and remunerative work, governed by opaque, inscrutable, and prejudiced algorithms, which decide without our input who will be promoted or demoted, who will enjoy opportunities and who will not, and even who will live and who will die. On the utopian side, however, there are those who see the new technologies as portending the arrival of what Karl Marx regarded as the realm of freedom—a post-scarcity society in which people are freed of the necessity to make a living and hence emancipated to realize their full human potential. Supporters of a universal basic income tend to fall in this camp. In this sense, today’s techno-utopians are often in line with some of the early tech geeks who envisioned a world beyond oppression and compulsion. Is such a world possible? If so, what political changes will be required to achieve it?

This is obviously a broad, multi-dimensional agenda, but it reflects the substantial changes taking hold today in tandem with technological change. Although this article does not purport to be a literature review, I note a number of sources that have made important contributions in this or that area. My principal aim is simply to suggest how many significant developments are afoot that sociologists should be studying.

The production side of tech

There is a familiar origin myth about the rise of “tech.” The large tech companies that now shape much of the American and global economic landscapes are widely said to have had their beginnings in the garages of tinkerers in the former fruit-growing towns of Santa Clara County, California, just south of San Francisco on the Peninsula. Engineers with backgrounds in electrical and chemical engineering were drawn to the area because of its beauty and its perceived freedom from traditional norms and constraints. They first gathered around transistor pioneer (and, it would turn out, biological racist) William Shockley at his semiconductor laboratory. Then, as a few of the more adventurous men tired of Shockley’s whims and vanity, they struck out on their own to create Fairchild Semiconductor, one of the first major companies to produce semiconductors for computers. Some of them later left Fairchild to found Intel, which would grow to be one of the world’s largest semiconductor manufacturers. Soon thereafter, Steve Jobs, Steve Wozniak, and Ronald Wayne founded a new company that would make use of those semiconductors, Apple Computer, in Jobs’s garage in Los Altos. Others soon followed these pioneers, and the former rows of fruit trees came to be re-christened “Silicon Valley” (Berlin 2005; Isaacson 2014).

The problem with this origin myth, charming and heroic though it may be, is that it leaves out a critical part of the story. As with many new technologies, the key ones here were originally developed not by lonely boy geniuses but by the government—the only actor with the wherewithal, the capacity to absorb failure, and the strategic vision to create them. Much of the story of the origins of Silicon Valley is, in fact, a Cold War story. In response to the Soviets’ launching of Sputnik, President Dwight Eisenhower created the Advanced Research Projects Agency, which later added the modifier “Defense” to its name. DARPA would in time become the key promoter of the technologies that would come to be associated with the tech era. In particular, “the development of the features that make the iPhone a smartphone rather than a dumb phone was publicly funded” (Mazzucato 2015, p. 6), much of it by the military. These features include global positioning systems (GPS, originally developed for military purposes), touchscreen technology, the voice-activated assistant Siri, and the Internet itself. The Internet, originally known as ARPANET, was not intended to be a worldwide means of communication but a way for military leaders to communicate in case of a disaster. In short, despite the (perhaps especially American) tendency to see these technologies as simply the ingenious products of wonky entrepreneurs, the major technologies defining the current era are, in reality, first and foremost products of government planning and funding. To be sure, there were counterculture types who saw these new technologies as the birth of a new, better, and more peaceful world (Turner 2006). But without an “entrepreneurial state” providing initial investment and encouragement, many of these newfangled innovations would never have gotten off the drawing board. We need a better account of the role states play in economic development—shaping and creating markets, not just regulating or fixing them. Rather than only studying the heroic “innovators,” we need to understand how states promote technological advance by investing in the research and development of technologies that are subsequently commercialized by others who reap a disproportionate share of the rewards while taxpayers are left only with the costs (Block and Keller 2011).

Fruit groves in the Santa Clara Valley—or, in the case of biotech, old industrial warehouses in the “East Bay” across from San Francisco—were not necessarily the obvious places to put new, high-tech labs and manufacturing facilities requiring super-clean spaces. These operations emerged where they did because they benefited from the “network effects” afforded by the regions where they were located. Those effects arose from the knowledge resources available in the San Francisco Bay Area, home to two of the world’s leading universities, Stanford and the University of California at Berkeley. In addition to employing many of the leading scientists developing the new techniques, these institutions also saw opportunities for the commercialization of academic scientific research that would begin to transform academic culture profoundly. Unlike Berkeley, however, the then-“endowment-poor” Stanford established a policy of taking ownership of all intellectual property generated by its faculty (Berlin 2005, p. 57; Shapin 2008, p. 211). The result was that Stanford reaped millions in income from its faculty’s research endeavors. Gradually, such practices spread around the country and made scientific research a “profit center” that dramatically enhanced its prestige and importance on once-sleepy university campuses. Meanwhile, other outposts of the tech world emerged around other centers of “knowledge production”—Seattle (where Microsoft settled near the University of Washington), Boston’s Route 128 (near Harvard and MIT), and “Silicon Alley” in New York, with its numerous institutions of higher education and easy access to capital markets (Zukin 2020). Eventually, other countries as well would see centers of high-tech development arise, such as Shenzhen, just across the river from Hong Kong in southeastern China. In the meantime, China has of course acquired global significance in the tech supply chain. Still, such developments did not happen everywhere that they might have. The once-influential Bell Labs could have partnered with nearby Rutgers and Princeton but chose not to do so—and eventually went the way of RCA, a pioneering giant in radio and television that could not make the turn when the course of technical and economic development changed.

Over time, as the start-ups grew into massive corporations, the concentration of high-tech workers and of the ancillary businesses that appeal to a well-educated crowd has had important consequences for the spatial organization of inequality in the United States. The San Francisco Bay Area has become infamous for the exorbitant real estate prices now faced by the populace and for a serious and not-unrelated homelessness problem. In response to the surge in homelessness, Salesforce founder and billionaire philanthropist Marc Benioff recently donated $30 million to the University of California at San Francisco to study the problem and propose solutions (Walker 2018). More generally, the rise of the tech economy has helped sharpen the differences between rural and urban areas. As both farming and manufacturing recede as sources of employment and urban places draw in the better-schooled and better-heeled, the urban-rural divide increasingly shapes politics in the United States and elsewhere. Historically, the United States has been divided sharply between the states of the former Union, which fared comparatively well in the shift toward a manufacturing economy, and those of the old Confederacy, which tended to be the poorest and least educated parts of the country. Now, increasingly, rural or urban location shapes the prospects of those who live there as much as, if not more than, whether they are in the South or the North. Those in rural places seem to have fewer chances to share in the prosperity being generated by the new tech economy; the exceptions seem to be those places in the countryside that enjoy some of the features of the urban tech hotspots, with an institution of higher education often at the core of what distinguishes them from the surrounding rural landscape. In part because of the distorting role of the Electoral College in choosing American presidents, there are good reasons to think that this country-city gradient played a significant role in the election of Donald Trump in 2016. Moreover, the countryside in many developed countries (e.g., Italy) is being depopulated, leaving behind the least-educated, oldest, and perhaps most politically alienated elements of the populace. Deindustrialization and now the apparent abandonment of the countryside are thus promoting political backlash—against immigrants, non-whites, non-Christians, and other “others”—on a potent scale. Indeed, the split between the city and the country may be one of the most important fractures dividing advanced societies in the years to come. The relationship between the rise of the tech economy and the urban-rural split is therefore a key topic for scholarly research.

Given the large consequences both for the companies themselves and for the communities under consideration, decisions about where companies locate their operations have also emerged as controversial issues. The most notable recent example concerned Amazon’s search for its second headquarters. After a months-long search, the enormous company initially chose to divide its new secondary headquarters between Virginia and New York. Then, as opposition flared, Amazon suddenly withdrew from the New York deal. A major bone of contention was the matter of subsidies provided by the City of New York to try to lure the company to the city. As has long been true of industrial location decisions, opponents argued that these subventions encouraged a “race to the bottom” among the cities competing for the prize. Noting rapidly rising housing prices in tech-heavy San Francisco and Seattle, activists and politicians voiced familiar concerns about gentrification and the displacement of longtime, lower-income residents. Others criticized Amazon’s sale of facial recognition technology to ICE (Immigration and Customs Enforcement) as well as its poor treatment of warehouse workers. On the other side, many people in the vicinity of the proposed headquarters, including many residents of the neighborhood’s public housing, were eager to have the expected better-than-average jobs that beckoned and saw the impending move as promising improved urban spaces, amenities, and transportation. Ultimately, despite its initial decision to include New York, Amazon simply reversed itself and took the entire shop to Virginia when it felt that the city was not sufficiently grateful to have been chosen. Controversies such as this one, which are hardly new in the annals of industrial location decision-making, will continue to arise wherever employers propose to bring new high-tech jobs. Accordingly, scholars in coming years must attend to the pros and cons of these corporate decisions, which may be hugely consequential for the well-being of individuals, neighborhoods, and entire cities.

As the major American tech companies have become giant, trillion-dollar enterprises, their relationship with government has grown increasingly fraught. Some doubt that the state can regulate these enormous firms; it has recently been suggested by a knowledgeable commentator, for example, that Facebook is simply “too big to fight” (Warzel 2019), and that in any case government is too slow-moving and ill-informed about the tech world to regulate it effectively. Others argue that the central mode of business regulation in the United States—antitrust law—is out-of-date and has little relevance to online products and services (Wu 2018). American antitrust law is premised on the idea that businesses run afoul of the law when they fail to give consumers the lowest prices, but that is not how “surveillance capitalism” works. The data generated by users, not payment for services, is the chief currency of surveillance capitalism, and that data is gathered, packaged, and marketed by the platforms that collect it. The data is sold chiefly to advertisers, who can then more precisely target their ads at people who will be more likely to absorb and act on the message of the ad in question (Zuboff 2019).

Meanwhile, voices are being raised to assert that the data actually belongs to those who “produce” it—that is, the users of devices and platforms—and that they should have a right to determine whether their data is vacuumed up by the digital companies. Indeed, some even argue that users should be compensated for producing that data through the simple act of using their devices. So far, there is little regulation of these practices in the United States, other than a formulaic and largely unreadable request that users consent to the collection of their data. If users withhold their consent, they are free not to use the app or platform—but of course this hardly constitutes a “choice.” Not satisfied with this situation, the European Union has adopted more vigorous legislation in the form of the General Data Protection Regulation (GDPR). The GDPR affords greater data protection to users, who are given the opportunity to decide whether they wish to have their data collected; if not, they can use a stripped-down version of the app or website. The State of California has since adopted a similarly more restrictive data protection law, the California Consumer Privacy Act, which allows users to opt out of the sale of their data to third parties.
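
For readers unfamiliar with how such consent requirements translate into software, the following is a schematic sketch, in Python, of GDPR-style consent gating. The class and method names are invented for illustration and do not correspond to any real platform’s code; the point is simply that data collection becomes conditional on an explicit opt-in while the service continues to function either way.

```python
# Schematic sketch of consent-gated data collection: analytics events are
# recorded only if the user has opted in; the service still works either way.
# All names here are illustrative, not a real API.

class AnalyticsClient:
    def __init__(self, user_consented: bool):
        self.user_consented = user_consented
        self.events = []

    def record(self, event: str) -> None:
        # Collect data only with explicit consent.
        if self.user_consented:
            self.events.append(event)

tracker = AnalyticsClient(user_consented=False)  # the user declined tracking
tracker.record("viewed_product_page")
print(tracker.events)  # prints []: nothing is collected without consent
```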

In an attempt to update government’s orientation toward this part of the economy, former tech entrepreneur and Democratic presidential aspirant Andrew Yang has proposed that we need a “Department of the Attention Economy” (Warzel 2019) to deal with the issues generated by new online technologies. Such an agency would surely bring greater scrutiny to the tech companies’ relentless efforts to grab and to keep hold of our attention. Clearly, the struggle over the regulation of data collection and use by online platforms and service providers will be a crucial site of contestation and hence of research in years to come, at least insofar as user data continues to be the life-blood of surveillance capitalism. Under what conditions do people find these technologies well-regulated or the opposite? Will the heretofore largely unregulated character of these industries in the United States seem, down the road, a remarkable “Wild West” period, incomprehensible to those who come later? Why do Europeans seem to be more inclined to protect their consumers’ data? Is this proclivity really a product of their historic experience of totalitarianism, which lies increasingly far back in the past and is no longer part of most people’s living memory?

Caught among the government, the tech titans, and the user-consumers are the employees of big tech. Here lies one of the crucial elements of a sociological agenda concerning the tech world. Our attention is perhaps most frequently trained on the outsized fortunes captured by the big winners of the tech revolution, while the foot-soldiers of the transformation are less heralded or noticed—at least until they rise up in anger. In recent months and years, as big tech has faced a growing backlash against a variety of its practices, reports have appeared of unionization drives, a walk-out at Google over claims of sexual harassment, petitions criticizing the tech companies’ sale of surveillance and facial recognition equipment to law enforcement agencies and authoritarian states, and employee concerns about military contracting. Corporate leaders respond to these claims by insisting that they will not sell surveillance technology to countries where it will be used in a manner inconsistent with the protection of human rights, or that the company in question supports American values and hence regards it as a patriotic duty to ensure that our armed forces have the best equipment available with which to defend those values. This argument presupposes, however, that the only thing the American military does is “defend American values,” a claim that many would surely challenge. Moreover, the tech companies, like most businesses, tend to be opposed to unionization (Broockman et al. 2019), and are likely to continue to insist—as Intel did from the very beginning—that unionization is not a productive choice for tech company operations. (Intel promoted the distribution of stock options to employees both to discourage unionization efforts and to provide an incentive for employees to promote the good of the company [Berlin 2005, pp. 115–116, 235–238].) Electronics companies in Silicon Valley were also among the first businesses to replace full-time workers with temporary ones, at all levels (Hyman 2018). One striking feature of the employee opposition to certain kinds of business endeavor, such as military contracting, is that these employees are resisting a line of work that might well keep them in their jobs—a possible case of opposing their own material interests in favor of their “ideal interests.” It also suggests, however, a certain blindness on their part to the fact that Google, for instance, would not exist were it not for military investment in the technologies on which it has thrived (see above on the government sponsorship of the research that produced GPS, the Internet, etc.).

The fact that both the tech elite and their employees tend to be highly educated, if perhaps not all equally so, is a reminder of the extent to which labor-management relations in the tech world involve two segments of what Alvin Gouldner called the “New Class” of professionals and scientific-technical intellectuals. In this scenario, the “old moneyed class” has not disappeared but is not the immediate antagonist of the rising “New Class.” Rather, the New Class is often intimately connected to those in the more traditional bastions of the capitalist economy, such as finance and law. Yet the New Class has now grown large and prominent enough to have factions that are at war with one another. Much of the “proletariat” in tech industries is located overseas, especially in China, and may be constrained by unfriendly local laws as much as by employers’ demands. An analysis of labor relations in tech will be crucial to understanding the field, but those relations cannot be understood along the traditional lines of class analysis pitting a property-less proletariat against a property-owning bourgeoisie. Many of the employees of the tech giants own stock and thus hold a stake in the success of the enterprise, and hence share the interests of their bosses, although this is certainly not true of many of those around the world who produce parts of the supply chain that goes into computers, iPhones, tablets, and other devices. Nor is it likely to be true of gig workers or of the “ghost workers” (Gray and Suri 2019) who monitor content on social media platforms to make sure that company policies are not violated. In all events, labor-management relations must surely be a key element of the sociological agenda for understanding the tech era.

The consumption side: The tech transformation of everyday life

The products and services created by the new technologies are revolutionizing everyday life. One must now check more closely than before if one doubts that the person walking down the street talking to herself is of sound mind; it usually turns out that the person is merely chatting invisibly with another person, perhaps half-way around the globe. People are more or less obsessed with their devices, and these are now small enough to put in your pocket or carry in the palm of your hand, despite having far greater computing capacity than the room-sized computers of a few short decades ago. Our devices can now hold all kinds of “apps” that permit us to watch or produce video content, listen to or create podcasts, talk on the phone, do our taxes, locate a path to our destination, and much else besides. A “smartphone” is linked to the Internet, so that many more activities are possible than would be the case with a mere “telephone.” Indeed, Chief Justice of the Supreme Court John Roberts recognized this fact in a decision finding that law enforcement authorities need to get a warrant to search someone’s cell phone. The government’s claim that searching a cell phone is “materially indistinguishable” from searches of ordinary material items, he wrote, is “like saying a ride on horseback is materially indistinguishable from a flight to the moon. Both are ways of getting from Point A to Point B but little else justifie[s] lumping them together” (Riley v. California 2014, p. 17). The Court rejected the government’s argument, as well it should have. These devices now contain vast amounts of information about the user, which an old-fashioned telephone simply could not have held.

In the meantime, more and more products and environments are made “smart” with sensors tracking temperature and humidity, volume levels, alarms and reminders, and the like. We enjoy the convenience of smart phones, smart toasters, smart houses, smart refrigerators, and so on. Alexa, Siri, and other smart assistants dial the phone, look up (“Google”) information we seek, turn on the lights and the coffee maker, and do other automated or voice-activated tasks. Smart devices will soon automatically order toilet paper, breakfast cereal, and other necessities on the basis of their analysis of data from one’s previous purchases. More futuristically, we are told, our cars will be driverless and much safer, reducing the rather serious risks of death and injury associated with often-heedless human operation. In the not-too-distant future, few are likely to own their own cars in any case, as cars will be more or less immediately available from commercial ride-hailing services that will assume the burdens and costs of ownership instead. Finally, we can take advantage of the growing Internet of Things (IoT), whereby our devices and appliances are “online” and take information from the Internet that shapes the way they function. In short, everyday life is increasingly “virtual,” remotely operated, or programmed at a distance from the device itself, a fact that has made life during the coronavirus pandemic much more connected than it would otherwise have been. Still, our relationship to everyday things grows increasingly abstract and virtual rather than “analog” and tactile. How does this affect our orientation to the physical world or to nature?
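
To make the predictive-ordering mechanism concrete, the following is a minimal illustrative sketch, in Python, of how a “smart” device might estimate when a household staple will run out simply by averaging the intervals between past purchases. It is not any vendor’s actual system; the products, dates, and function names are invented for illustration.

```python
from datetime import date, timedelta

# Invented purchase history for one household, used purely for illustration.
purchase_history = {
    "toilet paper": [date(2020, 1, 3), date(2020, 2, 1), date(2020, 3, 2)],
    "breakfast cereal": [date(2020, 1, 10), date(2020, 1, 24), date(2020, 2, 7)],
}

def predict_next_order(purchases):
    """Estimate the next reorder date from the average gap between past purchases."""
    gaps = [(later - earlier).days for earlier, later in zip(purchases, purchases[1:])]
    average_gap = sum(gaps) / len(gaps)
    return purchases[-1] + timedelta(days=round(average_gap))

for item, purchases in purchase_history.items():
    print(item, "will likely be needed around", predict_next_order(purchases))
```

Commercial systems presumably combine many more signals than this (browsing behavior, household size, seasonality), but the underlying logic of extrapolating from past behavior is the same.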

Our relations with others are also modified by the new digital technologies. People communicate more and more by written means (text) rather than by phone—for a century the device of choice for annihilating interpersonal distance. Indeed, they may “text” (now a verb) each other even when they are in each other’s immediate presence. Social psychologist Sherry Turkle dismisses this behavior as being “alone together” (Turkle 2011). Less censorious commentators, such as danah boyd, emphasize the similarity and continuity of these devices with their predecessors such as the telephone, and, at least when it comes to teenagers, advise calm and restraint in judging youths’ use of their devices. As someone who seems to have found them a valuable means of escape from her own teenage boredom, boyd insists on the emancipatory aspects of the new communication technologies and urges us to take them in stride as a necessary option for young people straining to realize their own autonomy (boyd 2014). Others such as Claude Fischer suggest that Americans surveyed between the 1970s and the 2000s tend to regard their relationships as little altered, raising doubts about claims that new technologies are leading to increased isolation and anxiety (Fischer 2011). Given the traditional sociological concern with the social foundations of solidarity, the consequences of the new digital technologies for social relationships will surely continue to be an important subject of scholarly concern for a long time to come.

Similar issues arise with regard to so-called “social media.” Such facilities allow people to communicate with larger numbers of people rather than one-to-one, as would be characteristic of a letter or, by extension, an email targeted at only one person. Communications on social media are “posted” to the electronic equivalent of bulletin boards, where any passer-by—at least those allowed entry by the person posting—can look at them. Such communications are thus generally somewhere between a letter and the indiscriminate “broadcasting” associated with radio and television, although of course people may want their posts to reach as many “readers” on, say, Facebook or Twitter as they possibly can. The posting of such material may “go viral,” garnering “likes” and commentary from many strangers who find the post appealing, appalling, or otherwise affecting. This is in part because the algorithms that drive Facebook are geared to create a “filter bubble”: that is, user data is analyzed with the aim of understanding what a given individual tends to engage with, and the results of that analysis are deployed to keep the user constantly coming back for more content that further confirms that user’s preferences. There is a reason these new communication platforms are said to be part of a new “attention economy.” As previously noted, the data regarding users’ attention is sold to advertisers, who target their ads at those whose attention is thought likely to be captured by certain kinds of products and services. The problem is that the constant recurrence to the familiar and the previously “liked” creates a field within which the user’s attention tends to become confined. Hence, whereas the original vision of the Internet foresaw a “democratization” of information—its accessibility to everyone—social media such as Facebook tend to promote tribalism and extremism because the algorithms on which they run tend to reward content that reinforces the user’s pre-existing proclivities. Given the normal difficulties that people have transcending their innate biases (Kahneman 2011), a media ecosystem such as this is unavoidably antagonistic to robust democratic debate and dialog. Mark Zuckerberg’s reveries about “bringing the world closer together” notwithstanding, Facebook has stoked the fires of nationalism and ethnic hatred in a variety of places around the world where it is often regarded as “the news” rather than what it actually is—a highly selective, individually curated series of messages accompanied by ads that may or may not be efforts by malevolent forces to influence political behavior (Vaidhyanathan 2018).
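
The “filter bubble” logic described above can be illustrated with a deliberately simplified sketch: candidate posts are ranked by how closely their topics match what a user has engaged with before, so familiar material keeps floating to the top. This is a toy model of the general principle, not Facebook’s proprietary ranking system; the topics and engagement counts are invented.

```python
# Toy "filter bubble": rank candidate posts by the user's past engagement
# with their topics. Illustrative only; real ranking systems use vastly
# more signals, but the self-reinforcing logic is similar.

user_engagement = {"gardening": 12, "local politics": 7, "astronomy": 1}

candidate_posts = [
    {"id": 1, "topics": ["gardening", "astronomy"]},
    {"id": 2, "topics": ["local politics"]},
    {"id": 3, "topics": ["economics"]},  # a topic the user has never engaged with
]

def predicted_engagement(post):
    # Sum the user's past engagement with each of the post's topics.
    return sum(user_engagement.get(topic, 0) for topic in post["topics"])

feed = sorted(candidate_posts, key=predicted_engagement, reverse=True)
print([post["id"] for post in feed])  # [1, 2, 3]: the familiar crowds out the new
```

Because unfamiliar content starts with a score of zero, it rarely surfaces, which is precisely the confinement of attention the paragraph describes.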

Still, Facebook does provide a medium through which families and other groups can communicate easily across large spaces and hence may contribute to sustaining networks of solidarity. It has also made it easier for political activists to find, connect, and plan with each other from a distance, much reducing the “transaction costs” of such activism. At the same time, like “checkbook activism,” that facilitated by the Internet may be “thinner” than activism arising from old-fashioned, gumshoe organizing. Online activism, in other words, has the vice of its virtues: it makes it easy to connect with like-minded souls, but those connections may or may not be deep, lasting, or serious. Overall, such connections may be more ephemeral than those generated by in-person relationships. Moreover, much of the content conveyed via social media is unverifiable and of questionable accuracy, which may contribute to the broader erosion of public trust (Tufekci 2017). More “democracy” may also mean that conspiratorial and fringe views get more attention than they otherwise might have done. For example, the historian Kathleen Belew has observed that the white nationalists about whom she has written have been taking advantage of the new digital technologies since the 1980s, long before “social media” came into widespread use. These technologies have contributed to their organizing successes, especially since the White House came to be occupied by a figure who would refrain from criticizing them and sometimes even re-tweet their messages. Trump’s tweeting has pushed his platform of choice to decide whether it is a “publisher” with responsibility for content or merely a bulletin board for posting ephemera. Twitter decided it had to call attention to Trump’s mendacity, whereas Facebook has opted for a hands-off policy that eschews what some (not least Trump himself) regard as “censorship.” Far from bringing the world together, then, it has come to be widely thought that social media are an important source of the fractures in contemporary American and global political life. The question for scholars will be to determine which of these social media chiefly promote: anti-democratic tribalism, improved organizing tools, or enhanced opportunities for communication and solidarity.

Similarly, the new digital technologies generate questions about the relationship between them and the self. The constant exposure of the self to others, whether textually or photographically, inspires a preoccupation with self-presentation that would surely engage a figure such as Erving Goffman. The importance of self-presentation in the tech era goes well beyond interpersonal interaction, however. One’s biographical ephemera and one’s professional qualifications can be made easily accessible to all (or at least to many) who wish to see them, and this is concerning in an age in which the individual is increasingly on his or her own in a sea of constant self-promotion. “Market fundamentalism” scarcely begins to describe the sense that persons are permanently undergoing a job interview, fashioning what they hope will be appealing online selves and perhaps becoming Instagram-famous, which in turn may (or may not) lead to lucrative opportunities. Alternatively, the non-stop scrutiny facilitated by the new technologies can lead to disaster: the “Nosedive” episode of the British TV series “Black Mirror” shows a dystopian world in which people are rating one another in every interaction, with catastrophic consequences for the anti-heroine of the episode. Combined with the rise of a “gig economy” and widespread fears of the elimination of various occupations, the spreading sense of precarity and fragility seems likely to cultivate an orientation to the world distinctly different from that associated with the mid-twentieth century ethos of career-long, seniority-rewarding employment culminating in a defined benefit pension. Clearly the effects of the new technologies on the “presentation of self in everyday life”—and in the ever-present world of work—must constitute a major concern for sociologists exploring these issues.

Artificial intelligence, the guts of the attention economy and of many of the new digital technologies, represents a powerful conundrum for social analysts. It is the technology that makes possible many of the automatic processes in today’s world, yet it is regarded with considerable skepticism and suspicion even by many of those involved in developing the tools. AI has advanced to include such techniques as machine learning, neural networks, and other modes of automating the process of analyzing enormous amounts of data and changing a device’s course on the basis of that analysis. It is techniques such as these that allowed Google’s AlphaGo to defeat the world’s best player of what may be the world’s most complex board game, Go, in 2017. The computer is programmed to analyze patterns in the data it is fed and to adjust its decision-making accordingly. This is the technology that will make self-driving cars possible. It is also being used in surveillance technology to identify criminals (and others). Some argue, however, that such technology is merely going to replicate biases in the data from which the computers in question will be conducting their analyses. If there is an oversampling of African-American males in a database, for example, that may skew the analyses relying on the database. It has also been found that facial recognition systems are markedly less reliable at “reading” the faces of dark-skinned persons and of women. If that is the case, the possibility of false positive matches grows accordingly. The use of artificial intelligence in law enforcement contexts may then repeat or continue previously existing biases in that realm (Noble 2018). “Algorithmic injustice” has come to be a target of protesters in the massive demonstrations that arose in the aftermath of the brutal murder of George Floyd. The relationship between artificial intelligence and social justice will clearly constitute a crucial area for exploration in the years to come. One element of that relationship will surely be the proportion of black and Latinx people in the jobs that produce the algorithms in question, which at present is lamentably small.
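
One way to study “algorithmic injustice” empirically is the kind of disparity audit implied above: comparing false-positive rates across demographic groups. The sketch below, with invented records and generic group labels, shows only the arithmetic of such an audit; real audits involve far larger datasets and much more careful measurement.

```python
from collections import defaultdict

# Invented audit records: (demographic group, truly a match?, system flagged a match?).
records = [
    ("group_a", False, False), ("group_a", False, False), ("group_a", False, True),
    ("group_b", False, True),  ("group_b", False, True),  ("group_b", False, False),
]

def false_positive_rates(records):
    """Share of true non-matches that the system wrongly flagged, per group."""
    flagged = defaultdict(int)
    non_matches = defaultdict(int)
    for group, is_match, predicted_match in records:
        if not is_match:
            non_matches[group] += 1
            if predicted_match:
                flagged[group] += 1
    return {group: flagged[group] / non_matches[group] for group in non_matches}

print(false_positive_rates(records))  # group_a: ~0.33, group_b: ~0.67
```

A system with unequal false-positive rates exposes members of the disadvantaged group to more wrongful stops or arrests even if its overall accuracy looks acceptable.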

Many questions thus arise for scholars as a result of computers’ growing capacity for automatic (“autonomous”) operation. What role will there be for humans if technology can figure out the answers to questions automatically? What if technology can talk to older patients in need of companionship? Would people be satisfied to interact with robots if no one else is available (as is increasingly the case in rapidly aging Japan)? There is little reason to ask such questions in the abstract; these capabilities already exist and are being put to use to accompany the homebound, to remind the forgetful to take their pills, and to order food for invalids. We need to know a lot more about the way people and intelligent machines interact if we are going to assess this matter successfully. This is the realm of “digital ethnography,” which will surely attract the attention of sociologists operating in this mode. However distasteful these human-robot interactions may seem in certain contexts, the likelihood is that they are going to continue to grow, not least as populations around the world age—in part as a result of enhancements in life expectancy resulting from advances in medical and biotechnologies.

But can artificial intelligence replicate human intelligence—or, worse, will an “artificial general intelligence” (AGI, sometimes called “superintelligence”) with the equivalent of (or something superior to) human consciousness enslave us? Will we, as the inventor and futurist Ray Kurzweil once suggested, eventually upload our brains to a computer, thus creating “the Singularity” and finding ourselves subordinated to computer overlords (Kurzweil 2005)? These kinds of issues may be fantasy; many computer aficionados think the idea of an AGI is over-hyped. Yet venture capitalist and tech billionaire Sam Altman has shifted his efforts from getting rich to achieving a “safe” AGI (OpenAI), while Bill Gates, Elon Musk, and many less well-known observers are concerned about the possibility that artificial intelligence may eventually escape our control and have warned of the need for safeguards (The Hastings Center n.d.). The idea that computers will develop human-like consciousness is much doubted; the philosopher John Searle argued against it some 30 years ago and continues to do so (Markoff 2015, pp. 180–182). Perhaps what is most notable is the fact that the development of artificial intelligence has been accompanied by as much hand-wringing as the development of nuclear weapons. The task for social scientists will be to sort out how seriously to take these claims, what the debate is about, and how these technologies will have to be regulated in order for technology to develop in keeping with human aims rather than those of other interested parties (Jasanoff 2016).

A more tangible area in which the development of artificial intelligence is causing great concern is, of course, the realm of employment and work. Talk of driverless trucks portends redundancy for some 1.5 million people (mostly men) who make their living driving commercially. Taxi, Uber, and Lyft drivers would face a similar fate. It is thus hardly surprising that many people in many occupations feel uncertainty about their future employment prospects. There are also predictions that many jobs seemingly immune to automation, such as those of lawyers, journalists, and surgeons, will also be affected. Then there are those who have become part of the “gig economy,” still a relatively marginal aspect of overall employment but a development exacerbating and accelerating the demise of the mid-twentieth century “career” a la IBM or General Motors. Yet economist David Autor responds to the concerns about a collapse of the job market by asking, “Why are there still so many jobs?” (Autor 2015). His answer, like that of others, is that some occupations disappear without that necessarily spelling unemployment for their occupants. When ATMs displaced tellers, he notes, those tellers were put to work in the bank doing other, more profitable things than dispensing cash, such as selling mortgages. The history of capitalism, it is said, is a history of displacement and re-employment, often at a higher level of skill and complexity. This may well be the case this time around as well. Yet in the meantime, there is much public discussion of the precariousness of work, the potential loss of employment, and the deterioration of once-stable working lives. The opioid crisis in the United States, which has claimed tens of thousands of lives over the past two decades or so in what Case and Deaton have called “deaths of despair” (Case and Deaton 2020), may have been a harbinger of this transformation.

Others have seen in these technological developments an optimistic outcome, however. More than 50 years ago, the sociologist Robert Blauner wrote of his experience in a factory near Berkeley that “the shift from skill to responsibility is the most important trend in the evolution of blue-collar work” (Blauner 1964, p. 169), and saw this as a positive step away from unskilled drudgery and toward more rewarding work. Many theorists of the “new class” and other post-industrial scenarios (such as the now largely forgotten French thinker André Gorz) foretold a future without a working class in the traditional industrial sense, one that would benefit from the labor-saving technologies developed over the course of the twentieth century. Indeed, one might argue that artificial intelligence and new automated technologies have ushered in the “realm of freedom” of which Marx spoke in his analyses of capitalism and his vision of communism. If the problem of scarcity is resolved, people can stop wasting their time obtaining their daily necessities and instead begin to achieve their full capacity as many-sided human beings capable of great achievements. Aaron Bastani recently dubbed this scenario “fully automated luxury communism.” The hitch is whether or not the political will exists to transform the utopian potentials of drudgery-reducing, artificially intelligent processes into the realization of human freedom, understood as a release from material want; his optimistic projections about solar energy will have to pan out as well (Bastani 2019). To put it differently, the social organization of production must be transformed into a system that redistributes “from each according to his ability, to each according to his needs.” Only under these circumstances is the realm of freedom attainable, according to Marx (and Bastani).

One policy that might make the realm of freedom plausible, and might relieve workers of the anxiety connected to the purported looming elimination of their jobs, is a universal basic income (UBI). Long a subject of discussion in left-wing circles (van Parijs et al. 2001; van Parijs and Vanderborght 2017), where the “de-commodification” of work was the order of the day, it has become an acceptable topic of conversation among Democratic presidential hopefuls and Silicon Valley moguls worried that the unemployed masses will soon be chasing the tech elite with “pitchforks” (Hanauer 2014) because their jobs and livelihoods have vanished. Some believe this is why the Silicon Valley venture capital firm Y Combinator has undertaken a UBI experiment in nearby Oakland, but this would not explain why the policy has been put on the ballot in Switzerland and test-driven in Finland (in both cases without success). The debate over the pluses and minuses of a basic income guarantee gained further traction in the early days of the Covid-19 pandemic, as it appeared that governments in the wealthy parts of the world were effectively providing an income floor to compensate for lost incomes from employment. The idea of a universal basic income requires much broader airing, as do the other utopian possibilities opened up by new artificially intelligent technologies and by the more progressive responses to the coronavirus pandemic.

Another worrisome aspect of the advance of artificial intelligence has to do with the conduct of warfare. Beginning with the emergence of air power in the early twentieth century, the bloody business of war has shifted from a matter of face-to-face “ferocity” to a matter of callous “indifference” to the far-off victims of violence (Collins 1974). With the development of autonomous weapons, the possibility has emerged for human beings not to be involved in the process of exercising military violence at all. Weapons may have varying levels of autonomy, but the reality now is that weapons can be programmed in such a way that no soldier need take part in their path to violence. To obviate that possibility, the Obama administration adopted a policy requiring that at least one actual person must participate in the chain of decisions leading to the use of lethal military force (Scharre 2018). What is the morality of automated military violence?

At the same time that war has grown more subject to automation, the use of force has been increasingly ringed around with legal, political, and ethical constraints that sharply circumscribe the use even of precision weapons. This trend is well illustrated by the film “Eye in the Sky,” which portrays the debates among high-level British and American officials trying to determine whether they should “take out” two suicide bombers somewhere in Kenya. After a debate among a transatlantic team of decision-makers, a drone missile strike is called in, but only once a little girl in harm’s way can be conclusively said to face only a 45% chance of getting killed. Perhaps inevitably, things go wrong and the little girl is killed. The contradictory requirements of the political adviser, the legal adjutant, and the military officer, as well as of the Americans and the Brits undertaking this joint operation, caused delays that may have been the reason for her death. In short, the development of precision, autonomous weapons has gone hand-in-hand with an intensified legalism brought about in part by the rise of human rights as an influential idea. It should also be recalled, however, that in theory robots and automated weapons may perform extremely well and save lives. But the “fog of war” may also lead to malfunctions and errors, just as it does in the case of human soldiers. Automated weapons have their advantages, but they have their drawbacks as well. How are these new automated weapons shaping the experience of soldiers—and their numbers—in the contemporary military landscape?

As a result of all the new wealth sloshing around successful tech entrepreneurs, the rise of the new tech economy has also generated striking changes in a once-sleepy if well-heeled domain—that of philanthropy. At the very top, for such figures as Bill Gates, Laurene Powell Jobs (Steve Jobs’s widow), and Jeff Bezos (and his ex-wife), the titans of tech face a very first-world problem: how to give away all the money they have made. One difficulty is that the hard-charging businessmen (the richest 100 people in tech are almost all men) who created the biggest fortunes tended in the early part of their lives to focus maniacally on building their businesses and making money; for most of them, the task of giving away what they had amassed would only come later. Once they slow down enough to realize how much money they have, however, they typically begin to think about charitable giving. Yet, as analytically oriented data wonks, they are not generally inclined simply to donate to old-school charities such as churches and humane societies. They tend to think critically about how they should give their money away; having grown rich by “disrupting” previous ways of doing things, they often say that they want their philanthropy to be “transformative.” They seek out problems where their charitable contributions can have a big impact—seeking a cure for cancer, promoting charter schools as a solution to the alleged ills of American public education, finding new approaches to understanding the human brain. With all the money they have, they also tend to think big—very big—when making philanthropic contributions. This has exposed the new philanthropists to criticism from those who see these donors as gaining undue influence in a putatively democratic society. For all his contributions to research on and the design of educational policy, Bill Gates has been dubbed “the nation’s superintendent of schools” by the education analyst Diane Ravitch (Ravitch 2006). But he was not, of course, elected to that office, for in fact there is none. However well-intentioned, donors with as much money as Gates and others can shape the school system in ways simply unimaginable to the ordinary voter. The same is true for medicine, the environment, the arts, and the other areas into which the tech elite tend to donate their money. Perhaps surprisingly, they do not typically give money to the poor or to organizations seeking to alleviate poverty (Broockman et al. 2019).

Another (decidedly “First World”) problem with giving away these enormous fortunes is that, since they are generally not stuffed in anyone’s mattress but rather invested in profit-making vehicles, they tend to grow faster than the money can be given away. This conundrum has raised questions about how the tech moguls ended up with so much money in the first place. Here questions about the taxation of the rich rear their heads, questions that have been aired with regard to the wealthy more generally in recent years (Saez and Zucman 2019). Why are income tax rates not more progressive, taxing more heavily those at the top of the income scale who have captured a vastly disproportionate share of recent economic growth? Why are capital gains taxes so low? Should there perhaps be a wealth tax, as Democratic presidential candidate Elizabeth Warren has suggested? Such questions are a reminder of the extent to which tax rates have fallen since the days of the 90% top marginal rates of the Eisenhower era. Since the early 1980s, with the exception of modest upticks during the Clinton and Obama years, the highest marginal tax rates in the United States have fallen steadily and are now, in the aftermath of the Trump tax changes, less than half those of the 1950s. Above all, taxes have become more regressive as payroll and sales taxes have grown as a percentage of the incomes of low-wage workers (Saez and Zucman 2019, pp. 15–18).
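Because the argument turns on top marginal rates, it may be worth recalling that a marginal rate applies only to the slice of income above its threshold, so the effective rate anyone pays is lower than the top statutory rate. The following is a minimal illustrative sketch, written in Python with purely hypothetical brackets (assumptions for exposition, not any year’s actual schedule):

```python
# A purely illustrative calculation of marginal vs. effective income tax rates.
# The brackets below are hypothetical assumptions chosen to show the mechanics;
# they are not the actual rate schedule of any year.

HYPOTHETICAL_BRACKETS = [
    (0, 50_000, 0.10),         # 10% on the first $50,000
    (50_000, 200_000, 0.30),   # 30% on income between $50,000 and $200,000
    (200_000, None, 0.90),     # 90% on income above $200,000
]

def tax_owed(income: float) -> float:
    """Apply each marginal rate only to the slice of income falling in its bracket."""
    owed = 0.0
    for lower, upper, rate in HYPOTHETICAL_BRACKETS:
        if income <= lower:
            break
        top_of_slice = income if upper is None else min(income, upper)
        owed += (top_of_slice - lower) * rate
    return owed

for income in (100_000, 1_000_000, 10_000_000):
    effective_rate = tax_owed(income) / income
    print(f"income ${income:>12,}: effective rate {effective_rate:.1%}")
```

Under this made-up schedule, incomes of $100,000, $1 million, and $10 million would face effective rates of roughly 20%, 77%, and 89%; the effective burden approaches, but never reaches, the 90% top marginal rate.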

Also promoting philanthropic giving is the charitable tax deduction, which in effect requires taxpayers to subsidize the “plutocratic voices” that speak whenever wealthy philanthropists (and, for that matter, all charitable givers) make contributions. Beyond the tax write-off, the legal form of the modern charitable foundation was enshrined in law in the early twentieth century, when John D. Rockefeller sought to create a foundation to spend some of the money he had accumulated while running Standard Oil. The idea of a foundation that would exist in perpetuity was at the time the subject of fierce controversy and has come under fire again as part of the broader criticism of inequality in recent years (MacFarquhar 2015). Why a dead person should be able, by laying out a charter that determines the activities of a foundation, to control for generations to come the assets that he or she left behind is indeed a puzzle. The elimination of such control from beyond the grave was one of the reasons for the abolition of “entailed estates” at the time of the American Revolution (Beckert 2007), just as King Edward I had established the Statutes of Mortmain (“dead hand”) in the thirteenth century to forestall the passage of a deceased subject’s lands into Church ownership (where they would escape taxation). In large part because of the perpetuity feature, Stanford political scientist Rob Reich has argued that the modern philanthropic foundation in the United States “is perhaps the most unaccountable, nontransparent, peculiar institutional form we have in a democratic society” (Reich 2018, p. 144). Tech philanthropy should be a major focus of future research, as its impact on society is growing. Some of the tech philanthropists can outspend the American government on the things they want to do, and the Bill and Melinda Gates Foundation is the second-largest funder of the World Health Organization after the US government itself. It is not at all clear that these arrangements comport well with democratic modes of decision-making.

The critique of the philanthropic activities of the tech elite is part of a broader contemporary attack on the privileges of wealth. After all, the oft-repeated notion that the tech moguls “want to make the world a better place” is not easy to swallow when they are getting so fabulously rich in the process. Given that the tech elite tend, at least rhetorically, to be enthusiastic supporters of democratic political organization and the meritocratic distribution of social rewards, they are in an awkward position. Their wealth unavoidably affords them influence far beyond that of ordinary people, and it also provides them with ways of removing their own children from any reasonable standard of “meritocratic” competition: private (or at least lavishly subsidized public) schools, extracurricular activities, test prep courses, exotic vacations, fancy summer camp experiences, and the like.

Yet one finds among commentators a range of opinions regarding the notion that the tech moguls wish to “make the world a better place.” Perhaps the most accepting view is that of Steven Shapin, whose research on modern technoscientific entrepreneurs found professions of concern for the community, and action on its behalf, to be more common among them than in an average academic history or social science department (Shapin 2008, p. 312). Viewing this group from a more sociological than social-psychological perspective, Alvin Gouldner argued in his treatise on the “new class” that the humanistic and scientific-technical elite represented society’s best hope in its contest with the old-money bourgeoisie, a contest that he thought had come to replace the one between bourgeoisie and proletariat with the rise of post-industrial society (Gouldner 1979). Meanwhile, the philanthropy analyst David Callahan argues that the tech philanthropists constitute a wing of the new “liberal rich,” a growing segment of the wealthy whose values are more liberal than those of rich people associated with such businesses as resource extraction, real estate, and retail sales (Callahan 2010). Most pessimistically, the journalist Anand Giridharadas sees little but hypocrisy in the claims about making the world a better place: in his jaundiced view, the rich are happy to support reforms so long as these do not undermine or threaten their own privileged positions (Giridharadas 2018). This is of course rather harsh, given that the liberal rich could be, well, more conservative, as so many of their stratum have traditionally been. Still, the value of their philanthropic donations must be weighed against their contribution to the ongoing shrinkage of public engagement, a shrinkage that philanthropic largesse may deepen by helping to convince ordinary people that their own efforts to improve society are insignificant compared to those of the wealthy.

Evaluation of tech’s contributions to human well-being should be an important area of future scholarship on the tech world. Such an evaluation must include the labor-saving, up-skilling, convenience-promoting, and productivity-enhancing (and hence goods-cheapening) aspects of the industry’s innovations. We have already noted the utopian possibilities associated with the tech era; if one combines an analysis of the cost of life’s necessities relative to, say, 100 years ago with an optimistic assessment of the innovations currently being made possible by artificial intelligence, one might well imagine that, under the right political circumstances, we are approaching a historical juncture akin to the one that Marx had in mind when he discussed “communism.”

But there are many dystopian possibilities as well. A form of capitalism whose business model relies on surveillance, as Shoshana Zuboff has described, portends a society in which Marx’s exploitative capitalism has combined with Foucault’s panoptic nightmare in a world from which there is truly no escape. The prospect of stark and growing inequalities based on returns to capital that systematically outstrip those to labor, as Thomas Piketty has argued is occurring (Piketty 2014), constitutes a profoundly regressive scenario. In this vision, the few who can code, and thus make themselves useful in the digital economy, lord it over the poorly skilled, who provide their overlords with food and other necessities delivered by precariously employed gig workers. This scenario is especially worrisome in a context in which the political scientist Martin Gilens has found that, except in occasional circumstances, politicians effectively pay little or no attention to the preferences of the non-wealthy majority of their constituents (Gilens 2012). Pervasive and egregious inequalities are likely to solidify the one-sided distribution of political power, locking ordinary people into a nominally meritocratic oligarchy in which they cannot ascend. Social media, and the trolls, bots, and other gremlins that distort reality and distract and disengage users, are having anything but the democratizing effects originally prophesied for them. The notion of an Artificial General Intelligence that overtakes human intelligence and subordinates human beings to ends that escape the control of its makers has obvious resonances with the Golem of Jewish folklore. The possibility of manipulating the genomes of organisms with new gene-editing techniques such as CRISPR, and the prospect of cloning entire beings, recall the monster of Mary Shelley’s Frankenstein. If much of this sounds like a combination of Huxley’s Brave New World and Orwell’s 1984, that is because their authors were uncommonly prescient about the trends afoot when they published their books in 1932 and 1949, respectively.

Conclusion

Which scenario will it be? Social scientists will need to examine closely the ways in which social solidarity is enhanced or reduced by social media, whether the power of social movements is strengthened by online connectivity, and the extent to which economic and social inequality are promoted by the new tech-dominated world and its patterns of development. There is much to celebrate as well as much to be concerned about, which perhaps has always been true. Yet, like the disruptions to our way of life portended by climate change, the changes associated with tech innovations have a particularly bone-rattling character. Imagine a post-scarcity life of leisure in which one can finally skip the need to produce a meal and instead concentrate one’s energies and attention on the concerns nearest to one’s heart. Then, by contrast, imagine a world in which people are artificially created and endowed with artificial intelligence, everything one does is observed and recorded, and Donald Trump refuses to leave office because of alleged irregularities in a closely fought re-election campaign, all against the backdrop of a global pandemic that threatens illness and death for those who contract the virus. That is the confusing, risk-laden, ambiguous world we now confront. Social scientists must examine which of these tendencies, and which of these futures, will prevail.