The Missing Ingredient in the Case for Regulating Big Tech


Abstract

Having been involved in a slew of recent scandals, many of the world’s largest technology companies (“Big Tech,” “Digital Titans”) embarked on devising numerous codes of ethics, intended to promote improved standards in the conduct of their business. These efforts have attracted largely critical interdisciplinary academic attention. The critics have identified the voluntary character of the industry ethics codes as among the main obstacles to their efficacy. This is because individual industry leaders and employees, flawed human beings that they are, cannot be relied on voluntarily to conform with what justice demands, especially when faced with powerful incentives to pursue their own self-interest instead. Consequently, the critics have recommended a suite of laws and regulations to force the tech companies to better comply with the requirements of justice. At the same time, they have paid little attention to the possibility that individuals acting within the political context, e.g. as lawmakers and regulators, are also imperfect and need not be wholly compliant with what justice demands. This paper argues that such an omission is far from trivial. It creates a heavy argumentative burden on the part of the critics that they by and large fail to discharge. As a result, the case for Big Tech regulation that emerges from the recent literature has substantial lacunae and more work needs to be done before we can accept the critics’ calls for greater state involvement in the industry.


Notes

  1. See e.g. Floridi et al. (2018), Loi et al. (2020), Yeung et al. (2020) for discussions of many of these codes.

  2. Yeung et al. worry that the codes are not built on any coherent normative foundations and lack established mechanisms for resolving conflicts between different values. Greene et al.’s criticism is more difficult to pin down—they claim that AI ethics is committed to “ethical universalism” thus excluding relativist approaches, that it doesn’t explore the possibility that the new technologies should be banned, and that it ignores such issues as prison abolitionism and workplace democracy. While it’s clear from the paper’s tone that Greene and colleagues strongly disapprove of these assumptions and omissions, they don’t offer explicit arguments as to what is wrong with them or why alternative approaches are preferable.

  3. A note on terminology: I use the terms “Big Tech” and the less popular “Digital Titans” (also used by Yeung et al.) interchangeably to refer to large, mostly US-based technology companies, such as Amazon, Apple, Alphabet, Facebook, and Microsoft.

  4. In making their case against the unregulated or self-regulated status quo, the critics of AI ethics heed the following principle, articulated by Brennan (2007): “the limits to human benevolence, to civic virtue, are a fundamental constraint in the pursuit of normatively desirable ends. Moral reasoning on its own can never be taken to be compelling for action.” However, as we shall see, once they enter into the analysis of their preferred public policy solutions, they abandon Brennan’s dictum, which goes on to caution against just such an approach: “any normative social theory that simply assumes compliance [with morality] is therefore seriously incomplete at best and at worst can encourage action that is perverse in its consequences. Misspecifying the constraint of human moral frailty is no less an error than misspecifying other kinds of constraints.”

  5. This is not to accuse state agents of some special venality. They may well think that their re-election or bigger budgets would serve the public interest.

  6. The European Union’s GDPR has been found to have “worsened one of the main problems experienced in digital markets today, which is increased market concentration and reduced contestability. In addition, the GDPR seems to have given large platforms a tool to harm rivals by reducing access to the data they need to run their business … [Moreover] the costs of implementing the GDPR benefit large online platforms, and … consent-based data collection gives a competitive advantage to firms offering a range of consumer-facing products compared to smaller market actors” (Geradin et al., 2020).

  7. A majority of the investors interviewed by Le Merle et al. (2011) expressed strong reservations about investing in a regulatory environment featuring an opt-in system of data collection and a “Do Not Track” registry.

  8. The European Commission’s (2021) proposed rules for “ex ante conformity assessment” of “high-risk” AI systems (roughly, assessments of risks and benefits prior to market entry) may have adverse impacts on innovation, imposing delays and compliance costs and incentivizing exit from the European market entirely (Borggreen, 2020).

  9. As Narayanan and Lee-Makiyama (2020) find, “[t]he economic impacts of shifting from ex-post to ex-ante [regulation] in the online services sector as stipulated by the proposals of Digital Services Act [will lead] to a loss of about 85 billion EUR in GDP and 101 billion EUR in lost consumer welfare based on a baseline value of 2018. Also, it will reduce the labour force by 0.9%.”

  10. “Total capital invested in [technology companies based in] North America… is approaching nearly 5 × the level of investment in Europe” (The State of European Tech, 2020). This is despite the EU having around 100 million more people and accounting for about the same share of global GDP as the United States. Since financing, especially in the form of venture capital, leads to increases in economic growth (Samila & Sorenson, 2011) and consumer welfare (Agmon & Sjögren, 2016), this is indicative of EU policy’s adverse effects on individual consumers.

  11. “Our report finds tech founders are calling for simplified employment regulations, while Politico data suggests [EU] policymakers' attention is elsewhere: they are less focussed on the Digital Single Market than two years ago, and more focussed on the creation of a digital tax and the activities of big US tech firms” (The State of European Tech, 2019).

  12. Thierer and Haaland (2021) document expensive failures of state-backed projects like the Quaero search engine and the Minitel network, funded generously by the French and German governments and promoted as homegrown alternatives to Google and the Internet itself, respectively. I explain in more detail in Footnote 20 below why one could expect policies of this nature to prevail.

  13. Of course, the state is special in certain other ways: crucially, it has powers that no other institution possesses.

  14. Nor do the authors ever make the case that the regulators will be better people than those they regulate.

  15. Indeed, from among the authors discussed here, Cath et al. are perhaps the only ones to point to an asymmetry between market actors and government officials.

  16. One could object to this argument by pointing out that leaving social media and other digital services imposes a serious cost on users, one that could prevent them from holding Big Tech accountable by exiting. However, holding policymakers accountable comes with costs of its own. It takes time and effort to inform oneself about the relevant issues well enough to assign accountability for the effects of various policies, and most voters do not in fact seem to be well informed (Somin, 2015). Furthermore, voters choose between bundles of policies; it is therefore possible that a policy failure on Big Tech will be outweighed, in the voters’ minds, by a candidate’s successes in other fields.

  17. This is a shorthand. Either behavioral asymmetry should be justified, or it should be shown why behavioral symmetry won’t be a problem.

  18. This is not unique to proposals for more regulation. Rather, the burden arises for Balkin (and Cath et al.) in virtue of proposing a departure from the status quo. Such departures cannot be justified merely by showing problems with the status quo. They should be justified, in addition, by at least some reason why the sources of problems identified within the status quo (i.e., self-interest winning out over pursuit of justice) will cease to be sources of problems when the proposed changes are implemented. The same constraint would of course be imposed on anyone advocating for less regulation.

  19. Another way of putting this goes as follows: according to Balkin, Big Tech should be regulated in virtue of the negative externalities produced by its use of algorithms. However, bad legislation and bad regulation likewise produce costs to third parties (e.g. tariffs raise the prices of goods for the average consumer). It is possible that the regulations governing Big Tech will also be of such a nature, benefitting special interests at a cost to the average citizen, as some of the EU’s regulatory efforts have. If Balkin demurs, he must explain by what mechanism such “negative externalities” can be avoided in legislation-crafting, and why that mechanism cannot work in the market. He fails to meet this burden on both counts.

  20. As Holcombe (2016) explains, “[The] political marketplace is a real market, and votes are the currency that is exchanged. Because it is a small group with low transaction costs, legislators are able to bargain to pass the legislation that they value most highly. Interest groups can buy their way into the low-transaction cost group by offering campaign contributions, political support from a sizeable number of voters the group represents, or other benefits for legislators.” In contrast, the electorate as a whole is a high-transaction cost group, and hence finds it more difficult, if not impossible, to bargain effectively with the legislators. As a consequence, legislators (understood as rational voting power maximizers rather than as indefatigable servants of the people) have an incentive to favor legislation promoted by the lobbyists for interest groups over legislation promoting the common good (though it might be that some of the former will in fact promote the latter).

  21. For a demonstration that having good-sounding laws is not sufficient for their just implementation, one need look no further than the European Union itself and its member states’ failures to live up to the rhetoric of the EU’s documents. This is especially visible when it comes to protecting the rights of migrants and minorities (Human Rights Watch, 2020; Kingsley & Shoumali, 2020; WeReport, 2018). Other than—in some cases—eliciting verbal condemnations, the EU’s institutions have neither stopped nor meaningfully punished the human rights violations described in the sources just cited.

  22. Interestingly, Dignam, Yeung et al., and Citron & Pasquale also assume that the main objection to their proposals is that they could stifle innovation. However, rather than engage with the empirical research that seems to show this to be the case [e.g. Grajek and Röller (2012)], they simply cite empirical work consonant with their own view.

  23. A similar problem plagues Scherer’s (2015) proposal to have the AI industry regulated by “an agency staffed by AI specialists with relevant academic and/or industry experience.” Scherer does not consider the potential for regulatory capture, despite the proposed agency’s considerable powers. This is especially jarring since just a few pages earlier Scherer does engage in institutional criticism (including the mention of misaligned incentives, though only on the part of lawyers rather than government officials) of the currently existing legal framework’s capacity to regulate AI. In this respect, Scherer’s paper shows an interesting similarity to the work of Wachter and Mittelstadt (2019) and of Smuha (2020). All these articles criticize the capacity of currently existing legal frameworks to properly regulate (some aspect of) Big Tech, but they also—to my mind—stop short of showing how their new proposals will avoid these and other shortcomings.

  24. I am not advocating for PoL. I am simply saying that, given our epistemic situation, it’s a principle that could govern our policy recommendations.


Acknowledgments

I am grateful to audiences at the University of Granada Workshop on Disruptive Technologies and the Western University Political Theory Workshop for useful comments on previous versions of this paper. I am also grateful to Sona Ghosh and Niels Linnemann for illuminating discussions of many of the issues contained in this paper. The referees for this journal provided me with a range of very useful comments and suggestions, for which I am likewise grateful.

Author information

Correspondence to Bartlomiej Chomanski.



Cite this article

Chomanski, B. The Missing Ingredient in the Case for Regulating Big Tech. Minds & Machines 31, 257–275 (2021). https://doi.org/10.1007/s11023-021-09562-x
