Abstract
Having been involved in a slew of recent scandals, many of the world’s largest technology companies (“Big Tech,” “Digital Titans”) embarked on devising numerous codes of ethics, intended to promote improved standards in the conduct of their business. These efforts have attracted largely critical interdisciplinary academic attention. The critics have identified the voluntary character of the industry ethics codes as among the main obstacles to their efficacy: individual industry leaders and employees, flawed human beings that they are, cannot be relied on to conform voluntarily with what justice demands, especially when faced with powerful incentives to pursue their own self-interest instead. Consequently, the critics have recommended a suite of laws and regulations to force the tech companies into better compliance with the requirements of justice. At the same time, they have paid little attention to the possibility that individuals acting within the political context, e.g. as lawmakers and regulators, are also imperfect and need not be wholly compliant with what justice demands. This paper argues that such an omission is far from trivial: it saddles the critics with a heavy argumentative burden that they by and large fail to discharge. As a result, the case for Big Tech regulation that emerges from the recent literature has substantial lacunae, and more work needs to be done before we can accept the critics’ calls for greater state involvement in the industry.
Notes
Yeung et al. worry that the codes are not built on any coherent normative foundations and lack established mechanisms for resolving conflicts between different values. Greene et al.’s criticism is more difficult to pin down: they claim that AI ethics is committed to “ethical universalism,” thus excluding relativist approaches; that it doesn’t explore the possibility that the new technologies should be banned; and that it ignores such issues as prison abolitionism and workplace democracy. While it’s clear from the paper’s tone that Greene and colleagues strongly disapprove of these assumptions and omissions, they don’t offer explicit arguments as to what is wrong with them or why alternative approaches are preferable.
A note on terminology: I use the terms “Big Tech” and the less popular “Digital Titans” (also used by Yeung et al.) interchangeably to refer to large, mostly US-based technology companies, such as Amazon, Apple, Alphabet, Facebook, and Microsoft.
In making their case against the unregulated or self-regulated status quo, the critics of AI ethics pay heed to the following principle, articulated by Brennan (2007): “the limits to human benevolence, to civic virtue, are a fundamental constraint in the pursuit of normatively desirable ends. Moral reasoning on its own can never be taken to be compelling for action.” However, as we shall see, once they enter into the analysis of their preferred public policy solutions, they abandon Brennan’s dictum, which goes on to caution against just such an approach: “any normative social theory that simply assumes compliance [with morality] is therefore seriously incomplete at best and at worst can encourage action that is perverse in its consequences. Misspecifying the constraint of human moral frailty is no less an error than misspecifying other kinds of constraints.”
This is not to accuse state agents of some special venality. They may well think that their re-election or bigger budgets would serve the public interest.
The European Union’s GDPR has been found to have “worsened one of the main problems experienced in digital markets today, which is increased market concentration and reduced contestability. In addition, the GDPR seems to have given large platforms a tool to harm rivals by reducing access to the data they need to run their business … [Moreover] the costs of implementing the GDPR benefit large online platforms, and … consent-based data collection gives a competitive advantage to firms offering a range of consumer-facing products compared to smaller market actors” (Geradin et al., 2020).
A majority of the investors interviewed by Le Merle et al. (2011) expressed strong reservations about investing under a regulatory regime featuring an opt-in system of data collection and a “Do Not Track” registry.
The European Commission’s (2021) proposed rules for “ex ante conformity assessment” of “high-risk” AI systems (roughly, assessments of risks and benefits prior to market entry) may have adverse impacts on innovation, imposing delays and compliance costs and incentivizing exit from the European market entirely (Borggreen, 2020).
As Narayanan and Lee-Makiyama (2020) find, “[t]he economic impacts of shifting from ex-post to ex-ante [regulation] in the online services sector as stipulated by the proposals of Digital Services Act [will lead] to a loss of about 85 billion EUR in GDP and 101 billion EUR in lost consumer welfare based on a baseline value of 2018. Also, it will reduce the labour force by 0.9%.”
“Total capital invested in [technology companies based in] North America… is approaching nearly 5× the level of investment in Europe” (The State of European Tech, 2020). This is despite the EU having around 100 million more people and accounting for about the same share of global GDP as the United States. Since financing, especially in the form of venture capital, leads to increases in economic growth (Samila & Sorenson, 2011) and consumer welfare (Agmon & Sjögren, 2016), this is indicative of EU policy’s adverse effects on individual consumers.
“Our report finds tech founders are calling for simplified employment regulations, while Politico data suggests [EU] policymakers' attention is elsewhere: they are less focussed on the Digital Single Market than two years ago, and more focussed on the creation of a digital tax and the activities of big US tech firms” (The State of European Tech, 2019).
Thierer and Haaland (2021) document expensive failures of state-backed projects like the Quaero search engine and the Minitel network, generously funded by the French and German governments and promoted as homegrown alternatives to Google and the Internet itself, respectively. I explain in more detail in Footnote 20 below why one could expect policies of this nature to prevail.
Of course, the state is special in certain other ways: crucially, it has powers that no other institution possesses.
Nor do the authors ever make the case that the regulators will be better people than those they regulate.
Indeed, from among the authors discussed here, Cath et al. are perhaps the only ones to point to an asymmetry between market actors and government officials.
One could object to this argument by pointing out that leaving social media and other digital services imposes a serious cost on users, one that could prevent them from holding Big Tech accountable by exiting. However, holding policymakers accountable comes with its own costs as well. It takes time and effort to become informed about the relevant issues well enough to assign accountability for the effects of various policies, and most voters do not in fact seem to be well informed (Somin, 2015). Furthermore, voters choose between bundles of policies; it’s therefore possible that a policy failure on Big Tech will be outweighed, in the voters’ minds, by a candidate’s successes in other fields.
This is a shorthand. Either behavioral asymmetry should be justified, or it should be shown why behavioral symmetry won’t be a problem.
This is not unique to proposals for more regulation. Rather, the burden arises for Balkin (and Cath et al.) in virtue of proposing a departure from the status quo. Such departures cannot be justified merely by showing problems with the status quo. They should be justified, in addition, by at least some reason why the sources of problems identified within the status quo (i.e., self-interest winning out over pursuit of justice) will cease to be sources of problems when the proposed changes are implemented. The same constraint would of course be imposed on anyone advocating for less regulation.
Another way of putting this goes as follows: according to Balkin, Big Tech should be regulated in virtue of the negative externalities produced by its use of algorithms. However, bad legislation and bad regulation likewise produce costs to third parties (e.g. tariffs raise the prices of goods for the average consumer). It is possible that the regulations governing Big Tech will also be of such a nature, benefitting special interests at a cost to the average citizen, as some of the EU’s regulatory efforts have done. If Balkin demurs, he must explain by what mechanism such “negative externalities” can be avoided in legislation-crafting, and why that mechanism cannot work in the market. He fails to meet this burden on both counts.
As Holcombe (2016) explains, “[The] political marketplace is a real market, and votes are the currency that is exchanged. Because it is a small group with low transaction costs, legislators are able to bargain to pass the legislation that they value most highly. Interest groups can buy their way into the low-transaction cost group by offering campaign contributions, political support from a sizeable number of voters the group represents, or other benefits for legislators.” In contrast, the electorate as a whole is a high-transaction-cost group and hence finds it more difficult, if not impossible, to bargain effectively with the legislators. As a consequence, legislators (understood as rational voting-power maximizers, rather than as indefatigable servants of the people) have an incentive to favor legislation promoted by the lobbyists for interest groups over legislation promoting the common good (though some of the former may in fact promote the latter).
For a demonstration that having good-sounding laws is not sufficient for their just implementation, one need look no further than the European Union itself and its member states’ failures to live up to the rhetoric of the EU’s documents. This is especially visible when it comes to protecting the rights of migrants and minorities (Human Rights Watch, 2020; Kingsley & Shoumali, 2020; WeReport, 2018). Beyond eliciting, in some cases, verbal condemnations, the human rights violations described in the sources just cited have been neither stopped nor meaningfully punished by the EU’s institutions.
Interestingly, Dignam, Yeung et al., and Citron and Pasquale also assume that the main objection to their proposals is that they could stifle innovation. However, rather than engaging with empirical research that seems to show this to be the case (e.g., Grajek & Röller, 2012), they simply cite empirical work consonant with their own view.
A similar problem plagues Scherer’s (2015) proposal to have the AI industry regulated by “an agency staffed by AI specialists with relevant academic and/or industry experience.” Scherer does not consider the potential for regulatory capture, despite the proposed agency’s considerable powers. This is especially jarring since, just a few pages earlier, Scherer does engage in institutional criticism of the currently existing legal framework’s capacity to regulate AI (including a mention of misaligned incentives, though only on the part of lawyers rather than government officials). In this respect, Scherer’s paper shows an interesting similarity to the work of Wachter and Mittelstadt (2019) and Smuha (2020). All these articles criticize currently existing legal frameworks’ capacity to properly regulate (some aspect of) Big Tech, but they too, to my mind, stop short of showing how their own proposals will avoid these and other shortcomings.
I am not advocating for PoL. I am simply saying that, given our epistemic situation, it’s a principle that could govern our policy recommendations.
References
Agmon, T., & Sjögren, S. (2016). Venture capital and the inventive process. Palgrave Macmillan.
Balkin, J. M. (2017). 2016 Sidley Austin distinguished lecture on big data law and policy: The three laws of robotics in the age of big data. Ohio State Law Journal, 78, 1217–1242.
Borggreen, C. (2020). AI Fortress Europe? Retrieved from https://www.project-disco.org/innovation/020320-ai-fortress-europe/.
Brennan, G. (2007). Economics. In R. E. Goodin, P. Pettit, & T. Pogge (Eds.), A companion to contemporary political philosophy (pp. 118–152). Blackwell.
Brennan, G., & Buchanan, J. (2008). The reason of rules. Cambridge University Press.
Brock, W. A., & Magee, S. P. (1978). The economics of special interest politics: The case of the tariff. The American Economic Review, 68(2), 246–250.
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial Intelligence and the ‘Good Society’: The US, EU, and UK approach. Science and Engineering Ethics, 24(2), 505–528. https://doi.org/10.1007/s11948-017-9901-7.
Citron, D. K., & Pasquale, F. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89, 1–34.
Claypool, R. (2019). The FTC’s Big Tech Revolving Door Problem. Retrieved from https://www.citizen.org/article/ftc-big-tech-revolving-door-problem-report/.
Dal Bó, E. (2006). Regulatory capture: A review. Oxford Review of Economic Policy, 22(2), 203–225.
Dignam, A. (2020). Artificial intelligence, tech corporate governance and the public interest regulatory response. Cambridge Journal of Regions, Economy and Society, 13(1), 37–54.
Dunn, W. N. (2018). Public policy analysis: An integrated approach (6th ed.). Routledge, Taylor & Francis Group.
European Commission. (2021). New rules for Artificial Intelligence—Questions and Answers. Retrieved from https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., & Rossi, F. (2018). AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707.
Freiman, C. (2017). Unequivocal Justice. Routledge.
Friedman, M. (1962). Capitalism and freedom. University of Chicago Press.
Geradin, D., Karanikioti, T., & Katsifis, D. (2020). GDPR Myopia: How a well-intended regulation ended up favouring large online platforms—the case of ad tech. European Competition Journal. https://doi.org/10.1080/17441056.2020.1848059.
Grajek, M., & Röller, L.-H. (2012). Regulation and investment in network industries: Evidence from European Telecoms. The Journal of Law and Economics, 55(1), 189–216. https://doi.org/10.1086/661196.
Greene, D., Hoffmann, A. L., & Stark, L. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. Paper presented at the Proceedings of the 52nd Hawaii International Conference on System Sciences.
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 1–22.
Park, J. H., & Jensen, N. (2007). Electoral competition and agricultural support in OECD countries. American Journal of Political Science, 51(2), 314–329.
Holcombe, R. G. (2016). Advanced introduction to public choice. Edward Elgar Publishing.
Human Rights Watch. (2020). World Report 2019. Retrieved from https://www.hrw.org/world-report/2019/country-chapters/european-union.
Jain, P., Reese, A. M., Chaudhari, D., Mentzer, R. A., & Mannan, M. S. (2017). Regulatory approaches—safety case vs US approach: Is there a best solution today? Journal of Loss Prevention in the Process Industries, 46, 154–162. https://doi.org/10.1016/j.jlp.2017.02.001.
Kingsley, P., & Shoumali, K. (2020). Taking Hard Line, Greece turns back migrants by abandoning them at sea. The New York Times. Retrieved from https://www.nytimes.com/2020/08/14/world/europe/greece-migrants-abandoning-sea.html.
Le Merle, M., Sarma, R., Ahmed, T., & Pencavel, C. (2011). The impact of E.U. internet privacy regulations on early-stage investment: A quantitative study. Retrieved from https://static1.squarespace.com/static/5481bc79e4b01c4bf3ceed80/t/548774c6e4b04f2372f29d46/1418163398470/Impact-EU-Internet-Privacy-Regulations-Early-Stage-Investment.pdf.
Lessig, L. (2011). Republic, lost: How money corrupts Congress–and a plan to stop it (1st ed.). Twelve.
Loi, M., Heitz, C., & Christen, M. (2020). A Comparative Assessment and Synthesis of Twenty Ethics Codes on AI and Big Data. Paper presented at the 2020 7th Swiss Conference on Data Science (SDS).
McNamara, A., Smith, J., & Murphy-Hill, E. (2018). Does ACM’s code of ethics change ethical decision making in software development? Paper presented at the Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering.
Metzinger, T. (2019). Ethics washing made in Europe. Der Tagesspiegel. Retrieved from https://www.tagesspiegel.de/politik/eu-guidelines-ethics-washing-made-in-europe/24195496.html.
Müller, V. (2020). Ethics of Artificial Intelligence and Robotics. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2020 ed.).
Narayanan, B., & Lee-Makiyama, H. (2020). Economic Costs of Ex ante Regulations. Retrieved from https://ecipe.org/wp-content/uploads/2020/10/ECI_20_OccPaper_07_2020_Ex-ante_Regulations_LY06.pdf.
Nemitz, P. (2018). Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180089.
Samila, S., & Sorenson, O. (2011). Venture capital, entrepreneurship, and economic growth. The Review of Economics and Statistics, 93(1), 338–349.
Scherer, M. U. (2015). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology, 29, 353–400.
Smuha, N. A. (2020). Beyond a human rights-based approach to AI governance: Promise, pitfalls, plea. Philosophy & Technology, 1–14.
Somin, I. (2015). Rational ignorance. Routledge International Handbook of Ignorance Studies, 274–281.
Sujan, M. A., Habli, I., Kelly, T. P., Pozzi, S., & Johnson, C. W. (2016). Should healthcare providers do safety cases? Lessons from a cross-industry review of safety case practices. Safety Science, 84, 181–189. https://doi.org/10.1016/j.ssci.2015.12.021.
The State of European Tech 2019. (n.d.). Retrieved from https://2019.stateofeuropeantech.com/.
The State of European Tech 2020. (n.d.). Retrieved from https://2020.stateofeuropeantech.com/.
Thierer, A., & Haaland, C. (2021). The future of innovation: Can European-style industrial policies create tech supremacy? Discourse. Retrieved from https://www.discoursemagazine.com/economics/2021/02/11/can-european-style-industrial-policies-create-technological-supremacy/.
Tullock, G. (2005). Public goods, redistribution, and rent seeking. Edward Elgar.
Wachter, S., & Mittelstadt, B. (2019). A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI survey: Privacy, data, and business. Columbia Business Law Review, 2019(2), 494–620.
Wagner, B. (2018). Ethics as an escape from regulation: from ethics-washing to ethics-shopping. In E. Bayamlioğlu, I. Baraliuc, & L. Janssens (Eds.), Being Profiled: Cogitas Ergo Sum (pp. 84–88). Amsterdam University Press.
WeReport. (2018). Fourteen migrants dead at ceuta’s border after police rubber-bullet shooting. Retrieved from https://wereport.cat/fourteen-migrants-dead-at-ceuta-frontier-after-police-rubber-bullet-shooting/.
Yeung, K., Howes, A., & Pogrebna, G. (2020). AI governance by human rights-centered design, deliberation, and oversight: An end to ethics washing. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford handbook of ethics of AI. Oxford University Press.
Acknowledgments
I am grateful to audiences at the University of Granada Workshop on Disruptive Technologies and the Western University Political Theory Workshop for useful comments on previous versions of this paper. I’m also grateful to Sona Ghosh and Niels Linnemann for illuminating discussions of many issues contained in this paper. The referees for this journal provided me with a range of very useful comments and suggestions, for which I am likewise grateful.