Abstract
The potential use, relevance, and application of AI and other technologies in the democratic process may be obvious to some. However, technological innovation and, even, its consideration may face an intuitive push-back in the form of algorithm aversion (Dietvorst et al. J Exp Psychol 144(1):114–126, 2015). In this paper, I confront this intuition and suggest that a more ‘extreme’ form of technological change in the democratic process does not necessarily result in a worse outcome in terms of the fundamental concepts of democracy and the Rule of Law. To provoke further consideration and illustrate that initial intuitions regarding democratic innovation may not always be accurate, I pose and explore four ways that AI and other forms of technology could be used to augment the representative democratic process. The augmentations range from voting online to the wholesale replacement of the legislature’s human representatives with algorithms. After first noting the intuition that less invasive forms of augmented democracy may be less objectionable than more extreme forms, I go on to critically assess whether the augmentation of existing systems satisfies or enhances ideas associated with democracy and the Rule of Law (provided by Dahl and Fuller). By imagining a (not too far-fetched) future in a (not too far-removed) democratic society, my conclusion is that, when it comes to democracy and the Rule of Law, intuitions regarding technology may lead us astray.
Notes
In 2020 alone, a number of articles were published both on which the argument in this article is based and which are extended by the argument in this article (Araujo et al. 2020; Boyles and Joaquin 2020; Cristianini and Scantamburlo 2020; de Fine Licht and de Fine Licht 2020; Hagendorff and Wezel 2020; Leyva and Beckett 2020; Naudé and Dimitri 2020).
For a detailed and fascinating explanation of the Australian Ballot and its history, see (Brent 2006).
There can be a bi-directional effect between science fiction and design scenarios—so the ‘not (completely) science fiction’ is not intended to be a negative; instead, the—sometimes considerable—extensions of existing technologies can be useful. See, for example, (Thibault et al. 2020). (I am grateful to the anonymous reviewer for making this point clear.) There are, however, some ideas that would fall far outside what AI currently can do: (Hagendorff and Wezel 2020).
For a contemporary assessment of the use of AI and government decision making that takes a less hypothetical approach, see (Zalnieriute et al. 2019).
There are many suggestions that AIs will not be friendly, even if there is a compelling argument that friendliness is what is needed (Boyles and Joaquin 2020; Muehlhauser and Bostrom 2014). I do not suggest that such unfriendly outcomes are unlikely, nor that their consideration is not useful; my focus, however, is on the non-dystopian idea that the technologies I explore will be friendly (and useful). What is clear is that the promotion of friendly or beneficial AI is a vital research goal (Baum 2017).
Blockchain technology facilitates a form of encrypted ledger that is held across a number of—non-centralised—locations.
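As an illustrative aside (not drawn from the cited literature, and with hypothetical function names), the core ledger idea can be sketched as a chain of hash-linked records: each block commits to the hash of its predecessor, so altering any earlier entry invalidates everything after it.

```python
import hashlib
import json


def record_hash(record: dict) -> str:
    """Deterministically hash a record (sorted keys give a stable encoding)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


def append_block(chain: list, payload: dict) -> list:
    """Append a payload, linking it to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"payload": payload, "prev_hash": prev}
    block["hash"] = record_hash({"payload": payload, "prev_hash": prev})
    chain.append(block)
    return chain


def verify(chain: list) -> bool:
    """Check every link; a tampered block breaks the chain from that point on."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        if block["hash"] != record_hash(
            {"payload": block["payload"], "prev_hash": block["prev_hash"]}
        ):
            return False
        prev = block["hash"]
    return True
```

In the distributed setting, each participating node holds a full copy of the chain, so a recorded vote cannot be quietly altered without invalidating every later block on every copy — which is the property that makes the technology attractive for the voting applications discussed in the following note.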
For suggestions of blockchain being used to facilitate online voting, see (Pilkington 2016). See also (Kshetri and Voas 2018; Pawade et al. 2020). There are, however, various other uses that also go beyond its crypto-currency origins—including alternative methods of redistribution (Potts et al. 2017). (Thanks to the anonymous reviewer for pointing this out.)
As with all of the ideas raised here, there are relative pros and cons that could be raised. Online voting can have implications related to the legitimacy of democratic procedures. Kersting and Baldersheim provide a collection relating to many of these issues (Kersting and Baldersheim 2004).
This would not remove the problems associated with potential coercion when casting votes. I do not address this in the hypothetical; although coercion remains a threat, it seems unlikely to be scalable to the level of mass voter fraud.
The hypothetical sidesteps broader concerns regarding direct democracy. In this respect, see (Raible and Trueblood 2017).
A similar idea has been suggested as being a ‘technocracy’ (Khanna 2017).
A more extreme use of blockchain involves coins being issued that can then be ‘spent’ as a way of voting and of illustrating preferences (Chandra 2018). Similar ideas relate to concepts like liquid democracy. See, for example (Blum and Zuber 2016). For liquid democracy in computer science focussed terms, see (Kahng et al. 2018).
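To make the liquid-democracy idea concrete (a hypothetical sketch of my own, not an implementation from the cited papers), each voter either votes directly or delegates their vote to another voter; tallying then follows each delegation chain to its terminal direct voter.

```python
def tally(direct_votes: dict, delegations: dict) -> dict:
    """Resolve delegation chains and count votes.

    direct_votes: voter -> option chosen directly
    delegations:  voter -> voter they delegate to
    Votes caught in delegation cycles, or delegated to non-voters,
    are discarded (one common convention; others are possible).
    """
    counts = {}
    for voter in set(direct_votes) | set(delegations):
        seen = set()
        v = voter
        while v in delegations and v not in seen:  # follow the chain
            seen.add(v)
            v = delegations[v]
        if v in direct_votes and v not in seen:  # terminal direct voter found
            counts[direct_votes[v]] = counts.get(direct_votes[v], 0) + 1
    return counts
```

The cycle check matters: without it, two voters delegating to each other would loop forever — one small example of the procedural design choices that any such system would have to make, and that would themselves be politically contestable.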
This would be captured under the ‘anxiety of control’ idea presented by Leszczynski (2015). Crawford (2014) has raised some of the fears of surveillance of this sort. This is connected to the technological sovereignty idea explored recently by Mann et al. (2020). Mann, in particular, has also addressed similar issues, including state-based considerations of, inter alia, privacy (Mann et al. 2018; Mann and Daly 2019; Mann and Matzner 2019).
Beyond the hypothetical outlined here, this desire is—generally speaking—one that is applied to the way in which AI is currently being deployed in the public sphere (Hanania and Thieullent 2020).
The intelligence that is imagined behind the algorithm is clearly far beyond that which currently exists. What is envisioned is something that could replicate human thought processes of decision making and assessment of information—yet, in a way that is transparent and methodical.
This argument has recently been applied in the closely related sphere of administrative decision-making (Hermstrüwer 2020).
This statement applies to, and only holds in, the context of the hypothetical. As was helpfully, and accurately, raised by the anonymous reviewer, the relative assessment of extremeness may not hold in relation to a change of governmental control within authoritarian dictatorships.
As will be apparent from the volume of work cited above, there is considerable work on the impact and operation of AI in public decision-making. There is also specific research on the process of legal reasoning and the operation of legislative norms: (CSIRO, n.d.). (Thanks to the anonymous reviewer for flagging this research).
The other five relate to: democracy guaranteeing fundamental rights; democracy fostering human development; the facilitation of a high level of political equality; the notion that democratic nations do not fight wars with one another; and that democratic countries tend to be more prosperous. These are of little relevance to the augmented democracy ideas. (Dahl and Shapiro 2015, pp. 44–61).
There would be financial savings. Other benefits include the relative ease of going to the polls (easing disruption to the working day). A useful summary of pros and cons of internet voting more generally is provided by Rachel Gibson (2001).
As noted above, a summary of some of the issues can be found at (Raible and Trueblood 2017).
Discussions regarding positive and negative freedoms, and the recent distinctions made between liberty and liberalism by people like Quentin Skinner come to mind here. See (Skinner 2012).
I.e. one that is not tainted by social or peer pressures. This, however, creates—even in relation to technology that ‘disappears’—the observer’s paradox: the aim is to find out how people talk when they are not being systematically observed, yet this can only be obtained by systematic observation (Labov 1972, p. 209). The observed people may amend their behaviour, but the ‘invisible’ nature of the technology would decrease this effect as much as possible. (Outside of the hypothetical, in which anonymity is possible and maintained, in a real state—especially a totalitarian or oppressive regime—even the ‘invisible’ nature may mean individuals never communicate truthfully.)
Which requires also putting aside discussions associated with the identification of the Rousseauian general will or cognate ideas.
Within the confines of this hypothetical, the importing of such a function does not appear to be too much of a stretch.
For a useful overview of liability issues, see (Webb 2016).
Beyond this, there is also the potential for the creation of a technocratic elite of programmers or organisations that create the algorithms. I raise some of these issues in "Retaining the humans".
What is clear, however, is that there can be effective use of digital platforms and the algorithms that operate in digital/social media to power a narrative that may run counter to either the mass media or the more widely-held view (Leyva and Beckett 2020; Rydgren and van der Meiden 2019; Schroeder 2019). (I am grateful to the anonymous reviewer for pressing for clarity on this point.)
For an overview of the issues around contestation and the various accounts that are seen as canonical, see (Burgess 2017).
A useful discussion of these points in the way that Fuller describes them is provided by Waldron: (2016).
This puts to one side the potential for relative changes in the outcome of elections as a result of a lazy yet tech-savvy populace voting.
The latter aspect is also addressed below in terms of different desiderata: that of clarity or certainty.
The idea that the law may need to be interpreted by experts is touched upon by Waldron (2016) in his discussion of Fuller.
This could be said either in the general sense that the issue has been more widely promulgated, or in the sense that this process reflects a greater acknowledgement of individuals’ dignity.
Here, I mean something more than the intentional replacement of one act with another due to an overarching shift in a wider policy.
At this stage, it is relevant to note that there is a difference between Algorithm and Algorithm+ in terms of the relative positivity of the outcome of assessing the desiderata, and that Algorithm+ also reflects a relative procedural change.
For example, there would be no reason why groups that hold any view—even a morally repugnant one, like that of a neo-Nazi party—would be precluded from creating an algorithm that reflects their ideology. There seems here to be a risk that the building of the algorithms would create an autocratic class that would, in effect, control the system by virtue of controlling the ‘parties’ that could stand. Whilst this is true, it seems eminently possible that there could be a simple template for creating an algorithm. Although this may just shift the question one step further, the same argument exists in any party formation process in any event. (The argument that the ‘simple’ creation of an algorithm may decrease the transaction costs associated with the creation of a political party or movement, thus making morally repugnant parties more prevalent, is one that must await another discussion.)
This is relatively unsurprising as, in effect, Blockchain+ is to direct democracy what Blockchain is to representative democracy.
Aristotle decries the making of hasty and emotion-fuelled decisions and refers to the inclusion of (what may now be called) legislative due process (Aristotle 2004, bk. 1, ch. 1).
I am grateful to the anonymous reviewer for drawing my attention to this potential.
The very nature of those options would be strongly contested. Extreme differences of opinion between individuals with different political opinions—for example, individualists and collectivists—would radically impact what these options may be.
Other research explores—in more tangible ways than the hypotheticals considered here—the design and development of systems that relate either to identifying issues (Chen et al. 2020; Monteiro 2019) or to identifying a way forward (Foth et al. 2015; Komninos and Zuber 2019). (Thanks, again, to the reviewer for pointing to the importance of these.)
References
Araujo T, Helberger N, Kruikemeier S, de Vreese CH (2020) In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI Soc 35(3):611–623. https://doi.org/10.1007/s00146-019-00931-w
Aristotle (2004) Rhetoric (W. R. Roberts, Trans.). Dover
Aristotle, Sinclair TA, Saunders TJ (1981) The politics. Penguin, UK
Baum SD (2017) On the promotion of safe and socially beneficial artificial intelligence. AI Soc 32(4):543–551. https://doi.org/10.1007/s00146-016-0677-0
Bingham T (2010) The Rule of Law. Allen Lane
Binns R (2018) Algorithmic accountability and public reason. Philos Technol 31(4):543–556. https://doi.org/10.1007/s13347-017-0263-5
Blum C, Zuber CI (2016) Liquid democracy: potentials, problems, and perspectives. J Polit Philos 24(2):162–182. https://doi.org/10.1111/jopp.12065
Boyles RJM, Joaquin JJ (2020) Why friendly AIs won’t be that friendly: a friendly reply to Muehlhauser and Bostrom. AI Soc 35(2):505–507. https://doi.org/10.1007/s00146-019-00903-0
Brent P (2006) The Australian ballot: not the secret ballot. Australian J Polit Sci 41(1):39–50. https://doi.org/10.1080/10361140500507278
Burgess P (2017) The Rule of Law: beyond contestedness. Jurisprudence 8(3):480–500
Burrell J (2016) How the machine ‘thinks’: understanding opacity in machine learning algorithms. Big Data Society 3(1):2053951715622512. https://doi.org/10.1177/2053951715622512
Chandra P (2018) Reimagining democracy: what if votes were a crypto-currency? Democracy without borders. https://www.democracywithoutborders.org/4625/reimagining-democracy-what-if-votes-were-a-crypto-currency/ Accessed 2 Feb 2018
Chen B, Marvin S, While A (2020) Containing COVID-19 in China: AI and the robotic restructuring of future cities. Dialogues Hum Geogr 10(2):238–241. https://doi.org/10.1177/2043820620934267
Citron DK, Pasquale FA (2014) The scored society: due process for automated predictions. Wash Law Rev 89:1–34
Craig P (1997) Formal and substantive conceptions of the rule of law: an analytical framework. Public Law, Autumn, pp 467–487
Crawford K (2014) The anxieties of big data. The New Inquiry. https://thenewinquiry.com/the-anxieties-of-big-data/ Accessed 30 May 2014
Cristianini N, Scantamburlo T (2020) On social machines for algorithmic regulation. AI Soc 35(3):645–662. https://doi.org/10.1007/s00146-019-00917-8
CSIRO (n.d.) Business processes and legal informatics. Business Processes and Legal Informatics. https://research.csiro.au/bpli/ Accessed 31 Jan 2021
Dahl RA, Shapiro I (2015) On democracy (2nd Revised edition). Yale University Press
de Fine Licht J, Naurin D, Esaiasson P, Gilljam M (2014) When does transparency generate legitimacy? Experimenting on a context-bound relationship. Governance 27(1):111–134. https://doi.org/10.1111/gove.12021
de Fine Licht K, de Fine Licht J (2020) Artificial intelligence, transparency, and public decision-making. AI Soc 35(4):917–926. https://doi.org/10.1007/s00146-020-00960-w
de Laat PB (2018) Algorithmic decision-making based on machine learning from big data: can transparency restore accountability? Philos Technol 31(4):525–541. https://doi.org/10.1007/s13347-017-0293-z
Dicey AV (1979) Introduction to the study of the law of the constitution, 10th edn. Palgrave Macmillan, UK
Dietvorst BJ, Simmons JP, Massey C (2015) Algorithm aversion: people erroneously avoid algorithms after seeing them err. J Exp Psychol 144(1):114–126
Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F, Schafer B, Valcke P, Vayena E (2018) AI4People—an ethical framework for a good ai society: opportunities, risks, principles, and recommendations. Mind Mach 28(4):689–707. https://doi.org/10.1007/s11023-018-9482-5
Foth M, Tomitsch M, Satchell C, Haeusler MH (2015) From users to citizens: some thoughts on designing for polity and civics. In: Proceedings of the Annual Meeting of the Australian Special Interest Group for Computer Human Interaction, 623–633. https://doi.org/10.1145/2838739.2838769
Frick W (2015) Here’s why people trust human judgment over algorithms. Harvard Business Review. https://hbr.org/2015/02/heres-why-people-trust-human-judgment-over-algorithms Accessed 27 Feb 2015
Gibson R (2001) Elections online: assessing internet voting in light of the Arizona democratic primary. Political Sci Q 116(4):561–583. https://doi.org/10.2307/798221
Goldsmith S, Crawford S (2014) The responsive city: engaging communities through data–smart governance. 1st edn. Jossey-Bass
Hagendorff T, Wezel K (2020) 15 challenges for AI: or what AI (currently) can’t do. AI Soc 35(2):355–365. https://doi.org/10.1007/s00146-019-00886-y
Hanania P-A, Thieullent A-L (2020) Perform AI for public sector: public goes AI! Capgemini Worldwide. https://www.capgemini.com/2020/03/perform-ai-for-public-sector-public-goes-ai/ Accessed 4 Mar 2020
Hermstrüwer Y (2020) Artificial intelligence and administrative decisions under uncertainty. In: Wischmeyer T, Rademacher T (eds) Regulating Artificial Intelligence. Springer International Publishing, pp 199–223. https://doi.org/10.1007/978-3-030-32361-5_9
Jia L (2020) Unpacking China’s Social credit system: informatization, regulatory framework, and market dynamics. Can J Commun https://doi.org/10.22230/cjc.2020v45n1a3483
Kahng A, Mackenzie S, Procaccia AD (2018) Liquid democracy: an algorithmic perspective. In: Thirty-Second AAAI Conference on Artificial Intelligence. https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17027 Accessed 25 Apr 2018
Kersting N, Baldersheim H (eds) (2004) Electronic voting and democracy: a comparative analysis. Palgrave Macmillan, UK. https://doi.org/10.1057/9780230523531_3
Khanna P (2017) Technocracy in America: rise of the info-state. CreateSpace Independent Publishing Platform
Knight W (n.d.) The dark secret at the heart of AI. MIT Technology Review. https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/ Accessed 30 Jan 2021
Komninos N, Zuber C (eds) (2019) Smart cities in the post-algorithmic era. https://www.elgaronline.com/view/edcoll/9781789907049/9781789907049.xml
Labov W (1972) Sociolinguistic patterns. University of Pennsylvania Press
Leike J, Martic M, Legg S (2017) Learning through human feedback. Deep mind. https://deepmind.com/blog/article/learning-through-human-feedback Accessed 6 Dec 2017
Lepri B, Oliver N, Letouzé E, Pentland A, Vinck P (2018) Fair, transparent, and accountable algorithmic decision-making processes. Philos Technol 31(4):611–627. https://doi.org/10.1007/s13347-017-0279-x
Lepri B, Staiano J, Sangokoya D, Letouzé E, Oliver N (2016) The tyranny of data? The bright and dark sides of data-driven decision-making for social good. arXiv preprint. http://arxiv.org/abs/1612.00323
Leszczynski A (2015) Spatial big data and anxieties of control. Environ Planning D Soc Space 33(6):965–984. https://doi.org/10.1177/0263775815595814
Leyva R, Beckett C (2020) Testing and unpacking the effects of digital fake news: on presidential candidate evaluations and voter support. AI Soc 35(4):969–980. https://doi.org/10.1007/s00146-020-00980-6
Locke J (1988) Two treatises of government. In: Laslett P (ed). Cambridge University Press
Logg JM, Minson JA, Moore DA (2019) Algorithm appreciation: people prefer algorithmic to human judgment. Organ Behav Hum Decis Process 151:90–103. https://doi.org/10.1016/j.obhdp.2018.12.005
Mann M, Daly A (2019) (Big) Data and the North-in-South: Australia’s informational imperialism and digital colonialism. Television New Media 20(4):379–395. https://doi.org/10.1177/1527476418806091
Mann M, Daly A, Wilson M, Suzor N (2018) The limits of (digital) constitutionalism: exploring the privacy-security (Im)balance in Australia. Int Commun Gaz 80(4):369–384. https://doi.org/10.1177/1748048518757141
Mann M, Matzner T (2019) Challenging algorithmic profiling: the limits of data protection and anti-discrimination in responding to emergent discrimination. Big Data Society 6(2):2053951719895805. https://doi.org/10.1177/2053951719895805
Mann M, Mitchell P, Foth M, Anastasiu I (2020) #BlockSidewalk to Barcelona: technological sovereignty and the social license to operate smart cities. J Am Soc Inf Sci 71(9):1103–1115. https://doi.org/10.1002/asi.24387
McCullagh K (2003) E-democracy: potential for political revolution? Int J Law Inform Technol 11(2):149–161. https://doi.org/10.1093/ijlit/11.2.149
Mehr H (2017) Artificial intelligence for citizen services and government. Harvard Kennedy School, p 19
Mittelstadt BD, Allo P, Taddeo M, Wachter S, Floridi L (2016) The ethics of algorithms: mapping the debate. Big Data Society 3(2):2053951716679679. https://doi.org/10.1177/2053951716679679
Monteiro M (2019) Ruined by design: how designers destroyed the world, and what we can do to fix it. Mule Books
Morris D (2000) Direct democracy and the internet symposium: internet voting and democracy. Loy LA Law Rev 34(3):1033–1054
Muehlhauser L, Bostrom N (2014) Why we need friendly AI. Think 13(36):41–47. https://doi.org/10.1017/S1477175613000316
Musiani F (2013) Governance by algorithms. Internet Policy Rev 2(3). https://policyreview.info/articles/analysis/governance-algorithms
Naudé W, Dimitri N (2020) The race for an artificial general intelligence: implications for public policy. AI Soc 35(2):367–379. https://doi.org/10.1007/s00146-019-00887-x
Kshetri N, Voas J (2018) Blockchain-enabled e-voting. IEEE Softw 35(4):95–99
Nissan E (2017) Digital technologies and artificial intelligence’s present and foreseeable impact on lawyering, judging, policing and law enforcement. AI Soc 32(3):441–464. https://doi.org/10.1007/s00146-015-0596-5
Pasquale F (2016) The black box society: the secret algorithms that control money and information (Reprint edition). Harvard University Press
Pawade D, Sakhapara A, Badgujar A, Adepu D, Andrade M (2020) Secure online voting system using biometric and blockchain. In: Sharma N, Chakrabarti A, Balas VE (eds) Data management, analytics and innovation. Springer
Pilkington M (2016) Blockchain technology: principles and applications. In: Olleros FX, Zhegu M (eds) Research handbook on digital transformations. Edward Elgar Publishing Ltd
Potts J, Rennie E, Goldenfein J (2017) Blockchains and the Crypto City. Info Technol 59(6):285–293. https://doi.org/10.1515/itit-2017-0006
Qiang X (2019) The road to digital unfreedom: President Xi’s surveillance state. J Democr 30(1):53–67. https://doi.org/10.1353/jod.2019.0004
Raible L, Trueblood L (2017) The Swiss system of referendums and the impossibility of direct democracy. UK Constitutional Law Association. https://ukconstitutionallaw.org/2017/04/04/lea-raible-and-leah-trueblood-the-swiss-system-of-referendums-and-the-impossibility-of-direct-democracy/ Accessed 4 Apr 2017
Raz J (2009) The authority of law: essays on law and morality, 2nd edn. Oxford University Press
Rydgren J, van der Meiden S (2019) The radical right and the end of Swedish exceptionalism. Eur Polit Sci 18(3):439–455. https://doi.org/10.1057/s41304-018-0159-6
Sandvig C, Hamilton K, Karahalios K, Langbort C (2016) When the algorithm itself is a racist: diagnosing ethical harm in the basic components of software. Int J Commun 10:4972–4990
Schroeder R (2019) Digital media and the entrenchment of right-wing populist agendas. Social Media + Society 5(4):2056305119885328. https://doi.org/10.1177/2056305119885328
Skinner Q (2012) Liberty before Liberalism. Cambridge University Press
Spielkamp M (2017) Inspecting algorithms for bias. MIT Technology Review. https://www.technologyreview.com/2017/06/12/105804/inspecting-algorithms-for-bias/ Accessed 12 Jun 2017
Surden H (2017) Values embedded in legal artificial intelligence (Research Paper ID 2932333). University of Colorado Law. https://doi.org/10.2139/ssrn.2932333
Thibault M, Buruk “Oz” O, Buruk SS, Hamari J (2020) Transurbanism: smart cities for transhumans. In: Proceedings of the 2020 ACM Designing Interactive Systems Conference. Association for Computing Machinery, pp 1915–1928. https://doi.org/10.1145/3357236.3395523
UN Security Council (2004) The Rule of Law and transitional justice in conflict and post-conflict societies—Report of the secretary-general (UN doc S/2004/616). https://www.un.org/ruleoflaw/files/2004%20report.pdf
von Hayek FA (2007) The road to serfdom. In: Caldwell B (ed). University of Chicago Press
Waldron J (2016) The Rule of Law. In: Zalta EN (ed) The Stanford Encyclopedia of Philosophy (Fall 2016). http://plato.stanford.edu/archives/fall2016/entries/rule-of-law/
Walton DC (2007) Is modern information technology enabling the evolution of a more direct democracy? World Futures 63(5–6):365–385. https://doi.org/10.1080/02604020701402749
Webb KC (2016) Products liability and autonomous vehicles: who’s driving whom. Richmond J Law Technol 23:1–52
Wischmeyer T (2020) Artificial intelligence and transparency: opening the black box. In: Wischmeyer T, Rademacher T (eds) Regulating artificial intelligence. Springer International Publishing, pp 75–101. https://doi.org/10.1007/978-3-030-32361-5_4
Zalnieriute M, Moses LB, Williams G (2019) The rule of law and automation of government decision-making. Modern Law Rev 82(3):425–455. https://doi.org/10.1111/1468-2230.12412
Burgess, P. Algorithmic augmentation of democracy: considering whether technology can enhance the concepts of democracy and the rule of law through four hypotheticals. AI & Soc 37, 97–112 (2022). https://doi.org/10.1007/s00146-021-01170-8