Algorithmic augmentation of democracy: considering whether technology can enhance the concepts of democracy and the rule of law through four hypotheticals

  • Original Article
  • Published in AI & SOCIETY

Abstract

The potential use, relevance, and application of AI and other technologies in the democratic process may be obvious to some. However, technological innovation, and even its consideration, may face an intuitive push-back in the form of algorithm aversion (Dietvorst et al. J Exp Psychol 144(1):114–126, 2015). In this paper, I confront this intuition and suggest that a more ‘extreme’ form of technological change in the democratic process does not necessarily result in a worse outcome in terms of the fundamental concepts of democracy and the Rule of Law. To provoke further consideration and to illustrate that initial intuitions regarding democratic innovation may not always be accurate, I pose and explore four ways that AI and other forms of technology could be used to augment the representative democratic process. The augmentations range from voting online to the wholesale replacement of the legislature’s human representatives with algorithms. After first noting the intuition that less invasive forms of augmented democracy may be less objectionable than more extreme forms, I critically assess whether the augmentation of existing systems satisfies or enhances ideas associated with democracy and the Rule of Law (as provided by Dahl and Fuller). By imagining a (not too far-fetched) future in a (not too far-removed) democratic society, I conclude that, when it comes to democracy and the Rule of Law, intuitions regarding technology may lead us astray.


Notes

  1. In 2020 alone, a number of articles were published that both inform the argument in this article and are extended by it (Araujo et al. 2020; Boyles and Joaquin 2020; Cristianini and Scantamburlo 2020; de Fine Licht and de Fine Licht 2020; Hagendorff and Wezel 2020; Leyva and Beckett 2020; Naudé and Dimitri 2020).

  2. Algorithm aversion refers to the preference for human judgement over that of machines (Dietvorst et al. 2015). This common intuition has also been expressed in business contexts (Frick 2015). Whilst the counter-point to this intuition is algorithm appreciation (Logg et al. 2019), this does not dispel the intuition.

  3. For a detailed and fascinating explanation of the Australian Ballot and its history, see (Brent 2006).

  4. There can be a bi-directional effect between science fiction and design scenarios—so the ‘not (completely) science fiction’ is not intended to be a negative; instead, the—sometimes considerable—extensions of existing technologies can be useful. See, for example, (Thibault et al. 2020). (I am grateful to the anonymous reviewer for making this point clear.) There are, however, some ideas that would fall far outside what AI currently can do: (Hagendorff and Wezel 2020).

  5. For a contemporary assessment of the use of AI and government decision making that takes a less hypothetical approach, see (Zalnieriute et al. 2019).

  6. There are many suggestions that AIs will not be friendly, even if there is a compelling argument that friendliness is what is needed (Boyles and Joaquin 2020; Muehlhauser and Bostrom 2014). I do not suggest that these outcomes are unlikely, nor that their consideration is not useful, but my consideration is focussed on the non-dystopian idea that the technologies I explore will be friendly (and useful). What is clear is that the promotion of friendly or beneficial AI is a vital research goal (Baum 2017).

  7. Blockchain technology facilitates a form of cryptographically secured ledger that is held across a number of non-centralised locations.
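The tamper-evidence of such a ledger can be made concrete with a short sketch (Python; a purely illustrative toy, with the function names my own — it deliberately omits the distribution and consensus aspects of a real blockchain):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    # Each new block commits to the hash of its predecessor, so altering
    # any earlier entry invalidates every later hash link.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})
    return chain

def verify(chain):
    # Recompute the hash links; any tampering breaks the chain.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True
```

Because every copy of the ledger can be verified in this way, divergence between the non-centralised copies is detectable — which is what makes the distributed structure more than mere replication.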

  8. For suggestions of blockchain being used to facilitate online voting, see (Pilkington 2016). See also (Kshetri and Voas 2018; Pawade et al. 2020). There are, however, various other uses that go beyond its crypto-currency origins, including alternative methods of redistribution (Potts et al. 2017). (Thanks to the anonymous reviewer for pointing this out.)

  9. As with all of the ideas raised here, there are relative pros and cons that could be raised. Online voting can have implications related to the legitimacy of democratic procedures. Kersting and Baldersheim provide a collection relating to many of these issues (Kersting and Baldersheim 2004).

  10. This would not remove the problems associated with potential coercion when casting votes. Whilst I do not address this in the hypothetical, and even though coercion remains a threat, it seems unlikely to scale to mass voter fraud.

  11. The hypothetical sidesteps broader concerns regarding direct democracy. In this respect, see (Raible and Trueblood 2017).

  12. A similar idea has been suggested as being a ‘technocracy’ (Khanna 2017).

  13. A more extreme use of blockchain reflects coins being issued that can then be ‘spent’ as a way of voting and as a way to illustrate preferences (Chandra 2018). Similar ideas relate to concepts like liquid democracy. See, for example (Blum and Zuber 2016). For liquid democracy in computer science focussed terms, see (Kahng et al. 2018).
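The computer-science treatment of liquid democracy mentioned in note 13 can be illustrated with a minimal sketch (Python; the function and variable names are my own, and real proposals such as Kahng et al.'s are considerably more sophisticated). Each voter either votes directly or delegates to another voter; a ballot flows along the delegation chain until it reaches a direct voter, and a delegation cycle casts no vote:

```python
def tally(direct_votes, delegations):
    # direct_votes: voter -> choice; delegations: voter -> delegate.
    counts = {}
    for voter in set(direct_votes) | set(delegations):
        seen = set()
        v = voter
        # Follow the delegation chain, stopping if it loops back on itself.
        while v in delegations and v not in seen:
            seen.add(v)
            v = delegations[v]
        # Count the ballot only if the chain ends at a direct voter.
        if v in direct_votes:
            counts[direct_votes[v]] = counts.get(direct_votes[v], 0) + 1
    return counts
```

For example, if a votes "yes", b votes "no", c delegates to a, and d delegates to c, the tally is 3 "yes" to 1 "no"; two voters who delegate to each other form a cycle and cast nothing.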

  14. This would be captured under the ‘anxiety of control’ idea presented by Leszczynski (2015). Crawford (2014) has raised some of the fears of surveillance of this sort. This is connected to the technological sovereignty idea explored recently by Mann et al. (2020). Mann, in particular, has also addressed similar issues concerning state-based considerations of, inter alia, privacy (Mann et al. 2018; Mann and Daly 2019; Mann and Matzner 2019).

  15. A data-driven government may be more effective and more efficient through being more responsive (Goldsmith and Crawford 2014). There may be significant efficiency gains—but with existing technology, they may not necessarily result in benefit without human input (Mehr 2017).

  16. Beyond the hypothetical outlined here, this desire is—generally speaking—one that is applied to the way in which AI is currently being deployed in the public sphere (Hanania and Thieullent 2020).

  17. The intelligence that is imagined behind the algorithm is clearly far beyond that which currently exists. What is envisioned is something that could replicate human thought processes of decision making and assessment of information—yet, in a way that is transparent and methodical.

  18. This argument has recently been applied in the closely related sphere of administrative decision-making (Hermstrüwer 2020).

  19. Khanna provides a compelling argument for a similar position (Khanna 2017). In some senses, people may prefer algorithmic judgement (Logg et al. 2019).

  20. This statement applies to, and only holds in, the context of the hypothetical. As was helpfully, and accurately, raised by the anonymous reviewer, the relative assessment of extreme-ness may not hold in relation to a change of governmental control within authoritarian dictatorships.

  21. As will be apparent from the volume of work cited above, there is considerable work on the impact and operation of AI in public decision-making. There is also specific research on the process of legal reasoning and the operation of legislative norms: (CSIRO, n.d.). (Thanks to the anonymous reviewer for flagging this research).

  22. The other five relate to: democracy guaranteeing fundamental rights; democracy fostering human development; the facilitation of a high level of political equality; the notion that democratic nations do not fight wars with one another; and that democratic countries tend to be more prosperous. These are of little relevance to the augmented democracy ideas. (Dahl and Shapiro 2015, pp. 44–61).

  23. There would be financial savings. Other benefits include the relative ease of going to the polls (easing disruption to the working day). A useful summary of pros and cons of internet voting more generally is provided by Rachel Gibson (2001).

  24. As noted above, a summary of some of the issues can be found at (Raible and Trueblood 2017).

  25. Whilst this claim is based on a purely practical reality, and on some of the very real contemporary problems noted by Raible and Trueblood (ibid.) and by Walton (2007), some have taken a more positive view of human potential (McCullagh 2003; Morris 2000).

  26. Discussions regarding positive and negative freedoms, and the recent distinctions made between liberty and liberalism by people like Quentin Skinner come to mind here. See (Skinner 2012).

  27. This—as was helpfully pointed out by the anonymous reviewer—is not merely the ambit of futuristic hypotheticals. Substantial fears regarding the use of data for surveillance, monitoring, and control exist (Jia 2020; Mann et al. 2020; Qiang 2019).

  28. I.e. one that is not tainted by social or peer pressures. This, however, creates—even in relation to technology that ‘disappears’—the observer’s paradox: the aim must be to find out how people talk when they are not being systematically observed, yet this can only be obtained by systematic observation (Labov 1972, p. 209). The observed people may amend their behaviour, but the ‘invisible’ nature of the technology would decrease this effect as much as possible. (Outside of the hypothetical, in which anonymity is possible and maintained, in a real state—especially a totalitarian or oppressive regime—even the ‘invisible’ nature may mean individuals never communicate truthfully.)

  29. Which requires also putting aside discussions associated with the identification of the Rousseauian general will or cognate ideas.

  30. Within the confines of this hypothetical, the importing of such a function does not appear to be too much of a stretch.

  31. For a useful overview of liability issues, see (Webb 2016).

  32. Beyond this, there is also the potential for the creation of a technocratic elite of programmers or organisations that create the algorithms. I raise some of these issues in "Retaining the humans".

  33. What is clear, however, is that there can be effective use of digital platforms and the algorithms that operate in digital/social media to power a narrative that may run counter to either the mass media or the more widely-held view (Leyva and Beckett 2020; Rydgren and van der Meiden 2019; Schroeder 2019). (I am grateful to the anonymous reviewer for pressing for clarity on this point.)

  34. For an overview of the issues around contestation and the various accounts that are seen as canonical, see (Burgess 2017).

  35. Three illustrations can be found in (Dicey 1979, pp. 202–203; Hayek 2007, p. 112; Raz 2009, p. 210).

  36. For example, see (Bingham 2010). See also (UN Security Council 2004, para. 6).

  37. This formed one of Dicey’s desiderata. It was also apparent in Aristotle’s ideas and Locke’s. (Aristotle et al. 1981, para. 1287aI; Dicey 1979, pp. 188–198 and 202–203; Locke 1988, sec. 135).

  38. A useful discussion of these points, in the way that Fuller describes them, is provided by Waldron (2016).

  39. This puts to one side the potential for relative changes in the outcome of elections as a result of a lazy yet tech-savvy populace voting.

  40. The latter aspect is also addressed below in terms of different desiderata: that of clarity or certainty.

  41. The idea that the law may need to be interpreted by experts is touched upon by Waldron in his discussion of Fuller. (Waldron 2016).

  42. This could be said in the general sense that the issue has been more widely promulgated, or in the sense that this process reflects a greater acknowledgement of individuals’ dignity.

  43. Here, I mean something more than the intentional replacement of one act with another due to an overarching shift in a wider policy.

  44. At this stage, it is relevant to note that there is a difference between Algorithm and Algorithm+ in terms of the relative positivity of the outcome of assessing the desiderata, and that Algorithm+ also reflects a relative procedural change.

  45. For example, there would be no reason why groups that hold any view—even a morally repugnant one, like a neo-Nazi party—would be precluded from creating an algorithm that reflects their ideology. There seems here to be a risk that the building of the algorithms would create an autocratic class that would, in effect, control the system by controlling the ‘parties’ that could stand. Whilst this is true, it seems eminently possible that there could be a simple template for creating an algorithm. Although this may just shift the question one step further, in any event the same argument exists in any party formation process. (The argument that the ‘simple’ creation of an algorithm may decrease the transaction costs associated with the creation of a political party or movement, thus making morally repugnant parties more prevalent, is one that must await another discussion.)

  46. This is relatively unsurprising as, in effect, Blockchain+ is to direct democracy what Blockchain is to representative democracy.

  47. Aristotle decries the making of hasty and emotion-fuelled decisions and refers to the inclusion of (what may now be called) legislative due process (Aristotle 2004, bk. 1, ch. 1).

  48. I am grateful to the anonymous reviewer for drawing my attention to this potential.

  49. The very nature of those options would be strongly contested. Extreme differences of opinion between individuals with different political opinions—for example, individualists and collectivists—would radically impact what these options may be.

  50. Other research explores, in more tangible ways than the hypotheticals considered here, the design and development of systems that relate either to identifying issues (Chen et al. 2020; Monteiro 2019) or to identifying a way forward (Foth et al. 2015; Komninos and Zuber 2019). (Thanks, again, to the reviewer for pointing to the importance of these.)


Author information

Correspondence to Paul Burgess.


Cite this article

Burgess, P. Algorithmic augmentation of democracy: considering whether technology can enhance the concepts of democracy and the rule of law through four hypotheticals. AI & Soc 37, 97–112 (2022). https://doi.org/10.1007/s00146-021-01170-8
