Abstract
Artificial intelligence (AI) decision-making systems are already extensively used in situations where legal rules are applied to establish rights and obligations. In the United States, algorithmic systems are employed to determine individuals' entitlement to disability benefits, to evaluate the performance of employees and select who will be fired, and to assist judges in granting or denying bail and probation. In this paper I explore some possible implications of H. L. A. Hart’s theory of law as a system of primary and secondary rules for the ongoing debate on the viability and limits of an adjudicating artificial intelligence. Although much has recently been discussed about the potential practical roles of artificial intelligence in legal practice and assisted decision making, its implications for general jurisprudence still require further development. I try to map some issues of general jurisprudence that may be consequential to the question of whether a non-human entity (an artificial intelligence) would be theoretically able to perform the kind of legal reasoning made by human judges.
Notes
See [5].
See [2].
Jeremy WALDRON acknowledges this paradigmatic status of the model of rules when challenging one of its key elements: “The rule of recognition is … a central component of modern positivist jurisprudence.” [18].
See [11].
See [7].
See [15].
Id. at 39–40.
Id. at 41.
Id. at 42.
See [10].
Id. at 602.
Id. at 605.
HART. supra note 3, at 70.
Id. at 81.
Id. at 94.
Even when the addressees of rules are non-human or collective entities, like unions, states, corporations or intelligent machines, rules still ultimately determine acts to be performed or avoided by human beings.
HART. supra note 3, at 81.
Id. at 94.
Id. at 95.
Id. at 97.
DWORKIN. supra note 4, at 35.
HART. supra note 3, at 96.
Id. at 96.
Id.
Id. at 97.
See [3].
See [12].
HART. supra note 3, at 98.
Id. at 10.
Id. at 33.
Id. at 95.
Id. at 101.
See [13].
See [6].
HART. supra note 3, at 205.
Id.
Id.
HILDEBRANDT. supra note 26, at 23.
Id.
See [20].
See [1].
See [16].
Id. at 161.
Id.
See [9].
HILDEBRANDT. supra note 26, at 19.
See [17].
HART. supra note 3, at 261.
Id. at 35–7.
Id. at 36.
Id. at 40.
Id.
See [4].
Id. at 53.
Consistent with this picture of legal rules, legal institutions can in turn be described as consisting of “informational processes” organized around exchanges of legal information. See ABITEBOUL & DOWEK. supra note 33, at 82.
ABITEBOUL & DOWEK. supra note 33, at 36.
HART. supra note 3, at 27–8.
BERWICK & CHOMSKY. supra note 52, at 53.
See [14].
Here, I draw a parallel with David Hilbert’s formalist program in mathematics.
HART discusses the essential incompleteness of legal rules when making his case about the problems of the penumbra. See supra note 9, at 612.
RAZ. supra note 5, at 45.
HART. supra note 3, at 79.
Contrary to Hans Kelsen, who insists that norms always originate from acts of will.
RAZ. supra note 5, at 48.
Assuming that the adjudicating AI is not supplied with, and does not itself articulate, moral, ideological, political or other standards beyond those assimilated and encoded into the legal corpus.
ABITEBOUL & DOWEK. supra note 33, at 36.
Id. at 2.
See [19].
HART. supra note 3, at 135.
See [8].
References
Abiteboul, S., Dowek, G.: The Age of Algorithms. Cambridge University Press, Cambridge, p. 6 (2020)
Ash, E.: Judge, Jury, and EXEcute file: the brave new world of legal automation (2018)
Bathaee, Y.: Artificial intelligence opinion liability. Berkeley Technol. Law J. 35, 113–152 (2020)
Berwick, R.C., Chomsky, N.: Why Only Us: Language and Evolution. MIT Press, Cambridge, MA, p. 132 (2016)
Crawford, K., Schultz, J.: AI systems as state actors. Columbia Law Rev. 119, 1941–1943 (2019)
D’Almeida, A.C.: The Future of AI in the Brazilian Judicial System: AI mapping, integration, and governance. SIPA Capstone 12–13 (2021)
Dworkin, R.: Law’s Empire. Belknap Press of Harvard University Press, Cambridge, MA, p. 34 (1986)
Engstrom, D.F., Ho, D.E.: Algorithmic accountability in the administrative state. Yale J. Regul. 37, 800–807 (2020)
Garcez, A.S.d'A., Gabbay, D.M., Lamb, L.C.: A neural cognitive model of argumentation with application to legal inference and decision making. J. Appl. Logic. 12, 109–125 (2014)
Hart, H.L.A.: Positivism and the separation of law and morals. Harv. L. Rev. 71, 593–629 (1958)
Hart, H.L.A.: The Concept of Law. Oxford University Press, Oxford (2012)
Hildebrandt, M.: Law as computation in the era of artificial intelligence: speaking law to the power of statistics. Univ. Toronto Law J. 68, 12–26 (2018)
Oswald, M., Grace, J., Urwin, S., Barnes, G.C.: Algorithmic risk assessment policing models: lessons from the Durham HART model and ‘Experimental’ proportionality. Inf. Commun. Technol. Law 27, 223–235 (2018)
Prakken, H.: Logical Tools for Modeling Legal Argument: A Study of Defeasible Reasoning in Law. Springer, Berlin, pp. 275–280 (1997)
Raz, J.: Legal positivism and the sources of law. In: The Authority of Law: Essays on Law and Morality, pp. 37–47. Clarendon Press, Oxford (1979)
Reich, R., Sahami, M., Weinstein, J.M.: System Error: Where Big Tech Went Wrong and How We Can Reboot. Hodder and Stoughton, p. 82 (2021)
Tenenbaum, J.B., Kemp, C., Griffiths, T.L., Goodman, N.D.: How to grow a mind: statistics, structure, and abstraction. Science 331, 1279–1285 (2011)
Waldron, J.: Who needs rules of recognition? In: Adler, M., Himma, K.E. (eds.) The Rule of Recognition and the U.S. Constitution, p. 327. Oxford University Press (2009)
Waluchow, W.: Inclusive Legal Positivism. Clarendon Press, Oxford, p. 80 (1994)
Williams, R.: Rethinking Deference for Algorithmic Decision-Making. Oxford Legal Studies Research Paper No. 7/2019 (2018)
Ethics declarations
Conflict of interest
The author declares that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Canalli, R.L. Artificial intelligence and the model of rules: better than us? AI Ethics 3, 879–885 (2023). https://doi.org/10.1007/s43681-022-00210-3