
Legal requirements on explainability in machine learning

  • Original Research
  • Published in: Artificial Intelligence and Law

Abstract

Deep learning and other black-box models are becoming increasingly popular. Despite their high performance, they may not be accepted ethically or legally because of their lack of explainability. This paper presents the growing number of legal requirements on machine learning model interpretability and explainability in the context of private and public decision making. It then explains how those legal requirements can be implemented in machine learning models and concludes with a call for more interdisciplinary research on explainability.



Author information


Correspondence to Adrien Bibal.



About this article


Cite this article

Bibal, A., Lognoul, M., de Streel, A. et al. Legal requirements on explainability in machine learning. Artif Intell Law 29, 149–169 (2021). https://doi.org/10.1007/s10506-020-09270-4
