Towards Transparent AI: How will the AI Act Shape the Future?

  • Conference paper

Progress in Artificial Intelligence (EPIA 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14967)

Abstract

The European Union's Artificial Intelligence Act (AIA) aims to establish legal standards for AI systems, emphasizing transparency and explainability, especially in high-risk systems. Our research is divided into sections focusing on the legal framework established by the AIA, advancements in AI, and the intersection between legal obligations and technological developments. We explore how the AIA addresses these issues and intersects with emerging research in XAI.

Notes

  1. The Legal Affairs Committee of the European Parliament proposed the inclusion of a definition of transparency – see [3], Article 4a.

  2. We analyse Regulation (EU) 2024/1689 [4]. For analyses of previous versions, see [5, 6, 7, 20], the last of which offers a specific categorization for transparency regulation.

  3. The initial draft referred to “users”; that term has since been replaced by “deployers”. Transparency in this area is therefore aimed at deployers, to the exclusion of consumers (non-professionals). A deployer is a professional who uses an AI system under their authority.

  4. See Article 13(1) and Recital 72.

  5. By setting out how the system works, it is possible to establish risk and quality management systems (Articles 9 and 17).

  6. Moreover, this obligation is crucial for deployers to comply with Article 26, namely the duty to monitor the operation of the AI system and control its risks.

  7. Previously, the AIA draft did not allow a person affected by an AI system to exercise rights against the provider or to demand explanations. However, such a right was proposed in [3]: in Article 69(c), the Committee proposed that any affected person subject to a decision based on the output of an AI system which produces legal effects or impacts their health, safety, fundamental rights, or socio-economic well-being should receive an explanation at the time the decision is communicated.

  8. This is the most relevant category of opacity, but there are other types as well, namely opacity due to technical illiteracy and opacity as deliberate corporate or state secrecy [30]. The source of opacity can thus be human cognition, technical, or legal, depending on the target.

  9. This approach uses comprehensible (logical) languages that allow us to check how the machine arrived at a particular result. However, this logic may not be easily understood by non-specialists [12], which requires techniques that allow different stakeholders to better understand it (see the first sketch after these notes).

  10. According to the author, there are usually no differences in performance.

  11. In addition, particular attention should be paid to the way information is communicated.

  12. Therefore, explainability metrics must be established; these should not be confused with the quality of the system's results [19, p. 2715]. A model can be highly explainable but perform inadequately. Nevertheless, the quality of the explanation can help in understanding the quality of the model (see the second sketch after these notes).

  13. [7] divided explainability in the AIA into user-empowering and compliance-oriented. However, since recent changes now refer to “deployers”, we should consider deployer-empowering explainability, which enables deployers to comply with the AIA; it is therefore, in fact, a compliance-oriented measure.

  14. The second and the third are important to address two types of opacity: (i) intentional opacity, where companies deliberately hide information from public scrutiny; and (ii) opacity due to complex models that are difficult to understand. Our concern is with how the AIA addresses the second type. As noted in [16], addressing this type of opacity requires careful selection and design of the algorithms.

  15. Not only to give them knowledge, but also to allow the exercise of their rights.

  16. According to [22], this is not a complex explanation of the algorithms, but sufficient information to allow the data subject to understand the reasoning behind the decision.

  17. As identified by [28], transparency costs “can be mitigated by choosing where and how transparency interventions are necessary”.

  18. Trust in a system is bolstered when stakeholders comprehend its design and behaviour.

  19. Explainability plays a crucial role in understanding and correcting any flaws in the system, ensuring fairness. Moreover, a fair system allows affected individuals to challenge decisions made by AI systems, which is only possible if individuals have the right to an explanation.

  20. The GDPR primarily applies to automated decisions with significant or legal effects [8], leaving out AI systems that could have a significant impact, such as those disseminating fake news.
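
First sketch (for note 9): a minimal, illustrative example, not taken from the paper, of an inherently interpretable model whose reasoning can be audited rule by rule. The dataset and feature names are stock scikit-learn assumptions, chosen only to keep the sketch self-contained.

```python
# Minimal sketch: an interpretable (rule-based) model whose decision
# logic can be printed and checked, as discussed in note 9.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()  # toy data; a high-risk system would use domain data
X, y = data.data, data.target

# A shallow tree keeps the rule set small enough for human review.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# The learned rules in a comprehensible (logical) form: one can check
# exactly how the machine arrives at a particular result.
print(export_text(model, feature_names=list(data.feature_names)))

# The exact root-to-leaf path followed for a single input.
path = model.decision_path(X[:1])
print("Nodes visited for the first sample:", path.indices.tolist())
```

As note 9 observes, even these logical rules may not be accessible to non-specialists; the point of the sketch is only that the reasoning is available for inspection at all.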
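
Second sketch (for note 12): a minimal, illustrative example of one possible explainability metric, global surrogate fidelity, measured separately from model quality. Neither the AIA nor [19] prescribes this particular metric; it is an assumption chosen to make the distinction concrete.

```python
# Minimal sketch: model quality and explanation quality are measured
# separately, as note 12 argues they must be.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box model: accuracy against true labels measures MODEL quality.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
model_quality = accuracy_score(y_test, black_box.predict(X_test))

# Interpretable surrogate trained to mimic the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: agreement between surrogate and black box on held-out data
# measures EXPLANATION quality; the two numbers can diverge.
fidelity = accuracy_score(black_box.predict(X_test),
                          surrogate.predict(X_test))

print(f"model quality (accuracy): {model_quality:.3f}")
print(f"explanation fidelity:     {fidelity:.3f}")
```

A model can thus be highly explainable (high fidelity) yet perform inadequately (low accuracy), which is exactly the confusion note 12 warns against.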

References

  1. Independent High-Level Expert Group on Artificial Intelligence: Ethics Guidelines for Trustworthy AI (2019)

  2. European Commission: White Paper on Artificial Intelligence – A European approach to excellence and trust (2020)

  3. European Parliament: Compromise AMs – JURI AI Act – FINAL 30/08/2022 (2022)

  4. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)

  5. Gyevnar, B., Ferguson, N., Schafer, B.: Bridging the transparency gap: what can explainable AI learn from the AI Act? In: Gal, K., Nowé, A., Nalepa, G.J., Fairstein, R., Rădulescu, R. (eds.) Proceedings of ECAI 2023, the 26th European Conference on Artificial Intelligence. Frontiers in Artificial Intelligence and Applications, vol. 372, pp. 964–971 (2023)

  6. Hacker, P., Passoth, J.H.: Varieties of AI explanations under the law. From the GDPR to the AIA, and beyond. In: Holzinger, A., Goebel, R., Fong, R., Moon, T., Müller, K.R., Samek, W. (eds.) xxAI – Beyond Explainable AI. xxAI 2020. Lecture Notes in Computer Science, vol. 13200. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-04083-2_17

  7. Sovrano, F., Sapienza, S., Palmirani, M., Vitali, F.: Metrics, explainability and the European AI Act proposal. J 5(1), 126–138 (2022)

  8. Edwards, L., Veale, M.: Enslaving the algorithm: from a “Right to an Explanation” to a “Right to Better Decisions”?. IEEE Secur. Privacy 16(3), 46–54 (2018)

  9. Ebers, M.: Regulating explainable AI in the European Union. An overview of the current legal framework(s). In: Colonna, L., Greenstein, S. (eds.) Nordic Yearbook of Law and Informatics 2020: Law in the Era of Artificial Intelligence (2021)

  10. Panigutti, C., et al.: The role of explainable AI in the context of the AI Act. In: 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT 2023), pp. 1139–1150 (2023)

  11. Dignum, V.: Responsible Artificial Intelligence. How to Develop and Use AI in a Responsible Way. Springer (2019)

  12. de Bruijn, H., Warnier, M., Janssen, M.: The perils and pitfalls of explainable AI: strategies for explaining algorithmic decision-making. Gov. Inf. Q. 39(2), 101666 (2022)

  13. Arrieta, A., et al.: Explainable Artificial Intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020)

  14. Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Mach. Intell. 1(5), 206–215 (2019)

  15. Petkovic, D.: It is not “accuracy vs. explainability”—we need both for trustworthy AI systems. IEEE Trans. Technol. Soc. 4(1), 46–53 (2023)

  16. Goodman, B., Flaxman, S.: European Union regulations on algorithmic decision-making and a “Right to Explanation”. AI Mag. 38(3), 50–57 (2017)

  17. Belle, V., Papantonis, I.: Principles and practice of explainable machine learning. Front. Big Data 4, 688969 (2021)

  18. Cuypers, A.: The right to an explanation in the AI Act: a right to interpretable models? KU Leuven CiTiP blog (2024). https://www.law.kuleuven.be/citip/blog/the-right-to-explanation-in-the-ai-act-a-right-to-interpretable-models/. Accessed 15 Jul 2024

  19. De Mulder, W., Valcke, P.: The need for a numeric measure of explainability. In: 2021 IEEE International Conference on Big Data (Big Data), pp. 2712–2720 (2021)

  20. Walke, F., Bennek, L., Winkler, T.J.: Artificial intelligence explainability requirements of the AI act and metrics for measuring compliance. In: Wirtschaftsinformatik 2023 Proceedings, vol. 77 (2023)

  21. Moreira, N.A., Freitas, P.M., Novais, P.: The AI act meets general purpose AI: the good, the bad and the uncertain. In: Moniz, N., Vale, Z., Cascalho, J., Silva, C., Sebastião, R. (eds.) Progress in Artificial Intelligence. EPIA 2023. Lecture Notes in Computer Science, vol. 14116. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-49011-8_13

  22. Article 29 Data Protection Working Party: Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (WP251rev.01) (2018)

  23. European Telecommunications Standards Institute: Securing Artificial Intelligence (SAI); Explicability and transparency of AI processing (2023)

  24. Mitchell, M., et al.: Model cards for model reporting. In: FAT* 2019: Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 220–229 (2019)

  25. European Commission: Building Trust in Human-Centric Artificial Intelligence. Technical Report COM(2019) 168 final (2019)

  26. Wachter, S., Mittelstadt, B., Floridi, L.: Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. Int. Data Privacy Law 7(2), 76–99 (2017)

  27. Binns, R.: Algorithmic accountability and public reason. Philos. Technol. 31(4), 543–556 (2017). https://doi.org/10.1007/s13347-017-0263-5

  28. EPRS: A governance framework for algorithmic accountability and transparency (2019)

  29. Facchini, A., Termine, A.: Towards a taxonomy for the opacity of AI systems. In: Müller, V.C. (ed.) Philosophy and Theory of Artificial Intelligence 2021. PTAI 2021. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol. 63. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-09153-7_7

  30. Burrell, J.: How the machine ‘thinks’: understanding opacity in machine learning algorithms. Big Data Soc. 3(1) (2016)

Acknowledgments

The work of Nídia Andrade Moreira has been supported by FCT - Fundação para a Ciência e Tecnologia within the Grant 2021.07986.BD.

Author information

Corresponding author

Correspondence to Nídia Andrade Moreira.

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Moreira, N.A., Freitas, P.M., Novais, P. (2025). Towards Transparent AI: How will the AI Act Shape the Future?. In: Santos, M.F., Machado, J., Novais, P., Cortez, P., Moreira, P.M. (eds) Progress in Artificial Intelligence. EPIA 2024. Lecture Notes in Computer Science, vol 14967. Springer, Cham. https://doi.org/10.1007/978-3-031-73497-7_24

  • DOI: https://doi.org/10.1007/978-3-031-73497-7_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73496-0

  • Online ISBN: 978-3-031-73497-7

  • eBook Packages: Computer Science, Computer Science (R0)
