
Runtime Verification and AI: Addressing Pragmatic Regulatory Challenges

  • Conference paper in: Bridging the Gap Between AI and Reality (AISoLA 2024)

Abstract

The deployment of AI-driven solutions to increasingly complex tasks with real-world impact raises various challenges in the area of verification. Using the case study of an AI-assisted litter detection being developed for rural areas in Malta, this paper highlights the multi-faceted nature of the risks involved concerning: data issues, functionality correctness, safety concerns, and legal considerations. We place particular focus on the last of these: regulatory challenges.

Drawing inspiration from related works, considering applicable Maltese technology guidelines and EU legislation, against the backdrop of the challenges presented in the case study, the proposed runtime verification architecture brings the pieces together in a comprehensive and pragmatic manner.



Notes

  1. Turing’s 1949 paper on algorithmic verification [32] and his 1950 paper on machines and intelligence [33] are frequently cited as the starting points of the two fields.

  2. In the European Union, the provisions applicable to drone operations in the ‘open’ and ‘specific’ categories are described in EU Regulation 2019/945 and EU Regulation 2019/947.

  3. These include systems using subliminal techniques; systems which distort the behaviour of persons on the basis of their age, disability, or a specific social or economic situation; the use of biometric data to categorise individuals or to infer personal information such as race or political opinions; some uses of real-time remote biometric identification systems in publicly accessible spaces; and AI systems that infer the emotions of a natural person in the workplace.

  4. It is worth highlighting that this is a regulatory, not a technological, sandbox, i.e. it is meant to address and mitigate regulatory risks during the residency period in the sandbox, rather than being a technical solution limiting the interaction of a system with its environment.

  5. Innovative Technology Arrangement (ITA) is the general term for the digital systems covered by the legislation, a category which includes AI systems applied to critical areas.

  6. Although related, they are not equivalent: the notion of a shield is a component of the harness because the former is mostly concerned with enforcing safety properties while the latter is also interested in other aspects such as collecting and analysing data more generally.
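    The shield/harness distinction above can be illustrated with a minimal sketch (not taken from the paper; the class names, the `is_safe` predicate, and the toy "no negative speed" property are illustrative assumptions): a shield only filters unsafe actions, while a harness additionally records every decision for later analysis.

    ```python
    # Minimal sketch contrasting a shield (enforcement only) with a
    # harness (enforcement plus data collection). Names are illustrative.

    class Shield:
        """Blocks actions that would violate a safety property."""

        def __init__(self, is_safe):
            self.is_safe = is_safe

        def filter(self, action, fallback):
            # Replace an unsafe action with a known-safe fallback.
            return action if self.is_safe(action) else fallback


    class Harness(Shield):
        """A shield that additionally logs every decision for analysis."""

        def __init__(self, is_safe):
            super().__init__(is_safe)
            self.log = []

        def filter(self, action, fallback):
            chosen = super().filter(action, fallback)
            # Data collection beyond pure safety enforcement.
            self.log.append((action, chosen))
            return chosen


    harness = Harness(is_safe=lambda speed: speed >= 0)
    print(harness.filter(5, fallback=0))   # safe action passes through: 5
    print(harness.filter(-3, fallback=0))  # unsafe action replaced: 0
    print(len(harness.log))                # both decisions were recorded: 2
    ```

    The point of the sketch is structural: the harness contains a shield, but its responsibilities (here, the `log`) go beyond enforcing the safety property.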

References

  1. Alshiekh, M., Bloem, R., Ehlers, R., Könighofer, B., Niekum, S., Topcu, U.: Safe reinforcement learning via shielding. In: McIlraith, S.A., Weinberger, K.Q. (eds.) Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, 2–7 February 2018, pp. 2669–2678. AAAI Press (2018). https://doi.org/10.1609/AAAI.V32I1.11797

  2. Azzopardi, S., Colombo, C., Ebejer, J., Mallia, E., Pace, G.J.: Runtime verification using VALOUR. In: Reger, G., Havelund, K. (eds.) RV-CuBES 2017. An International Workshop on Competitions, Usability, Benchmarks, Evaluation, and Standardisation for Runtime Verification Tools, 15 September 2017, Seattle, WA, USA. Kalpa Publications in Computing, vol. 3, pp. 10–18. EasyChair (2017). https://doi.org/10.29007/BWD4

  3. Azzopardi, S., Colombo, C., Pace, G.J.: A controlled natural language for financial services compliance checking. In: Davis, B., Keet, C.M., Wyner, A. (eds.) Controlled Natural Language - Proceedings of the Sixth International Workshop, CNL 2018, Maynooth, Co. Kildare, Ireland, 27–28 August 2018. Front. Artif. Intell. Appl. 304, 11–20. IOS Press (2018). https://doi.org/10.3233/978-1-61499-904-1-11

  4. Cheng, C., Nührenberg, G., Yasuoka, H.: Runtime monitoring neuron activation patterns. In: Teich, J., Fummi, F. (eds.) Design, Automation and Test in Europe Conference and Exhibition, DATE 2019, Florence, Italy, 25–29 March 2019, pp. 300–303. IEEE (2019). https://doi.org/10.23919/DATE.2019.8714971

  5. Ellul, J., Galea, J., Ganado, M., Mccarthy, S., Pace, G.J.: Regulating blockchain, DLT and smart contracts: a technology regulator’s perspective. ERA Forum 21(2), 209–220 (2020). https://doi.org/10.1007/s12027-020-00617-7


  6. Ellul, J., Pace, G.J., McCarthy, S., Sammut, T., Brockdorff, J., Scerri, M.: Regulating artificial intelligence: a technology regulator’s perspective. In: Maranhão, J., Wyner, A.Z. (eds.) ICAIL ’21: Eighteenth International Conference for Artificial Intelligence and Law, São Paulo, Brazil, 21–25 June 2021, pp. 190–194. ACM (2021). https://doi.org/10.1145/3462757.3466093

  7. European Union: Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) (2019)


  8. European Union: Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (2024)


  9. Aranda García, A., Cambronero, M.-E., Colombo, C., Llana, L., Pace, G.J.: Runtime verification of contracts with Themulus. In: de Boer, F., Cerone, A. (eds.) SEFM 2020. LNCS, vol. 12310, pp. 231–246. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58768-0_13


  10. Pace, G.J., Ellul, J., Revolidis, I., Schneider, G.: When is good enough good enough? On software assurances. ERA Forum 23, 337–360 (2023). https://doi.org/10.1007/s12027-022-00728-3

  11. Havelund, K.: What does AI have to do with RV? - (extended abstract). In: Margaria, T., Steffen, B. (eds.) ISoLA 2012. LNCS, vol. 7609, pp. 292–294. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-34026-0_22


  12. He, Y., Schumann, J., Yu, H.: Toward runtime assurance of complex systems with AI components. In: PHM Society European Conference, vol. 7, pp. 166–174. PHM Society (2022). https://doi.org/10.36001/phme.2022.v7i1.3361

  13. Henzinger, T.A., Lukina, A., Schilling, C.: Outside the box: abstraction-based monitoring of neural networks. In: Giacomo, G.D., et al. (eds.) ECAI 2020 - 24th European Conference on Artificial Intelligence, 29 August–8 September 2020, Santiago de Compostela, Spain - Including 10th Conference on Prestigious Applications of Artificial Intelligence (PAIS 2020). Front. Artif. Intell. Appl. 325, 2433–2440. IOS Press (2020). https://doi.org/10.3233/FAIA200375

  14. Könighofer, B., Bloem, R., Ehlers, R., Pek, C.: Correct-by-construction runtime enforcement in AI - a survey. In: Raskin, J.F., Chatterjee, K., Doyen, L., Majumdar, R. (eds.) Principles of Systems Design. LNCS, vol. 13660, pp. 650–663. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-22337-2_31


  15. Lu, S., Lysecky, R.: Time and sequence integrated runtime anomaly detection for embedded systems. ACM Trans. Embed. Comput. Syst. 17(2), 38:1–38:27 (2018). https://doi.org/10.1145/3122785

  16. Lukina, A., Schilling, C., Henzinger, T.A.: Into the unknown: active monitoring of neural networks. In: Feng, L., Fisman, D. (eds.) RV 2021. LNCS, vol. 12974, pp. 42–61. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-88494-9_3


  17. Mallozzi, P., Castellano, E., Pelliccione, P., Schneider, G., Tei, K.: A runtime monitoring framework to enforce invariants on reinforcement learning agents exploring complex environments. In: Proceedings of the 2nd International Workshop on Robotics Software Engineering, RoSE@ICSE 2019, Montreal, QC, Canada, 27 May 2019, pp. 5–12. IEEE/ACM (2019). https://doi.org/10.1109/ROSE.2019.00011

  18. Government of Malta: Innovative Technology Arrangements and Services Act (2018)


  19. Malta Digital Innovation Authority: AI ITA Blueprint Guidelines (2019). https://www.mdia.gov.mt/wp-content/uploads/2022/11/AI-ITA-Blueprint-Guidelines-03OCT19.pdf

  20. Malta Digital Innovation Authority: AI ITA Guidelines (2019). https://www.mdia.gov.mt/wp-content/uploads/2022/11/AI-ITA-Guidelines-03OCT19.pdf

  21. Malta Digital Innovation Authority: AI ITA Nomenclature (2019). https://www.mdia.gov.mt/wp-content/uploads/2022/11/AI-ITA-Nomenclature-03OCT19.pdf

  22. Malta Digital Innovation Authority: AI System Auditor Control Objectives (2019). https://www.mdia.gov.mt/wp-content/uploads/2022/11/AI-ITA-SA-Control-Objectives-03OCT19.pdf

  23. Malta Digital Innovation Authority: Technology Assurance Sandbox v2.0 Programme Guidelines (2020). https://www.mdia.gov.mt/wp-content/uploads/2022/11/MDIA-Technology-Assurance-Sandbox-TAS-Programme-Guidelines.pdf

  24. Neufeld, E.A., Bartocci, E., Ciabattoni, A., Governatori, G.: Enforcing ethical goals over reinforcement-learning policies. Ethics Inf. Technol. 24(4), 43 (2022). https://doi.org/10.1007/S10676-022-09665-8

  25. Pace, G.J., Ravn, A.P. (eds.): Proceedings Sixth Workshop on Formal Languages and Analysis of Contract-Oriented Software, FLACOS 2012, Bertinoro, Italy, 19 September 2012, EPTCS, vol. 94 (2012). https://doi.org/10.4204/EPTCS.94

  26. Pace, G.J., Sánchez, C., Schneider, G.: Reliable smart contracts. In: Margaria, T., Steffen, B. (eds.) ISoLA 2020, Part III. LNCS, vol. 12478, pp. 3–8. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-61467-6_1


  27. Pardo, R., Colombo, C., Pace, G.J., Schneider, G.: An automata-based approach to evolving privacy policies for social networks. In: Falcone, Y., Sánchez, C. (eds.) RV 2016. LNCS, vol. 10012, pp. 285–301. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46982-9_18


  28. Pisani, D., Seychell, D., Schembri, M.: Detecting litter from aerial imagery using the SODA dataset. In: 2024 IEEE 22nd Mediterranean Electrotechnical Conference (MELECON) (2024)


  29. Rahman, Q.M., Corke, P., Dayoub, F.: Run-time monitoring of machine learning for robotic perception: a survey of emerging trends. IEEE Access 9, 20067–20075 (2021). https://doi.org/10.1109/ACCESS.2021.3055015

  30. Schembri, M., Seychell, D.: Small object detection in highly variable backgrounds. In: 2019 IEEE 11th International Symposium on Image and Signal Processing and Analysis (ISPA), pp. 32–37 (2019)


  31. Sha, L.: Using simplicity to control complexity. IEEE Softw. 18(4), 20–28 (2001). https://doi.org/10.1109/MS.2001.936213

  32. Turing, A.M.: Checking a large routine. In: Report of a Conference on High Speed Automatic Calculating Machines, pp. 67–69 (1949)


  33. Turing, A.M.: Computing machinery and intelligence. MIND: Q. Rev. Psychol. Philos. LIX(236), 433–460 (1950)


Author information

Correspondence to Gordon Pace.

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Colombo, C., Pace, G., Seychell, D. (2025). Runtime Verification and AI: Addressing Pragmatic Regulatory Challenges. In: Steffen, B. (eds) Bridging the Gap Between AI and Reality. AISoLA 2024. Lecture Notes in Computer Science, vol 15217. Springer, Cham. https://doi.org/10.1007/978-3-031-75434-0_16


  • DOI: https://doi.org/10.1007/978-3-031-75434-0_16


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-75433-3

  • Online ISBN: 978-3-031-75434-0

  • eBook Packages: Computer Science, Computer Science (R0)
