Centrality of AI Quality in MLOPs Lifecycle and Its Impact on the Adoption of AI/ML Solutions

  • Conference paper
  • In: Intelligent Systems Design and Applications (ISDA 2022)

Abstract

Despite the challenges of incorporating Artificial Intelligence into business processes, AI is revolutionizing the way companies do business. The biggest business and social challenge in the adoption of AI solutions is earning end users’ trust in the models and scaling prototypes to production-ready models in an enterprise environment. Scaling trustworthy AI in a more transparent, responsible, and governed manner could facilitate its widespread adoption in the enterprise. After conducting an extensive literature review on different aspects of AI quality, we have developed an integrated AI Quality-MLOps framework that enables the development and deployment of AI solutions in an enterprise environment. AI Quality sits at the center of the proposed framework and guides businesses in assembling a complete set of quality metrics, tests, approaches, and algorithms to ensure conformance with business objectives. This approach improves the delivery efficiency of the solution during both the design and production phases while conforming to the regulatory guidelines adopted by an organization.
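
The framework's core idea, assembling quality metrics, tests, and checks into a single conformance decision, can be made concrete with a small sketch. The Python snippet below is a hypothetical illustration rather than part of the paper: the QualityGate class, the metric names, and the threshold values are all assumptions chosen to show how an MLOps pipeline might aggregate performance, fairness, and drift checks and block promotion to production when any threshold is violated.

# Illustrative sketch only: a hypothetical "AI quality gate" that an MLOps
# pipeline could run before promoting a model. The class, metric names, and
# thresholds are assumptions for illustration, not artifacts of the paper.
from dataclasses import dataclass, field


@dataclass
class QualityGate:
    """Aggregates quality checks (performance, fairness, drift, ...) and
    reports whether a candidate model conforms to business thresholds."""
    thresholds: dict = field(default_factory=dict)

    def evaluate(self, metrics: dict) -> dict:
        """Compare observed metrics against thresholds. Metrics prefixed
        'max_' are lower-is-better and must not exceed the limit; all
        others are higher-is-better and must meet or exceed it."""
        results = {}
        for name, limit in self.thresholds.items():
            observed = metrics.get(name)
            if observed is None:
                results[name] = False  # a missing metric fails the gate
            elif name.startswith("max_"):
                results[name] = observed <= limit
            else:
                results[name] = observed >= limit
        return results

    def passes(self, metrics: dict) -> bool:
        return all(self.evaluate(metrics).values())


if __name__ == "__main__":
    gate = QualityGate(thresholds={
        "accuracy": 0.85,            # predictive performance
        "demographic_parity": 0.80,  # fairness ratio between protected groups
        "max_psi_drift": 0.20,       # population stability index on inputs
    })
    candidate = {"accuracy": 0.91, "demographic_parity": 0.83, "max_psi_drift": 0.07}
    print(gate.evaluate(candidate))
    print("Promote to production:", gate.passes(candidate))

In practice such a gate would run as one step of a CI/CD or model-promotion pipeline, with the metric values supplied by whatever evaluation, fairness, and monitoring tooling the organization already uses.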

References

  1. Schmitz, A., Akila, M., Hecker, D., Poretschkin, M., Wrobel, S.: The why and how of trustworthy AI. at-Automatisierungstechnik 70(9), 793–804 (2022)

    Google Scholar 

  2. DIN (German Institute for Standardization) Homepage. https://www.din.de/resource/blob/772610/e96c34dd6b12900ea75b460538805349/normungsroadmap-en-data.pdf. Accessed 16 Oct 2022

  3. American Council for Technology-Industry Advisory Council’s Homepage (ACT-IAC). https://www.actiac.org/system/files/Ethical%20Application%20of%20AI%20Framework_0.pdf. Accessed 16 Oct 2022

  4. Bellamy, R.K., et al.: AI Fairness 360: an extensible toolkit for detecting and mitigating algorithmic bias. IBM J. Res. Dev. 63(4/5), 4:1–4:15 (2019)

    Google Scholar 

  5. Dwork, C., Hardt, M., Pitassi, T., Reingold, O., Zemel, R.: Fairness through awareness. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pp. 214–226 (2012)

    Google Scholar 

  6. Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. In: Advances in Neural Information Processing Systems. vol. 30 (2017)

    Google Scholar 

  7. Pearl, J.: The seven tools of causal inference, with reflections on machine learning. Commun. ACM 62(3), 54–60 (2019)

    Google Scholar 

  8. Dwork, C., McSherry, F., Nissim, K., Smith, A.: Calibrating Noise to Sensitivity in Private Data Analysis. In: Halevi, S., Rabin, T. (eds.) TCC 2006. LNCS, vol. 3876, pp. 265–284. Springer, Heidelberg (2006). https://doi.org/10.1007/11681878_14

    Chapter  Google Scholar 

  9. FDA Homepage. https://www.fda.gov/media/122535/download. Accessed 16 Oct 2022

  10. Cortes, C., DeSalvo, G., Mohri, M.: Learning with rejection. In: Ortner, R., Simon, H.U., Zilles, S. (eds.) ALT 2016. LNCS (LNAI), vol. 9925, pp. 67–82. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46379-7_5

    Chapter  MATH  Google Scholar 

  11. Boult, T.E., Cruz, S., Dhamija, A.R., Gunther, M., Henrydoss, J. Scheirer, W.J.: Learning and the unknown: surveying steps toward open world recognition. In: Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33, No. 01, pp. 9801–9807 (2019)

    Google Scholar 

  12. Peters, J., Buhlmann, P., Meinshausen, N.: Causal inference using invariant prediction: identification and confidence intervals. arXiv. Methodology (2015)

    Google Scholar 

  13. Arjovsky, M., Bottou, L., Gulrajani, I., Lopez-Paz, D.: Invariant risk minimization. arXiv preprint arXiv:1907.02893. (2019)

  14. Settles, B.: Active Learning Literature Survey. (2009)

    Google Scholar 

  15. Romera-Paredes, B., Torr, P.: An embarrassingly simple approach to zero-shot learning. In: International conference on machine learning, pp. 2152–2161 (2015)

    Google Scholar 

  16. Fei-Fei, L., Fergus, R., Perona, P.: One-shot learning of object categories. IEEE Trans. pattern Anal. Mach. Intell. 28(4), pp. 594–611 (2006)

    Google Scholar 

  17. Ravi, S., Larochelle, H.: Optimization as a model for few-shot learning. (2016)

    Google Scholar 

  18. Frazão, I., Abreu, P.H., Cruz, T., Araújo, H., Simões, P.: Denial of service attacks: Detecting the frailties of machine learning algorithms in the classification process. In: Luiijf, E., Žutautaitė, I., Hämmerli, B.M. (eds.) CRITIS 2018. LNCS, vol. 11260, pp. 230–235. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-05849-4_19

    Chapter  Google Scholar 

  19. Newaz, A.I., Haque, N.I., Sikder, A.K., Rahman, M.A., Uluagac, A.S.: Adversarial attacks to machine learning-based smart healthcare systems. In: IEEE Global Communications Conference, pp. 1–6 IEEE (2020)

    Google Scholar 

  20. Fredrikson, M., Jha, S., Ristenpart, T.: Model inversion attacks that exploit confidence information and basic countermeasures. In: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1322–1333 (2015)

    Google Scholar 

  21. Tramèr, F., Zhang, F., Juels, A., Reiter, M.K., Ristenpart, T.: Stealing machine learning models via prediction APIs. In: 25th USENIX Security Symposium, pp. 601–618 (2016)

    Google Scholar 

  22. Shokri, R., Stronati, M., Song, C., Shmatikov, V.: Membership inference attacks against machine learning models. In: IEEE Symposium on Security and Privacy, pp. 3–18 IEEE (2017)

    Google Scholar 

  23. Wang, C., Chen, J., Yang, Y., Ma, X., Liu, J.: Poisoning attacks and countermeasures in intelligent networks: status quo and prospects. Digital Commun. Netw. 8(2) (2021)

    Google Scholar 

  24. Chakraborty, A., Alam, M., Dey, V., Chattopadhyay, A., Mukhopadhyay, D.: Adversarial attacks and defences: A survey. arXiv preprint arXiv:1810.00069 (2018)

  25. Tabassi, E., Burns, K.J., Hadjimichael, M., Molina-Markham, A.D., Sexton, J.T.: A taxonomy and terminology of adversarial machine learning. NIST IR, 1–29. (2019)

    Google Scholar 

  26. Bifet, A., Gavalda, R.: Learning from time-changing data with adaptive windowing. In: Proceedings of the 2007 SIAM International Conference on Data Mining, pp. 443–448. Society for Industrial and Applied Mathematics (2007)

    Google Scholar 

  27. Gama, J., Medas, P., Castillo, G., Rodrigues, P.: Learning with drift detection. In: Bazzan, A.L.C., Labidi, S. (eds.) SBIA 2004. LNCS (LNAI), vol. 3171, pp. 286–295. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-28645-5_29

    Chapter  Google Scholar 

  28. Baena-Garcıa, M., del Campo-Ávila, J., Fidalgo, R., Bifet, A., Gavalda, R., Morales-Bueno, R.: Early drift detection method. In: Fourth International Workshop on Knowledge Discovery from Data Streams. Vol. 6, pp. 77–86 (2006)

    Google Scholar 

  29. Page, E.S.: Continuous inspection schemes. Biometrika 41(1/2), pp.100–115 (1954)

    Google Scholar 

  30. Cotter, A., et al.: Training fairness-constrained classifiers to generalize. In: ICML Workshop: Fairness, Accountability, and Transparency in Machine Learning (2018)

    Google Scholar 

  31. Federal Housing finance Agency’s HomePage. https://www.fhfa.gov/SupervisionRegulation/AdvisoryBulletins/Pages/Artificial-Intelligence-Machine-Learning-Risk-Management.aspx. Accessed 08 Nov 2022

Download references

Acknowledgements

As used in this document, “Deloitte” means Deloitte Consulting LLP, a subsidiary of Deloitte LLP. Please see www.deloitte.com/us/about for a detailed description of our legal structure. Certain services may not be available to attest clients under the rules and regulations of public accounting. This publication contains general information only and Deloitte is not, by means of this publication, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This publication is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional advisor. Deloitte shall not be responsible for any loss sustained by any person who relies on this publication.

Copyright © 2022 Deloitte Development LLC. All rights reserved.

Author information

Corresponding author

Correspondence to Arunkumar Akkineni.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Akkineni, A., Koohborfardhaghighi, S., Singh, S. (2023). Centrality of AI Quality in MLOPs Lifecycle and Its Impact on the Adoption of AI/ML Solutions. In: Abraham, A., Pllana, S., Casalino, G., Ma, K., Bajaj, A. (eds) Intelligent Systems Design and Applications. ISDA 2022. Lecture Notes in Networks and Systems, vol 717. Springer, Cham. https://doi.org/10.1007/978-3-031-35510-3_42
