
Bayesian CAIPI: A Probabilistic Approach to Explanatory and Interactive Machine Learning

  • Conference paper
Artificial Intelligence. ECAI 2023 International Workshops (ECAI 2023)

Abstract

Explanatory Interactive Machine Learning queries the user for feedback on the prediction and the explanation of novel instances. CAIPI, a state-of-the-art algorithm, captures this feedback and iteratively biases a data set toward a correct decision-making mechanism using counterexamples. The counterexample generation procedure relies on hand-crafted data augmentation and might produce implausible instances. We propose Bayesian CAIPI, which embeds a Variational Autoencoder into CAIPI's classification cycle and samples counterexamples from the likelihood distribution. Using the MNIST data set, where we distinguish ones from sevens, we show that Bayesian CAIPI matches the predictive accuracy of both traditional CAIPI and default deep learning. Moreover, it outperforms both in terms of explanation quality.
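The abstract describes the mechanism only at a high level. The following is a minimal sketch, not the authors' implementation, of how counterexamples might be drawn from a trained VAE instead of hand-crafted augmentation: encode the corrected instance, sample latent codes around the approximate posterior, decode them through the likelihood model, and attach the user-corrected label. The VAE architecture, latent dimensionality, and the sample_counterexamples helper are illustrative assumptions.

# Minimal sketch (assumed architecture, not the paper's code) of VAE-based
# counterexample sampling in the spirit of Bayesian CAIPI.

import torch
import torch.nn as nn

class VAE(nn.Module):
    """Tiny fully connected VAE for 28x28 MNIST digits (illustrative)."""
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.log_var = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784), nn.Sigmoid()
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.mu(h), self.log_var(h)

    def decode(self, z):
        # Decoder output parameterises the likelihood p(x|z).
        return self.decoder(z).view(-1, 1, 28, 28)


def sample_counterexamples(vae: VAE, x: torch.Tensor, corrected_label: int, n: int = 8):
    """Draw n plausible variants of x from the VAE and attach the user's label.

    Latent codes are sampled around the approximate posterior q(z|x) and
    decoded; the resulting images replace hand-crafted augmentations in the
    interactive loop (hypothetical helper, for illustration only).
    """
    with torch.no_grad():
        mu, log_var = vae.encode(x)
        std = torch.exp(0.5 * log_var)
        z = mu + std * torch.randn(n, mu.shape[-1])   # z = mu + std * eps
        counterexamples = vae.decode(z)
    labels = torch.full((n,), corrected_label, dtype=torch.long)
    return counterexamples, labels


if __name__ == "__main__":
    vae = VAE()                              # in practice, trained on MNIST ones vs. sevens
    x = torch.rand(1, 1, 28, 28)             # stand-in for a misclassified digit
    xs, ys = sample_counterexamples(vae, x, corrected_label=1)
    print(xs.shape, ys.shape)                # (8, 1, 28, 28) and (8,)

The sketch only reflects the idea stated in the abstract: counterexamples are sampled from the generative model rather than produced by fixed augmentation rules, so they stay close to the data distribution.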


Notes

  1. Figure adapted from https://danijar.com/building-variational-auto-encoders-in-tensorflow/, accessed 2023/07/11.

  2. Architecture adapted from https://www.tensorflow.org/tutorials/generative/cvae, accessed 2023/07/11.

  3. MNIST data set: http://yann.lecun.com/exdb/mnist/, accessed 2023/07/11.


Acknowledgments

This research is funded by BMBF Germany (hKI-Chemie, # 01IS21023A).

Author information

Corresponding author

Correspondence to Emanuel Slany.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Slany, E., Scheele, S., Schmid, U. (2024). Bayesian CAIPI: A Probabilistic Approach to Explanatory and Interactive Machine Learning. In: Nowaczyk, S., et al. Artificial Intelligence. ECAI 2023 International Workshops. ECAI 2023. Communications in Computer and Information Science, vol 1947. Springer, Cham. https://doi.org/10.1007/978-3-031-50396-2_16

  • DOI: https://doi.org/10.1007/978-3-031-50396-2_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-50395-5

  • Online ISBN: 978-3-031-50396-2

  • eBook Packages: Computer Science, Computer Science (R0)
