Workshop on Human-Interpretable AI

DOI: 10.1145/3637528.3671499

Published: 24 August 2024

Abstract

This workshop aims to spearhead research on Human-Interpretable Artificial Intelligence (HI-AI) by providing: (i) a general overview of the key aspects of HI-AI, equipping all researchers with the necessary background and a common set of definitions; (ii) novel and interesting ideas from both invited talks and top paper contributions; and (iii) the chance to engage in dialogue with prominent scientists during poster presentations and coffee breaks. The workshop welcomes contributions covering novel interpretable-by-design and post-hoc approaches, as well as theoretical analyses of existing works. Additionally, we accept visionary contributions speculating on the future potential of this field. Finally, we welcome contributions from related fields such as Ethical AI, Knowledge-Driven Machine Learning, and Human-Machine Interaction, as well as applications in Medicine and Industry and analyses from regulatory experts.

Information

Published In

KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
August 2024
6901 pages
ISBN: 9798400704901
DOI: 10.1145/3637528
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. explainability
  2. hi-ai
  3. human-interpretable ai
  4. interpretability
  5. xai

Qualifiers

  • Abstract

Conference

KDD '24

Acceptance Rates

Overall Acceptance Rate: 1,133 of 8,635 submissions, 13%
