Impact Statement:
Our work has the potential to earn end users' trust in deep neural networks and to make it possible to answer "why" by generating human-like explanations. Future applications could include sensitive fields where practitioners need to understand how black-box models arrive at a specific prediction before deployment. A prominent example is medical imaging, where seeing how DNNs make decisions is a sine qua non. Our technique could help domain experts trust the automated systems that assist them. This differs from currently available techniques, which can only highlight the parts of an image that DNNs appear to rely on. We argue that self-explainable DNNs are the future of machine learning applications. As DNNs are currently the preferred technique and their most apparent limitation is their opaque decision process, we introduce a novel and inexpensive technique that, to the best of our knowledge, has never been proposed before.
Abstract:
Applications of deep neural networks (DNNs) are booming in more and more fields, but they lack transparency due to their black-box nature. Explainable artificial intelligence (XAI) is, therefore, of paramount importance; it proposes strategies for understanding how these black-box models function. Research so far has mainly focused on producing, for example, class-wise saliency maps that highlight the parts of a given image that affect the prediction the most. However, this method does not fully reflect the way humans explain their reasoning, and validating these maps is quite complex and generally requires subjective interpretation. In this article, we conduct XAI differently by proposing a new multilevel (i.e., visual and linguistic) XAI methodology. By leveraging the interplay between the learned representations, i.e., image features and linguistic attributes, the proposed approach can provide salient attributes and attribute-wise saliency maps, which are far more i...
Published in: IEEE Transactions on Artificial Intelligence (Volume: 5, Issue: 5, May 2024)
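
To make the notion of an attribute-wise saliency map concrete, the sketch below shows one simple way such maps can be derived: backpropagate each predicted linguistic-attribute score to the input pixels, yielding one map per attribute rather than one per class. This is a minimal illustration, not the paper's implementation; the toy network, the attribute list, and the random input are all assumptions, and the gradient-based map here is plain vanilla saliency, whereas the paper's method exploits the learned interplay between image features and linguistic attributes.

# Minimal sketch of attribute-wise saliency (assumption: NOT the paper's method).
# One saliency map is produced per linguistic attribute by backpropagating that
# attribute's predicted score to the input pixels.
import torch
import torch.nn as nn

# Hypothetical linguistic attributes, for illustration only.
ATTRIBUTES = ["striped", "furry", "has_wings"]

class AttributeNet(nn.Module):
    """Toy backbone mapping an image to one score per attribute."""
    def __init__(self, num_attributes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, num_attributes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x).flatten(1)  # image features
        return self.head(f)              # attribute scores

model = AttributeNet(len(ATTRIBUTES)).eval()
# Stand-in input; a real use would load an actual image tensor.
image = torch.rand(1, 3, 64, 64, requires_grad=True)

saliency = {}
for i, name in enumerate(ATTRIBUTES):
    if image.grad is not None:
        image.grad = None            # clear gradients from the previous attribute
    model(image)[0, i].backward()    # d(attribute score)/d(pixels)
    # Collapse color channels: max absolute gradient per pixel.
    saliency[name] = image.grad.abs().max(dim=1).values.squeeze(0)

for name, smap in saliency.items():
    print(f"{name}: saliency map of shape {tuple(smap.shape)}")

Each map in the dictionary highlights which pixels most influence one attribute's score, which is the kind of per-attribute visual evidence the abstract contrasts with a single class-wise saliency map.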