Abstract:
Predictive models are often limited by their strong focus on prediction accuracy, which creates the potential for shortcut learning and limits out-of-set generalization. Recent interpretability methods have focused primarily on understanding the contribution of individual features or image regions to classification performance, but have placed less emphasis on the larger set of representational motifs that are learned by predictive models. In this talk, I will highlight recent work from our own group aimed at revealing interpretable object representations from human behavior, patterns of brain activity, and artificial neural networks. Our approach operates at the level of triplet similarities and yields low-dimensional, human-interpretable embeddings with excellent reconstruction accuracy, providing both perceptual and semantic representational dimensions. By offering a trade-off between complexity, interpretability, and performance, this approach can reveal important contributions to prediction performance that may be useful for improving future predictive models.
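The triplet-similarity approach mentioned in the abstract can be illustrated with a minimal sketch, assuming a softmax choice model over pairwise dot-product similarities and a sparsity penalty that encourages non-negative, interpretable dimensions; the object count, embedding dimensionality, stand-in triplet data, and hyperparameters below are placeholders rather than details from the talk.

```python
# Illustrative sketch (not the authors' implementation): learning a
# low-dimensional, non-negative object embedding from triplet judgments,
# where each triplet (i, j, k) records that the pair (i, j) was judged
# most similar (k is the odd one out).
import torch

n_objects, n_dims = 100, 10                         # hypothetical sizes
triplets = torch.randint(0, n_objects, (5000, 3))   # stand-in triplet data

emb = torch.nn.Parameter(torch.rand(n_objects, n_dims) * 0.1)
opt = torch.optim.Adam([emb], lr=0.01)

for step in range(200):
    x = torch.relu(emb)                 # non-negative loadings aid interpretability
    i, j, k = triplets.T
    sim_ij = (x[i] * x[j]).sum(-1)      # similarity of the chosen pair
    sim_ik = (x[i] * x[k]).sum(-1)
    sim_jk = (x[j] * x[k]).sum(-1)
    # probability that (i, j) is judged most similar among the three pairs
    logits = torch.stack([sim_ij, sim_ik, sim_jk], dim=-1)
    nll = -torch.log_softmax(logits, dim=-1)[:, 0].mean()
    loss = nll + 0.01 * x.mean()        # L1 penalty (x is non-negative) for sparse dimensions
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Under this choice model, dimensions that consistently explain which pair observers group together tend to survive the sparsity penalty, which is what makes the resulting embedding inspectable dimension by dimension.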
Date of Conference: 20-22 February 2023
Date Added to IEEE Xplore: 28 March 2023