DOI: 10.1145/3475731.3484954
Research Article

An Empirical Study of Uncertainty Gap for Disentangling Factors

Published: 22 October 2021

Abstract

Disentangling factors has proven crucial for building interpretable AI systems: a disentangled generative model exposes explanatory input variables that increase trustworthiness and robustness. Previous works apply a progressive disentanglement learning regime in which the ground-truth factors are disentangled in a fixed order, but they do not explain why such an order matters. In this work, we propose a novel metric, the Uncertainty Gap, to evaluate how the uncertainty of a generative model changes given its input variables, and we generalize it to image reconstruction tasks using BCE and MSE losses. Extensive experiments on three commonly used benchmarks demonstrate the effectiveness of the Uncertainty Gap in evaluating both the informativeness and the redundancy of given variables. We empirically find that the significant factor with the largest Uncertainty Gap should be disentangled before insignificant factors, indicating that a suitable order of disentangling factors improves performance.
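The abstract does not define the Uncertainty Gap formally. As a rough illustration only, assuming the gap is measured as the increase in reconstruction loss when one latent variable is ablated to its prior mean, a minimal sketch (the names `uncertainty_gap` and `toy_decoder` are hypothetical, not from the paper) might look like:

```python
import math

def bce(x, p, eps=1e-7):
    """Binary cross-entropy between target pixel x and predicted probability p."""
    p = min(max(p, eps), 1 - eps)
    return -(x * math.log(p) + (1 - x) * math.log(1 - p))

def uncertainty_gap(decoder, x, z, dim, prior_mean=0.0):
    """Increase in reconstruction loss when latent dimension `dim` is
    ablated to its prior mean; a larger gap marks a more informative
    (less redundant) variable."""
    loss_full = sum(bce(xi, pi) for xi, pi in zip(x, decoder(z)))
    z_ablated = list(z)
    z_ablated[dim] = prior_mean
    loss_ablated = sum(bce(xi, pi) for xi, pi in zip(x, decoder(z_ablated)))
    return loss_ablated - loss_full

def toy_decoder(z):
    """Toy decoder: only z[0] influences the two output 'pixels'."""
    p = 1.0 / (1.0 + math.exp(-z[0]))
    return [p, p]

x = [1.0, 1.0]  # target image
z = [2.0, 2.0]  # latent code
gap_informative = uncertainty_gap(toy_decoder, x, z, dim=0)  # large gap
gap_redundant = uncertainty_gap(toy_decoder, x, z, dim=1)    # gap of zero
```

Under this reading, ranking latent dimensions by their gap would suggest which factor to disentangle first; swapping `bce` for a squared-error term gives the MSE variant the abstract mentions.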


Published In

Trustworthy AI'21: Proceedings of the 1st International Workshop on Trustworthy AI for Multimedia Computing, October 2021, 42 pages. ISBN: 9781450386746. DOI: 10.1145/3475731.
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States

    Author Tags

    1. disentanglement
    2. inductive bias
    3. uncertainty gap

Conference

MM '21: ACM Multimedia Conference, October 24, 2021, Virtual Event, China

