
Weakly Supervised Invariant Representation Learning via Disentangling Known and Unknown Nuisance Factors

  • Conference paper
Computer Vision – ECCV 2022 Workshops (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13808)


Abstract

Disentangled and invariant representations are two critical goals of representation learning, and many approaches have been proposed to achieve either one of them. Since these two goals are in fact complementary, we propose a framework that accomplishes both simultaneously. We introduce a weakly supervised signal to learn a disentangled representation consisting of three splits that contain predictive, known nuisance, and unknown nuisance information, respectively. Furthermore, we incorporate a contrastive method to enforce representation invariance. Experiments show that the proposed method outperforms state-of-the-art (SOTA) methods on four standard benchmarks, and that it achieves better adversarial defense than comparable methods without adversarial training.
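The full method is described in the paper itself (not reproduced on this page). As a rough illustration only, the two ingredients the abstract names can be sketched as (a) partitioning a latent code into predictive, known-nuisance, and unknown-nuisance splits, and (b) applying a supervised contrastive loss (in the style of Khosla et al., 2020) to the predictive split so that same-class embeddings are pulled together. All function names, split sizes, and the temperature value below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def split_latent(z, d_pred, d_known):
    """Partition a latent code z (batch, dim) into three splits:
    predictive, known-nuisance, and unknown-nuisance.
    The split sizes here are illustrative, not from the paper."""
    z_pred = z[:, :d_pred]
    z_known = z[:, d_pred:d_pred + d_known]
    z_unknown = z[:, d_pred + d_known:]
    return z_pred, z_known, z_unknown

def sup_contrastive_loss(features, labels, tau=0.1):
    """Supervised contrastive loss sketch: for each anchor, treat
    same-label samples as positives and all other samples as the
    normalisation set. Lower loss = same-class features are closer."""
    # Normalise so similarity is cosine similarity scaled by temperature.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / tau
    n = len(labels)
    total = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        others = [j for j in range(n) if j != i]
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        total += -np.mean([sim[i, j] - log_denom for j in positives])
    return total / n
```

In this toy form, a batch whose same-label embeddings point in nearly the same direction yields a lower loss than one whose same-label embeddings are orthogonal, which is the invariance pressure the abstract refers to: nuisance variation is pushed out of the predictive split.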



Acknowledgement

This research is based upon work supported by the Defense Advanced Research Projects Agency (DARPA), under cooperative agreement number HR00112020009. The views and conclusions contained herein should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright notation thereon.

Author information


Corresponding author

Correspondence to Jiageng Zhu .


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 3047 KB)


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Zhu, J., Xie, H., Abd-Almageed, W. (2023). Weakly Supervised Invariant Representation Learning via Disentangling Known and Unknown Nuisance Factors. In: Karlinsky, L., Michaeli, T., Nishino, K. (eds) Computer Vision – ECCV 2022 Workshops. ECCV 2022. Lecture Notes in Computer Science, vol 13808. Springer, Cham. https://doi.org/10.1007/978-3-031-25085-9_22


  • DOI: https://doi.org/10.1007/978-3-031-25085-9_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-25084-2

  • Online ISBN: 978-3-031-25085-9

  • eBook Packages: Computer Science, Computer Science (R0)
