The Virtues of Peer Pressure: A Simple Method for Discovering High-Value Mistakes

  • Conference paper
  • Computer Analysis of Images and Patterns (CAIP 2015)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 9257)

Abstract

Much of the recent success of neural networks can be attributed to the deeper architectures that have become prevalent. However, these deeper architectures often yield unintelligible solutions, require enormous amounts of labeled data, and still remain brittle and easily broken. In this paper, we present a method to efficiently and intuitively discover input instances that are misclassified by well-trained neural networks. As in previous studies, we can identify instances that are so similar to previously seen examples that the transformation is visually imperceptible. Additionally, unlike in previous studies, we can also generate mistakes that are significantly different from any training sample while, importantly, still remaining in the space of samples that the network should be able to classify correctly. This is achieved by training a basket of N “peer networks” rather than a single network. These are similarly trained networks that serve to provide consistency pressure on each other. When an example is found for which a single network, S, disagrees with all of the other \(N-1\) networks, which are consistent in their prediction, that example is a potential mistake for S. We present a simple method to find such examples and demonstrate it on two visual tasks. The examples discovered yield realistic images that clearly illuminate the weaknesses of the trained models, as well as provide a source of numerous, diverse, labeled training samples.
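The peer-pressure test described above is easy to sketch in code. The snippet below is a minimal illustration, not the authors' implementation: it assumes a list of already-trained peer classifiers exposed as callables that map an image batch to class probabilities, and it uses a simple random-perturbation search (any search strategy could be substituted) to nudge a seed image until exactly one network dissents from an otherwise unanimous vote of its peers. All function and parameter names are hypothetical.

```python
import numpy as np

def find_peer_pressure_mistakes(models, seed_images, n_steps=200,
                                step_size=0.05, rng=None):
    """Search for inputs on which exactly one peer network disagrees with
    the unanimous prediction of all the others.

    models      : list of N (>= 3) callables, each mapping a batch of images
                  of shape (B, H, W) to class probabilities (B, num_classes).
    seed_images : iterable of images with values in [0, 1] to start from.

    Returns a list of (image, consensus_label, dissenting_model_index).
    Illustrative sketch only; the paper's own search procedure may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    mistakes = []
    for x in seed_images:
        candidate = np.asarray(x, dtype=np.float64)
        for _ in range(n_steps):
            # Random-walk step: propose a small perturbation of the candidate.
            candidate = np.clip(
                candidate + step_size * rng.standard_normal(candidate.shape),
                0.0, 1.0)
            # Query every peer network for its predicted label.
            preds = np.array(
                [int(np.argmax(m(candidate[None])[0])) for m in models])
            labels, counts = np.unique(preds, return_counts=True)
            consensus = labels[np.argmax(counts)]
            if counts.max() == len(models) - 1:
                # Exactly one network dissents from an otherwise unanimous
                # vote: record the image, the peers' label, and the dissenter.
                dissenter = int(np.where(preds != consensus)[0][0])
                mistakes.append((candidate.copy(), int(consensus), dissenter))
                break
    return mistakes
```

Because the \(N-1\) agreeing peers supply a plausible label while the lone dissenter supplies the error, each example found this way also serves as a labeled training sample for the dissenting network, which is the source of the diverse labeled data mentioned at the end of the abstract.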



Author information


Corresponding author

Correspondence to Shumeet Baluja.



Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Baluja, S., Covell, M., Sukthankar, R. (2015). The Virtues of Peer Pressure: A Simple Method for Discovering High-Value Mistakes. In: Azzopardi, G., Petkov, N. (eds) Computer Analysis of Images and Patterns. CAIP 2015. Lecture Notes in Computer Science, vol 9257. Springer, Cham. https://doi.org/10.1007/978-3-319-23117-4_9

  • DOI: https://doi.org/10.1007/978-3-319-23117-4_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-23116-7

  • Online ISBN: 978-3-319-23117-4

  • eBook Packages: Computer Science, Computer Science (R0)
