Clustering-Based Adaptive Dropout for CNN-Based Classification

  • Conference paper

Pattern Recognition (ACPR 2019)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12046)

Included in the following conference series: Asian Conference on Pattern Recognition (ACPR)

Abstract

Dropout has been widely used to improve the generalization ability of deep networks, yet most dropout variants do not adapt the dropout probabilities of hidden units or weights dynamically to their contributions to network optimization. In this work, a clustering-based dropout is proposed that operates on network characteristics such as features, weights, or their derivatives: these characteristics are clustered, and the dropout probability of each clustering group is updated self-adaptively to reflect its contribution. Experimental results on the Fashion-MNIST and CIFAR-10 databases and on the FER2013 and CK+ expression databases show that the proposed clustering-based dropout achieves better accuracy than the original dropout and various dropout variants, as well as performance competitive with state-of-the-art algorithms.

The work was supported by the Natural Science Foundation of China under grants no. 61602315, 61672357, and U1713214, the Science and Technology Project of Guangdong Province under grant no. 2018A050501014, the Tencent “Rhinoceros Birds” Scientific Research Foundation for Young Teachers of Shenzhen University, and the School Startup Fund of Shenzhen University under grant no. 2018063.
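
To make the idea in the abstract concrete, below is a minimal sketch (in PyTorch, with scikit-learn's k-means) of a dropout layer whose unit-wise dropout probabilities are assigned per cluster. It is not the authors' implementation: the cluster count, the clustering statistic (mean absolute activation over the batch), the probability range, and the rule that more active clusters are dropped less often are all illustrative assumptions.

```python
# Minimal sketch of a clustering-based adaptive dropout layer (illustrative
# only, not the authors' code). Assumptions: hidden units are clustered by
# k-means on their mean absolute activation over the batch, and each cluster
# receives its own dropout probability (more active clusters are kept more).
import torch
from sklearn.cluster import KMeans


class ClusteringDropout(torch.nn.Module):
    def __init__(self, num_clusters=3, p_min=0.2, p_max=0.7):
        super().__init__()
        self.num_clusters = num_clusters
        self.p_min = p_min  # dropout probability for the most active cluster
        self.p_max = p_max  # dropout probability for the least active cluster

    def forward(self, x):  # x: (batch, num_units)
        if not self.training:
            return x  # identity at evaluation time

        # Per-unit statistic used for clustering (an illustrative choice).
        stat = x.abs().mean(dim=0).detach()  # (num_units,)
        labels = KMeans(n_clusters=self.num_clusters, n_init=10).fit_predict(
            stat.cpu().numpy().reshape(-1, 1))
        labels = torch.from_numpy(labels).long().to(x.device)

        # Rank clusters by mean activity and map them to a range of dropout
        # probabilities: most active -> p_min, least active -> p_max.
        activity = torch.stack(
            [stat[labels == k].mean() for k in range(self.num_clusters)])
        order = torch.argsort(activity, descending=True)
        probs = torch.linspace(self.p_min, self.p_max,
                               self.num_clusters, device=x.device)
        p_per_cluster = torch.empty_like(probs)
        p_per_cluster[order] = probs
        p_unit = p_per_cluster[labels]  # dropout probability per unit

        # Standard inverted dropout, but with a unit-specific probability.
        keep = torch.bernoulli((1.0 - p_unit).expand_as(x))
        return x * keep / (1.0 - p_unit)
```

In a CNN classifier, such a layer would be placed where a standard dropout layer would go, for example after a fully connected layer, and it behaves as the identity at evaluation time.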

Author information

Corresponding author

Correspondence to Weicheng Xie.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Wen, Z., Ke, Z., Xie, W., Shen, L. (2020). Clustering-Based Adaptive Dropout for CNN-Based Classification. In: Palaiahnakote, S., Sanniti di Baja, G., Wang, L., Yan, W. (eds.) Pattern Recognition. ACPR 2019. Lecture Notes in Computer Science, vol. 12046. Springer, Cham. https://doi.org/10.1007/978-3-030-41404-7_4

  • DOI: https://doi.org/10.1007/978-3-030-41404-7_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-41403-0

  • Online ISBN: 978-3-030-41404-7

  • eBook Packages: Computer Science, Computer Science (R0)
