SAU: Smooth Activation Function Using Convolution with Approximate Identities

Conference paper, published in Computer Vision – ECCV 2022 (ECCV 2022)

Abstract

Well-known activation functions such as ReLU and Leaky ReLU are non-differentiable at the origin. Over the years, many smooth approximations of ReLU have been proposed using various smoothing techniques. We propose new smooth approximations of a non-differentiable activation function obtained by convolving it with approximate identities. In particular, we present smooth approximations of Leaky ReLU and show that they outperform several well-known activation functions across a range of datasets and models. We call this function the Smooth Activation Unit (SAU). Replacing ReLU with SAU yields improvements of 5.63%, 2.95%, and 2.50% with the ShuffleNet V2 (2.0x), PreActResNet 50, and ResNet 50 models, respectively, on the CIFAR100 dataset, and a 2.31% improvement with the ShuffleNet V2 (1.0x) model on the ImageNet-1k dataset.
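As a reading aid, the sketch below illustrates the construction the abstract describes for one concrete choice of approximate identity. It assumes a Gaussian of width sigma as the approximate identity and Leaky ReLU with negative slope alpha as the underlying non-differentiable function, in which case the smoothed activation has a closed form via the error function. The class name, parameter names, and the choice to make sigma learnable are illustrative assumptions, not the paper's exact parameterisation.

```python
import math

import torch
import torch.nn as nn


class SAU(nn.Module):
    """Minimal sketch of a Smooth Activation Unit.

    Leaky ReLU, written as (1+a)/2 * x + (1-a)/2 * |x|, is smoothed by
    convolving |x| with a Gaussian approximate identity of width sigma.
    Parameter names and the learnable sigma are illustrative assumptions.
    """

    def __init__(self, alpha: float = 0.01, sigma: float = 1.0, learnable: bool = True):
        super().__init__()
        self.alpha = alpha
        # Smoothing width of the approximate identity; as sigma -> 0 the
        # activation converges back to the ordinary Leaky ReLU.
        self.sigma = nn.Parameter(torch.tensor(float(sigma)), requires_grad=learnable)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        sigma = self.sigma.abs() + 1e-6  # keep the smoothing width positive
        # Closed form of |x| convolved with a zero-mean Gaussian of std sigma:
        # E[|x - sigma*Z|] = x*erf(x/(sigma*sqrt(2))) + sigma*sqrt(2/pi)*exp(-x^2/(2*sigma^2))
        smooth_abs = x * torch.erf(x / (sigma * math.sqrt(2.0))) + \
            sigma * math.sqrt(2.0 / math.pi) * torch.exp(-x * x / (2.0 * sigma * sigma))
        # Recombine into the smoothed Leaky ReLU.
        return 0.5 * (1.0 + self.alpha) * x + 0.5 * (1.0 - self.alpha) * smooth_abs


# Example: drop-in replacement for ReLU in a small block.
if __name__ == "__main__":
    block = nn.Sequential(nn.Linear(16, 32), SAU(alpha=0.01), nn.Linear(32, 10))
    out = block(torch.randn(4, 16))
    print(out.shape)  # torch.Size([4, 10])
```

Gains such as those reported in the abstract would come from replacing each ReLU in the respective network with such a unit; the exact parameterisation, initialisation, and training of sigma follow the paper rather than this sketch.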



Acknowledgment

The authors are very grateful to Dr. Bapi Chatterjee for lending GPU equipment, which helped in carrying out some of the important experiments.

Author information


Corresponding author

Correspondence to Koushik Biswas.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 314 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Biswas, K., Kumar, S., Banerjee, S., Kumar Pandey, A. (2022). SAU: Smooth Activation Function Using Convolution with Approximate Identities. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13681. Springer, Cham. https://doi.org/10.1007/978-3-031-19803-8_19


  • DOI: https://doi.org/10.1007/978-3-031-19803-8_19


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19802-1

  • Online ISBN: 978-3-031-19803-8

  • eBook Packages: Computer Science, Computer Science (R0)
