Abstract
Activation functions play an important role in deep learning architectures: they transform the information entering each layer into the output passed on to the next. Deep learning architectures are widely used to analyze large and complex data in areas such as image processing, time-series modeling, and disease classification, and choosing an appropriate architecture and activation function is an important factor in achieving good learning and classification performance. Many studies have sought to improve the performance of deep learning architectures and to overcome the vanishing gradient and negative-region problems of existing activation functions. To address these problems, a flexible and trainable fast exponential linear unit (P + FELU) activation function is proposed. By incorporating the advantages of the fast exponential linear unit (FELU), the exponential linear unit (ELU), and the rectified linear unit (ReLU), the proposed P + FELU achieves a higher success rate and faster computation time. The proposed P + FELU was evaluated on the MNIST, CIFAR-10, and CIFAR-100 benchmark datasets. The experiments show that it outperforms the ReLU, ELU, SELU, MPELU, TReLU, and FELU activation functions and effectively improves the noise robustness of the network. The results also show that its “flexible and trainable” properties can effectively prevent vanishing gradients and allow multilayer perceptron neural networks to be made deeper.
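The abstract does not give the closed form of P + FELU, so the sketch below is only a rough illustration of what a “flexible and trainable” ELU-family activation can look like in practice: a FELU-style piecewise function whose negative-region scale and additive shift are registered as learnable parameters and updated by backpropagation. The module name, the per-channel parameters `alpha` and `p`, and the use of PyTorch are assumptions made here for illustration, not the formulation defined in the paper.

```python
# Minimal sketch of a trainable FELU-style activation (assumed parameterization,
# not the paper's exact P + FELU definition).
import math
import torch
import torch.nn as nn


class TrainableFELUSketch(nn.Module):
    """FELU-style activation with a learnable negative-region scale `alpha`
    and a learnable additive shift `p` (both assumed here for illustration)."""

    def __init__(self, num_channels: int = 1, alpha_init: float = 1.0, p_init: float = 0.0):
        super().__init__()
        # One parameter per channel so each feature map can adapt its own shape.
        self.alpha = nn.Parameter(torch.full((num_channels,), alpha_init))
        self.p = nn.Parameter(torch.full((num_channels,), p_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast per-channel parameters over (N, C, ...) inputs.
        shape = (1, -1) + (1,) * (x.dim() - 2)
        alpha = self.alpha.view(shape)
        p = self.p.view(shape)
        # FELU uses a base-2 exponential, 2^(x / ln 2), which equals e^x but can be
        # cheaper to evaluate; negative inputs saturate smoothly as in ELU,
        # positive inputs pass through linearly as in ReLU.
        neg = alpha * (torch.pow(2.0, x / math.log(2.0)) - 1.0)
        return torch.where(x >= 0, x, neg) + p


# Usage: drop the module into a network in place of ReLU/ELU; `alpha` and `p`
# are trained together with the network weights.
act = TrainableFELUSketch(num_channels=16)
y = act(torch.randn(8, 16, 32, 32))
```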







Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.