Abstract
Deep convolutional neural networks (CNNs) are the state-of-the-art model structure for image classification problems. However, deep CNNs suffer from poor interpretability and are difficult to train. This work presents new tree-like shallow ANNs and offers a novel approach to exploring the relationship between activation functions and network performance. The proposed approach is evaluated on the MNIST and CIFAR10 datasets, yielding surprising results about the necessity and benefit of activation functions in this new type of shallow network. In particular, the work finds high-accuracy networks for the MNIST dataset that utilise pooling operations as the only non-linearity, and demonstrates a certain invariance to the specific form of the activation function on the more complicated CIFAR10 dataset.
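To make the "pooling as the only non-linearity" finding concrete, the sketch below shows a shallow network in which every convolution and the final readout are purely linear, so max-pooling supplies the sole non-linear operation. This is a minimal illustration in PyTorch under our own assumptions (the layer sizes, the two pooling stages, and the name PoolOnlyNet are all hypothetical), not the authors' evolved tree-like topology.

```python
# Minimal sketch (NOT the paper's evolved architecture): a shallow network
# for MNIST whose only non-linearity is max-pooling. All layer sizes here
# are illustrative assumptions.
import torch
import torch.nn as nn

class PoolOnlyNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutions are linear maps; no ReLU/tanh is applied anywhere.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2),   # linear
            nn.MaxPool2d(2),                              # non-linearity
            nn.Conv2d(32, 64, kernel_size=5, padding=2),  # linear
            nn.MaxPool2d(2),                              # non-linearity
        )
        # Linear readout over the flattened 64 x 7 x 7 feature map.
        self.classifier = nn.Linear(64 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)  # 28x28 -> 14x14 -> 7x7 after two 2x2 pools
        return self.classifier(x.flatten(1))

# Usage: logits = PoolOnlyNet()(torch.randn(8, 1, 28, 28))
```

Without the pooling stages this whole model would collapse to a single affine map of the input, which is why the pooling operations alone can carry the network's expressive power.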
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
O’Neill, D., Xue, B., Zhang, M. (2018). Co-evolution of Novel Tree-Like ANNs and Activation Functions: An Observational Study. In: Mitrovic, T., Xue, B., Li, X. (eds) AI 2018: Advances in Artificial Intelligence. AI 2018. Lecture Notes in Computer Science, vol. 11320. Springer, Cham. https://doi.org/10.1007/978-3-030-03991-2_56
DOI: https://doi.org/10.1007/978-3-030-03991-2_56
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-03990-5
Online ISBN: 978-3-030-03991-2
eBook Packages: Computer Science, Computer Science (R0)