Evolutionary Multi-task Learning for Modular Knowledge Representation in Neural Networks

Abstract

The brain can be viewed as a complex modular structure that processes information through knowledge storage and retrieval. Modularity ensures that knowledge is stored in such a way that damage to individual modules does not compromise the overall functionality of the brain. Although artificial neural networks have been very promising for prediction and recognition tasks, their learning algorithms are limited in providing modular knowledge representation, in which knowledge modules can be drawn upon when needed. Multi-task learning enables learning algorithms to capture knowledge in a shared representation built from several related tasks. Little work has been done to exploit multi-task learning for modular knowledge representation in neural networks. In this paper, we present multi-task learning for modular knowledge representation in neural networks via modular network topologies. In the proposed method, each task is defined by selected regions (modules) in the network topology. The modular knowledge representation remains effective even if some of the neurons and connections are disrupted or removed from selected modules in the network. We demonstrate the effectiveness of the method using single-hidden-layer feedforward networks on selected n-bit parity problems of varying difficulty, and we further apply it to benchmark pattern classification problems. The simulation and experimental results show that, in general, the proposed method retains performance quality even though the knowledge is represented as modules.
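
To make the modular topology concrete, the sketch below shows one plausible reading of the method: a single-hidden-layer feedforward network whose tasks correspond to nested modules of hidden neurons, with each task evaluated through only the first few hidden units. This is a minimal illustration under stated assumptions, not the paper's implementation; the `ModularFNN` class, the sigmoid activations, and the module sizes [4, 6, 8] are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ModularFNN:
    """Single-hidden-layer network whose tasks are nested hidden-neuron modules."""

    def __init__(self, n_in, n_hidden, n_out, module_sizes, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))   # input-to-hidden weights
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))  # hidden-to-output weights
        self.b2 = np.zeros(n_out)
        self.module_sizes = module_sizes  # hidden neurons visible to each task

    def forward(self, x, task):
        h = self.module_sizes[task]  # restrict the pass to this task's module
        hidden = sigmoid(x @ self.W1[:, :h] + self.b1[:h])
        return sigmoid(hidden @ self.W2[:h, :] + self.b2)

# Example: one 8-bit parity input scored through three nested modules.
net = ModularFNN(n_in=8, n_hidden=8, n_out=1, module_sizes=[4, 6, 8])
x = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=float)
for task in range(3):
    print(f"task {task}: output {net.forward(x, task)}")
```

Because the modules are nested, weights learned by the smallest module are shared by the larger ones, and disrupting neurons that lie outside a task's module leaves that task's predictions untouched, which is the robustness property the abstract describes.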

Notes

  1. https://github.com/rohitash-chandra/VanillaFNN-Python.
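
The repository above provides a plain Python feedforward-network implementation by the first author. As a complement, the following hedged sketch shows how an evolutionary loop could train the modular network from the sketch after the abstract on the 8-bit parity problem. It is a simplification, not the paper's algorithm: a (1+1)-style hill climber with Gaussian mutation stands in for the multifactorial evolutionary algorithm with simulated binary crossover used in the paper, and summing the per-task errors into one fitness value is an assumption made to keep the example short.

```python
import copy
import numpy as np

def parity_data(n_bits):
    # All 2^n bit strings paired with their parity labels.
    X = np.array([[(i >> b) & 1 for b in range(n_bits)]
                  for i in range(2 ** n_bits)], dtype=float)
    y = (X.sum(axis=1) % 2)[:, None]
    return X, y

def task_error(net, X, y, task):
    # Mean squared error of one task, evaluated through its own module.
    pred = np.array([net.forward(row, task) for row in X])
    return float(np.mean((pred - y) ** 2))

rng = np.random.default_rng(1)
X, y = parity_data(8)
best = ModularFNN(n_in=8, n_hidden=8, n_out=1, module_sizes=[4, 6, 8])  # class from the earlier sketch
best_fit = sum(task_error(best, X, y, t) for t in range(3))

for gen in range(2000):
    child = copy.deepcopy(best)
    for params in (child.W1, child.b1, child.W2, child.b2):
        params += rng.normal(scale=0.1, size=params.shape)  # Gaussian mutation
    fit = sum(task_error(child, X, y, t) for t in range(3))
    if fit < best_fit:  # keep the child only if the aggregate error improves
        best, best_fit = child, fit
```

In the paper, the evolutionary algorithm instead maintains a population in which individuals specialise on tasks; the aggregate-fitness loop here is only a compact stand-in for that machinery.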

Acknowledgements

This work was partially conducted within the Rolls-Royce@NTU Corporate Lab with support from the National Research Foundation (NRF) Singapore under the Corp Lab@University Scheme.

Author information

Corresponding author

Correspondence to Rohitash Chandra.

About this article

Cite this article

Chandra, R., Gupta, A., Ong, YS. et al. Evolutionary Multi-task Learning for Modular Knowledge Representation in Neural Networks. Neural Process Lett 47, 993–1009 (2018). https://doi.org/10.1007/s11063-017-9718-z
