
A Brain-Inspired Method of Facial Expression Generation Using Chaotic Feature Extracting Bidirectional Associative Memory

Published in: Neural Processing Letters

Abstract

The human cognitive system adapts to many different environments by exhibiting a broad range of behaviors according to context. These behaviors range from general abstractions, referred to as prototypes, to specific perceptual patterns, referred to as exemplars. A chaotic feature-extracting associative memory is proposed to mimic the human brain in generating prototype and exemplar facial expressions. The model automatically extracts the features of each category of images associated with a specific subject and expression. In the training phase, the features are extracted as fixed points. In the recall phase, the output attractor of the network ranges from a fixed point, which yields a prototype facial image, to chaotic attractors, which generate exemplar faces. The generative model is applied to enrich a facial image dataset in terms of variability by generating various virtual patterns in the case where only one image per subject is provided. A face recognition task is implemented to compare the enriched and original datasets for training classifiers. Our results show that recognition accuracy increases from 32 to 100% when exemplars generated by the proposed model are used to enrich the training dataset.
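The recall behavior described in the abstract can be illustrated with a short numerical sketch. The snippet below is a hypothetical example, not the authors' implementation: it assumes a saturating cubic transmission function of the kind used in this family of bidirectional associative memory models, and the weight matrices `W` and `V`, the parameter `delta`, and the functions `transmit` and `recall` are illustrative names. Small values of `delta` let the bidirectional loop settle on a fixed point (a prototype-like output), while larger values can push the dynamics toward irregular, exemplar-like trajectories.

```python
import numpy as np

# Minimal sketch (not the authors' code) of a BAM-style recall loop in which a
# single transmission parameter `delta` moves the output from a fixed-point
# attractor (prototype) toward irregular, exemplar-like trajectories.
# Dimensions and weight values are toy placeholders.

def transmit(a, delta):
    """Saturating cubic transmission: (delta+1)*a - delta*a**3, clipped to [-1, 1]."""
    out = (delta + 1.0) * a - delta * a**3
    return np.clip(out, -1.0, 1.0)

def recall(W, V, x0, delta, n_iter=50):
    """Iterate the bidirectional loop x -> y -> x for n_iter cycles.

    W maps the pattern layer to the feature layer, V maps it back.
    Small delta tends to settle on a fixed point; larger delta can
    produce oscillatory or chaotic output trajectories.
    """
    x = x0.copy()
    trajectory = []
    for _ in range(n_iter):
        y = transmit(W @ x, delta)   # compressed feature layer
        x = transmit(V @ y, delta)   # reconstructed pattern layer
        trajectory.append(x.copy())
    return np.array(trajectory)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 16, 4                                   # toy pattern and feature sizes
    W = rng.normal(scale=0.3, size=(m, n))
    V = rng.normal(scale=0.3, size=(n, m))
    x0 = rng.uniform(-1, 1, size=n)

    proto = recall(W, V, x0, delta=0.2)            # settles toward a stable pattern
    exemplars = recall(W, V, x0, delta=1.3)        # more variable output patterns
    print("last-step change (delta=0.2):", np.abs(proto[-1] - proto[-2]).max())
    print("last-step change (delta=1.3):", np.abs(exemplars[-1] - exemplars[-2]).max())
```

In the paper's setting, the reconstructed pattern layer would hold the pixel intensities of a facial image rather than a random toy vector, so the fixed point corresponds to a prototype face and the wandering trajectories to exemplar variations of it.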



Author information


Correspondence to Isar Nejadgholi.


Cite this article

Nejadgholi, I., SeyyedSalehi, S.A. & Chartier, S. A Brain-Inspired Method of Facial Expression Generation Using Chaotic Feature Extracting Bidirectional Associative Memory. Neural Process Lett 46, 943–960 (2017). https://doi.org/10.1007/s11063-017-9615-5
