
Lip detection by the use of neural networks

  • Original Article
  • Artificial Life and Robotics

Abstract

Lip detection is used in many applications, such as face detection and lip reading. In this article, a method for lip detection in color images in a normalized RGB color scheme is presented. In this method, MLP neural networks are used to perform lip detection on segmented skin regions. Several combinations of chrominance components of the normalized RGB color space were used as the input to the neural networks. Two methods were used for obtaining the normalized RGB components from the RGB color scheme, called the maximum normalization and pixel intensity normalization methods, respectively. The method was tested on two Asian databases. The number of neurons in the hidden layer was determined by using a modified network-growing algorithm. It was found that the pixel intensity normalization method gave a lower lip detection error than the maximum normalization method regardless of the database used, and for most of the combinations of chrominance components. In addition, the combination of the g and r/g chrominance components gave the lowest lip detection error when the pixel intensity normalization method was used, for both databases. The effects of scale and facial expression on lip detection were also studied. It was found that the lip detection error decreased as the scale factor increased. As for facial expression, a laughing expression gave the highest lip detection error, followed by smiling and neutral expressions.
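The abstract outlines a pixel-level pipeline: normalize the RGB values, form chrominance features such as (g, r/g), and classify each pixel of a skin-segmented region with an MLP. The sketch below illustrates that pipeline in Python. The exact normalization formulas, the hidden-layer size, and all data are assumptions made for illustration (the paper sizes the hidden layer with a modified network-growing algorithm, which is not reproduced here); only the (g, r/g) feature pair and the use of an MLP come from the abstract.

```python
# Minimal sketch of normalized-rgb feature extraction and MLP lip/non-lip
# classification, under the assumptions stated above.
import numpy as np
from sklearn.neural_network import MLPClassifier

def normalize_rgb(image, method="intensity"):
    """Convert an HxWx3 uint8 RGB image to normalized rgb chrominance.

    method="intensity": divide each channel by R+G+B (assumed pixel intensity
    normalization); method="maximum": divide by the per-pixel maximum channel
    (assumed maximum normalization).
    """
    rgb = image.astype(np.float64)
    if method == "intensity":
        denom = rgb.sum(axis=2, keepdims=True)
    else:  # "maximum"
        denom = rgb.max(axis=2, keepdims=True)
    return rgb / np.maximum(denom, 1e-6)  # avoid division by zero

def lip_features(norm_rgb):
    """Build the (g, r/g) chrominance feature pair for every pixel."""
    r, g = norm_rgb[..., 0], norm_rgb[..., 1]
    return np.stack([g, r / np.maximum(g, 1e-6)], axis=-1).reshape(-1, 2)

# Hypothetical training data: per-pixel features from skin-segmented regions
# with binary labels (1 = lip pixel, 0 = non-lip pixel). Placeholder values only.
rng = np.random.default_rng(0)
X_train = rng.random((1000, 2))
y_train = (X_train[:, 0] > 0.5).astype(int)

# Single-hidden-layer MLP; the layer width is fixed here rather than grown.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# Classify the pixels of a (random placeholder) test image.
test_image = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
feats = lip_features(normalize_rgb(test_image, method="intensity"))
lip_mask = clf.predict(feats).reshape(32, 32)
```

In practice the classifier would be trained on labeled lip and non-lip pixels from the skin-segmented face regions, and the two normalization methods would be compared by swapping the `method` argument.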



Author information

Correspondence to Sigeru Omatu.

About this article

Cite this article

Dargham, J.A., Chekima, A. & Omatu, S. Lip detection by the use of neural networks. Artif Life Robotics 12, 301–306 (2008). https://doi.org/10.1007/s10015-007-0494-0

