On the Perception Analysis of User Feedback for Interactive Face Retrieval

Abstract

In this article, we explore the coherence of face perception between humans and machines in the scenario of interactive face retrieval. On the human-perception side, we collect user feedback to stimuli consisting of a target face and groups of displayed candidate face images drawn from a face database with a large number of subjects. On the machine-vision side, we compare benchmark features and general metrics for measuring face similarity. We propose a series of coherence measurements to evaluate the statistical characteristics of human and machine face perception. We find that, despite users' unfamiliarity with most faces in the database, the coherence between human and machine perception remains at a stable level across variations in metrics, features, database size, and demographics. Simulation experiments with the coherence distributions demonstrate that the embedded information is valuable for speeding up interactive retrieval. Comparisons over multiple parameter settings provide practical guidance for designing interactive face retrieval systems that take human factors into greater consideration.
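
As a rough illustration of how a human–machine coherence measurement of this kind can be computed, the sketch below assumes cosine similarity between face feature vectors as the machine metric and Spearman rank correlation between machine similarity scores and user-feedback scores as the coherence measure; the paper's exact formulation is not reproduced here, and all function and variable names are hypothetical.

```python
# Minimal sketch, assuming cosine similarity as the machine metric and
# Spearman rank correlation as the coherence measure (names are hypothetical).
import numpy as np
from scipy.stats import spearmanr


def cosine_similarity(target_feat, candidate_feats):
    """Machine similarity between a target face feature and each candidate feature."""
    target = target_feat / np.linalg.norm(target_feat)
    candidates = candidate_feats / np.linalg.norm(candidate_feats, axis=1, keepdims=True)
    return candidates @ target


def coherence(machine_scores, human_scores):
    """Rank-order coherence between machine similarity scores and
    user-feedback scores for the same displayed candidates."""
    rho, _ = spearmanr(machine_scores, human_scores)
    return rho


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.normal(size=128)          # e.g., a deep face embedding of the target
    display = rng.normal(size=(8, 128))    # features of 8 displayed candidate faces
    machine = cosine_similarity(target, display)
    human = rng.integers(1, 6, size=8)     # placeholder user feedback, e.g., 1-5 ratings
    print("coherence (Spearman rho):", coherence(machine, human))
```

In practice, the feature vectors would come from whichever benchmark descriptor or deep model is being compared, and the coherence statistic would be aggregated over many users and display rounds rather than a single query.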



Published in

ACM Transactions on Applied Perception, Volume 17, Issue 3 (July 2020), 85 pages
ISSN: 1544-3558
EISSN: 1544-3965
DOI: 10.1145/3415024

Copyright © 2020 ACM


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 3 August 2020
• Accepted: 1 May 2020
• Revised: 1 March 2020
• Received: 1 August 2018


