Abstract
In this article, we explore the coherence between human and machine face perception in the scenario of interactive face retrieval. On the human side, we collect user feedback on the stimuli of a target face and groups of displayed candidate face images drawn from a face database with a large number of subjects. On the machine side, we compare benchmark features and general metrics for measuring face similarity. We propose a series of coherence measurements to evaluate the statistical characteristics of human and machine face perception. We find that, despite users' unfamiliarity with most faces in the database, the coherence between human and machine remains at a stable level across variations in metrics, features, database size, and demographics. Simulation experiments with the coherence distributions demonstrate that the embedded information is valuable for speeding up interactive retrieval. Comparisons over multiple parameter settings provide practical guidance for designing interactive face retrieval systems with greater consideration of human factors.
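To make the idea of human-machine coherence concrete, here is a minimal illustrative sketch (not the paper's exact measure): it scores coherence as the Spearman rank correlation between a user's ranking of displayed candidate faces and the ranking induced by a machine similarity metric over feature vectors. All function names and the choice of Euclidean distance are assumptions for this example.

```python
import math

def euclidean(a, b):
    """Distance between two feature vectors; smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rank(scores):
    """Rank positions (1 = smallest score), ties broken by index order."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0] * len(scores)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(ranks_a, ranks_b):
    """Spearman's rho for two tie-free rankings of equal length."""
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def coherence(target, candidates, human_ranks):
    """Correlate the machine's distance-based ranking of candidates
    with the user's ranking (1 = judged most similar to the target)."""
    machine_dist = [euclidean(target, c) for c in candidates]
    return spearman(rank(machine_dist), human_ranks)

# Example: candidates at increasing distance from the target.
# A user who agrees with the metric yields rho = 1.0; one who
# ranks them in reverse yields rho = -1.0.
print(coherence([0, 0], [[0, 1], [0, 2], [0, 3]], [1, 2, 3]))  # 1.0
print(coherence([0, 0], [[0, 1], [0, 2], [0, 3]], [3, 2, 1]))  # -1.0
```

Any similarity metric or feature extractor can be substituted for `euclidean`; the abstract's point is that such coherence scores stay stable across those substitutions.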
On the Perception Analysis of User Feedback for Interactive Face Retrieval