Abstract
The concept of a Smart Environment (SE) provides great benefits to its users: interactive informational services (corporate TV, video communication, navigation and localization services) and mobile autonomous entities (mobile robotic platforms, quadcopters, anthropomorphic robots, etc.). To provide a user with personalized, relevant data, it is important to take the specifics of the user's behavior into account, so the task of identifying a person from an image of their face is highly relevant. Face recognition is one of the most popular approaches to user identification. Training an accurate classifier requires as large a dataset as possible, especially when deep neural networks are used, yet composing a representative dataset manually, by photographing every person from every possible angle under every possible lighting condition, is expensive. This is why generating synthetic training data while relying on a minimum of real data is so important. In this paper, several face feature extractors based on deep learning models were tested in order to identify their advantages and disadvantages in the context of training a classifier for facial recognition and clustering for tracking unique people in an SE.
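As a rough illustration of the pipeline the abstract describes (extracting deep face embeddings and clustering them to group detections of the same person), the sketch below uses the open-source face_recognition library and scikit-learn's DBSCAN. The library choice, the folder name, and the clustering threshold are assumptions for illustration only; the paper itself compares several different deep feature extractors.

```python
# Minimal sketch (assumed tooling): face embedding extraction followed by
# clustering to group detections of the same person. Uses the dlib-based
# face_recognition library (128-d embeddings) and scikit-learn's DBSCAN;
# the extractors evaluated in the paper may differ.
import glob

import numpy as np
import face_recognition
from sklearn.cluster import DBSCAN

# Hypothetical folder of face crops collected in the smart environment.
image_paths = sorted(glob.glob("face_crops/*.jpg"))

embeddings = []
kept_paths = []
for path in image_paths:
    image = face_recognition.load_image_file(path)
    # One 128-dimensional vector per detected face; keep the first detection.
    encodings = face_recognition.face_encodings(image)
    if encodings:
        embeddings.append(encodings[0])
        kept_paths.append(path)

X = np.array(embeddings)

# Cluster embeddings so that images of the same person share a label;
# eps is a tunable distance threshold chosen here only for illustration.
labels = DBSCAN(eps=0.5, min_samples=2, metric="euclidean").fit_predict(X)

for path, label in zip(kept_paths, labels):
    print(f"{path}: person cluster {label}")
```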
Acknowledgements
This research is supported by RSF №16-19-00044П.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Letenkov, M., Levonevskiy, D. (2020). Fast Face Features Extraction Based on Deep Neural Networks for Mobile Robotic Platforms. In: Ronzhin, A., Rigoll, G., Meshcheryakov, R. (eds) Interactive Collaborative Robotics. ICR 2020. Lecture Notes in Computer Science, vol 12336. Springer, Cham. https://doi.org/10.1007/978-3-030-60337-3_20
DOI: https://doi.org/10.1007/978-3-030-60337-3_20
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-60336-6
Online ISBN: 978-3-030-60337-3
eBook Packages: Computer Science, Computer Science (R0)