Abstract
As unmanned aerial vehicles become more autonomous, enhanced sensory and perceptive capabilities need to be integrated and qualified for mission scenarios of larger scale. In this context, recent developments in embedded technologies now allow onboard image processing on such airborne platforms. However, acquiring the mission-relevant imagery and video test data needed to develop and verify such processing algorithms can be complicated and costly. Therefore, we are interested in the usability of commercial off-the-shelf virtual simulation environments for the generation of test and training data. To gain general acceptance, their relevance and comparability to real-world imagery need to be investigated. We pursue a multi-level approach to analyze differences between real and coherently simulated imagery and to measure their respective influence on image processing algorithm performance, taking into account typical visual database and rendering benchmarks such as level of detail, texture composition and rendering details. More specifically, in this paper we analyze corresponding real and synthetic footage using image descriptors from the content-based image retrieval domain introduced in the MPEG-7 standard. This allows us to compare the appearance of images with regard to specific image properties without disregarding their overall content. In future work, we plan to apply the test subject, a computer vision algorithm, to both real and synthetic imagery and to evaluate its performance on each. Comparing these evaluations allows us to detect the specific image properties that influence the performance of the test subject and thus helps to identify differences in the synthetically generated imagery. The results will provide insight into how image generation methods can be tuned to reach equal processing performance on both image sets, which is mandatory to justify the use of synthetic footage for algorithm development and qualification. …
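To illustrate the descriptor-based comparison in concrete terms, the following minimal sketch (our illustration, not the paper's implementation and not the MPEG-7 reference software) compares a real and a rendered frame via a normalised HSV colour histogram and an L1 distance, loosely analogous to MPEG-7 colour descriptors. OpenCV and NumPy are assumed to be available, and the file names real_frame.png and rendered_frame.png are placeholders.

import cv2
import numpy as np

def hsv_histogram(path, bins=(16, 4, 4)):
    # Load the image and compute a coarse, normalised HSV histogram as a
    # 1-D feature vector (a stand-in for an MPEG-7 colour descriptor).
    img = cv2.imread(path)
    if img is None:
        raise FileNotFoundError(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-9)

def l1_distance(a, b):
    # Simple bin-wise distance; MPEG-7 defines descriptor-specific metrics.
    return float(np.abs(a - b).sum())

if __name__ == "__main__":
    real = hsv_histogram("real_frame.png")           # placeholder file name
    synthetic = hsv_histogram("rendered_frame.png")  # placeholder file name
    print("L1 distance, real vs. synthetic:", l1_distance(real, synthetic))

In the study itself, the standardised MPEG-7 descriptors (e.g. Color Structure, Edge Histogram) and their associated distance measures would take the place of this simplified histogram comparison.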
Copyright information
© 2014 Springer International Publishing Switzerland
Cite this paper
Hummel, G., Stütz, P. (2014). Using Virtual Simulation Environments for Development and Qualification of UAV Perceptive Capabilities: Comparison of Real and Rendered Imagery with MPEG7 Image Descriptors. In: Hodicky, J. (ed.) Modelling and Simulation for Autonomous Systems. MESAS 2014. Lecture Notes in Computer Science, vol. 8906. Springer, Cham. https://doi.org/10.1007/978-3-319-13823-7_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-13822-0
Online ISBN: 978-3-319-13823-7