
Using Virtual Simulation Environments for Development and Qualification of UAV Perceptive Capabilities: Comparison of Real and Rendered Imagery with MPEG7 Image Descriptors

  • Conference paper
Modelling and Simulation for Autonomous Systems (MESAS 2014)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 8906)

Abstract

As unmanned aerial vehicles become more autonomous, enhanced sensory and perceptive capabilities need to be integrated and qualified for mission scenarios of larger scale. In this context, recent developments in embedded technologies now allow onboard image processing on such airborne platforms. However, acquiring the mission-relevant imagery and video test data needed to develop and verify such processing algorithms can be complicated and costly. Therefore, we are interested in the usability of commercial-off-the-shelf virtual simulation environments for generating test and training data. To gain general acceptance, their relevance and comparability to real-world imagery need to be investigated. We pursue a multi-level approach to analyze differences between real and coherently simulated imagery and to measure their respective influence on image processing algorithm performance, taking into account typical visual database and rendering benchmarks such as level of detail, texture composition and rendering details. More specifically, in this paper we analyze corresponding real and synthetic footage using image descriptors from the content-based image retrieval domain introduced in the MPEG-7 standard. This allows us to compare the appearance of images with regard to specific image properties without disregarding their overall content. In future work we plan to apply and evaluate the test subject, a computer vision algorithm, on real and synthetic imagery. These evaluations will be compared to detect specific image properties that influence the performance of the test subject and will therefore help in identifying differences in the synthetically generated images. The results will provide insight into how to specifically tune image generation methods to reach equal processing performance on both image sets, which is mandatory to justify the use of synthetic footage for algorithm development and qualification.
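The descriptor-based comparison of real and rendered imagery outlined above can be illustrated with a minimal sketch. The snippet below is not taken from the paper: it uses a plain quantized RGB histogram with an L1 distance as a simplified stand-in for MPEG-7 color descriptors such as the Scalable Color Descriptor, and the toy image pair is generated synthetically. All function names and parameters are illustrative assumptions.

```python
import numpy as np

def color_histogram_descriptor(img, bins=8):
    """Joint histogram over quantized RGB values of an (H, W, 3) uint8 image.

    A simplified stand-in for an MPEG-7 color descriptor; not the
    standardized extraction procedure.
    """
    q = (img.astype(np.uint32) * bins) // 256           # quantize each channel to `bins` levels
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()                            # normalize to a distribution

def l1_distance(h1, h2):
    """L1 (city-block) distance, a common matching metric for histogram descriptors."""
    return float(np.abs(h1 - h2).sum())

# Toy stand-ins for a real photo and its rendered counterpart:
# the "synthetic" image is the "real" one plus small per-pixel noise.
rng = np.random.default_rng(0)
real = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
noise = rng.integers(-10, 11, size=real.shape)
synthetic = np.clip(real.astype(np.int16) + noise, 0, 255).astype(np.uint8)

d_same = l1_distance(color_histogram_descriptor(real), color_histogram_descriptor(real))
d_diff = l1_distance(color_histogram_descriptor(real), color_histogram_descriptor(synthetic))
print(d_same, d_diff)  # identical images give distance 0; the perturbed pair gives a small positive value
```

In the paper's setting, the two inputs would instead be a real aerial image and its coherently rendered counterpart, and the descriptor distance quantifies how closely the simulation reproduces the chosen image property.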




Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Hummel, G., Stütz, P. (2014). Using Virtual Simulation Environments for Development and Qualification of UAV Perceptive Capabilities: Comparison of Real and Rendered Imagery with MPEG7 Image Descriptors. In: Hodicky, J. (eds) Modelling and Simulation for Autonomous Systems. MESAS 2014. Lecture Notes in Computer Science, vol 8906. Springer, Cham. https://doi.org/10.1007/978-3-319-13823-7_4


  • DOI: https://doi.org/10.1007/978-3-319-13823-7_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-13822-0

  • Online ISBN: 978-3-319-13823-7

  • eBook Packages: Computer Science (R0)
