
Multi-view approach for drone light show

Original article · The Visual Computer

Abstract

Drones, or more precisely quadrotors, are increasingly used in robotics and in entertainment. Multiple coordinated drones that form visual presentations with their onboard LEDs are known as drone light shows (Waibel, M., Keays, B., Augugliaro, F.: Drone Shows: Creative Potential and Best Practices. ETH Zurich, 2017). Such performances offer visual enjoyment for large audiences, particularly at festivals. However, most current drone light shows are coordinated manually by personnel using software, and they have a limited viewing range, which prevents much of the audience from getting a good view of the performance. This study proposes a method that provides multiple visual presentations matched to multiple viewing angles. We use the visual hull to filter the candidate regions that reproduce the input images, and take the projection error and classification values as weights for optimization. Consequently, the proposed method reduces the number of drones needed to form a multi-view structure for visual presentation. Furthermore, to animate transitions between multi-view structures, we implement a flight algorithm that finds the most suitable corresponding points between two structures and then generates the shortest collision-free flight paths. Experiments conducted in our simulator provide additional insights and discussion, and each factor is visualized to give an improved understanding of our approach to multi-view drone light shows.
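To make the two steps in the abstract concrete, the following minimal Python sketches illustrate the underlying ideas. They are assumed illustrations based on the techniques cited in the references, not the authors' implementation; all function names, interfaces, and data shapes are placeholders. The first sketch follows the visual hull concept of reference [8]: a candidate 3D point is kept only if its projection falls inside the silhouette of every input image.

```python
# Minimal sketch of visual-hull filtering (cf. Laurentini [8]); the camera
# matrices, silhouette masks, and function names are illustrative assumptions.
import numpy as np

def inside_silhouette(point_3d, camera_matrix, silhouette):
    """Project a 3D point with a 3x4 camera matrix and test the binary mask."""
    p = camera_matrix @ np.append(point_3d, 1.0)   # homogeneous projection
    if p[2] <= 0:                                  # point lies behind the camera
        return False
    u, v = int(round(p[0] / p[2])), int(round(p[1] / p[2]))
    h, w = silhouette.shape
    return 0 <= v < h and 0 <= u < w and silhouette[v, u] > 0

def visual_hull_filter(candidates, cameras, silhouettes):
    """Keep candidate points whose projections lie inside every silhouette."""
    return [c for c in candidates
            if all(inside_silhouette(c, P, S) for P, S in zip(cameras, silhouettes))]
```

The second sketch pairs drone positions between two formations so that the total travel distance is minimized, using the Hungarian method cited in reference [7] via SciPy. The paper's flight algorithm additionally handles collision avoidance (cf. the RVO2 library [18]) and the projection-error and classification weights described above, so this is only an assumed baseline for the correspondence step.

```python
# Assumed baseline for the correspondence step: minimum-cost matching of
# drones between two formations with the Hungarian method [7] via SciPy.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_formations(current, target):
    """Return the target index for each drone and the total flight distance.

    current, target: (N, 3) arrays of drone positions.
    """
    cost = cdist(current, target)              # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
    return cols, cost[rows, cols].sum()

# Toy example: four drones transition from a line to a square formation.
line   = np.array([[i, 0.0, 10.0] for i in range(4)])
square = np.array([[0, 0, 10], [0, 5, 10], [5, 0, 10], [5, 5, 10]], dtype=float)
assignment, total_dist = match_formations(line, square)
print(assignment, total_dist)
```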




References

  1. Ali, A.A., Rashid, A.T., Frasca, M., Fortuna, L.: An algorithm for multi-robot collision-free navigation based on shortest distance. Robot. Auton. Syst. 75, 119–128 (2016). https://doi.org/10.1016/j.robot.2015.10.010


  2. Van den Berg, J., Lin, M., Manocha, D.: Reciprocal velocity obstacles for real-time multi-agent navigation. In: 2008 IEEE International Conference on Robotics and Automation (pp. 1928–1935). IEEE. (2008, May). https://doi.org/10.1109/ROBOT.2008.4543489

  3. Bradski, G.: The OpenCV Library. Dr. Dobb’s Journal of Software Tools (2000). https://github.com/opencv/opencv/wiki/CiteOpenCV

  4. Fujimori, A., Teramoto, M., Nikiforuk, P.N., Gupta, M.M.: Cooperative collision avoidance between multiple mobile robots. J. Robot. Syst. 17(7), 347–363 (2000). https://doi.org/10.1002/10974563


  5. Hsiao, K.W., Huang, J.B., Chu, H.K.: Multi-view wire art. ACM Trans. Graph. 37(6), 242–251 (2018)


  6. Kolev, K., Klodt, M., Brox, T., Cremers, D.: Continuous global optimization in multiview 3d reconstruction. Int. J. Comput. Vision 84(1), 80–96 (2009). https://doi.org/10.1007/s11263-009-0233-1


  7. Kuhn, H.W.: The Hungarian method for the assignment problem. Naval Res. Logist. Q. 2(1–2), 83–97 (1955). https://doi.org/10.1002/nav.3800020109


  8. Laurentini, A.: The visual hull concept for silhouette-based image understanding. IEEE Trans. Pattern Anal. Mach. Intell. 16(2), 150–162 (1994). https://doi.org/10.1109/34.273735


  9. Mitra, N.J., Pauly, M.: Shadow art. ACM Trans. Graph. 28(5), 156–161 (2009). https://doi.org/10.1145/1618452.1618502


  10. Ohta, A.: Sky magic: Drone entertainment show. In: ACM SIGGRAPH 2017 Emerging Technologies. (2017). https://doi.org/10.1145/3084822.3108158

  11. Rockwood, A.P., Winget, J.: Three-dimensional object reconstruction from two-dimensional images. Comput. Aided Des. 29(4), 279–285 (1997). https://doi.org/10.1016/S0010-4485(96)00056-5


  12. Shah, M.A., Aouf, N.: 3D cooperative Pythagorean hodograph path planning and obstacle avoidance for multiple UAVs. In: 2010 IEEE 9th International Conference on Cybernetic Intelligent Systems (pp. 1–6). IEEE. (2010, September). https://doi.org/10.1109/UKRICIS.2010.5898124

  13. Skrjanc, I., Klancar, G.: Cooperative collision avoidance between multiple robots based on Bézier curves. In: 2007 29th International Conference on Information Technology Interfaces (pp. 451–456). IEEE. (2007, June). https://doi.org/10.1016/j.robot.2009.09.003

  14. Trager, M., Hebert, M., Ponce, J.: Consistency of silhouettes and their duals. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3346–3354). (2016). https://doi.org/10.1109/CVPR.2016.364

  15. Wikipedia Contributors. Ambigram — Wikipedia, The Free Encyclopedia. https://en.wikipedia.org/w/index.php?title=Ambigram&oldid=935231453. (2020). Accessed 20 January 2020

  16. Waibel, M., Keays, B., Augugliaro, F.: Drone shows: creative potential and best practices. ETH Zurich (2017)

  17. Xiong, W., Zhang, P., Sander, P. V., Joneja, A.: Shape-inspired architectural design. In: Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (pp. 1–10). (2018, May). https://doi.org/10.1145/3190834.3198034

  18. Van den Berg, J., et al.: RVO2 library: reciprocal collision avoidance for real-time multi-agent simulation (2011)

  19. Perron, L., Furnon, V.: OR-Tools 7.2. https://developers.google.com/optimization/


Acknowledgements

This work was supported by the Ministry of Science and Technology, Taiwan, under Grant Nos. MOST 110-2221-E-004-009 and MOST 110-2634-F-004-001 through the Pervasive Artificial Intelligence Research (PAIR) Labs. We would also like to thank Sin-Fei Lee for editing the demo video.

Author information


Corresponding author

Correspondence to Ming-Te Chi.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material (PDF 2503 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Weng, KC., Lin, ST., Hu, CC. et al. Multi-view approach for drone light show. Vis Comput 39, 5797–5808 (2023). https://doi.org/10.1007/s00371-022-02696-8


Keywords

Navigation