Abstract
Drones, or more precisely quadrotors, are increasingly used in robotics and entertainment. Coordinated fleets of drones that form visual presentations with their onboard LEDs are known as drone light shows (Waibel, M., Keays, B., Augugliaro, F.: Drone shows: creative potential and best practices. ETH Zurich, 2017). Such performances offer visual enjoyment for large audiences, particularly at festivals. However, most current drone light shows are coordinated manually by operators using dedicated software, and they cover only a limited viewing range, which prevents part of the audience from getting a good view of the performance. This study proposes a method that provides multiple visual presentations matched to multiple viewing angles. We use the visual hull to filter the candidate regions that reproduce the input images, and take the projection error and classification values as weights for optimization. Consequently, the proposed method reduces the number of drones needed to form a multi-view structure for visual presentation. Furthermore, to animate multi-view structures, we implement a flight algorithm that finds the most suitable corresponding points between two structures and then generates the shortest collision-free flight paths. Experiments conducted in our simulator provide additional insights and discussion, and each factor is visualized to give an improved understanding of our approach to multi-view drone light shows.
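As a rough illustration of the correspondence step mentioned above (matching points between two structures before flying), the sketch below assigns each drone in the current formation to a target point in the next formation so that total travel distance is minimized, using the Hungarian method cited in the references (Kuhn 1955) via SciPy. The N x 3 position arrays and function name are assumptions for illustration; this is a minimal simplification, not the authors' implementation, and it omits the collision-avoidance stage.

# Minimal sketch: Hungarian assignment of drones to next-formation targets.
# Positions are assumed to be N x 3 NumPy arrays (one row per drone, in meters).
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_formations(current, target):
    """Return, for each drone in `current`, the index of its assigned target."""
    # Pairwise squared Euclidean distances between drones and target points.
    diff = current[:, None, :] - target[None, :, :]
    cost = np.einsum('ijk,ijk->ij', diff, diff)
    # Hungarian method: minimize the total assignment cost.
    rows, cols = linear_sum_assignment(cost)
    return cols[np.argsort(rows)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    current = rng.uniform(0, 50, size=(8, 3))  # current drone positions
    target = rng.uniform(0, 50, size=(8, 3))   # next-formation target points
    assignment = match_formations(current, target)
    total = np.linalg.norm(current - target[assignment], axis=1).sum()
    print("assignment:", assignment, "total travel distance:", round(total, 2))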
References
Ali, A.A., Rashid, A.T., Frasca, M., Fortuna, L.: An algorithm for multi-robot collision-free navigation based on shortest distance. Robot. Auton. Syst. 75, 119–128 (2016). https://doi.org/10.1016/j.robot.2015.10.010
Van den Berg, J., Lin, M., Manocha, D.: Reciprocal velocity obstacles for real-time multi-agent navigation. In: 2008 IEEE International Conference on Robotics and Automation (pp. 1928–1935). IEEE. (2008, May). https://doi.org/10.1109/ROBOT.2008.4543489
Bradski, G.: The OpenCV Library. Dr. Dobb’s Journal of Software Tools (2000). https://github.com/opencv/opencv/wiki/CiteOpenCV
Fujimori, A., Teramoto, M., Nikiforuk, P.N., Gupta, M.M.: Cooperative collision avoidance between multiple mobile robots. J. Robot. Syst. 17(7), 347–363 (2000). https://doi.org/10.1002/10974563
Hsiao, K.W., Huang, J.B., Chu, H.K.: Multi-view wire art. ACM Trans. Graph. 37(6), 242–251 (2018). https://doi.org/10.2307/777701
Kolev, K., Klodt, M., Brox, T., Cremers, D.: Continuous global optimization in multiview 3D reconstruction. Int. J. Comput. Vision 84(1), 80–96 (2009). https://doi.org/10.1007/s11263-009-0233-1
Kuhn, H.W.: The Hungarian method for the assignment problem. Naval Res. Logist. Q. 2(1–2), 83–97 (1955). https://doi.org/10.3390/math8112050
Laurentini, A.: The visual hull concept for silhouette-based image understanding. IEEE Trans. Pattern Anal. Mach. Intell. 16(2), 150–162 (1994). https://doi.org/10.1109/34.273735
Mitra, N.J., Pauly, M.: Shadow art. ACM Trans. Graph. 28, 156–161 (2009). https://doi.org/10.1145/1618452.1618502
Ohta, A.: Sky magic: Drone entertainment show. In: ACM SIGGRAPH 2017 Emerging Technologies. (2017). https://doi.org/10.1145/3084822.3108158
Rockwood, A.P., Winget, J.: Three-dimensional object reconstruction from two-dimensional images. Comput. Aided Des. 29(4), 279–285 (1997). https://doi.org/10.1016/S0010-4485(96)00056-5
Shah, M.A., Aouf, N.: 3D cooperative Pythagorean hodograph path planning and obstacle avoidance for multiple UAVs. In: 2010 IEEE 9th International Conference on Cybernetic Intelligent Systems (pp. 1–6). IEEE. (2010, September). https://doi.org/10.1109/UKRICIS.2010.5898124
Skrjanc, I., Klancar, G.: Cooperative collision avoidance between multiple robots based on Bézier curves. In: 2007 29th International Conference on Information Technology Interfaces (pp. 451–456). IEEE. (2007, June). https://doi.org/10.1016/j.robot.2009.09.003
Trager, M., Hebert, M., Ponce, J.: Consistency of silhouettes and their duals. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3346–3354). (2016). https://doi.org/10.1109/CVPR.2016.364
Wikipedia Contributors. Ambigram — Wikipedia, The Free Encyclopedia. https://en.wikipedia.org/w/index.php?title=Ambigram&oldid=935231453. (2020). Accessed 20 January 2020
Waibel, M., Keays, B., Augugliaro, F.: Drone shows: creative potential and best practices. ETH Zurich (2017)
Xiong, W., Zhang, P., Sander, P. V., Joneja, A.: Shape-inspired architectural design. In: Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (pp. 1–10). (2018, May). https://doi.org/10.1145/3190834.3198034
Van den Berg, J., et al.: RVO2 library: reciprocal collision avoidance for real-time multi-agent simulation (2011)
Perron, L., Furnon, V.: OR-Tools 7.2. https://developers.google.com/optimization/
Acknowledgements
This work was supported by the Ministry of Science and Technology, Taiwan, under Grant Nos. MOST 110-2221-E-004-009 and MOST 110-2634-F-004-001 through the Pervasive Artificial Intelligence Research (PAIR) Labs. We would also like to thank Sin-Fei Lee for editing the demo video.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Electronic supplementary material
Below is the link to the electronic supplementary material.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Weng, KC., Lin, ST., Hu, CC. et al. Multi-view approach for drone light show. Vis Comput 39, 5797–5808 (2023). https://doi.org/10.1007/s00371-022-02696-8
DOI: https://doi.org/10.1007/s00371-022-02696-8