Abstract
Wind turbines require periodic inspection to ensure efficient power generation and a prolonged lifetime. Traditional inspection, in which a person abseils from the top of the nacelle, carries the risk of a fall. To avoid this, operator-piloted drones have been used to inspect the blades; however, this task requires expert pilots, who fatigue quickly. Autonomous drones, by contrast, are not subject to human tiredness and can follow trajectories in a repeatable manner. Motivated by the latter, we introduce a vision-based blade detector that recognizes the blades' orientation and position relative to the drone in order to generate a flight plan for safely collecting image data. The proposed blade detector extracts line features from the camera image, which are filtered to reduce the search space using bounding boxes obtained with a single-shot detector based on a convolutional neural network. Finally, a random sample consensus (RANSAC) procedure finds the lines that best fit a geometrical model of the wind turbine. We compare our deep learning approach against a color segmentation method, showing that it is up to 6 times faster. We also compare against a guided search during random sampling, which exploits the separate boxes detected by the network to reduce the number of outliers. We conclude with an illustrative example of how the proposed detector could be used for autonomous wind turbine inspection.
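The pipeline outlined in the abstract — extract line segments, discard those outside the detector's bounding boxes, then fit a line with RANSAC — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the two-endpoint inclusion test, and the fixed inlier threshold are all assumptions made for the example.

```python
import numpy as np

def segment_in_box(seg, box):
    """True if both endpoints of a segment (x1, y1, x2, y2)
    fall inside a box (xmin, ymin, xmax, ymax)."""
    x1, y1, x2, y2 = seg
    xmin, ymin, xmax, ymax = box
    return (xmin <= x1 <= xmax and ymin <= y1 <= ymax and
            xmin <= x2 <= xmax and ymin <= y2 <= ymax)

def filter_segments(segments, boxes):
    """Keep only segments lying inside at least one detected box,
    shrinking the search space before random sampling."""
    return [s for s in segments if any(segment_in_box(s, b) for b in boxes)]

def ransac_line(points, n_iters=200, tol=2.0, rng=None):
    """Fit a 2D line to points with RANSAC.

    Returns ((point, unit_direction), inlier_mask); inliers are the
    points whose perpendicular distance to the line is below tol.
    """
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(pts), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Hypothesize a line from a random pair of points.
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue  # degenerate sample
        d /= norm
        # Perpendicular distance of every point to the candidate line.
        normal = np.array([-d[1], d[0]])
        dist = np.abs((pts - p) @ normal)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (p, d)
    return best_model, best_inliers
```

In the paper's setting, the segments would come from a line segment detector such as LSD and the boxes from the single-shot network; here both are plain coordinate tuples so the sketch stays self-contained.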
Acknowledgements
This research project has been funded by Conacyt grant 291137 and Conacyt-INEGI project 268528.
Cite this article
Parlange, R., Martinez-Carranza, J. Leveraging single-shot detection and random sample consensus for wind turbine blade inspection. Intel Serv Robotics 14, 611–628 (2021). https://doi.org/10.1007/s11370-021-00383-6