Abstract:
Convolutional neural networks applied to video streams often suffer from short-lived misclassifications or false alarms caused by clutter and noise. We introduce a novel network training method based on Siamese networks that mitigates false alarms in an Hourglass CNN that segments quadcopters out of a live video stream. To demonstrate this method in a real-world, real-time application, we implement it as part of a quadcopter tracker for vision-based formation control. Quadcopter drone formation control is an important capability for fields such as area surveillance, search and rescue, agriculture, and reconnaissance. Of particular interest is formation control in environments where wireless communications and/or GPS may be denied or not sufficiently accurate for the desired application. Using vision to guide the quadcopters addresses these situations, but computer vision algorithms are often computationally expensive and suffer from high false detection rates. Our Siamese networks-based training technique suppresses this clutter without adding computational complexity at run time. We run our real-time implementation on an ODROID XU4 with a standard webcam mounted to a quadcopter drone. Flight tests in a motion capture volume demonstrate successful formation control with two quadcopters in a leader-follower setup.
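To make the training idea concrete, the following is a minimal sketch of one plausible reading of the abstract: a shared-weight (Siamese) twin pass in which the segmentation network is supervised normally on a target frame while any foreground response on a clutter-only frame is penalized, so false alarms are discouraged during training and inference cost is unchanged. The names HourglassNet, siamese_clutter_loss, and clutter_weight are hypothetical illustrations, not the paper's actual implementation or hyperparameters.

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for the paper's Hourglass segmentation CNN;
    # the real architecture is an encoder-decoder ("hourglass") network.
    class HourglassNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1),
            )

        def forward(self, x):
            # Per-pixel quadcopter logits.
            return self.body(x)

    def siamese_clutter_loss(net, target_img, target_mask,
                             clutter_img, clutter_weight=1.0):
        """Shared-weight twin pass: supervise segmentation on the target
        frame and penalize any detection on the clutter-only frame."""
        # Branch 1: standard per-pixel segmentation loss on the target.
        seg_logits = net(target_img)
        seg_loss = nn.functional.binary_cross_entropy_with_logits(
            seg_logits, target_mask)

        # Branch 2: same network weights (the Siamese twin) applied to a
        # clutter frame known to contain no quadcopter; the all-zero mask
        # pushes the network toward producing no false alarms there.
        clutter_logits = net(clutter_img)
        clutter_loss = nn.functional.binary_cross_entropy_with_logits(
            clutter_logits, torch.zeros_like(clutter_logits))

        return seg_loss + clutter_weight * clutter_loss

Because both branches share one set of weights, only the loss changes at training time; at run time a single forward pass is executed, consistent with the abstract's claim of no added computational complexity.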
Published in: IEEE Robotics and Automation Letters (Volume: 6, Issue: 1, January 2021)