Abstract:
In recent years, gesture-based interfaces have been explored as a way to control robots in non-traditional ways. Such interfaces require systems able to track human body movements in 3D space. Motion-capture (Mo-cap) or camera systems that perform this tracking tend to be costly, intrusive, or to require a clear line of sight, making them ill-suited to artistic performances. In this paper, we explore the use of consumer-grade armbands (Myo armband), which capture orientation information (via an inertial measurement unit) and muscle activity (via electromyography), to guide a robotic device during live performances. To compensate for the drop in information quality, our approach relies heavily on machine learning and leverages the multimodality of the sensors. To speed up classification, dimensionality reduction was performed automatically via a method based on Random Forests (RF). Online classification achieved 88% accuracy over nine movements created by a dancer during a live performance, demonstrating the viability of our approach. The nine movements were then grouped by the dancer into three semantically meaningful moods for the purpose of an artistic performance, achieving 94% accuracy in real time. We believe that our technique opens the door to using aesthetically pleasing sequences of body motions as a gestural interface, instead of traditional static arm poses.
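The abstract mentions RF-based dimensionality reduction followed by classification of multimodal (IMU + EMG) features. The sketch below is not the authors' code; it is a minimal illustration of that general idea using scikit-learn, where the feature matrix, window counts, class labels, and the mean-importance threshold are all illustrative assumptions.

```python
# Hedged sketch (not the authors' implementation): Random Forest based
# feature selection followed by gesture classification, assuming IMU and
# EMG features have already been extracted into a matrix X with labels y
# (one of nine movement classes). All sizes and thresholds are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder multimodal features: e.g. 8 EMG channels plus IMU orientation,
# each summarized by several statistics per time window.
rng = np.random.default_rng(0)
X = rng.normal(size=(900, 64))      # 900 windows, 64 candidate features
y = rng.integers(0, 9, size=900)    # nine movement classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Fit a Random Forest and keep only features whose importance exceeds the
# mean importance: a simple RF-driven dimensionality reduction step.
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=200, random_state=0),
    threshold="mean")
selector.fit(X_train, y_train)
X_train_red = selector.transform(X_train)
X_test_red = selector.transform(X_test)

# Train the final classifier on the reduced feature set.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train_red, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test_red)))
```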
Published in: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)
Date of Conference: 28 August 2017 - 01 September 2017
Date Added to IEEE Xplore: 14 December 2017
Electronic ISSN: 1944-9437