ABSTRACT
A considerable amount of research has been carried out in the domain of human-machine interaction. Interaction with machines using hand gestures, eye motion, and similar modalities has already been proposed by researchers worldwide. Speech-based interaction, however, is of particular interest, since speech is the most natural mode of communication for human beings. In this paper, we present a client-server architecture for controlling several robots simultaneously through voice commands. The robots used in the experiments are LEGO® Mindstorms® NXT robots. Because every component of the architecture communicates over the client-server model, each component can reside on a physically different machine at a physically different location, allowing a user to control multiple robots on the move. The speech recognition server accepts speech input from the client, translates it into robot control commands, and forwards each command to the server controlling the targeted robot. The user therefore need not be physically present at the same location as the robots and can control them remotely.
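The command-translation and routing step described in the abstract can be sketched roughly as follows. This is an illustrative sketch only, not the paper's implementation: the command table, robot names, server addresses, and the `translate` function are all hypothetical, and the real system would receive recognized utterances from a speech recognition engine rather than as strings.

```python
# Hypothetical sketch of the speech recognition server's routing step:
# map a recognized utterance to a robot control command and to the
# address of the server managing the targeted robot. All names and
# addresses below are illustrative assumptions, not from the paper.

COMMAND_TABLE = {
    "move forward": "FWD",
    "move backward": "BWD",
    "turn left": "LEFT",
    "turn right": "RIGHT",
    "stop": "STOP",
}

# One robot-control server per robot, each possibly on a different machine.
ROBOT_SERVERS = {
    "robot one": ("192.168.1.10", 5000),
    "robot two": ("192.168.1.11", 5000),
}

def translate(utterance):
    """Split a recognized phrase such as 'robot one move forward' into
    the target robot server's address and a control command token.
    Returns None if the robot name or action is not recognized."""
    utterance = utterance.lower().strip()
    for name, addr in ROBOT_SERVERS.items():
        if utterance.startswith(name):
            action = utterance[len(name):].strip()
            command = COMMAND_TABLE.get(action)
            if command is not None:
                return addr, command
    return None
```

In a deployment along the lines the abstract describes, the returned command would then be sent over a network socket to the robot-control server, which in turn drives the NXT brick.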