Elsevier

Future Generation Computer Systems

Volume 100, November 2019, Pages 859-881

A new approach for mobile robot localization based on an online IoT system

https://doi.org/10.1016/j.future.2019.05.074

Highlights

  • Development of an online system for localizing mobile robots.

  • The topological mapping method is robust in the exploration environment considered.

  • The IoT system allows reuse of the robot’s idle computing power.

Abstract

In Mobile Robotics, localization is a fundamental task, as it makes navigation possible and thus enables the robot to carry out its activities. With the emergence of the Internet of Things (IoT), a new approach arose for objects to interact with each other and with humans. In this context, this article presents the use of IoT to develop a system for localizing mobile robots, employing Convolutional Neural Networks (CNNs) for image feature extraction according to the concept of Transfer Learning. The system uses the topological mapping method to orient the robot in the exploration environment considered. The effectiveness of the approach is demonstrated by metrics such as Accuracy, F1-Score and processing time. The IoT system centralizes processing, reducing costs and allowing reuse of the robot’s idle computing power. In addition to this benefit, the CNN achieves 100% Accuracy and F1-Score, proving to be an effective technique for the required activity. Hence, the proposed approach proves efficient for the task of localizing mobile robots.

Introduction

Robots are devices designed to perform tasks in place of humans, either because the jobs are strenuous or unhealthy, or simply for convenience. Bearing this in mind, there is increased interest in autonomous devices, such as vehicles that do not require a human driver, as well as robots applied to agricultural activities and military actions. However, the use of autonomous technologies raises three issues: mapping the environment, localization, and navigation [1].

The solutions for mapping, as described in [2], can be grouped into two paradigms. In the first, the geometric paradigm, the entire space to be explored is represented in a coordinate system. In the second, the topological paradigm, the environment is represented as a graph, whose vertices are places and whose edges represent possible paths between them. Both methods are equally efficient; however, the topological approach uses smaller, simpler maps, and therefore fewer computational resources are needed. Moreover, it is possible to create a hybrid solution by combining elements of both paradigms.
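The topological paradigm described above can be sketched as a plain graph: places as vertices, traversable paths as edges, and route planning as a shortest-path search. The place names and map below are illustrative, not taken from the paper.

```python
from collections import deque

# Minimal topological map: each vertex lists the vertices it connects to.
# Place names are hypothetical examples.
topological_map = {
    "lab":      ["corridor"],
    "corridor": ["lab", "hall", "office"],
    "hall":     ["corridor", "exit"],
    "office":   ["corridor"],
    "exit":     ["hall"],
}

def shortest_route(graph, start, goal):
    """Breadth-first search: fewest-hops route between two places."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # goal unreachable

print(shortest_route(topological_map, "lab", "exit"))
# → ['lab', 'corridor', 'hall', 'exit']
```

Because the map stores only connectivity, not metric coordinates, it stays small regardless of the physical size of the environment, which is the efficiency advantage mentioned above.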

In terms of localization and navigation, Global Positioning Systems (GPS) are widely used. Although GPS has a good degree of precision, its use is limited to outdoor environments, and even then common obstacles in an urban environment, such as trees and buildings, may affect the accuracy of the information obtained [3]. Various solutions have been proposed to overcome these problems. Some of them depend on local infrastructure, such as Wi-Fi, Bluetooth, and ultrasound components, among others. Nevertheless, these local infrastructures can suffer interference from the environment, from scattered objects and from environmental surfaces, which can impair navigation. The authors in [4] proposed a Wi-Fi-based solution to facilitate the calibration of the system, but errors still occurred, and the authors in [5] suggested a combination of ultrasonic sensors and laser distance meters, even though these have a relatively high cost.

Other solutions that are independent of local infrastructure, such as those based on image processing, have been widely studied. Some of these have obtained very precise results for various applications, such as [6], which uses image processing to detect and read speed limits on road signs, or [7], which uses computer vision to classify faults in samples of goat leather, both with excellent reliability rates.

Thus, based on the above outcomes, it can be inferred that consistent results can also be obtained for robot localization; this is confirmed by relevant publications in the literature. The authors in [8] used machine learning to perform autonomous navigation of a robot, in which the environment was perceived through six ultrasonic sensors; the neural network was able to avoid harmful interferences and good results were obtained. The authors in [9] used deep learning to detect pedestrians and, with Convolutional Neural Networks (CNNs), obtained results that were precise and fast enough to be used in robot navigation. Finally, the authors in [10] used a CNN to locate elements in a robot football competition, with good results.

Filtering the search for works on robot localization using "computer vision" as a keyword gives results that clearly indicate the need for computational elements with high processing power. In less recent publications, authors chose to load portable computers onto the robots, as can be seen in several articles [11], [12], [13], [14]. In more recent works, which already apply newer technologies, some authors chose to use RGB cameras and more complex devices such as the Kinect, as in [15], or to use video cards with high image processing power, as presented in [16].

Even with the good quality of the results obtained in the studies mentioned above, three aspects must be highlighted. The first is the weight of a portable computer added to the robot, which makes small projects or aerial devices, such as drones, impracticable. The second is the cost of the components used: a high-powered video card for image processing can cost more than a thousand dollars. The third is battery life, since heavy processing requires a considerable amount of power.

Thus a trade-off arises: results can be improved with deep learning, but this increases battery consumption because more powerful hardware is needed, which is critical in small mobile robots. On the other hand, a larger battery increases autonomy but decreases mobility by making the equipment heavier and, in the case of drones for example, makes navigation control complex or even impossible.

This work addresses these problems by proposing an approach based on an IoT (Internet of Things) system, designed so that the robot sends information in real time to a localization web service, thereby offloading the processing to a cloud solution and leaving only communication and control with the robot itself. The robot navigation system consists of computer vision processing, which incorporates a topological mapping method and CNNs, as well as machine learning techniques. In this work, images captured by a GoPro camera and by an omnidirectional camera were used.

The development of an IoT system allows robot navigation to be performed remotely, as well as making better use of the robotic hardware and giving it greater autonomy. Results have shown that the CNN is an efficient technique for the tasks of localization and, consequently, navigation of mobile robots. The CNN achieved 100% Accuracy and F1-Score in its combinations with all the machine learning methods used, on images acquired from a GoPro camera.

Thus, the contributions of this work are:

    It decreases the amount of equipment coupled to the robot, making it lighter;

    It allows the use of a Graphics Processing Unit (GPU) for IoT-based image processing and analysis;

    It generates a scalable, low-cost solution for robot localization.

The low cost is due to the fact that a dedicated GPU is not necessary for each robot, since this feature is provided by the framework, which allows several devices to use the application server simultaneously. In addition, because the robot does not require an on-board GPU, we also eliminate the need for a dedicated battery to power it. This aspect also contributes to the low cost of the solution, besides making the vehicle lighter.

Therefore, all the contributions presented form the basis of the innovation offered by our work, making the approach an efficient and scalable mechanism for the robot localization task.

This article is structured as follows. Section 2 gives a brief summary of the machine learning techniques employed. Section 3 presents the main features of CNNs and their application in our approach. Section 4 details the methodology adopted. Section 5 reports on the implementation of the IoT system created to perform the remote navigation of the robot. Section 6 presents the results, emphasizing and discussing the best values. Finally, Section 7 presents the conclusion and the future work envisaged.


Overview of machine learning methods

The classification tasks can be carried out from the attributes returned by the CNN. In this section, the six machine learning methods used in this work are described.

The Bayesian Classifier is a machine learning technique based on Bayes Decision Theory. It is a probabilistic, supervised method used to categorize samples according to the probability of each one belonging to a certain class [17]. The Bayesian classifier labels each sample with the class of highest posterior probability given its attributes.
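As a concrete illustration of the decision rule above, the sketch below implements a Gaussian Bayesian classifier: each sample is assigned to the class maximizing the posterior P(class | x) ∝ P(x | class) P(class), with class-conditional densities modeled as independent Gaussians. The data is synthetic; in the paper the features come from a CNN.

```python
import numpy as np

class GaussianBayes:
    """Naive Gaussian Bayesian classifier: argmax of log-posterior per class."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mean = {c: X[y == c].mean(axis=0) for c in self.classes}
        self.var = {c: X[y == c].var(axis=0) + 1e-9 for c in self.classes}
        self.prior = {c: np.mean(y == c) for c in self.classes}
        return self

    def predict(self, X):
        def log_post(c):
            # Gaussian log-likelihood per feature, summed, plus log prior
            ll = -0.5 * (np.log(2 * np.pi * self.var[c])
                         + (X - self.mean[c]) ** 2 / self.var[c]).sum(axis=1)
            return ll + np.log(self.prior[c])
        scores = np.stack([log_post(c) for c in self.classes], axis=1)
        return self.classes[scores.argmax(axis=1)]

# Two well-separated synthetic clusters standing in for two "places".
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
model = GaussianBayes().fit(X, y)
print(model.predict(np.array([[0.2, -0.1], [4.8, 5.3]])))  # → [0 1]
```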

Overview of Convolutional Neural Network

In this section, a brief summary about the essential building blocks of modern CNN architectures is given. At the end of this section there is a description of how to extract features using pre-trained CNNs.
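The essential building blocks mentioned here can be sketched in a few lines of numpy: a convolution, a ReLU activation, and a max-pooling step. The image, kernel and sizes below are toy values chosen for illustration only.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling over size x size windows."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 "image"
kernel = np.array([[1., 0.], [0., 1.]])           # toy 2x2 filter
features = max_pool(relu(conv2d(image, kernel)))
print(features.shape)  # 6x6 -> conv gives 5x5 -> 2x2 pooling gives (2, 2)
```

In a pre-trained CNN used for Transfer Learning, many stacked layers of exactly these operations are applied, and the activations of a late layer are taken as the feature vector for the external classifiers.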

Methodology

Fig. 1 presents the flowchart of the proposed methodology using both robotic set-ups. The initial step is to capture the image where the vehicle is located and then this image is sent to the CNN to extract the attributes, which are the inputs for the machine learning algorithms to carry out the classification tasks, thus enabling robot navigation based on the information of the topological map.
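The flow described above can be summarized as a short pipeline skeleton. All function names and return values here are illustrative stand-ins, not the paper's API: each stub marks one stage of Fig. 1 (capture, CNN feature extraction, classification, map-based navigation).

```python
# Hypothetical stubs for the four stages of the methodology's pipeline.
def capture_image():
    return [0.0] * 64          # stand-in for a camera frame

def cnn_features(image):
    return image[:16]          # stand-in for transfer-learning features

def classify_place(features):
    return "corridor"          # stand-in for a trained classifier

def next_move(place, topological_map, goal):
    # Stand-in: pick any neighbor; a real system would plan a route on the map.
    return topological_map[place][0]

topological_map = {"corridor": ["hall"], "hall": ["corridor"]}
place = classify_place(cnn_features(capture_image()))
print(place, "->", next_move(place, topological_map, "hall"))
```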

Figs. 2(a) and 2(b) present the robot equipped with the GoPro camera and with a camera adapted with

IoT System for robot localization

The IoT system created to carry out the remote navigation of the robot, from the images captured and sent by the robot, was called Lapisco Image Interface for Development of Applications (LINDA). This system is composed of two subsystems. The first one was developed in Java language and implements a web service, which is responsible for exchanging information between the device and the cloud platform. The application is integrated with a Free Relational Database Management System (RDBMS), the
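The request/response exchange between robot and cloud can be illustrated with a minimal HTTP handler: the robot POSTs a captured image, and the server answers with the predicted place. Note that the actual LINDA web service is written in Java; this Python stand-in only sketches the contract, and `classify()` is a hypothetical stub, not the paper's classifier.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def classify(image_bytes):
    """Hypothetical stub for the server-side CNN + classifier pipeline."""
    return {"place": "corridor", "confidence": 1.0}

class LocalizationHandler(BaseHTTPRequestHandler):
    """Accepts a POSTed image and replies with a JSON localization result."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        image_bytes = self.rfile.read(length)
        body = json.dumps(classify(image_bytes)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # HTTPServer(("", 8080), LocalizationHandler).serve_forever()  # not run here
    print(classify(b"fake-image-bytes")["place"])  # → corridor
```

Keeping the handler stateless, as above, is what lets several robots share one application server: each request carries everything the server needs.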

Results

In this section, we present the results of the machine learning methods on the three data sets generated by the CNN. All data sets were equally preprocessed.

The patterns were partitioned randomly into 10 groups, four fifths for training and the remainder for testing. The training set was then normalized (zero-mean and unit variance) and the test sets were also normalized using the same normalization measures as the training set. The proportions of the classes were kept balanced in both training
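The evaluation protocol described above can be sketched as follows: a random split into training (four fifths) and test (the remainder), then z-score normalization of the test set using statistics computed on the training set only. The feature matrix below is synthetic; its shape is illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(loc=3.0, scale=2.0, size=(100, 8))  # stand-in feature matrix

# Random partition: 4/5 training, 1/5 testing.
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
train, test = X[idx[:split]], X[idx[split:]]

# Normalization measures come from the training set only,
# and the same measures are applied to the test set.
mu, sigma = train.mean(axis=0), train.std(axis=0)
train_n = (train - mu) / sigma
test_n = (test - mu) / sigma

print(np.allclose(train_n.mean(axis=0), 0, atol=1e-8),
      np.allclose(train_n.std(axis=0), 1, atol=1e-8))  # → True True
```

Reusing the training statistics on the test set, rather than recomputing them, prevents information from the test set leaking into the preprocessing.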

Conclusion

In this article, an approach for mobile robots localization is proposed. The approach uses IoT to create a system capable of performing this activity online. A topological map, CNN and machine learning techniques are used so that the robot can navigate through computer vision. Based on the results obtained, CNN is confirmed as a valuable alternative for the localization and navigation functions of mobile robots, since it reached 100% accuracy and F1-Score in the combinations of its

Acknowledgments

VHCA received support from the Brazilian National Council for Research and Development (CNPq, Grant 304315/2017-6 and 430274/2018-1).

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.


References (70)

  • Sokolova, M., et al., A systematic analysis of performance measures for classification tasks, Inf. Process. Manage. (2009)
  • Villani, J., et al., A machine learning approach to identify NIH-funded applied prevention research, Am. J. Prev. Med. (2018)
  • Pereira, A.A., et al., Platform for controlling and getting data from network connected drones in indoor environments, Future Gener. Comput. Syst. (2019)
  • Grieco, L., et al., IoT-aided robotics applications: Technological implications, target domains and open issues, Comput. Commun. (2014)
  • S.P.P. da Silva, L.B. Marinho, J.S. Almeida, P.P.R. Filho, A Novel Approach for Mobile Robot Localization in...
  • Morales, Y., et al., DGPS, RTK-GPS and StarFire DGPS performance under tree shading environments, Proc. IEEE Int. Conf. Integr. Technol. (2007)
  • M. Ocana, L.M. Bergasa, M.A. Sotelo, J. Nuevo, R. Flores, Indoor Robot Localization System Using WiFi Signal Measure...
  • Ko, N.Y., et al., Fusing range measurements from ultrasonic beacons and a laser range finder for localization of a mobile robot, Sensors (2015)
  • Gomes, S.L., et al., Embedded real-time speed limit sign recognition using image processing and machine learning techniques, Neural Comput. Appl. (2017)
  • R.F. Pereira, M.L.D. Dias, C.M. de S. Medeiros, P.P.R. Filho, Classification of Failures in Goat Leather Samples Using...
  • X. Song, H. Fang, X. Jiao, Y. Wang, Autonomous mobile robot navigation using machine learning, in: 2012 IEEE 6th...
  • D. Ribeiro, A. Mateus, P. Miraldo, J.C. Nascimento, A real-time Deep Learning pedestrian detector for robot navigation,...
  • S. Luo, H. Lu, J. Xiao, Q. Yu, Z. Zheng, Robot detection and localization based on deep learning, in: 2017 Chinese...
  • I. Ulrich, I. Nourbakhsh, Appearance-based place recognition for topological localization, in: Proceedings 2000 ICRA...
  • Gaspar, J., et al., Vision-based navigation and environmental representations with an omnidirectional camera, IEEE Trans. Robot. Autom. (2000)
  • N. Winters, J. Gaspar, G. Lacey, J. Santos-Victor, Omni-directional vision for robot navigation, in: Proceedings IEEE...
  • Goedemé, T., et al., Omnidirectional vision based topological navigation, Int. J. Comput. Vis. (2007)
  • Theodoridis, S., et al., Pattern Recognition (fourth ed.) (2008)
  • Breiman, L., Random forests, Mach. Learn. (2001)
  • H. Zhang, X. Dai, F. Sun, J. Yuan, Terrain classification in field environment based on Random Forest for the mobile...
  • Fukunaga, K., et al., A branch and bound algorithm for computing k-nearest neighbors, IEEE Trans. Comput. (1975)
  • M.A. Markom, A.H. Adom, S.A.A. Shukor, N.A. Rahim, E.S.M.M. Tan, A. Irawan, Scan matching and KNN classification for...
  • Vapnik, V.N., Statistical Learning Theory (1998)
  • Duan, K., et al., Which is the best multiclass SVM method? An empirical study
  • Haykin, S., Neural Networks and Learning Machines (2008)

    Carlos M.J.M. Dourado Junior received his Master’s and degrees Teleinformatics Engineering from the Federal University of Ceará (2009) and Bachelor’s degree in Electronic Engineering from the University of Fortaleza (2004). He is currently an effective professor at the Federal Institute of Education, Science and Technology of Ceará (IFCE) and IT Director at IFCE. He is currently a member of the Laboratory of Image Processing and Computational Simulation (LAPISCO) and has research in the area of classification of pulmonary nodules through techniques of Digital Image Processing and Artificial Intelligence.

    Suane P.P. da Silva received her Master’s (2018) and Bachelor’s (2016) degrees in Computer Science from the Federal Institute of Education, Science and Technology of Ceará (IFCE). She is currently a member of the Laboratory of Image Processing and Computational Simulation (LAPISCO) and has research in the area of classification of pulmonary nodules through techniques of Digital Image Processing and Artificial Intelligence.

    Raul Victor M. da Nóbrega received his Master’s (2018) and Bachelor’s (2016) degrees in Computer Science from the Federal Institute of Education, Science and Technology of Ceará (IFCE). He is currently a member of the Laboratory of Image Processing and Computational Simulation (LAPISCO) and has research in the area of classification of pulmonary nodules through techniques of Digital Image Processing and Artificial Intelligence.

    Antônio C.S. Barros received the Ph.D. degree in Computer Science from University of Fortaleza, Fortaleza, Brazil, in 2017, and he is a professor at UNILAB, Ceara, Brazil. His current research interest is applications in Computational Vision, mainly using medical images.

    Arun K. Sangaiah received his Ph.D. from VIT University and Master of Engineering from Anna University, in 2007 and 2014, respectively. He is currently Associate Professor at School of Computing Science and Engineering, VIT University, Vellore, India. He was a visiting professor at School of computer engineering at Nanhai Dongruan Information Technology Institute in China (September. 2016–Jan. 2017). He has published more than 130 scientific papers in high standard SCI journals like IEEE-TII, IEEE-Communication Magazine, IEEE systems, IEEE-IoT, IEEE TSC, IEEE ETC and etc. In addition he has authored/edited over 8 books (Elsevier, Springer, Wiley, Taylor and Francis) and 50 journal special issues such as IEEE-Communication Magazine, IEEE-IoT, IEEE consumer electronic magazine etc. His area of interest includes software engineering, computational intelligence, wireless networks, bio-informatics, and embedded systems. Also, he was registered a one Indian patent in the area of Computational Intelligence. Besides, Prof. Sangaiah is responsible for Editorial Board Member/Associate Editor of various international SCI journals.

    Pedro Pedrosa Rebouças Filho received the Ph.D. degree in Teleinformatics Engineering from Federal University of Ceara, Fortaleza, Brazil, in 2013, and is a professor at Federal Institute of Science and Technology, Maracanau, Ceara, Brazil. His current research interest is applications in Computational Vision, mainly using medical images.

    Victor Hugo C. de Albuquerque has a Ph.D. in Mechanical Engineering with emphasis on Materials from the Federal University of Paraíba (UFPB, 2010), an MSc in Teleinformatics Engineering from the Federal University of Ceará (UFC, 2007), and he graduated in Mechatronics Technology at the Federal Center of Technological Education of Ceará (CEFETCE, 2006). He is currently Assistant VI Professor of the Graduate Program in Applied Informatics at the University of Fortaleza (UNIFOR). He has experience in Computer Systems, mainly in the research fields of: Applied Computing, Intelligent Systems, Visualization and Interaction, with specific interest in Pattern Recognition, Artificial Intelligence, Image Processing and Analysis, as well as Automation with respect to biological signal/image processing, image segmentation, biomedical circuits and human/brainmachine interaction, including Augmented and Virtual Reality Simulation Modeling for animals and humans. Additionally, he has research at the microstructural characterization field through the combination of non-destructive techniques with signal/image processing and analysis, and pattern recognition.
