
JRM Vol.32 No.6 pp. 1200-1210
doi: 10.20965/jrm.2020.p1200
(2020)

Development Report:

Garbage Detection Using YOLOv3 in Nakanoshima Challenge

Jingwei Xue, Zehao Li, Masahito Fukuda, Tomokazu Takahashi, Masato Suzuki, Yasushi Mae, Yasuhiko Arai, and Seiji Aoyagi

Kansai University
3-3-35 Yamate-cho, Suita, Osaka 564-8680, Japan

Received:
July 7, 2020
Accepted:
October 20, 2020
Published:
December 20, 2020
Keywords:
deep learning, object detector
Abstract

Owing to their high accuracy, object detectors based on deep learning are now used in a wide range of situations, including robot demonstration experiments. However, creating training data poses problems: human annotation requires considerable labor, and because recognition accuracy degrades under environmental changes such as lighting, how the training data are provided must be considered carefully. In the Nakanoshima Challenge, an autonomous mobile robot competition, one of the tasks is to detect three types of garbage marked with red labels. In this study, we developed a garbage detector by semi-automating the annotation process, detecting the labels by their color, and by preparing training data under three lighting conditions classified by brightness. We evaluated the recognition accuracy on the university campus and took on the challenge of using the detector in the competition. This paper reports these results.
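The abstract describes semi-automating annotation by detecting the red garbage labels by color. As a rough illustration of that idea, the sketch below uses HSV thresholding in OpenCV to find red regions and emit YOLO-format bounding-box lines; the thresholds, function names, and parameters are illustrative assumptions, not the authors' actual pipeline.

```python
import cv2
import numpy as np

def annotate_red_labels(image_path, class_id=0, min_area=200):
    """Detect red label regions via HSV thresholding and return
    YOLO-format annotation lines: "class cx cy w h" (normalized).
    Thresholds and min_area are illustrative, not from the paper."""
    img = cv2.imread(image_path)
    h_img, w_img = img.shape[:2]
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Red wraps around hue 0 in HSV, so combine two hue ranges.
    mask1 = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
    mask2 = cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))
    mask = cv2.bitwise_or(mask1, mask2)

    # Morphological opening to suppress small noise blobs.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    lines = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < min_area:  # skip tiny false detections
            continue
        cx, cy = (x + w / 2) / w_img, (y + h / 2) / h_img
        lines.append(f"{class_id} {cx:.6f} {cy:.6f} "
                     f"{w / w_img:.6f} {h / h_img:.6f}")
    return lines
```

In a workflow like the one described, boxes generated this way would still be spot-checked and corrected by hand, which is what makes the annotation process semi-automatic rather than fully automatic.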

Detected objects by recognition system with YOLOv3
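For context, the following is a minimal sketch of running a trained YOLOv3 model through OpenCV's DNN module to produce detections like those shown in the figure above. The config/weight file names and the confidence and NMS thresholds are placeholders; the paper's trained model is not specified here.

```python
import cv2
import numpy as np

# Hypothetical file names; the trained garbage-detector weights
# from the paper are not publicly distributed.
net = cv2.dnn.readNetFromDarknet("yolov3-garbage.cfg",
                                 "yolov3-garbage.weights")
out_layers = net.getUnconnectedOutLayersNames()

img = cv2.imread("scene.jpg")
h, w = img.shape[:2]
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416),
                             swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(out_layers)

boxes, confidences, class_ids = [], [], []
for out in outputs:
    for det in out:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        conf = float(scores[class_id])
        if conf > 0.5:  # illustrative confidence threshold
            cx, cy, bw, bh = det[0:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2),
                          int(bw), int(bh)])
            confidences.append(conf)
            class_ids.append(class_id)

# Non-maximum suppression to drop overlapping duplicate boxes.
keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
for i in np.array(keep).flatten():
    x, y, bw, bh = boxes[i]
    cv2.rectangle(img, (x, y), (x + bw, y + bh), (0, 0, 255), 2)
```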

Cite this article as:
J. Xue, Z. Li, M. Fukuda, T. Takahashi, M. Suzuki, Y. Mae, Y. Arai, and S. Aoyagi, “Garbage Detection Using YOLOv3 in Nakanoshima Challenge,” J. Robot. Mechatron., Vol.32 No.6, pp. 1200-1210, 2020.
