Research Article · DOI: 10.1145/3009977.3010003

Reinforced random forest

Published: 18 December 2016

ABSTRACT

Reinforcement learning can improve classification accuracy, but its use with the random forest classifier remains relatively unexplored. We propose a reinforced random forest (RRF) classifier that exploits reinforcement learning to improve classification accuracy. The algorithm is initialized with a forest, and the entire training set is then tested against this initial forest. To reinforce learning, we use the misclassified data points to grow a certain number of new trees. A subset of the new trees is added to the existing forest using a novel graph-based approach, and we show that adding these trees ensures an improvement in classification accuracy. This process is repeated iteratively until the classification accuracy saturates. The proposed RRF incurs a low computational burden. We achieve at least a 3% improvement in F-measure over the standard random forest on three breast cancer datasets, and results on benchmark datasets show a significant reduction in average classification error.
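The abstract describes the RRF loop only at a high level. Below is a minimal Python sketch of that loop, built on scikit-learn decision trees. All names and details here are illustrative assumptions, not the authors' implementation: the oversampling weight for misclassified points is arbitrary, and a simple accuracy filter stands in for the paper's graph-based tree selection.

```python
# Illustrative sketch of the reinforced random forest (RRF) loop from the
# abstract. Assumes X is a NumPy feature matrix and y holds non-negative
# integer class labels. The graph-based tree selection of the paper is
# replaced here by a simple accuracy check on the misclassified points.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def majority_vote(trees, X):
    """Predict by majority vote over a list of fitted trees."""
    votes = np.stack([t.predict(X) for t in trees])  # (n_trees, n_samples)
    return np.apply_along_axis(
        lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)

def reinforced_random_forest(X, y, n_init=50, n_new=10, max_iter=20, seed=0):
    rng = np.random.default_rng(seed)

    # Initial forest: bagged decision trees on bootstrap samples.
    trees = []
    for _ in range(n_init):
        idx = rng.integers(0, len(X), len(X))
        trees.append(
            DecisionTreeClassifier(max_features="sqrt").fit(X[idx], y[idx]))

    prev_acc = 0.0
    for _ in range(max_iter):
        pred = majority_vote(trees, X)
        wrong = np.flatnonzero(pred != y)          # misclassified points
        acc = 1.0 - len(wrong) / len(y)
        if len(wrong) == 0 or acc <= prev_acc:     # accuracy saturated
            break
        prev_acc = acc

        # Grow candidate trees that emphasize the hard examples by
        # bootstrapping from a pool that repeats them (weight 3 is an
        # arbitrary illustrative choice).
        pool = np.concatenate([np.arange(len(X)), np.repeat(wrong, 3)])
        for _ in range(n_new):
            idx = rng.choice(pool, size=len(X))
            cand = DecisionTreeClassifier(max_features="sqrt").fit(X[idx], y[idx])
            # Stand-in for the paper's graph-based selection: keep a
            # candidate only if it helps on the misclassified points.
            if cand.score(X[wrong], y[wrong]) > 0.5:
                trees.append(cand)
    return trees

# Usage sketch:
#   trees = reinforced_random_forest(X_train, y_train)
#   y_pred = majority_vote(trees, X_test)
```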


Published in

ICVGIP '16: Proceedings of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing
December 2016, 743 pages
ISBN: 978-1-4503-4753-2
DOI: 10.1145/3009977

      Copyright © 2016 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



Acceptance Rates

ICVGIP '16 paper acceptance rate: 95 of 286 submissions, 33%. Overall acceptance rate: 95 of 286 submissions, 33%.
