Using Random Forest Distances for Outlier Detection

  • Conference paper
  • In: Image Analysis and Processing – ICIAP 2022 (ICIAP 2022)

Abstract

In recent years, a great variety of outlier detectors have been proposed in the literature, many of which are based on pairwise distances or derived concepts. In such methods, however, most of the effort has been devoted to the detection mechanism, while little attention has been paid to the distance measure: in most cases the basic Euclidean distance is used. In the clustering field, by contrast, data-dependent measures have proven very useful, especially those based on Random Forests: Random Forests are partitioners of the space that naturally encode the relation between two objects. In the outlier detection field, these informative distances have received scarce attention. This manuscript aims to fill this gap by studying the suitability of such measures for the identification of outliers. In our scheme, we build an unsupervised Random Forest model, from which we extract pairwise distances; these distances are then input to an outlier detector. In particular, we study the impact of several Random Forest-based distances, including advanced and recent ones, on different outlier detectors. We thoroughly evaluate our methodology on nine benchmark datasets for outlier detection, focusing on different aspects of the pipeline, such as the parametrization of the forest, the type of distance-based outlier detector, and, most importantly, the impact of the adopted distance.
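
To make the scheme concrete, the following is a minimal sketch of one possible instantiation of such a pipeline, not the authors' exact implementation: an unsupervised forest is obtained with scikit-learn's RandomTreesEmbedding, the classic Breiman proximity (the fraction of trees in which two points fall in the same leaf) is converted into a distance, and the resulting matrix is fed to a distance-based detector (here LOF with a precomputed metric). The toy data, forest parameters, and choice of LOF are illustrative assumptions.

```python
# A minimal sketch of an RF-distance outlier detection pipeline
# (an assumed instantiation, not the paper's exact implementation).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomTreesEmbedding
from sklearn.neighbors import LocalOutlierFactor

# Toy data: one cluster plus a few scattered points acting as outliers.
X_in, _ = make_blobs(n_samples=200, centers=1, cluster_std=0.5, random_state=0)
X_out = np.random.RandomState(0).uniform(-6, 6, size=(10, 2))
X = np.vstack([X_in, X_out])

# Unsupervised forest: totally random trees, no labels required.
forest = RandomTreesEmbedding(n_estimators=100, max_depth=5, random_state=0)
forest.fit(X)

# Leaf index of every point in every tree: shape (n_samples, n_trees).
leaves = forest.apply(X)

# Breiman proximity: fraction of trees in which two points fall in the
# same leaf; the distance is its complement.
same_leaf = (leaves[:, None, :] == leaves[None, :, :])
proximity = same_leaf.mean(axis=2)
distance = 1.0 - proximity

# Distance-based outlier detector on the precomputed RF distance matrix.
lof = LocalOutlierFactor(n_neighbors=20, metric="precomputed")
labels = lof.fit_predict(distance)      # -1 marks predicted outliers
scores = -lof.negative_outlier_factor_  # higher means more outlying
print("flagged outliers:", np.flatnonzero(labels == -1))
```

Swapping the proximity computation for a different Random Forest-based distance, or the detector for another distance-based method, changes only the two corresponding blocks above; the rest of the pipeline stays identical.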


Notes

  1. Available at https://archive.ics.uci.edu/ml/index.php.

  2. Note that this reasoning also extends to Zhu2 and Zhu3: the resulting matrix may not be sparse, but it may contain many low similarity values, thus affecting the final outlier detection step (see the toy illustration after these notes).
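
As a toy numeric illustration (ours, not taken from the paper), assume distances are derived as d = 1 − s from similarities s in [0, 1]: when most entries of s are close to zero, most distances concentrate near the maximum, and nearest-neighbour distances carry little discriminative signal for a distance-based detector.

```python
# Toy illustration (an assumption of this sketch, not from the paper):
# a similarity matrix dominated by near-zero entries yields distances
# d = 1 - s that pile up near 1, so k-NN distances barely vary.
import numpy as np

rng = np.random.default_rng(0)
n = 100
s = rng.uniform(0.0, 0.05, size=(n, n))  # mostly very low similarities
s = (s + s.T) / 2                        # symmetrise
np.fill_diagonal(s, 1.0)                 # self-similarity is maximal
d = 1.0 - s

# Distance to the 5 nearest neighbours of every point (excluding self).
knn = np.sort(d, axis=1)[:, 1:6]
print(f"5-NN distances: mean={knn.mean():.3f}, std={knn.std():.4f}")
# Nearly identical neighbour distances => weak ranking signal for detectors.
```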

References

  1. Aryal, S., Ting, K.M., Washio, T., Haffari, G.: Data-dependent dissimilarity measure: an effective alternative to geometric distance measures. Knowl. Inf. Syst. 53(2), 479–506 (2017). https://doi.org/10.1007/s10115-017-1046-0

  2. Aryal, S., Ting, K.M., Washio, T., Haffari, G.: A comparative study of data-dependent approaches without learning in measuring similarities of data objects. Data Min. Knowl. Discov. 34(1), 124–162 (2019). https://doi.org/10.1007/s10618-019-00660-0

  3. Bicego, M., Escolano, F.: On learning random forests for random forest clustering. In: Proceedings of the International Conference on Pattern Recognition, pp. 3451–3458 (2020). https://doi.org/10.1109/ICPR48806.2021.9412014

  4. Bicego, M., Cicalese, F., Mensi, A.: RatioRF: a novel measure for random forest clustering based on the Tversky’s ratio model. IEEE Trans. Knowl. Data Eng. 1 (2021). https://doi.org/10.1109/TKDE.2021.3086147

  5. Breiman, L.: Random forests. Mach. Learn. 45, 5–32 (2001). https://doi.org/10.1023/A:1010933404324

  6. Breunig, M.M., Kriegel, H.P., Ng, R.T., Sander, J.: LOF: identifying density-based local outliers. In: Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, pp. 93–104 (2000). https://doi.org/10.1145/335191.335388

  7. Campos, G.O., et al.: On the evaluation of unsupervised outlier detection: measures, datasets, and an empirical study. Data Min. Knowl. Discov. 30(4), 891–927 (2016). https://doi.org/10.1007/s10618-015-0444-8

  8. Demšar, J.: Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 7, 1–30 (2006)

  9. Englund, C., Verikas, A.: A novel approach to estimate proximity in a random forest: an exploratory study. Expert Syst. Appl. 39(17), 13046–13050 (2012). https://doi.org/10.1016/j.eswa.2012.05.094

  10. Geurts, P., Ernst, D., Wehenkel, L.: Extremely randomized trees. Mach. Learn. 63(1), 3–42 (2006). https://doi.org/10.1007/s10994-006-6226-1

  11. Goix, N., Drougard, N., Brault, R., Chiapino, M.: One class splitting criteria for random forests. In: Zhang, M.L., Noh, Y.K. (eds.) Proceedings of 9th Asian Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 77, pp. 343–358 (2017)

  12. Hawkins, D.M.: Identification of Outliers, vol. 11. Springer, Dordrecht (1980). https://doi.org/10.1007/978-94-015-3994-4

  13. Kaufman, L., Rousseeuw, P.J.: Finding Groups in Data: An Introduction to Cluster Analysis, vol. 344. Wiley, Hoboken (2009). https://doi.org/10.1002/9780470316801

  14. Lin, D.: An information-theoretic definition of similarity. In: Proceedings of the International Conference on Machine Learning, vol. 98, pp. 296–304 (1998)

  15. Liu, F.T., Ting, K.M., Zhou, Z.H.: Isolation forest. In: IEEE International Conference on Data Mining, pp. 413–422 (2008). https://doi.org/10.1109/ICDM.2008.17

  16. Liu, F.T., Ting, K.M., Zhou, Z.H.: Isolation-based anomaly detection. ACM Trans. Knowl. Discov. Data 6(1), 3:1–3:39 (2012). https://doi.org/10.1145/2133360.2133363

  17. Mensi, A., Bicego, M., Tax, D.M.: Proximity isolation forests. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 8021–8028. IEEE (2021). https://doi.org/10.1109/ICPR48806.2021.9412322

  18. Mensi, A., Franzoni, A., Tax, D.M.J., Bicego, M.: An alternative exploitation of isolation forests for outlier detection. In: Torsello, A., Rossi, L., Pelillo, M., Biggio, B., Robles-Kelly, A. (eds.) S+SSPR 2021. LNCS, vol. 12644, pp. 34–44. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-73973-7_4

  19. Shi, T., Horvath, S.: Unsupervised learning with random forest predictors. J. Comput. Graph. Stat. 15(1), 118–138 (2006). https://doi.org/10.1198/106186006X94072

  20. Tax, D.: One-class classification; concept-learning in the absence of counter-examples. Ph.D. thesis, Delft University of Technology (2001)

  21. Ting, K., Zhu, Y., Carman, M., Zhu, Y., Zhou, Z.H.: Overcoming key weaknesses of distance-based neighbourhood methods using a data dependent dissimilarity measure. In: Proceedings of the International Conference on Knowledge Discovery and Data Mining, pp. 1205–1214 (2016). https://doi.org/10.1145/2939672.2939779

  22. Tversky, A.: Features of similarity. Psychol. Rev. 84(4), 327 (1977). https://doi.org/10.1037/0033-295X.84.4.327

  23. Zhu, X., Loy, C., Gong, S.: Constructing robust affinity graphs for spectral clustering. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition, pp. 1450–1457 (2014). https://doi.org/10.1109/CVPR.2014.188

Author information

Corresponding author: Antonella Mensi.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Mensi, A., Cicalese, F., Bicego, M. (2022). Using Random Forest Distances for Outlier Detection. In: Sclaroff, S., Distante, C., Leo, M., Farinella, G.M., Tombari, F. (eds) Image Analysis and Processing – ICIAP 2022. ICIAP 2022. Lecture Notes in Computer Science, vol 13233. Springer, Cham. https://doi.org/10.1007/978-3-031-06433-3_7

  • DOI: https://doi.org/10.1007/978-3-031-06433-3_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-06432-6

  • Online ISBN: 978-3-031-06433-3

  • eBook Packages: Computer Science, Computer Science (R0)
