Rule extraction in unsupervised anomaly detection for model explainability: Application to OneClass SVM

https://doi.org/10.1016/j.eswa.2021.116100
Open access under a Creative Commons license

Highlights

  • Rule extraction for unsupervised outlier detection models using OCSVM.

  • Design and evaluation of alternative rule extraction algorithms.

  • XAI metric evaluation: comprehensibility, representativeness, stability and diversity.

  • Quantification of explanation quality with XAI metrics for P@1 rules.

  • Measurement of the kernel's influence on the number of rules generated.

Abstract

OneClass SVM is a popular method for unsupervised anomaly detection. Like many other methods, it suffers from the black-box problem: it is difficult to justify, in an intuitive and simple manner, why the decision frontier identifies data points as anomalous or non-anomalous. This problem has been widely addressed for supervised models, but it remains an uncharted area for unsupervised learning. In this paper, we evaluate several rule extraction techniques over OneClass SVM models and present alternative designs for some of those algorithms. Furthermore, we propose algorithms for computing metrics related to eXplainable Artificial Intelligence (XAI), namely the “comprehensibility”, “representativeness”, “stability” and “diversity” of the extracted rules. We evaluate our proposals on different data sets, including real-world data from industry. Our work thus contributes to extending XAI techniques to unsupervised machine learning models.
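To make the setting concrete, the sketch below (an illustration only, not the paper's extraction algorithm) fits a OneClass SVM on unlabeled data and then approximates its black-box decision frontier with a shallow decision tree, whose root-to-leaf paths read as if/then rules. This surrogate-tree approach is one common baseline for turning an opaque frontier into human-readable rules; the data, feature names, and hyperparameters are placeholders.

```python
# Illustrative sketch: OneClass SVM anomaly detection + a decision-tree
# surrogate whose paths can be read as if/then rules. Not the paper's method.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(loc=0.0, scale=1.0, size=(300, 2))  # unlabeled training data

# OneClass SVM learns a frontier enclosing most of the data; nu upper-bounds
# the fraction of training points treated as outliers.
ocsvm = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(X)
labels = ocsvm.predict(X)  # +1 = inlier, -1 = anomaly

# Surrogate: a depth-limited tree mimics the OCSVM's labels; each
# root-to-leaf path is an interpretable rule over the input features.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
rules = export_text(tree, feature_names=["x0", "x1"])
print(rules)

# Fidelity: how often the rules reproduce the black-box decision.
fidelity = (tree.predict(X) == labels).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
```

The fidelity score measures how faithfully the extracted rules reproduce the original model's decisions, which is one ingredient behind quality metrics such as representativeness.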

Keywords

XAI
OneClass SVM
Unsupervised learning
Rule extraction
Anomaly detection
Metrics


The code and data in this article have been certified as Reproducible by Code Ocean (https://codeocean.com/). More information on the Reproducibility Badge Initiative is available at https://www.elsevier.com/physical-sciences-and-engineering/computer-science/journals.