Abstract
Tsetlin Machines (TMs) capture patterns using conjunctive clauses in propositional logic, which facilitates interpretation. However, recent TM-based approaches mainly rely on inspecting the full range of clauses individually. Such inspection does not necessarily scale to complex prediction problems that require a large number of clauses. In this paper, we propose closed-form expressions for understanding why a TM model makes a specific prediction (local interpretability). Additionally, the expressions capture the most important features of the model overall (global interpretability). We further introduce expressions for measuring the importance of feature value ranges for continuous features, making it possible to capture the role of features in real time, as well as during the learning process as the model evolves. We compare our proposed approach against SHAP and state-of-the-art interpretable machine learning techniques.
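To make the idea concrete, the following is a minimal illustrative sketch, not the paper's exact closed-form expressions: a global importance score for each feature obtained by aggregating the integer weights of the clauses whose literals mention that feature, signed by clause polarity. The clause encoding (literal indices `0..2n-1`, where index `k + n` denotes the negation of feature `k`) and the function `global_importance` are assumptions for illustration only.

```python
import numpy as np

def global_importance(clauses, weights, polarities, n_features):
    """Aggregate signed clause weights per feature (illustrative only).

    clauses: list of sets of literal indices in 0..2n-1, where index
             k and k + n_features both refer to feature k (plain / negated).
    weights: integer clause weights (as in the integer-weighted TM).
    polarities: +1 for positive-polarity clauses, -1 for negative.
    """
    score = np.zeros(n_features)
    for lits, w, p in zip(clauses, weights, polarities):
        for lit in lits:
            # A negated literal maps back to the same underlying feature.
            score[lit % n_features] += p * w
    return score

# Hypothetical learned clauses over 3 features:
clauses = [{0, 3}, {1}, {2}]
imp = global_importance(clauses, weights=[3, 2, 1],
                        polarities=[+1, +1, -1], n_features=3)
```

Feature 0 appears twice in the first clause (plain and negated form), so it accumulates the clause weight twice; the negative-polarity clause contributes a negative score to feature 2.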
Notes
- 1.
We will understand black-box models as models which lack intrinsic interpretability features, such as ensemble approaches, neural networks, and random forests.
- 2.
Any systematic division of clauses can be used as long as the positive and negative polarity sets have equal cardinality.
- 3.
The IWTM hyperparameters were selected by sampling the number of clauses uniformly at random three times from the range 50–500, with the threshold set to twice the number of clauses. The most accurate of the three configurations was chosen.
- 4.
DNN with 10 hidden layers of 100 ReLU units each, trained with the Adam optimizer.
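The hyperparameter protocol in note 3 can be sketched as follows. This is an assumption-laden illustration: `train_and_score` is a hypothetical callback standing in for actual IWTM training and evaluation, not the authors' code.

```python
import random

def pick_iwtm_config(train_and_score, n_trials=3, lo=50, hi=500, seed=0):
    """Sample the clause count uniformly n_trials times and keep the
    configuration with the best accuracy (per note 3; illustrative only)."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        n_clauses = rng.randint(lo, hi)  # clauses drawn uniformly in [50, 500]
        threshold = 2 * n_clauses        # threshold fixed at twice the clause count
        acc = train_and_score(n_clauses, threshold)
        if best is None or acc > best[0]:
            best = (acc, n_clauses, threshold)
    return best

# Dummy scorer in place of real IWTM training, for demonstration:
best_acc, n, T = pick_iwtm_config(lambda c, t: c / 500.0)
```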
References
Abeyrathna, K.D., Granmo, O.C., Goodwin, M.: Extending the Tsetlin Machine with integer-weighted clauses for increased interpretability. IEEE Access 9, 8233–8248 (2020)
Agarwal, R., Frosst, N., Zhang, X., Caruana, R., Hinton, G.E.: Neural additive models: interpretable machine learning with neural nets (2020)
Granmo, O.C.: The Tsetlin Machine - a game theoretic bandit driven approach to optimal pattern recognition with propositional logic (2018). https://arxiv.org/abs/1804.01508
Lundberg, S., Lee, S.I.: A unified approach to interpreting model predictions (2017)
Pace, K., Barry, R.: Sparse spatial autoregressions. Stat. Probab. Lett. 33(3), 291–297 (1997)
Pozzolo, A.D., Bontempi, G.: Adaptive machine learning for credit card fraud detection (2015)
Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1(5), 206–215 (2019)
Tsetlin, M.L.: On behaviour of finite automata in random medium. Avtomat. i Telemekh 22(10), 1345–1354 (1961)
Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier. arXiv preprint arXiv:1602.04938 (2016)
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Blakely, C.D., Granmo, OC. (2021). Closed-Form Expressions for Global and Local Interpretation of Tsetlin Machines. In: Fujita, H., Selamat, A., Lin, J.CW., Ali, M. (eds) Advances and Trends in Artificial Intelligence. Artificial Intelligence Practices. IEA/AIE 2021. Lecture Notes in Computer Science(), vol 12798. Springer, Cham. https://doi.org/10.1007/978-3-030-79457-6_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-79456-9
Online ISBN: 978-3-030-79457-6