Vulnerability of Deep Forest to Adversarial Attacks | IEEE Journals & Magazine | IEEE Xplore


Abstract:

Machine learning classifiers are vulnerable to adversarial examples: carefully crafted inputs designed to degrade their classification performance. Recently, a new machine learning classifier, deep forest, was proposed; it is composed of forests of decision trees and is inspired by the architecture of deep neural networks. Since deep neural networks are known to be vulnerable to adversarial attacks, in this work we launch a series of adversarial attacks on deep forest, including black-box and white-box attacks, to assess its vulnerability for the first time. Prior work has shown that adversarial examples crafted on one model transfer to models trained with different learning techniques. We demonstrate empirically that deep forest is vulnerable to such cross-technique transferability attacks. To improve its performance under adversarial settings, our work also includes experiments demonstrating that training non-differentiable models such as deep forest on randomly or adversarially perturbed inputs increases their robustness to these attacks. Furthermore, we propose a heuristic white-box method for attacking deep forest, built on a faster and more efficient decision tree attack algorithm. By attacking both deep forest components, namely the cascade forest and the multi-grained scanning layer, we show that deep forest is susceptible to the proposed white-box adversarial attack.
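The abstract mentions training on randomly perturbed inputs as one way to harden a non-differentiable model. A minimal sketch of that idea, assuming uniformly bounded noise and features scaled to [0, 1] (function and parameter names here are illustrative, not from the paper):

```python
import numpy as np

def augment_with_noise(X, y, n_copies=2, epsilon=0.1, seed=0):
    """Augment a training set with randomly perturbed copies of each input.

    Training a non-differentiable model such as deep forest on the
    augmented set is the random-perturbation variant of the robustness
    training described in the abstract. `epsilon` bounds the uniform
    noise added to each feature; this is a sketch, not the authors'
    exact procedure.
    """
    rng = np.random.default_rng(seed)
    X_parts, y_parts = [X], [y]
    for _ in range(n_copies):
        noise = rng.uniform(-epsilon, epsilon, size=X.shape)
        # Clip so perturbed inputs stay in the assumed [0, 1] feature range.
        X_parts.append(np.clip(X + noise, 0.0, 1.0))
        # Labels are unchanged: the perturbation is assumed small enough
        # not to alter the true class.
        y_parts.append(y)
    return np.concatenate(X_parts), np.concatenate(y_parts)
```

Any forest-style classifier can then be fit on the returned arrays in place of the clean training set.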
Page(s): 5464 - 5475
Date of Publication: 17 May 2024

