Timing Attack on Random Forests for Generating Adversarial Examples

Conference paper
Advances in Information and Computer Security (IWSEC 2020)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 12231)


Abstract

The threat of implementation attacks against machine learning has recently become an issue. These attacks include side-channel attacks, which exploit information leaked by the devices on which models are implemented, and fault attacks, which inject faults into those devices using external tools such as lasers. Thus far, these attacks have mainly targeted deep neural networks; however, other popular methods such as random forests can also be targets. In this paper, we investigate the threat of implementation attacks against random forests. Specifically, we propose a novel timing attack that generates adversarial examples and experimentally evaluate its attack success rate. The proposed attack exploits a fundamental property of random forests: the response time from input to output depends on the number of conditional branches invoked during prediction. More precisely, we generate adversarial examples by optimizing the response time. This optimization affects predictions because changes in the response time imply changes in the outcomes of the conditional branches. For the optimization, we use an evolution strategy that tolerates measurement error in the response time. Experiments are conducted in a black-box setting where attackers can observe only predicted labels and response times. Experimental results show that the proposed attack generates adversarial examples with higher probability than a state-of-the-art attack that uses only predicted labels. This result suggests that attackers have a motive to mount implementation attacks on random forests.
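
The abstract describes the attack only at a high level; the optimizer and fitness function are given in the full paper. As a rough illustration of the general idea, the sketch below is a minimal Python example under assumptions of ours, not the authors' implementation: a scikit-learn random forest stands in for the victim model, a simple (1+λ) Gaussian evolution strategy replaces the paper's specific strategy, the fitness signal is the deviation of the averaged response time from the original input's latency, and timings are averaged over repeated queries to tolerate measurement noise. The `query` oracle and `timing_es_attack` function are hypothetical names introduced for this sketch.

```python
# Minimal sketch (not the authors' implementation): a black-box attacker that
# observes only predicted labels and response times of a random forest and
# perturbs an input with a simple (1+lambda) evolution strategy whose fitness
# is the deviation of the averaged prediction latency from the original one.
import time

import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)


def query(x, repeats=20):
    """Black-box oracle: return (predicted label, mean response time in s)."""
    x = np.asarray(x, dtype=float).reshape(1, -1)
    start = time.perf_counter()
    for _ in range(repeats):
        label = clf.predict(x)[0]
    return label, (time.perf_counter() - start) / repeats


def timing_es_attack(x0, sigma=0.5, offspring=8, iters=50):
    """Perturb x0 until the predicted label changes, guided only by timing.

    Intuition from the paper: a change in latency suggests that different
    conditional branches are taken inside the trees, i.e. the input has
    moved toward a different prediction path.
    """
    orig_label, t_orig = query(x0)
    parent, parent_fit = np.asarray(x0, dtype=float), 0.0
    for _ in range(iters):
        # Sample one generation of offspring around the current parent.
        candidates = parent + sigma * rng.standard_normal((offspring, parent.size))
        for cand in candidates:
            label, t = query(cand)
            if label != orig_label:
                return cand, label           # adversarial example found
            fit = abs(t - t_orig)            # reward latency deviations
            if fit > parent_fit:             # elitist (1+lambda) selection
                parent, parent_fit = cand, fit
    return None, orig_label                  # no success within the budget


adv, new_label = timing_es_attack(X[0])
print("attack", "succeeded" if adv is not None else "failed", "->", new_label)
```

Averaging over repeated predictions is the simplest way to make the timing signal usable; in the paper itself, the evolution strategy is chosen precisely because it tolerates the residual measurement error in the response time.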

Author information

Correspondence to Yuichiro Dan.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Dan, Y., Shibahara, T., Takahashi, J. (2020). Timing Attack on Random Forests for Generating Adversarial Examples. In: Aoki, K., Kanaoka, A. (eds) Advances in Information and Computer Security. IWSEC 2020. Lecture Notes in Computer Science, vol. 12231. Springer, Cham. https://doi.org/10.1007/978-3-030-58208-1_16

  • DOI: https://doi.org/10.1007/978-3-030-58208-1_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58207-4

  • Online ISBN: 978-3-030-58208-1

  • eBook Packages: Computer Science (R0)
