
Using Uncertainty as a Defense Against Adversarial Attacks for Tabular Datasets

  • Conference paper
AI 2022: Advances in Artificial Intelligence (AI 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13728)


Abstract

Adversarial examples are a threat to systems that use machine learning models. Considerable research has focused on adversarial exploits against homogeneous datasets (vision, sound, and text), primarily attacking deep learning models. However, many industries such as healthcare, business analytics, finance, and cybersecurity rely on heterogeneous (tabular) datasets. Attacks that perform well on homogeneous datasets do not extend to heterogeneous datasets because of feature constraints, so tabular datasets require different attack and defense mechanisms. In this work, we propose a novel defense against adversarial examples built from tabular datasets. We use an uncertainty metric, the Minimum Prediction Deviation (MPD), to detect adversarial examples generated by a tabular evasion attack algorithm, the Feature Importance Guided Attack (FIGA). Using MPD as a defense, we detect 98% of the adversarial samples with an average false positive rate of 7.8%.
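The general pattern the abstract describes, score each incoming sample with an uncertainty metric and flag it as adversarial when the score crosses a threshold calibrated on clean data, can be sketched in a few lines of Python. This is a minimal illustration only: the mpd_score function below approximates uncertainty with the spread of predictions across a bagged ensemble, which is an assumption for illustration, not the paper's actual Minimum Prediction Deviation computation, and the random perturbation is a placeholder for FIGA, not the attack itself.

    # Minimal sketch of uncertainty-thresholded adversarial detection.
    # NOTE: mpd_score is a stand-in (prediction spread across a bagged
    # ensemble), NOT the paper's exact Minimum Prediction Deviation.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for a tabular dataset.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Bagged ensemble: disagreement among members gives a per-sample
    # uncertainty signal.
    ensemble = BaggingClassifier(DecisionTreeClassifier(),
                                 n_estimators=50, random_state=0)
    ensemble.fit(X_train, y_train)

    def mpd_score(model, X):
        """Stand-in uncertainty score: std. dev. of the positive-class
        probability across ensemble members (higher = more uncertain)."""
        member_probs = np.stack(
            [est.predict_proba(X)[:, 1] for est in model.estimators_])
        return member_probs.std(axis=0)

    # Calibrate the threshold on clean data; the quantile chosen here
    # fixes the false positive rate (0.92 is an assumed setting that
    # roughly mirrors the paper's ~8% FPR ballpark).
    clean_scores = mpd_score(ensemble, X_test)
    threshold = np.quantile(clean_scores, 0.92)

    # Crude random perturbation as a placeholder for FIGA, which in the
    # paper perturbs the most important features toward the target class.
    rng = np.random.default_rng(0)
    X_adv = X_test + 0.5 * np.sign(rng.normal(size=X_test.shape))

    adv_scores = mpd_score(ensemble, X_adv)
    print("FPR on clean samples:", float((clean_scores > threshold).mean()))
    print("Flag rate on perturbed samples:", float((adv_scores > threshold).mean()))

Because the threshold is set from clean data alone, the defender controls the false positive rate directly and never needs adversarial samples at calibration time, which is what makes this style of detector practical.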



Author information


Corresponding author

Correspondence to Gilad Gressel.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Santhosh, P., Gressel, G., Darling, M.C. (2022). Using Uncertainty as a Defense Against Adversarial Attacks for Tabular Datasets. In: Aziz, H., Corrêa, D., French, T. (eds) AI 2022: Advances in Artificial Intelligence. AI 2022. Lecture Notes in Computer Science, vol. 13728. Springer, Cham. https://doi.org/10.1007/978-3-031-22695-3_50


  • DOI: https://doi.org/10.1007/978-3-031-22695-3_50

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-22694-6

  • Online ISBN: 978-3-031-22695-3

  • eBook Packages: Computer Science, Computer Science (R0)
