DOI: 10.1145/3308558.3313533

Securing the Deep Fraud Detector in Large-Scale E-Commerce Platform via Adversarial Machine Learning Approach

Published: 13 May 2019

ABSTRACT

Fraudulent transactions are among the major threats faced by online e-commerce platforms. Recently, deep learning based classifiers have been deployed to detect fraud transactions. Inspired by findings on adversarial examples, this paper is the first to analyze the vulnerability of a deployed deep fraud detector to slight perturbations of input transactions, which is challenging because the sparsity and discreteness of transaction data yield a non-convex discrete optimization problem. Inspired by the iterative Fast Gradient Sign Method (FGSM) for the L∞ attack, we first propose the Iterative Fast Coordinate Method (IFCM) for discrete L1 and L2 attacks, which efficiently generates large numbers of adversarial instances with satisfactory effectiveness. We then provide two novel attack algorithms to solve the discrete optimization problem. The first is the Augmented Iterative Search (AIS) algorithm, which repeatedly searches for effective "simple" perturbations. The second is the Rounded Relaxation with Reparameterization (R3) algorithm, which rounds the solution obtained by solving a relaxed, unconstrained optimization problem with reparameterization tricks. Finally, we conduct an extensive experimental evaluation on the deployed fraud detector at TaoBao, one of the largest e-commerce platforms in the world, with millions of real-world transactions. The results show that (i) the deployed detector is highly vulnerable to attacks, as its average precision drops from nearly 90% to as low as 20% under slight perturbations; (ii) our proposed attacks significantly outperform adaptations of state-of-the-art attacks; and (iii) a model trained with an adversarial training process is significantly more robust against the attacks while still performing well on unperturbed data.
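For readers unfamiliar with the baseline the abstract names, the sketch below illustrates the iterative FGSM (L∞) attack and a greedy coordinate-wise variant loosely in the spirit of a discrete, budget-constrained attack such as IFCM. The toy linear "detector", loss function, step sizes, and perturbation budget are assumptions made purely for illustration; this is not the deployed TaoBao detector nor the authors' actual IFCM/AIS/R3 algorithms.

# Minimal illustrative sketch (assumptions only, not the paper's code):
# iterative FGSM for an L-infinity attack, plus a greedy coordinate-wise
# variant resembling a discrete, budget-limited attack.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)            # toy "detector" weights (assumption)
b = 0.0

def loss_and_grad(x, y):
    """Logistic loss of a toy linear classifier and its gradient w.r.t. x."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))
    loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = (p - y) * w            # d(loss)/dx for the toy model
    return loss, grad

def iterative_fgsm(x, y, eps=0.3, alpha=0.05, steps=10):
    """Iterative FGSM: repeated signed-gradient ascent steps on the loss,
    projected back onto the L-infinity ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        _, g = loss_and_grad(x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)        # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the L-inf ball
    return x_adv

def greedy_coordinate_attack(x, y, budget=3, step=1.0):
    """Illustrative coordinate-wise attack: modify at most `budget` features,
    each round picking the coordinate whose signed step raises the loss most
    (a rough analogue of a discrete, budget-constrained search)."""
    x_adv = x.copy()
    for _ in range(budget):
        _, g = loss_and_grad(x_adv, y)
        i = int(np.argmax(np.abs(g)))             # most sensitive coordinate
        x_adv[i] += step * np.sign(g[i])
    return x_adv

x0 = rng.normal(size=8)           # a toy "transaction" feature vector
y0 = 1                            # pretend it is labeled fraudulent
print("loss before:", loss_and_grad(x0, y0)[0])
print("loss after iterative FGSM:", loss_and_grad(iterative_fgsm(x0, y0), y0)[0])
print("loss after coordinate attack:", loss_and_grad(greedy_coordinate_attack(x0, y0), y0)[0])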


Published in

WWW '19: The World Wide Web Conference
May 2019, 3620 pages
ISBN: 9781450366748
DOI: 10.1145/3308558

    Copyright © 2019 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    • Published: 13 May 2019

    Qualifiers

    • research-article
    • Research
    • Refereed limited

    Acceptance Rates

Overall acceptance rate: 1,899 of 8,196 submissions (23%)
