
A symbolic execution-based method to perform untargeted attack on feed-forward neural networks

Automated Software Engineering

Abstract

DeepCheck is a symbolic execution-based method to attack feed-forward neural networks. In the untargeted attack, however, DeepCheck suffers from a low success rate due to the restriction of preserving neuron activation patterns and the weakness of SMT solvers in solving the generated constraints. Therefore, this paper proposes a method to improve the success rate of DeepCheck. Compared to DeepCheck, the proposed method has two main differences: (i) it does not force the preservation of neuron activation patterns, and (ii) it uses a heuristic solver rather than SMT solvers. The experimental results on MNIST, Fashion-MNIST, and the A-Z handwritten alphabets show three promising results. In the 1-pixel attack, while DeepCheck obtains an average success rate of 0.7%, the proposed method achieves an average success rate of 54.3%. In the n-pixel attack, while DeepCheck obtains an average success rate of at most 16.9% with the Z3 solver and at most 26.8% with the SMTInterpol solver, the proposed method achieves an average success rate of at most 98.7%. In terms of solving cost, the proposed heuristic solver takes around 0.4 s per attack on average, whereas DeepCheck is usually significantly slower. These results show the effectiveness of the proposed method in dealing with the limitations of DeepCheck.
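
The full attack algorithm is described in the article body and is not reproduced here. As a rough illustration of the general idea stated in the abstract, namely driving an untargeted n-pixel attack with a cheap heuristic search instead of an SMT solver, the following Python sketch repeatedly re-assigns a small set of pixel positions of a correctly classified image until the model's prediction changes. The Keras model, the random pixel-selection strategy, and the search loop are illustrative assumptions, not the authors' method.

    import numpy as np

    def untargeted_n_pixel_attack(model, image, true_label, n_pixels=5,
                                  max_iters=1000, rng=None):
        # Illustrative heuristic sketch (not the paper's algorithm): randomly
        # re-assign the values of `n_pixels` positions until the predicted
        # label differs from `true_label`. Returns the adversarial image,
        # or None if no successful attack is found within `max_iters` tries.
        rng = rng or np.random.default_rng()
        flat = image.reshape(-1).astype("float32")
        # Heuristically pick which pixel positions are allowed to change.
        positions = rng.choice(flat.size, size=n_pixels, replace=False)
        for _ in range(max_iters):
            candidate = flat.copy()
            # Assign fresh random values in [0, 1) to the selected pixels.
            candidate[positions] = rng.random(n_pixels, dtype=np.float32)
            pred = int(np.argmax(model.predict(candidate.reshape(1, -1), verbose=0)))
            if pred != true_label:
                return candidate.reshape(image.shape)  # misclassification achieved
        return None
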


Data availability

All data and material are available.

Notes

  1. https://github.com/VNU-HeuristicSolver.

  2. kaggle.com/datasets/sachinpatel21/az-handwritten-alphabets-in-csv-format.

  3. keras.io.
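
As an assumption about the experimental setup rather than a detail taken from the paper, MNIST and Fashion-MNIST can be loaded directly through the Keras datasets API (note 3); the A-Z handwritten alphabets set must be downloaded separately from Kaggle as a CSV file (note 2). A minimal loading sketch:

    import numpy as np
    from tensorflow.keras.datasets import fashion_mnist, mnist

    # Load the two Keras-hosted benchmark datasets; pixel values are 0-255 uint8.
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    (fx_train, fy_train), (fx_test, fy_test) = fashion_mnist.load_data()

    # Scale to [0, 1] and flatten, matching the feed-forward input format
    # assumed in the attack sketch above.
    x_test = x_test.astype("float32").reshape(len(x_test), -1) / 255.0
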


Acknowledgements

This work is supported by the Ministry of Science and Technology, Vietnam under project number KC-4.0-07/19-25, Program KC4.0/19-25. Duc-Anh Nguyen was funded by Vingroup JSC and supported by the Master, PhD Scholarship Programme of Vingroup Innovation Foundation (VINIF), Institute of Big Data, code VINIF.2021.TS.105. Kha Do Minh was funded by Vingroup JSC and supported by the Master, PhD Scholarship Programme of Vingroup Innovation Foundation (VINIF), Institute of Big Data, code VINIF.2021.ThS.24.

Funding

Duc-Anh Nguyen was funded by Vingroup JSC and supported by the Master, PhD Scholarship Programme of Vingroup Innovation Foundation (VINIF), Institute of Big Data, code VINIF.2021.TS.105. Kha Do Minh was funded by Vingroup JSC and supported by the Master, PhD Scholarship Programme of Vingroup Innovation Foundation (VINIF), Institute of Big Data, code VINIF.2021.ThS.24.

Author information


Contributions

Conceptualization: DAN, PNH. Methodology: DAN, PNH, KDM. Formal analysis and investigation: DAN. Writing (original draft preparation): DAN. Writing (review and editing): all authors.

Corresponding author

Correspondence to Pham Ngoc Hung.

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial interests to disclose.

Human and animal participants

The research does not involve human participants or animals.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Nguyen, DA., Do Minh, K., Nguyen, M.L. et al. A symbolic execution-based method to perform untargeted attack on feed-forward neural networks. Autom Softw Eng 29, 46 (2022). https://doi.org/10.1007/s10515-022-00345-x

