
Boosting-Based Construction of BDDs for Linear Threshold Functions and Its Application to Verification of Neural Networks

  • Conference paper
  • Discovery Science (DS 2023)

Abstract

Understanding the characteristics of neural networks is important but difficult due to their complex structures and behaviors. Some previous work proposes to transform a neural network into an equivalent Boolean expression and to apply verification techniques for characteristics of interest. This approach is promising because the rich body of verification techniques for circuits and other Boolean expressions can be readily applied. The bottleneck is the time complexity of the transformation: (i) each neuron of the network, i.e., a linear threshold function, is converted to a Binary Decision Diagram (BDD), and (ii) the resulting BDDs are further combined into some final form, such as a Boolean circuit. For a linear threshold function with n variables, an existing method takes \(O(n2^{\frac{n}{2}})\) time to construct an ordered BDD of size \(O(2^{\frac{n}{2}})\) consistent with some variable ordering. However, it is non-trivial to choose, among the n! candidates, a variable ordering that produces a small BDD.
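To make step (i) concrete, the sketch below (not the algorithm from the paper) builds an ordered BDD for a toy linear threshold function \(f(x) = [w \cdot x \ge \theta]\) over \(x \in \{0,1\}^n\) by fixing variables along a chosen ordering and caching sub-problems by the threshold that remains to be met. The function name `ltf_to_obdd`, the weights, and the ordering are hypothetical choices made here for illustration.

```python
# Illustrative sketch only, not the paper's method: convert a toy linear threshold
# function f(x) = [w . x >= theta] over x in {0, 1}^n into an ordered BDD by
# branching on variables in a fixed order and caching sub-problems by the
# remaining threshold.  The node count depends heavily on the chosen ordering.

def ltf_to_obdd(w, theta, order):
    """Return (root, nodes): nodes maps id -> (variable, lo_child, hi_child),
    with the terminals represented by the strings 'F' and 'T'."""
    nodes, cache = {}, {}

    def build(i, remaining):
        rest = [w[j] for j in order[i:]]
        min_rest = sum(v for v in rest if v < 0)   # smallest achievable partial sum
        max_rest = sum(v for v in rest if v > 0)   # largest achievable partial sum
        if remaining <= min_rest:                  # threshold met for every completion
            return 'T'
        if remaining > max_rest:                   # threshold unreachable
            return 'F'
        key = (i, remaining)
        if key in cache:
            return cache[key]
        v = order[i]
        lo = build(i + 1, remaining)               # branch x_v = 0
        hi = build(i + 1, remaining - w[v])        # branch x_v = 1
        node_id = len(nodes)
        nodes[node_id] = (v, lo, hi)
        cache[key] = node_id
        return node_id

    return build(0, theta), nodes

# Toy neuron: f(x) = [2*x0 + 1*x1 - 3*x2 >= 1]
root, nodes = ltf_to_obdd(w=[2, 1, -3], theta=1, order=[0, 1, 2])
print(root, nodes)
```

A final reduction pass merging structurally identical nodes would yield a reduced OBDD; trying different `order` arguments on the same weights shows how strongly the size can vary, which is exactly the ordering problem noted above.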

We propose a method to convert a linear threshold function to a specific form of a BDD based on the boosting approach in the machine learning literature. Our method takes \(O(2^n \text{poly}(1/\rho))\) time and outputs a BDD of size \(O(\frac{n^2}{\rho^4}\ln{\frac{1}{\rho}})\), where \(\rho\) is the margin of some consistent linear threshold function. Our method does not need to search for good variable orderings and produces a smaller representation when the margin of the linear threshold function is large. It is built on a new boosting algorithm, which is of independent interest. We also propose a method to combine the resulting BDDs into a final Boolean expression representing the neural network. In our experiments on verification tasks of neural networks, our methods produce smaller final Boolean expressions, on which the verification tasks can be performed more efficiently.
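As a rough illustration of the quantity \(\rho\) appearing in the bounds above, the snippet below computes one common normalization of the margin of a linear threshold function \(f(x) = \mathrm{sign}(w \cdot x)\) over \(\{-1,1\}^n\) by exhaustive enumeration. The \(\ell_1\) normalization and the helper name `margin` are assumptions made here for illustration and may not match the exact definition used in the paper.

```python
# Illustrative sketch only: a margin computation for a small linear threshold
# function f(x) = sign(w . x) over x in {-1, 1}^n.  The margin is taken here as
# min_x |w . x| / ||w||_1, which may differ from the paper's exact normalization.
from itertools import product

def margin(w):
    norm1 = sum(abs(v) for v in w)
    return min(abs(sum(wi * xi for wi, xi in zip(w, x)))
               for x in product((-1, 1), repeat=len(w))) / norm1

print(margin([1, 1, 1]))   # 1/3: a simple majority separates every input with slack
print(margin([4, 2, 1]))   # 1/7: the sign is decided by x0 alone, yet the margin is small
```

Under the stated size bound, a larger margin gives a smaller guaranteed BDD size, so a comfortably separated neuron such as the majority above is an easy case, while the guarantee degrades as \(\rho\) shrinks.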



Notes

  1. All proofs can be found in the arXiv version: http://arxiv.org/abs/2306.05211.



Acknowledgement

We thank Sherief Hashima of RIKEN AIP and the reviewers for their helpful comments. This work was supported by JSPS KAKENHI Grant Numbers JP23H03348, JP20H05967, JP19H04174, and JP22H03649.

Author information


Corresponding author

Correspondence to Kohei Hatano.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Tang, Y., Hatano, K., Takimoto, E. (2023). Boosting-Based Construction of BDDs for Linear Threshold Functions and Its Application to Verification of Neural Networks. In: Bifet, A., Lorena, A.C., Ribeiro, R.P., Gama, J., Abreu, P.H. (eds) Discovery Science. DS 2023. Lecture Notes in Computer Science, vol. 14276. Springer, Cham. https://doi.org/10.1007/978-3-031-45275-8_32


  • DOI: https://doi.org/10.1007/978-3-031-45275-8_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-45274-1

  • Online ISBN: 978-3-031-45275-8

  • eBook Packages: Computer Science, Computer Science (R0)
