Abstract
Many static analysis methods and tools have been developed for program bug detection. They are based on diverse theoretical principles, such as pattern matching, abstract interpretation, model checking, and symbolic execution. Unfortunately, none of them satisfies most practical requirements for bug finding: each individual tool suffers from high false-negative and/or false-positive rates, which is the main obstacle to using them in practice. A direct and promising way to improve the capability of static analysis is to integrate diverse bug finders. In this paper, we first selected five state-of-the-art C/C++ static analysis tools built on different theories. We then evaluated them in detail across different defect types and code structures. To increase the precision and recall of tool integration, we studied how to employ machine learning algorithms over features of both programs and tools. Evaluation results show that: (1) the tools' abilities differ considerably across defect types and code structures, and their overlaps are small; (2) machine-learning-based integration can significantly improve the overall performance of static analysis. Finally, we investigated the defect types and code structures that remain challenging for existing tools; these should be addressed in future research on static analysis.
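To make the integration idea concrete, the sketch below trains a classifier whose input features are the verdicts of several static analyzers on a candidate defect site together with simple program features (defect type, code structure), and whose output predicts whether a real defect is present. This is a minimal illustration using scikit-learn on synthetic data; the tool list, feature encoding, and model choice are assumptions for illustration, not the paper's exact setup.

```python
# Hypothetical sketch of machine-learning-based integration of
# static analyzers. Feature layout and data are illustrative
# assumptions, not the paper's exact experimental setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

# Assumed tool set; any five C/C++ analyzers could stand in here.
TOOLS = ["clang-sa", "cppcheck", "frama-c", "tool4", "tool5"]

rng = np.random.default_rng(0)
n_sites = 1000  # candidate defect sites (synthetic)

# Columns 0-4: did each tool warn at this site? (0/1)
# Column 5:   defect-type id (e.g., a CWE category, integer-encoded)
# Column 6:   code-structure id (e.g., loop / pointer / recursion context)
verdicts = rng.integers(0, 2, size=(n_sites, len(TOOLS)))
program_feats = rng.integers(0, 20, size=(n_sites, 2))
X = np.hstack([verdicts, program_feats])
y = rng.integers(0, 2, size=n_sites)  # ground truth (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
```

In practice, the labels would come from a benchmark with known ground truth (such as a curated defect test suite), and the features would be extracted from each tool's actual reports rather than generated randomly.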
Acknowledgments
This work was funded by the National Natural Science Foundation of China (Nos. 61690203, 61802415, and 61532007).
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
Lu, B., Dong, W., Yin, L., Zhang, L. (2018). Evaluating and Integrating Diverse Bug Finders for Effective Program Analysis. In: Bu, L., Xiong, Y. (eds.) Software Analysis, Testing, and Evolution. SATE 2018. Lecture Notes in Computer Science, vol. 11293. Springer, Cham. https://doi.org/10.1007/978-3-030-04272-1_4
Print ISBN: 978-3-030-04271-4
Online ISBN: 978-3-030-04272-1