DOI: 10.1145/3238147.3238225

Android testing via synthetic symbolic execution

Published: 03 September 2018

ABSTRACT

Symbolic execution of Android applications is challenging, as it involves either building a customized VM for Android or modeling the Android libraries. Since the Android Runtime evolves from one version to another, building a high-fidelity symbolic execution engine requires modeling the effect of the libraries and their evolving versions. Without simulating the behavior of Android libraries, path divergence may occur due to constraint loss when symbolic values flow into the Android framework and later affect the path taken. Previous works such as JPF-Android have relied on modeling the execution environment, i.e., the libraries. In this work, we build a dynamic symbolic execution engine for Android apps without any manual modeling of the execution environment. Environment- (or library-) dependent control-flow decisions in the application trigger an on-demand program synthesis step that automatically deduces a representation of the library. This representation is refined on the fly by running the corresponding library multiple times. The overarching goal of the refinement is to enhance behavioral coverage and to alleviate the path divergence problem during symbolic execution. Moreover, our library synthesis can be made context-specific: compared to traditional synthesis approaches, which aim to synthesize the complete library code, our context-specific synthesis engine can generate more precise expressions for a given context. The evaluation of our dynamic symbolic execution engine, built on top of JDart, shows that the library models obtained from program synthesis are often more accurate than the semi-manual models in JPF-Android. Furthermore, our symbolic execution engine can reach more branch targets than when using the JPF-Android models.
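To make the path-divergence problem and the on-demand, context-specific synthesis idea concrete, here is a minimal, self-contained Java sketch. It is not the authors' implementation and does not use the JDart API; libraryCall, appUnderTest, and synthesizeLinearModel are hypothetical names, and the plain Java method libraryCall merely stands in for an opaque Android framework call. The sketch shows (a) an app branch whose condition depends on a library return value that the symbolic executor cannot see into, and (b) a toy synthesis-and-refinement loop that fits a context-specific expression for that call by running the library on a few concrete inputs.

    // A minimal sketch, assuming plain Java stand-ins for Android framework code.
    import java.util.function.IntUnaryOperator;

    public class ContextSpecificModelSketch {

        // Stand-in for an opaque Android library method: its source is unavailable
        // to the symbolic executor and its behavior is only observable by running it.
        static int libraryCall(int x) {
            return 2 * x + 3;
        }

        // App code under test: the branch condition depends on the library result,
        // so without a model the constraint on y is lost and paths may diverge.
        static String appUnderTest(int symbolicInput) {
            int y = libraryCall(symbolicInput);
            return (y > 10) ? "then-branch" : "else-branch";
        }

        // On-demand synthesis for this call context: fit y = a*x + b from two samples,
        // then validate on extra concrete runs (the refinement step). A real synthesizer
        // would search a richer expression space and refine again when validation fails.
        static IntUnaryOperator synthesizeLinearModel(IntUnaryOperator lib) {
            int y0 = lib.applyAsInt(0);
            int y1 = lib.applyAsInt(1);
            int a = y1 - y0, b = y0;
            for (int x : new int[] {5, -7, 42}) {          // refinement: more concrete runs
                if (lib.applyAsInt(x) != a * x + b) {
                    throw new IllegalStateException("model rejected; refine further");
                }
            }
            return x -> a * x + b;
        }

        public static void main(String[] args) {
            IntUnaryOperator model = synthesizeLinearModel(ContextSpecificModelSketch::libraryCall);
            System.out.println(appUnderTest(3));                 // else-branch (library yields 9)
            System.out.println(appUnderTest(4));                 // then-branch (library yields 11)
            System.out.println("model(4) = " + model.applyAsInt(4));
        }
    }

Under these assumptions, once the fitted expression 2*x + 3 validates on the extra runs, the branch condition libraryCall(x) > 10 can be handed to a constraint solver as 2*x + 3 > 10, so the engine no longer loses the constraint at the library boundary; if validation failed, the loop would sample more inputs and refine the model, mirroring the on-the-fly refinement described above.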

References

  1. Saswat Anand, Mayur Naik, Mary Jean Harrold, and Hongseok Yang. 2012. Automated Concolic Testing of Smartphone Apps. In Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering (FSE 2012). ACM, Article 59, 11 pages.
  2. Shay Artzi, Julian Dolby, Frank Tip, and Marco Pistoia. 2010. Directed Test Generation for Effective Fault Localization. In Proceedings of the 19th International Symposium on Software Testing and Analysis (ISSTA 2010). ACM, 49–60.
  3. Steven Arzt, Siegfried Rasthofer, Christian Fritz, Eric Bodden, Alexandre Bartel, Jacques Klein, Yves Le Traon, Damien Octeau, and Patrick McDaniel. 2014. FlowDroid: Precise Context, Flow, Field, Object-sensitive and Lifecycle-aware Taint Analysis for Android Apps. In Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2014). ACM, 259–269.
  4. Heila Botha, Oksana Tkachuk, Brink van der Merwe, and Willem Visser. 2017. Addressing Challenges in Obtaining High Coverage when Model Checking Android Applications. In Proceedings of the 24th ACM SIGSOFT International SPIN Symposium on Model Checking of Software (SPIN 2017). ACM, 31–40.
  5. Cristian Cadar, Daniel Dunbar, and Dawson Engler. 2008. KLEE: Unassisted and Automatic Generation of High-coverage Tests for Complex Systems Programs. In Proceedings of the 8th USENIX Conference on Operating Systems Design and Implementation (OSDI 2008). USENIX Association, 209–224.
  6. Vitaly Chipounov, Volodymyr Kuznetsov, and George Candea. 2011. S2E: A Platform for In-vivo Multi-path Analysis of Software Systems. In Proceedings of the 16th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 2011). ACM, 265–278.
  7. Shauvik Roy Choudhary, Alessandra Gorla, and Alessandro Orso. 2015. Automated Test Input Generation for Android: Are We There Yet? (E). In 30th IEEE/ACM International Conference on Automated Software Engineering (ASE 2015). IEEE, 429–440.
  8. Leonardo De Moura and Nikolaj Bjørner. 2008. Z3: An Efficient SMT Solver. In 14th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2008). Springer, 337–340.
  9. Patrice Godefroid, Nils Klarlund, and Koushik Sen. 2005. DART: Directed Automated Random Testing. In Proceedings of the 2005 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2005). ACM, 213–223.
  10. Patrice Godefroid, Michael Y. Levin, and David A. Molnar. 2008. Automated Whitebox Fuzz Testing. In The Network and Distributed System Security Symposium (NDSS 2008). 151–166.
  11. Casper S. Jensen, Mukul R. Prasad, and Anders Møller. 2013. Automated Testing with Targeted Event Sequence Generation. In Proceedings of the 2013 International Symposium on Software Testing and Analysis (ISSTA 2013). ACM, 67–77.
  12. Jinseong Jeon, Kristopher K. Micinski, and Jeffrey S. Foster. 2012. SymDroid: Symbolic Execution for Dalvik Bytecode. Technical Report, University of Maryland, College Park.
  13. Jinseong Jeon, Xiaokang Qiu, Jonathan Fetter-Degges, Jeffrey S. Foster, and Armando Solar-Lezama. 2016. Synthesizing Framework Models for Symbolic Execution. In Proceedings of the 38th International Conference on Software Engineering (ICSE 2016). ACM, 156–167.
  14. Alexander Kohan, Mitsuharu Yamamoto, Cyrille Artho, Yoriyuki Yamagata, Lei Ma, Masami Hagiya, and Yoshinori Tanabe. 2017. Java Pathfinder on Android Devices. ACM SIGSOFT Software Engineering Notes 41, 6 (Jan. 2017), 1–5.
  15. Kasper Luckow, Marko Dimjašević, Dimitra Giannakopoulou, Falk Howar, Malte Isberner, Temesghen Kahsai, Zvonimir Rakamarić, and Vishwanath Raman. 2016. JDart: A Dynamic Symbolic Analysis Framework. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2016). Springer, 442–459.
  16. Rupak Majumdar and Koushik Sen. 2007. Hybrid Concolic Testing. In Proceedings of the 29th International Conference on Software Engineering (ICSE 2007). IEEE, 416–426.
  17. Ke Mao, Mark Harman, and Yue Jia. 2016. Sapienz: Multi-objective Automated Testing for Android Applications. In Proceedings of the 25th International Symposium on Software Testing and Analysis (ISSTA 2016). ACM, 94–105.
  18. Tyler McDonnell, Baishakhi Ray, and Miryung Kim. 2013. An Empirical Study of API Stability and Adoption in the Android Ecosystem. In 29th IEEE International Conference on Software Maintenance (ICSM 2013). IEEE, 70–79.
  19. Sergey Mechtaev, Xiang Gao, Shin Hwei Tan, and Abhik Roychoudhury. 2018. Test-equivalence Analysis for Automatic Patch Generation. ACM Trans. Softw. Eng. Methodol. (2018). To appear.
  20. Sergey Mechtaev, Alberto Griggio, Alessandro Cimatti, and Abhik Roychoudhury. 2018. Symbolic Execution with Existential Second-Order Constraints. In Proceedings of the 26th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2018). ACM.
  21. Sergey Mechtaev, Jooyong Yi, and Abhik Roychoudhury. 2016. Angelix: Scalable Multiline Program Patch Synthesis via Symbolic Analysis. In Proceedings of the IEEE/ACM 38th International Conference on Software Engineering (ICSE 2016). IEEE, 691–701.
  22. Nariman Mirzaei, Hamid Bagheri, Riyadh Mahmood, and Sam Malek. 2015. SIG-Droid: Automated System Input Generation for Android Applications. In 2015 IEEE 26th International Symposium on Software Reliability Engineering (ISSRE 2015). IEEE, 461–471.
  23. Nariman Mirzaei, Sam Malek, Corina S. Păsăreanu, Naeem Esfahani, and Riyadh Mahmood. 2012. Testing Android Apps Through Symbolic Execution. ACM SIGSOFT Software Engineering Notes 37, 6 (2012), 1–5.
  24. NASA. 2013. PathDroid. https://ti.arc.nasa.gov/opensource/projects/pathdroid/
  25. Hoang Duong Thien Nguyen, Dawei Qi, Abhik Roychoudhury, and Satish Chandra. 2013. SemFix: Program Repair via Semantic Analysis. In Proceedings of the 2013 International Conference on Software Engineering (ICSE 2013). IEEE, 772–781.
  26. Corina S. Păsăreanu and Neha Rungta. 2010. Symbolic PathFinder: Symbolic Execution of Java Bytecode. In Proceedings of the IEEE/ACM International Conference on Automated Software Engineering (ASE 2010). ACM, 179–180.
  27. Corina S. Păsăreanu, Willem Visser, David Bushnell, Jaco Geldenhuys, Peter Mehlitz, and Neha Rungta. 2013. Symbolic PathFinder: Integrating Symbolic Execution with Model Checking for Java Bytecode Analysis. Automated Software Engineering 20, 3 (2013), 391–425.
  28. Dawei Qi, William N. Sumner, Feng Qin, Mai Zheng, Xiangyu Zhang, and Abhik Roychoudhury. 2012. Modeling Software Execution Environment. In 19th Working Conference on Reverse Engineering (WCRE 2012). IEEE, 415–424.
  29. Julian Schütte, Rafael Fedler, and Dennis Titze. 2015. ConDroid: Targeted Dynamic Analysis of Android Applications. In 2015 IEEE 29th International Conference on Advanced Information Networking and Applications (AINA 2015). 571–578.
  30. Ting Su, Guozhu Meng, Yuting Chen, Ke Wu, Weiming Yang, Yao Yao, Geguang Pu, Yang Liu, and Zhendong Su. 2017. Guided, Stochastic Model-based GUI Testing of Android Apps. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2017). ACM, 245–256.
  31. Shin Hwei Tan, Zhen Dong, Xiang Gao, and Abhik Roychoudhury. 2018. Repairing Crashes in Android Apps. In Proceedings of the 40th International Conference on Software Engineering (ICSE 2018). IEEE, 187–198.
  32. Shin Hwei Tan and Abhik Roychoudhury. 2015. relifix: Automated Repair of Software Regressions. In Proceedings of the 37th International Conference on Software Engineering (ICSE 2015). IEEE, 471–482.
  33. Shin Hwei Tan, Hiroaki Yoshida, Mukul R. Prasad, and Abhik Roychoudhury. 2016. Anti-patterns in Search-based Program Repair. In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2016). ACM, 727–738.
  34. Oksana Tkachuk. 2013. OCSEGen: Open Components and Systems Environment Generator. In Proceedings of the 2nd ACM SIGPLAN International Workshop on State Of the Art in Java Program Analysis. ACM, 9–12.
  35. Raja Vallée-Rai, Phong Co, Etienne Gagnon, Laurie Hendren, Patrick Lam, and Vijay Sundaresan. 2010. Soot: A Java Bytecode Optimization Framework. In CASCON First Decade High Impact Papers. IBM Corp., 214–224.
  36. Heila van der Merwe, Brink van der Merwe, and Willem Visser. 2014. Execution and Property Specifications for JPF-Android. ACM SIGSOFT Software Engineering Notes 39, 1 (2014), 1–5.
  37. Willem Visser, Klaus Havelund, Guillaume Brat, SeungJoon Park, and Flavio Lerda. 2003. Model Checking Programs. Automated Software Engineering 10, 2 (2003), 203–232.
  38. Zhemin Yang, Min Yang, Yuan Zhang, Guofei Gu, Peng Ning, and X. Sean Wang. 2013. AppIntent: Analyzing Sensitive Data Transmission in Android for Privacy Leakage Detection. In Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security (CCS 2013). ACM, 1043–1054.

Published in

ASE '18: Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering
September 2018
955 pages
ISBN: 9781450359375
DOI: 10.1145/3238147
Copyright © 2018 ACM

Publisher

Association for Computing Machinery, New York, NY, United States
