DOI: 10.1145/3377811.3380347
research-article
Artifacts Available / v1.1

Multiple-entry testing of Android applications by constructing activity launching contexts

Published: 01 October 2020

ABSTRACT

Existing GUI testing approaches for Android apps usually test an app from a single entry. As a result, marginal activities far away from the default entry are difficult to cover: launching them may fail because it requires a large number of activity transitions or complex user operations, which leads to uneven coverage of activity components. Moreover, since the test space of GUI programs is infinite, it is difficult to test activities under complete launching contexts with single-entry approaches.

In this paper, we address these issues by constructing activity launching contexts and proposing a multiple-entry testing framework. We perform an inter-procedural, flow-, context-, and path-sensitive analysis to build activity launching models and generate complete launching contexts. By exposing activities and applying static analysis, we can launch activities directly under various contexts without performing long event sequences on the GUI. In addition, to achieve in-depth exploration, we design an adaptive exploration framework that supports multiple-entry exploration and dynamically assigns weights to entries in each turn.
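To make the direct-launch idea concrete, the following is a minimal sketch (ours, not the paper's implementation) of launching an exposed activity under a constructed context via an explicit Intent whose action and extras would be inferred by the static analysis. The package name, activity class, and extra keys are hypothetical.

    import android.content.Context;
    import android.content.Intent;
    import android.os.Bundle;

    public class DirectLaunchSketch {
        // Launch a (hypothetical) exposed activity directly with the extras it
        // is assumed to read in onCreate(), skipping the GUI path to reach it.
        public static void launchWithContext(Context ctx) {
            Intent intent = new Intent();
            intent.setClassName("com.example.app", "com.example.app.DetailActivity");
            intent.setAction(Intent.ACTION_VIEW);            // action inferred from the launching model
            Bundle extras = new Bundle();
            extras.putString("item_id", "42");               // assumed string extra
            extras.putBoolean("edit_mode", true);            // assumed boolean extra
            intent.putExtras(extras);
            intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);  // needed when starting from a non-Activity context
            ctx.startActivity(intent);
        }
    }

An equivalent launch can be issued from a test harness outside the app, e.g. adb shell am start -n com.example.app/.DetailActivity --es item_id 42 --ez edit_mode true, using the same hypothetical component and extras.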

Our approach is implemented in a tool called Fax, with an activity launching strategy Faxla and an exploration strategy Faxex. Experiments on 20 real-world apps show that Faxla can cover 96.4% of activities and successfully launch 60.6% of them, based on which Faxex achieves a further relative improvement of 19.7% in method coverage over the most popular tool, Monkey. Our tool also performs well in revealing hidden bugs: Fax triggers over seven hundred unique crashes, including 180 Errors and 539 Warnings, significantly more than other tools. Among the 46 bugs reported to developers on GitHub, 33 have been fixed so far.


Published in

ICSE '20: Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering
June 2020, 1640 pages
ISBN: 9781450371216
DOI: 10.1145/3377811
Copyright © 2020 ACM

Publisher: Association for Computing Machinery, New York, NY, United States
