DOI: 10.1145/2950290.2950316
Research Article

DiagDroid: Android performance diagnosis via anatomizing asynchronous executions

Published: 01 November 2016

ABSTRACT

Rapid UI responsiveness is a key consideration for Android app developers. However, Android's complicated concurrency model makes it hard for developers to understand, let alone diagnose, UI performance. This paper presents DiagDroid, a tool specifically designed for Android UI performance diagnosis. The key insight behind DiagDroid is that UI-triggered asynchronous executions contribute to UI performance, and hence their performance and runtime dependencies should be properly captured to facilitate performance diagnosis. However, there are numerous ways to start asynchronous executions, which poses a great challenge to profiling such executions and their runtime dependencies. To this end, we abstract asynchronous executions into five categories that serve as the building basis, so that each category can be tracked and profiled according to its specifics with a dynamic instrumentation approach carefully tailored for Android. DiagDroid can then profile asynchronous executions at task granularity, giving it the merits of low overhead and high compatibility. We successfully applied the tool to diagnose 33 real-world open-source apps and found that 14 of them contain a total of 27 performance issues, demonstrating its effectiveness in Android UI performance diagnosis. The tool has been released online as open source.
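The paper's actual instrumentation is described only in the full text, but as a rough illustration of what "profiling asynchronous executions at task granularity" via dynamic instrumentation can look like on Android, the sketch below uses the Xposed hooking API (IXposedHookLoadPackage, XposedHelpers.findAndHookMethod) to time every task funneled through android.os.Handler.dispatchMessage and log UI-thread tasks that overrun a frame budget. This is a minimal sketch, not DiagDroid's implementation: the class name TaskProfiler, the target package name, and the UI_BUDGET_MS threshold are hypothetical, and it deliberately omits what distinguishes DiagDroid, namely the five asynchronous-execution categories and their runtime dependency tracking.

```java
// Minimal illustrative sketch, NOT DiagDroid's implementation: time every task
// dispatched through android.os.Handler with an Xposed hook, approximating
// task-granularity profiling of asynchronous executions.
import android.os.Handler;
import android.os.Looper;
import android.os.Message;

import de.robv.android.xposed.IXposedHookLoadPackage;
import de.robv.android.xposed.XC_MethodHook;
import de.robv.android.xposed.XposedBridge;
import de.robv.android.xposed.XposedHelpers;
import de.robv.android.xposed.callbacks.XC_LoadPackage.LoadPackageParam;

public class TaskProfiler implements IXposedHookLoadPackage {
    // Hypothetical threshold: roughly one 60 fps frame budget on the UI thread.
    private static final long UI_BUDGET_MS = 16;

    @Override
    public void handleLoadPackage(LoadPackageParam lpparam) {
        // Hypothetical target package; a real tool would make this configurable.
        if (!"com.example.app".equals(lpparam.packageName)) {
            return;
        }
        final String pkg = lpparam.packageName;
        // All Runnables and Messages posted to a Handler funnel through
        // dispatchMessage, so timing it yields per-task latency.
        XposedHelpers.findAndHookMethod(Handler.class, "dispatchMessage", Message.class,
                new XC_MethodHook() {
                    @Override
                    protected void beforeHookedMethod(MethodHookParam param) {
                        param.setObjectExtra("t0", System.nanoTime());
                    }

                    @Override
                    protected void afterHookedMethod(MethodHookParam param) {
                        Long t0 = (Long) param.getObjectExtra("t0");
                        if (t0 == null) {
                            return;
                        }
                        long elapsedMs = (System.nanoTime() - t0) / 1000000L;
                        // Only flag tasks that run on, and hence block, the UI thread.
                        if (Looper.myLooper() == Looper.getMainLooper()
                                && elapsedMs > UI_BUDGET_MS) {
                            XposedBridge.log(pkg + ": UI task took " + elapsedMs
                                    + " ms: " + param.args[0]);
                        }
                    }
                });
    }
}
```

Handler dispatch is only one of the asynchronous mechanisms the paper covers; a real diagnosis tool would also record where each slow task was posted from, so that it can be attributed back to the UI interaction that triggered it.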


Published in

FSE 2016: Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering
November 2016, 1156 pages
ISBN: 9781450342186
DOI: 10.1145/2950290
Copyright © 2016 ACM
Publisher: Association for Computing Machinery, New York, NY, United States
Published: 1 November 2016

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


Qualifiers

• research-article

Acceptance Rates

Overall Acceptance Rate: 17 of 128 submissions, 13%
