
Design and Implementation of a Toolkit for Usability Testing of Mobile Apps


Abstract

The usability of mobile applications is critical for their adoption because, despite recent advances in smartphones, the screen is relatively small and the (often virtual) keyboard is awkward. Traditional laboratory-based usability testing is often tedious, expensive, and does not reflect real use cases. In this paper, we propose a toolkit that embeds into mobile applications the ability to automatically collect user interface (UI) events as the user interacts with the application. The events are fine-grained and useful for quantitative usability analysis. We have implemented the toolkit on Android devices and evaluated it with a real, deployed Android application, comparing state-machine-based event analysis with traditional expert-based laboratory testing. The results show that our toolkit is effective at capturing detailed UI events for accurate usability analysis.
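As a rough illustration of the kind of instrumentation described above (this is not the toolkit's actual implementation; the wrapper class, log tag, and log format below are assumptions), a UI event such as a button click can be intercepted on Android by wrapping the application's own listener and logging the event before delegating to it:

    import android.util.Log;
    import android.view.View;

    // Minimal sketch, not the toolkit's actual API: wraps an app's click listener and
    // logs which activity, which widget, and when the click occurred.
    public final class LoggingClickListener implements View.OnClickListener {

        private static final String TAG = "UsabilityEvents";   // hypothetical log tag
        private final View.OnClickListener wrapped;            // the app's original listener

        public LoggingClickListener(View.OnClickListener wrapped) {
            this.wrapped = wrapped;
        }

        @Override
        public void onClick(View v) {
            // Record a fine-grained UI event: activity name, widget id, timestamp.
            Log.d(TAG, v.getContext().getClass().getSimpleName()
                    + " click on view id=" + v.getId()
                    + " at t=" + System.currentTimeMillis());
            if (wrapped != null) {
                wrapped.onClick(v);                             // preserve original behavior
            }
        }
    }

A button would then be instrumented with button.setOnClickListener(new LoggingClickListener(originalListener)); the resulting event stream could later be replayed against a task state machine, in the spirit of the state-machine-based analysis mentioned above.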




Acknowledgement

This work was partly supported by the National Science Foundation under Grant No. 1016823. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Author information


Corresponding author

Correspondence to Xiaoxiao Ma.

Appendix

The sections above dealt only with the number of usability problems, without specifying what those problems are. Here we summarize the usability problems identified in the user study and categorize them by severity level.

  • Cosmetic

    1. At the home activity of AppJoy, users tried to trigger a menu by long-clicking an item, but no long-click listener was implemented (a sketch of registering such a listener follows this list).

    2. At the location-based search activity, if the search result is empty, the activity shows a blank screen instead of indicating "no result found."

    3. At the recommended-applications activity, an "install" button misled users into thinking that clicking it would complete the installation (in fact, several further steps are required).

    4. The positions of menu items are not consistent.

    5. On the Help web page, some links do not function.

  • Minor

    1. At the location-based search activity, users clicked on a non-editable box to change the location.

    2. At the location-based search activity, recommended free applications did not indicate whether they had already been installed.

    3. At the My Downloads activity, several users complained about the lack of search functionality.

    4. Some recommended applications could not be found in the Android Market.

  • Major

    1. After users finished installing an application, the "install" button did not disappear, leading them to think the installation had not succeeded.

    2. The Help web page was outdated.

    3. The Most Recent activity was supposed to contain only applications that had not been downloaded (according to the developer's design), but that was not the case.

    4. At the detailed information activity, the text font was so small that some important information was easily overlooked.

  • Catastrophe

    1. The meaning of the recommendation options at the home activity was unclear, and users could not fully understand it across all tasks.
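For cosmetic problem 1 above, a hedged sketch of one possible remedy (not AppJoy's actual code; the placeholder feedback shown is an assumption) is to register a long-click listener on the item view so the gesture is at least acknowledged:

    import android.view.View;
    import android.widget.Toast;

    // Illustrative only: attach a long-click listener to an item view; a real fix would
    // open the application's context menu instead of the placeholder toast shown here.
    public final class LongClickSupport {

        public static void attach(final View itemView) {
            itemView.setOnLongClickListener(new View.OnLongClickListener() {
                @Override
                public boolean onLongClick(View v) {
                    Toast.makeText(v.getContext(), "Options for this item", Toast.LENGTH_SHORT).show();
                    return true;   // consume the event so the user gets immediate feedback
                }
            });
        }
    }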

The severity level of a usability problem is determined mostly by how frequently it occurs and by its impact when it does. We gathered the evaluators' notes and summarized, in Table 9, the task in which and the participant for whom each usability problem was observed.

Table 9 Occurrence of usability problems

One cosmetic usability problem was discovered by the evaluators while the users were performing the tasks; since none of the users actually triggered it, we do not list a particular participant number for it. Some usability problems occurred in almost all tasks or for almost all users; these are marked "All" in the table.


Cite this article

Ma, X., Yan, B., Chen, G. et al. Design and Implementation of a Toolkit for Usability Testing of Mobile Apps. Mobile Netw Appl 18, 81–97 (2013). https://doi.org/10.1007/s11036-012-0421-z
