ABSTRACT
Most UI testing tools for mobile games are designed to help engineers create and run test cases as scripts. However, these scripts must be updated manually for every new test case, which raises the cost of testing. In this paper, we propose a method that achieves human-like UI automation through automatic exploration in mobile games. Our method automatically explores most UIs by recognizing and operating UI elements, much as a human tester does in manual UI testing. First, we design a lightweight convolutional neural network to detect buttons in UI images captured from the mobile phone. Next, we build a directed graph model that stores the UIs visited during automatic exploration. Finally, following our exploration strategy, we choose one button in the current UI image and send a click action to the phone. Our method achieves over 85% UI and button coverage on three popular mobile games.
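The abstract outlines a three-step loop: detect buttons in a screenshot, record visited UIs in a directed graph, and pick a button to click according to an exploration strategy. The sketch below illustrates one possible shape of that loop; it is not the paper's implementation. The functions capture_screenshot, detect_buttons, and tap are placeholders for the CNN-based detector and the device I/O layer, and the "click the least-clicked button" strategy is an assumption made only for illustration.

```python
# Illustrative sketch of an automatic UI exploration loop (not the paper's code).
# Assumptions: capture_screenshot(), detect_buttons(), and tap() are placeholders
# for the CNN button detector and a device I/O layer (e.g. adb).

import hashlib
from collections import defaultdict


def capture_screenshot() -> bytes:
    """Placeholder: grab the current UI image from the device."""
    raise NotImplementedError


def detect_buttons(image: bytes) -> list[tuple[int, int, int, int]]:
    """Placeholder: run the lightweight CNN and return button boxes (x, y, w, h)."""
    raise NotImplementedError


def tap(x: int, y: int) -> None:
    """Placeholder: send a click action to the device."""
    raise NotImplementedError


def ui_signature(image: bytes) -> str:
    """Identify a UI state; a real system would use perceptual hashing or image similarity."""
    return hashlib.md5(image).hexdigest()


def explore(max_steps: int = 1000) -> None:
    graph = defaultdict(set)          # directed graph: UI state -> reachable UI states
    click_counts = defaultdict(int)   # (UI state, button index) -> times clicked

    image = capture_screenshot()
    current = ui_signature(image)

    for _ in range(max_steps):
        buttons = detect_buttons(image)
        if not buttons:
            break  # no actionable element found on this UI

        # Exploration strategy (assumed): prefer the least-clicked button in this UI.
        idx = min(range(len(buttons)), key=lambda i: click_counts[(current, i)])
        x, y, w, h = buttons[idx]
        tap(x + w // 2, y + h // 2)
        click_counts[(current, idx)] += 1

        # Observe the resulting UI and record the transition in the graph.
        image = capture_screenshot()
        nxt = ui_signature(image)
        graph[current].add(nxt)
        current = nxt
```

In practice the UI-signature step matters most: if two visually identical screens hash to different states, the graph fragments and coverage statistics become unreliable, which is why an image-similarity measure is a more plausible choice than an exact hash.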