Abstract:
UI animation is a widely adopted design element in the UI of Android apps. Numerous animation APIs are available for a variety of purposes, and developers can use them to realize UI animations without reinventing the wheel, thereby improving development efficiency. However, the animation APIs number in the thousands, and it is non-trivial for developers to master their use systematically. To address this problem, we construct a multi-modal real-time animation API recommendation model called U-A2A, which recommends applicable animation APIs to Android developers in real time throughout the animation-realization process based on multi-modal information, namely the UI animation task and the animation API context of the current program (i.e., the sequence of animation APIs already used). We consider the animation API context because realizing a UI animation requires multiple animation APIs, and related animation APIs roughly follow a sequence. U-A2A consists of two main parts: a feature extractor and a predictor. The feature extractor, built on a 3D CNN and a GRU, obtains the combined feature of the UI animation task and the animation API context. The predictor, a fully connected layer followed by a softmax layer, predicts and recommends the next applicable animation API from the feature extractor's output. Furthermore, we use the animation-API usage experience embodied in existing app products to adjust the parameters of U-A2A, thereby training the recommendation model. Experimental results show that when 1, 3, 5, and 10 recommended animation APIs are considered, U-A2A achieves 45.13%, 65.72%, 72.97%, and 81.85% accuracy, respectively, far exceeding the baseline LUPE.
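The predictor head described above (a fully connected layer followed by softmax over the animation-API vocabulary, ranking the top-k candidates) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions (`feature_dim`, `num_apis`) and the random weights are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

feature_dim = 128   # size of the combined feature from the extractor (assumed)
num_apis = 1000     # abstract notes animation APIs number in the thousands

# Fully connected layer parameters (randomly initialized stand-ins).
W = rng.standard_normal((feature_dim, num_apis)) * 0.01
b = np.zeros(num_apis)

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def recommend_apis(combined_feature, k=5):
    """Rank animation APIs by predicted probability; return the top-k indices."""
    probs = softmax(combined_feature @ W + b)
    return np.argsort(probs)[::-1][:k]

# Stand-in for the feature extractor's output on one UI animation task.
feature = rng.standard_normal(feature_dim)
top5 = recommend_apis(feature, k=5)
```

Under this setup, top-k accuracy (as reported in the abstract for k = 1, 3, 5, 10) would count a recommendation as correct when the ground-truth next API appears among the k returned indices.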
Published in: IEEE Transactions on Software Engineering (Volume 50, Issue 1, January 2024)