Abstract
Code generation from graphical user interface images is a promising area of research. Recent progress in machine learning has made it possible to transform a user interface screenshot into code, and the encoder–decoder framework is one way to tackle this code generation task. Our model implements the encoder–decoder framework with an attention mechanism that lets the decoder focus on a subset of salient image features when needed, which in turn helps the decoder generate token sequences with higher accuracy. Experimental results show that our model outperforms previously proposed models on the pix2code benchmark dataset.
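To make the attention-based decoding step concrete, the following is a minimal sketch in PyTorch. The module names, layer sizes, and the choice of additive (Bahdanau-style) attention are illustrative assumptions for exposition, not the paper's exact architecture.

# Illustrative sketch, not the authors' implementation: an LSTM decoder that
# attends over CNN feature maps of a GUI screenshot and emits DSL tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Scores each spatial image feature against the current decoder state."""
    def __init__(self, feat_dim, hidden_dim, attn_dim):
        super().__init__()
        self.w_feat = nn.Linear(feat_dim, attn_dim)
        self.w_hidden = nn.Linear(hidden_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1)

    def forward(self, features, hidden):
        # features: (batch, num_regions, feat_dim); hidden: (batch, hidden_dim)
        scores = self.v(torch.tanh(self.w_feat(features)
                                   + self.w_hidden(hidden).unsqueeze(1)))
        alpha = F.softmax(scores, dim=1)          # weights over image regions
        context = (alpha * features).sum(dim=1)   # weighted image summary
        return context, alpha.squeeze(-1)

class AttnCodeDecoder(nn.Module):
    """LSTM decoder that re-attends to the image at every token step."""
    def __init__(self, vocab_size, feat_dim=512, embed_dim=256,
                 hidden_dim=512, attn_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attention = AdditiveAttention(feat_dim, hidden_dim, attn_dim)
        self.lstm = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, features, tokens):
        # features: (batch, num_regions, feat_dim) from a CNN encoder
        # tokens:   (batch, seq_len) ground-truth token ids (teacher forcing)
        batch = features.size(0)
        h = features.new_zeros(batch, self.lstm.hidden_size)
        c = features.new_zeros(batch, self.lstm.hidden_size)
        logits = []
        for t in range(tokens.size(1)):
            context, _ = self.attention(features, h)
            x = torch.cat([self.embed(tokens[:, t]), context], dim=1)
            h, c = self.lstm(x, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)  # (batch, seq_len, vocab_size)

At inference time such a decoder would run autoregressively, feeding each predicted token back as the next input until an end-of-sequence token is produced; recomputing the attention weights at every step is what lets the model focus on different salient regions of the screenshot as it emits different parts of the code.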
Additional information
Communicated by B.-K. Bao.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Chen, WY., Podstreleny, P., Cheng, WH. et al. Code generation from a graphical user interface via attention-based encoder–decoder model. Multimedia Systems 28, 121–130 (2022). https://doi.org/10.1007/s00530-021-00804-7