
DesignEva: A Design-Supported Tool with Multi-faceted Perceptual Evaluation

  • Conference paper
  • Cross-Cultural Design. Interaction Design Across Cultures (HCII 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13311)

Abstract

Perceptual design evaluation helps designers recognize how others perceive their work and iterate on their designs. However, organizing user studies to gather human perceptual evaluations is time-consuming, so computational evaluation methods have been proposed to provide rapid and reliable feedback to designers. In recent years, the development of deep neural networks has enabled Artificial Intelligence (AI) to conduct perceptual quality evaluation much as human beings do. This article proposes to use AI to provide designers with real-time evaluations of their designs and to facilitate the iterative design process. To this end, we developed a prototype, DesignEva, a design-supported tool that offers multi-faceted perceptual evaluation of design works, covering aesthetics, visual importance, memorability, and sentiment. In addition, based on a designer's current work, DesignEva searches a material library for similar examples that can serve as references and inspiration. We conducted a user study to verify the effectiveness of the proposed prototype. The experimental results showed that DesignEva can help designers reflect on their designs from different perspectives in a timely manner.
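To make the kind of pipeline the abstract describes concrete, the sketch below scores a design image on the four perceptual facets and retrieves visually similar references from a material library via nearest-neighbour search over CNN features. This is a minimal sketch, not the authors' implementation: the MobileNetV2 backbone, the untrained facet heads, and the cosine-similarity retrieval are assumptions standing in for models that would normally be trained on perceptual datasets (e.g., AVA for aesthetics, LaMem for memorability).

```python
# Minimal sketch of a multi-faceted perceptual scorer plus reference retrieval.
# Assumptions: torchvision >= 0.13 for the pretrained MobileNetV2 backbone;
# the facet heads are hypothetical placeholders (untrained here), standing in
# for heads trained on perceptual datasets such as AVA or LaMem.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageNet-pretrained backbone used as a generic perceptual feature extractor.
backbone = models.mobilenet_v2(weights="IMAGENET1K_V1").features.eval()

# Hypothetical per-facet heads mapping pooled features to a score in [0, 1].
facets = {name: nn.Sequential(nn.Linear(1280, 1), nn.Sigmoid())
          for name in ["aesthetics", "importance", "memorability", "sentiment"]}

def embed(path: str) -> torch.Tensor:
    """Pooled backbone features for one design image, shape (1, 1280)."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(x).mean(dim=[2, 3])  # global average pooling

def evaluate(path: str) -> dict:
    """Return one score per perceptual facet for a design image."""
    f = embed(path)
    with torch.no_grad():
        return {name: head(f).item() for name, head in facets.items()}

def retrieve(query_path: str, library_paths: list[str], k: int = 3) -> list[str]:
    """Nearest neighbours in feature space: a simple stand-in for the
    material-library search that surfaces similar reference designs."""
    q = embed(query_path)
    sims = [(p, torch.cosine_similarity(q, embed(p)).item()) for p in library_paths]
    return [p for p, _ in sorted(sims, key=lambda t: -t[1])[:k]]
```

A real system along these lines would fine-tune one head (or one model) per facet on human ratings and precompute library embeddings so that retrieval and scoring run in real time while the designer works.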


Author information

Corresponding author

Correspondence to Xuanhui Liu.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Lou, Y., Gao, W., Chen, P., Liu, X., Yang, C., Sun, L. (2022). DesignEva: A Design-Supported Tool with Multi-faceted Perceptual Evaluation. In: Rau, P.-L.P. (ed.) Cross-Cultural Design. Interaction Design Across Cultures. HCII 2022. Lecture Notes in Computer Science, vol 13311. Springer, Cham. https://doi.org/10.1007/978-3-031-06038-0_38

  • DOI: https://doi.org/10.1007/978-3-031-06038-0_38

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-06037-3

  • Online ISBN: 978-3-031-06038-0

  • eBook Packages: Computer Science, Computer Science (R0)
