Research Article

Visualizing Topics and Opinions Helps Students Interpret Large Collections of Peer Feedback for Creative Projects

Published: 10 June 2023

Abstract

We deployed a feedback visualization tool to learn how students used the tool for interpreting feedback from peers and teaching assistants. The tool visualizes the topic and opinion structure in a collection of feedback and provides interaction for reviewing providers’ backgrounds. A total of 18 teams engaged with the tool to interpret feedback for course projects. We surveyed students (N = 69) to learn about their sensemaking goals, use of the tool to accomplish those goals, and perceptions of specific features. We interviewed students (N = 12) and TAs (N = 2) to assess the tool’s impact on students’ review processes and course instruction. Students discovered valuable feedback, assessed project quality, and justified design decisions to teammates by exploring specific icon patterns in the visualization. The interviews revealed that students mimicked strategies implemented in the tool when reviewing new feedback without the tool. Students found the benefits of the visualization outweighed the cost of labeling feedback.


Cited By

View all
  • (2024) Human-centred learning analytics and AI in education: A systematic literature review. Computers and Education: Artificial Intelligence 6 (2024), 100215. DOI: 10.1016/j.caeai.2024.100215. Online publication date: June 2024.


          Published In

ACM Transactions on Computer-Human Interaction, Volume 30, Issue 3, June 2023, 544 pages
ISSN: 1073-0516 | EISSN: 1557-7325
DOI: 10.1145/3604411

          Publisher

Association for Computing Machinery, New York, NY, United States

          Publication History

          Published: 10 June 2023
          Online AM: 20 December 2022
          Accepted: 29 October 2022
          Revised: 02 September 2022
          Received: 20 April 2022
          Published in TOCHI Volume 30, Issue 3


          Author Tags

          1. Feedback sensemaking
          2. feedback support
          3. formative feedback
          4. visualization design
          5. learning


          Article Metrics

• Downloads (last 12 months): 200
• Downloads (last 6 weeks): 21
Reflects downloads up to 17 January 2025.

