DOI: 10.1145/3679318.3685334
Research article
Open access

Doing Responsibilities with Automated Grading Systems: An Empirical Multi-Stakeholder Exploration

Published: 13 October 2024

Abstract

Automated Grading Systems (AGSs) are increasingly used in higher education assessment practices, raising issues about the responsibilities of the various stakeholders involved both in their design and use. This study explores how teachers, students, exam administrators, and developers of AGSs perceive and enact responsibilities around such systems. Drawing on focus group and interview data, we applied Fuchsberger and Frauenberger’s [27] notion of Doing Responsibilities as an analytical lens. This notion, framing responsibility as shared among human and nonhuman actors (e.g., technologies and data), has guided our analysis of how responsibilities are continuously configured and enacted in university assessment practices. The findings illustrate the stakeholders’ perceived and enacted responsibilities at different phases, contributing to the HCI literature on Responsible AI and AGSs by presenting a practical application of the ‘Doing Responsibilities’ framework before, during, and after design. We discuss how the findings enrich this notion, emphasising the importance of engaging with nonhumans, considering regulatory aspects of responsibility, and addressing relational tensions within automation.

References

[1]
Richard Adams, Sally Weale, and Caelainn Barr. 2020. A-level results: almost 40% of teacher assessments in England downgraded. https://www.theguardian.com/education/2020/aug/13/almost-40-of-english-students-have-a-level-results-downgraded. Accessed April 16, 2024.
[2]
Karen Barad. 2007. Meeting the Universe Halfway. Quantum Physics and the Entanglement of Matter and Meaning. Duke University Press, Durham, NC.
[3]
David Beer. 2017. The social power of algorithms. Information, Communication & Society 20, 1 (2017), 1–13. https://doi.org/10.1080/1369118X.2016.1216147
[4]
Sonja Bekker. 2021. Fundamental Rights in Digital Welfare States: The Case of SyRI in the Netherlands. In Netherlands Yearbook of International Law 2019: Yearbooks in International Law: History, Function and Future, Otto Spijkers, Wouter G. Werner, and Ramses A. Wessel (Eds.). T.M.C. Asser Press, The Hague, 289–307. https://doi.org/10.1007/978-94-6265-403-7_24
[5]
Garfield Benjamin. 2022. #FuckTheAlgorithm: algorithmic imaginaries and political resistance. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (Seoul, Republic of Korea) (FAccT ’22). Association for Computing Machinery, New York, NY, USA, 46–57. https://doi.org/10.1145/3531146.3533072
[6]
Abeba Birhane and Jelle van Dijk. 2020. Robot Rights?: Let’s Talk about Human Welfare Instead. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (New York, NY, USA) (AIES ’20). Association for Computing Machinery, New York, NY, USA, 207–213. https://doi.org/10.1145/3375627.3375855
[7]
Steven L. Blader and Tom R. Tyler. 2003. A Four-Component Model of Procedural Justice: Defining the Meaning of a “Fair” Process. Personality and Social Psychology Bulletin 29, 6 (2003), 747–758. https://doi.org/10.1177/0146167203029006007
[8]
Virginia Braun and Victoria Clarke. 2022. Thematic Analysis: A Practical Guide. SAGE Publications, Los Angeles, USA.
[9]
Anna Brown, Alexandra Chouldechova, Emily Putnam-Hornstein, Andrew Tobin, and Rhema Vaithianathan. 2019. Toward Algorithmic Accountability in Public Services: A Qualitative Study of Affected Community Perspectives on Algorithmic Decision-making in Child Welfare Services. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3290605.3300271
[10]
Mark Coeckelbergh. 2010. Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology 12, 3 (2010), 209–221. https://doi.org/10.1007/s10676-010-9235-5
[11]
Mark Coeckelbergh. 2019. Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Science and Engineering Ethics 26, 4 (2019), 2051–2068. https://doi.org/10.1007/s11948-019-00146-8
[12]
Patricia Hill Collins. 1990. Black Feminist Thought in the Matrix of Domination. In Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment. Unwin Hyman, Boston, 221–238.
[13]
Liane Colonna. 2023. Teachers in the loop? An analysis of automatic assessment systems under Article 22 GDPR. International Data Privacy Law 14, 1 (2023), 3–18. https://doi.org/10.1093/idpl/ipad024
[14]
Rob Comber and Chiara Rossitto. 2023. Regulating Responsibility: Environmental Sustainability, Law, and the Platformisation of Waste Management. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 237, 19 pages. https://doi.org/10.1145/3544548.3581493
[15]
European Commission. 2024. Article 14: Human Oversight. Artificial Intelligence Act (Regulation (EU) 2024/1689). https://artificialintelligenceact.eu/article/14/. Accessed July 31, 2024.
[16]
A Feder Cooper, Emanuel Moss, Benjamin Laufer, and Helen Nissenbaum. 2022. Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (Seoul, Republic of Korea) (FAccT ’22). Association for Computing Machinery, New York, NY, USA, 864–876. https://doi.org/10.1145/3531146.3533150
[17]
Michael Davis. 2012. “Ain’t No One Here But Us Social Forces”: Constructing the Professional Responsibility of Engineers. Science and Engineering Ethics 18, 1 (2012), 13–34. https://doi.org/10.1007/s11948-010-9225-3
[18]
Maria De-Arteaga, Riccardo Fogliato, and Alexandra Chouldechova. 2020. A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376638
[19]
Filippo Santoni de Sio and Giulio Mecacci. 2021. Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them. Philosophy & Technology 34 (2021), 1057–1084. https://doi.org/10.1007/s13347-021-00450-x
[20]
Katja de Vries. 2022. Algodicy: justifying algorithmic suffering. Can counterfactual explanations be used for individual empowerment of those subjected to algorithmic decision-making (ADM)? In Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence, Liane Colonna and Stanley Greenstein (Eds.). Stiftelsen Juridisk Fakultetslitteratur (SJF) & The Swedish Law and Informatics Research Institute (IRI), Stockholm, Sweden, 133–166.
[21]
Virginia Dignum. 2019. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer, Cham, Switzerland. https://doi.org/10.1007/978-3-030-30371-6
[22]
Matthew J. Drake, Paul M. Griffin, Robert Kirkman, and Julie L. Swann. 2005. Engineering Ethical Curricula: Assessment and Comparison of Two Approaches. Journal of Engineering Education 94, 2 (April 2005), 223–231. https://doi.org/10.1002/j.2168-9830.2005.tb00843.x
[23]
Paul Ernest. 2018. The ethical obligations of the mathematics teacher. Journal of Pedagogical Research 3, 1 (2018), 80–91. https://doi.org/10.33902/JPR.2019.6
[24]
Motahhare Eslami, Kristen Vaccaro, Min Kyung Lee, Amit Elazari Bar On, Eric Gilbert, and Karrie Karahalios. 2019. User Attitudes Towards Algorithmic Opacity and Transparency in Online Reviewing Platforms. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3290605.3300724
[25]
Virginia Eubanks. 2018. Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press, New York, NY, USA.
[26]
Clàudia Figueras, Harko Verhagen, and Teresa Cerratto Pargman. 2022. Exploring tensions in Responsible AI in practice: An interview study on AI practices in and for Swedish public organizations. Scandinavian Journal of Information Systems 34, 2 (2022), Article 6. https://aisel.aisnet.org/sjis/vol34/iss2/6
[27]
Verena Fuchsberger and Christopher Frauenberger. 2023. Doing responsibilities in entangled worlds. Human–Computer Interaction (2023), 1–24. https://doi.org/10.1080/07370024.2023.2269934
[28]
Avdelningen för kompetensutveckling och internationella relationer och Kommunikationavdelningen. 2019. Svensk/engelsk ordlista Swedish/English Glossary. Government Report. Domstolsverket. Dnr 938-2010.
[29]
Ajit G Pillai, A Baki Kocaballi, Tuck Wah Leong, Rafael A Calvo, Nassim Parvin, Katie Shilton, Jenny Waycott, Casey Fiesler, John C Havens, and Naseem Ahmadpour. 2021. Co-designing Resources for Ethics Education in HCI. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI EA ’21). Association for Computing Machinery, New York, NY, USA, Article 109, 5 pages. https://doi.org/10.1145/3411763.3441349
[30]
Ben Green. 2022. The flaws of policies requiring human oversight of government algorithms. Computer Law & Security Review 45 (2022), 105681. https://doi.org/10.1016/j.clsr.2022.105681
[31]
Georgiana Haldeman, Andrew Tjang, Monica Babeş-Vroman, Stephen Bartos, Jay Shah, Danielle Yucht, and Thu D. Nguyen. 2018. Providing Meaningful Feedback for Autograding of Programming Assignments. In Proceedings of the 49th ACM Technical Symposium on Computer Science Education (Baltimore, Maryland, USA) (SIGCSE ’18). Association for Computing Machinery, New York, NY, 278–283. https://doi.org/10.1145/3159450.3159502
[32]
Donna J Haraway. 2016. Staying with the Trouble: Making Kin in the Chthulucene. Duke University Press, Durham, NC. https://doi.org/10.1215/9780822373780
[33]
Jon Henley and Robert Booth. 2020. Welfare surveillance system violates human rights, Dutch court rules. The Guardian. https://www.theguardian.com/technology/2020/feb/05/welfare-surveillance-system-violates-human-rights-dutch-court-rules
[34]
Don Ihde. 1990. Technology and the Lifeworld: From Garden to Earth. Indiana University Press, Bloomington, Indiana, USA.
[35]
Petra Jääskeläinen, André Holzapfel, and Cecilia Åsberg. 2022. Exploring More-than-Human Caring in Creative-Ai Interactions. In Nordic Human-Computer Interaction Conference (Aarhus, Denmark) (NordiCHI ’22). Association for Computing Machinery, New York, NY, USA, Article 79, 7 pages. https://doi.org/10.1145/3546155.3547278
[36]
Steven J. Jackson, Tarleton Gillespie, and Sandy Payette. 2014. The Policy Knot: Re-integrating Policy, Practice and Design in CSCW Studies of Social Computing. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing (Baltimore, Maryland, USA) (CSCW ’14). Association for Computing Machinery, New York, NY, USA, 588–602. https://doi.org/10.1145/2531602.2531674
[37]
Dean Kirby. 2020. A-level students launch legal challenge over results chaos as pressure grows on Gavin Williamson to resign. https://inews.co.uk/news/education/a-level-results-gavin-williamson-legal-challenge-resign-calls-580924. Accessed April 24, 2024.
[38]
Rob Kitchin. 2017. Thinking critically about and researching algorithms. Information, Communication & Society 20, 1 (2017), 14–29. https://doi.org/10.1080/1369118X.2016.1154087
[39]
Daan Kolkman. 2020. “F**k the algorithm”?: What the world can learn from the UK’s A-level grading fiasco. https://blogs.lse.ac.uk/impactofsocialsciences/2020/08/26/fk-the-algorithm-what-the-world-can-learn-from-the-uks-a-level-grading-fiasco/. Accessed April 16, 2024.
[40]
Bruno Latour. 2005. Reassembling the Social. An Introduction to Actor-Network-Theory. Oxford University Press, London, England.
[41]
Min Kyung Lee, Anuraag Jain, Hea Jin Cha, Shashank Ojha, and Daniel Kusbit. 2019. Procedural Justice in Algorithmic Fairness: Leveraging Transparency and Outcome Control for Fair Algorithmic Mediation. Proceedings of the ACM on Human-Computer Interaction 3, CSCW, Article 182 (Nov. 2019), 26 pages. https://doi.org/10.1145/3359284
[42]
Gabriel Lima, Nina Grgić-Hlača, and Meeyoung Cha. 2021. Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 235, 17 pages. https://doi.org/10.1145/3411764.3445260
[43]
Gary Marchionini. 2019. Search, sense making and learning: closing gaps. Information and Learning Sciences 120, 1/2 (2019), 74–86. https://doi.org/10.1108/ILS-06-2018-0049
[44]
Annette Markham. 2006. Ethic as Method, Method as Ethic: A Case for Reflexivity in Qualitative ICT Research. Journal of Information Ethics 15, 2 (Nov. 2006), 37–54. https://doi.org/10.3172/jie.15.2.37
[45]
Andreas Matthias. 2004. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology 6, 3 (Sept. 2004), 175–183.
[46]
Marcus Messer, Neil C. C. Brown, Michael Kölling, and Miaojing Shi. 2024. Automated Grading and Feedback Tools for Programming Education: A Systematic Review. ACM Trans. Comput. Educ. 24, 1, Article 10 (Feb. 2024), 43 pages. https://doi.org/10.1145/3636515
[47]
Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. 2016. The ethics of algorithms: Mapping the debate. Big Data & Society 3, 2 (2016), 2053951716679679. https://doi.org/10.1177/2053951716679679
[48]
Iohanna Nicenboim, Elisa Giaccardi, Marie Louise Juul Søndergaard, Anuradha Venugopal Reddy, Yolande Strengers, James Pierce, and Johan Redström. 2020. More-Than-Human Design and AI: In Conversation with Agents. In Companion Publication of the 2020 ACM Designing Interactive Systems Conference (Eindhoven, Netherlands) (DIS’ 20 Companion). Association for Computing Machinery, New York, NY, USA, 397–400. https://doi.org/10.1145/3393914.3395912
[49]
Laura Louise Nicklin, Luke Wilsdon, Darren Chadwick, Laura Rhoden, David Ormerod, Deborah Allen, Gemma Witton, and Joanne Lloyd. 2022. Accelerated HE digitalisation: Exploring staff and student experiences of the COVID-19 rapid online-learning transfer. Education and Information Technologies 27, 6 (2022), 7653–7678. https://doi.org/10.1007/s10639-022-10899-8
[50]
Association of Nordic Engineers. 2020. Addressing ethical dilemmas in AI: Listening to engineers. Technical Report. IEEE Standards Association. https://standards.ieee.org/initiatives/autonomous-intelligence-systems/ethical-dilemmas-ai-report/
[51]
Will Orr and Jenny L. Davis. 2020. Attributions of ethical responsibility by Artificial Intelligence practitioners. Information, Communication & Society 23, 5 (2020), 719–735. https://doi.org/10.1080/1369118X.2020.1713842
[52]
Victor Papanek. 2019. Design for the real world (3rd ed.). Thames & Hudson, London, England.
[53]
Kristina Popova, Clàudia Figueras, Kristina Höök, and Airi Lampinen. 2024. Who Should Act? Distancing and Vulnerability in Technology Practitioners’ Accounts of Ethical Responsibility. Proceedings of the ACM on Human-Computer Interaction 8, CSCW1, Article 157 (Apr. 2024), 27 pages. https://doi.org/10.1145/3637434
[54]
Inioluwa Deborah Raji, Morgan Klaus Scheuerman, and Razvan Amironesei. 2021. You Can’t Sit With Us: Exclusionary Pedagogy in AI Ethics Education. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Virtual Event, Canada) (FAccT ’21). Association for Computing Machinery, New York, NY, USA, 515–525. https://doi.org/10.1145/3442188.3445914
[55]
Inioluwa Deborah Raji, Andrew Smart, Rebecca N. White, Margaret Mitchell, Timnit Gebru, Ben Hutchinson, Jamila Smith-Loud, Daniel Theron, and Parker Barnes. 2020. Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (Barcelona, Spain) (FAT* ’20). Association for Computing Machinery, New York, NY, USA, 33–44. https://doi.org/10.1145/3351095.3372873
[56]
Sabine Roeser. 2010. Emotional Engineers: Toward Morally Responsible Design. Science and Engineering Ethics 18, 1 (Oct. 2010), 103–115. https://doi.org/10.1007/s11948-010-9236-0
[57]
Helena Roos and Anette Bagger. 2024. Ethical dilemmas and professional judgment as a pathway to inclusion and equity in mathematics teaching. ZDM Mathematics Education 56 (2024), 435–446. https://doi.org/10.1007/s11858-023-01540-0
[58]
Samar Sabie and Tapan Parikh. 2019. Cultivating Care through Ambiguity: Lessons from a Service Learning Course. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3290605.3300507
[59]
Johannes Schneider, Robin Richner, and Micha Riser. 2023. Towards Trustworthy AutoGrading of Short, Multi-lingual, Multi-type Answers. International Journal of Artificial Intelligence in Education 33, 1 (2023), 88–118. https://doi.org/10.1007/s40593-022-00289-z
[60]
Neil Selwyn. 2022. Less work for teacher? The ironies of automated decision-making in schools. In Everyday automation: Experiencing and anticipating emerging technologies (1 ed.), Sarah Pink, Martin Berg, Deborah Lupton, and Minna Ruckenstein (Eds.). Routledge, Abingdon, Oxon, New York, NY, 73–86. https://www.taylorfrancis.com/books/9781003170884
[61]
Neil Selwyn, Thomas Hillman, Annika Bergviken-Rensfeldt, et al. 2023. Making Sense of the Digital Automation of Education. Postdigital Science and Education 5 (2023), 1–14. https://doi.org/10.1007/s42438-022-00362-9
[62]
Irina Shklovski and Carolina Némethy. 2022. Nodes of certainty and spaces for doubt in AI ethics for engineers. Information, Communication & Society 26, 1 (2022), 37–53. https://doi.org/10.1080/1369118X.2021.2014547
[63]
Sarah Sterz, Kevin Baum, Sebastian Biewer, Holger Hermanns, Anne Lauber-Rönsberg, Philip Meinel, and Markus Langer. 2024. On the Quest for Effectiveness in Human Oversight: Interdisciplinary Perspectives. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (Rio de Janeiro, Brazil) (FAccT ’24). Association for Computing Machinery, New York, NY, USA, 2495–2507. https://doi.org/10.1145/3630106.3659051
[64]
S. Taffel. 2023. AirPods and the earth: Digital technologies, planned obsolescence and the Capitalocene. Environment and Planning E: Nature and Space 6, 1 (2023), 433–454. https://doi.org/10.1177/25148486221076136
[65]
Ibo van de Poel. 2015. The Problem of Many Hands. In Moral Responsibility and the Problem of Many Hands, Ibo van de Poel, Lamber Royakkers, and Sjoerd D Zwart (Eds.). Routledge, New York, NY, USA, 50–92.
[66]
Michael Veale, Max Van Kleek, and Reuben Binns. 2018. Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3173574.3174014
[67]
Peter-Paul Verbeek. 2011. Moralizing Technology: Understanding and Designing the Morality of Things. University of Chicago Press, Chicago, USA.
[68]
Ruotong Wang, F. Maxwell Harper, and Haiyi Zhu. 2020. Factors Influencing Perceived Fairness in Algorithmic Decision-Making: Algorithm Outcomes, Development Procedures, and Individual Differences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3313831.3376813
[69]
David Gray Widder and Dawn Nafus. 2023. Dislocated accountabilities in the “AI supply chain”: Modularity and developers’ notions of responsibility. Big Data & Society 10, 1 (2023), 20539517231177620. https://doi.org/10.1177/20539517231177620


Published In

NordiCHI '24: Proceedings of the 13th Nordic Conference on Human-Computer Interaction
October 2024
1236 pages
ISBN:9798400709661
DOI:10.1145/3679318
This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Autograding
  2. Automated Grading Systems
  3. Design
  4. Ethics
  5. Multistakeholder
  6. Responsibility

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Marianne och Marcus Wallenbergs Stiftelse

Conference

NordiCHI 2024

Acceptance Rates

Overall Acceptance Rate 379 of 1,572 submissions, 24%

