DOI: 10.1145/3279720.3279741 · Koli Calling Conference Proceedings · Short Paper

Analysis of Students' Peer Reviews to Crowdsourced Programming Assignments

Published: 22 November 2018

ABSTRACT

We have used a tool called CrowdSorcerer that allows students to create programming assignments. The students are given a topic by a teacher, after which they design a programming assignment: an assignment description, a code template, a model solution, and a set of input-output tests. The created assignments are then peer reviewed by other students on the course. We study students' peer reviews of these student-generated assignments, focusing on the differences between novice and experienced programmers. We analyze whether the exercises created by experienced programmers are rated higher in quality than those created by novices. Additionally, we investigate the differences between novices and experienced programmers as peer reviewers: can novices review assignments as well as experienced programmers can?
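The workflow described above bundles four student-authored artifacts into one assignment: a description, a code template, a model solution, and input-output tests. A minimal sketch of such a bundle, with a checker that runs the model solution against its own tests, might look as follows. This is a hypothetical illustration only; the class names, fields, and the `solve` convention are assumptions, not CrowdSorcerer's actual data model or API:

```python
# Hypothetical sketch (NOT CrowdSorcerer's actual data model): one
# student-generated assignment with its input-output tests, plus a
# checker that verifies the model solution passes every test.
from dataclasses import dataclass, field

@dataclass
class Assignment:
    description: str
    code_template: str
    model_solution: str  # source code defining a function named `solve` (assumed convention)
    io_tests: list = field(default_factory=list)  # (input, expected_output) pairs

def passes_own_tests(assignment: Assignment) -> bool:
    """Return True if the model solution satisfies every input-output test."""
    namespace = {}
    exec(assignment.model_solution, namespace)  # defines `solve` in namespace
    solve = namespace["solve"]
    return all(solve(inp) == expected for inp, expected in assignment.io_tests)

example = Assignment(
    description="Return the sum of two integers given as a pair.",
    code_template="def solve(pair):\n    # your code here\n    pass",
    model_solution="def solve(pair):\n    return pair[0] + pair[1]",
    io_tests=[((1, 2), 3), ((0, 0), 0), ((-5, 5), 0)],
)

print(passes_own_tests(example))  # True
```

Running the author's own tests against the model solution is one sanity check a peer reviewer (or the system) could apply before rating an assignment's quality.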


• Published in

  Koli Calling '18: Proceedings of the 18th Koli Calling International Conference on Computing Education Research
  November 2018, 207 pages
  ISBN: 9781450365352
  DOI: 10.1145/3279720
  Conference Chairs: Mike Joy, Petri Ihantola

  Copyright © 2018 ACM


Publisher: Association for Computing Machinery, New York, NY, United States



        Qualifiers

        • short-paper
        • Research
        • Refereed limited

        Acceptance Rates

Overall acceptance rate: 80 of 182 submissions, 44%
