
Deterring Cheating in Online Environments

Published: 24 September 2015

Abstract

Many Internet services depend on the integrity of their users, even when these users have strong incentives to behave dishonestly. Drawing on experiments in two different online contexts, this study measures the prevalence of cheating and evaluates two different methods for deterring it. Our first experiment investigates cheating behavior in a pair of online exams spanning 632 students in India. Our second experiment examines dishonest behavior on Mechanical Turk through an online task with 2,378 total participants. Using direct measurements that are not dependent on self-reports, we detect significant rates of cheating in both environments. We confirm that honor codes--despite frequent use in massive open online courses (MOOCs)--lead to only a small and insignificant reduction in online cheating behaviors. To overcome these challenges, we propose a new intervention: a stern warning that spells out the potential consequences of cheating. We show that the warning leads to a significant (about twofold) reduction in cheating, consistent across experiments. We also characterize the demographic correlates of cheating on Mechanical Turk. Our findings advance the understanding of cheating in online environments, and suggest that replacing traditional honor codes with warnings could be a simple and effective way to deter cheating in online courses and online labor marketplaces.



        Reviews

        Stewart Mark Godwin

        All educational institutions face plagiarism and academic dishonesty; these problems have been partially addressed by requiring students to agree to an honor code. The same institutions, however, face different challenges in online learning environments. In this paper, the authors evaluate two methods for deterring cheating, with a focus on online settings. Assessments conducted in isolation, as is the case with online exams, give participants easy opportunities to seek assistance undetected. The results suggest a baseline rate of cheating in online exams of between 26 and 34 percent. The reasons for cheating, drawn from numerous prior studies, form an interesting section of the paper; this related work guides the authors toward three key research questions: Can cheating be detected and measured? Can the rate of cheating be reduced? How does cheating correlate with demographic variables? Data collection involved two exams with two different groups of participants, the first in India and the second in America. The experimental design divided each group into three conditions: control, honor code, and stern warning. The results show that the stern warning significantly reduced the rate of cheating; the analysis indicates a roughly 50 percent reduction, although the discussion highlights other factors that might have influenced this outcome. This is a fascinating topic and should be mandatory reading for all educational administrators and online managers.

        Online Computing Reviews Service


        • Published in

          ACM Transactions on Computer-Human Interaction, Volume 22, Issue 6 (December 2015), 232 pages
          ISSN: 1073-0516
          EISSN: 1557-7325
          DOI: 10.1145/2830543

          Copyright © 2015 ACM

          Publisher

          Association for Computing Machinery, New York, NY, United States

          Publication History

          • Received: 1 May 2015
          • Revised: 1 July 2015
          • Accepted: 1 July 2015
          • Published: 24 September 2015

          Published in TOCHI Volume 22, Issue 6


          Qualifiers

          • research-article
          • Research
          • Refereed
