research-article
DOI: 10.1145/3287324.3287507

Comparing Jailed Sandboxes vs Containers Within an Autograding System

Published: 22 February 2019

ABSTRACT

With the continued growth of enrollment in computer science courses, autograding systems have become increasingly necessary. These systems have historically graded assignments either within a jailed sandbox environment or within a virtual machine (VM). For a VM, each submission is given its own instantiation of a guest operating system and virtual hardware that runs atop the host system, preventing anything that runs within the VM from communicating with any other VM or with the host. However, these VMs are costly in terms of system resources, making them less than ideal for running student submissions under reasonable, limited resources. Jailed sandboxes, on the other hand, run on the host itself, consuming minimal resources, and employ a security model that restricts the process to specified directories on the system. However, because submissions run directly on the host machine, this approach suffers as new courses adopt autograding and bring their own sets of potentially conflicting requirements for programming languages and system packages. Over the past several years, containers have seen growing use within the software engineering industry as well as within autograding systems. Containers provide isolation benefits similar to a VM while maintaining a resource cost similar to a jailed sandbox environment. We present the implementation of both a jailed sandbox and a container-based autograder, compare the running time and memory usage of the two implementations, and discuss the overall resource usage.
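To make the container approach concrete: a container-based autograder typically wraps each submission in an ephemeral container with no network access, a read-only view of the student's files, and hard resource caps. The sketch below is a minimal illustration of that pattern, not the paper's actual implementation; the image name, mount paths, limits, and `/grade.sh` entry point are hypothetical placeholders.

```python
def build_grading_command(submission_dir, image="autograder:latest",
                          memory="256m", cpus="0.5", timeout=30):
    """Construct a `docker run` invocation that isolates one student
    submission: ephemeral container, no network, read-only bind mount
    of the submission, and hard memory/CPU caps."""
    return [
        "docker", "run",
        "--rm",                                    # delete container when grading ends
        "--network", "none",                       # untrusted code gets no network
        "--memory", memory,                        # hard memory cap
        "--cpus", cpus,                            # CPU quota
        "-v", f"{submission_dir}:/submission:ro",  # read-only mount of student files
        image,
        "timeout", str(timeout), "/grade.sh",      # wall-clock limit on the grader
    ]

cmd = build_grading_command("/var/grading/hw1/student42")
```

The resulting list can be handed to `subprocess.run(cmd, capture_output=True)`; because the container is removed on exit, each submission starts from a clean image, which is what gives containers the per-submission isolation of a VM at a fraction of the resource cost.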


Published in

SIGCSE '19: Proceedings of the 50th ACM Technical Symposium on Computer Science Education
February 2019, 1364 pages
ISBN: 9781450358903
DOI: 10.1145/3287324

Copyright © 2019 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

SIGCSE '19 Paper Acceptance Rate: 169 of 526 submissions, 32%
Overall Acceptance Rate: 1,595 of 4,542 submissions, 35%
