Research article
DOI: 10.1145/2401796.2401807

FLOP, a free laboratory of programming

Published: 15 November 2012

ABSTRACT

The Test-Driven Development (TDD) methodology [4, 23, 8] is currently a widespread approach to teaching programming and software engineering. Online judges are widely used in everyday teaching, and they are especially well known in the context of programming contests. Good tools and collections of programming problems are available both for exams and for contests.

We have developed a simple, lightweight, and practical open laboratory. The term open is used here in two senses: the laboratory is free for students to use, and it is free to download and distribute under the GPL license. It hosts programming problems, allows instructors to easily add new ones, and automatically assesses the solutions that students submit. In addition to the system, we have developed a collection of programming problems for CS1/2, designed from a pedagogical point of view and covering several levels of difficulty.
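Although the abstract contains no code, a minimal sketch can illustrate the kind of test-based automatic assessment such a laboratory performs. The Python fragment below is an illustration only, not FLOP's actual implementation: a hypothetical judge_submission function runs a submitted program against input/expected-output pairs and reports a verdict; the test cases and the file name sum.py are likewise invented for the example.

    import subprocess

    # Hypothetical test cases for a toy CS1 problem:
    # read two integers from stdin and print their sum.
    TEST_CASES = [
        ("2 3\n", "5\n"),
        ("-1 1\n", "0\n"),
        ("0 0\n", "0\n"),
    ]

    def judge_submission(command, test_cases, timeout=2.0):
        """Run a submitted program on each test case and return a verdict."""
        for stdin_data, expected in test_cases:
            try:
                result = subprocess.run(
                    command,                 # e.g. ["python3", "sum.py"]
                    input=stdin_data,
                    capture_output=True,
                    text=True,
                    timeout=timeout,         # guard against infinite loops
                )
            except subprocess.TimeoutExpired:
                return "Time Limit Exceeded"
            if result.returncode != 0:
                return "Runtime Error"
            if result.stdout != expected:
                return "Wrong Answer"
        return "Accepted"

    if __name__ == "__main__":
        print(judge_submission(["python3", "sum.py"], TEST_CASES))

Production judges add sandboxing, memory limits, and output normalization (see Forišek [9] on contest-system security), but the accept/reject loop above captures the core mechanism.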

References

  1. M. Ala-Mutka. A survey of automated assessment approaches for programming assignments. Computer Science Education, 15(2):83--102, June 2005.
  2. L. W. Anderson and D. A. Krathwohl. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. Addison-Wesley, 2001.
  3. B. Bloom, E. Furst, W. Hill, and D. R. Krathwohl. Taxonomy of Educational Objectives: Handbook I, The Cognitive Domain. Addison-Wesley, 1956.
  4. K. Beck. Aim, fire (test-first coding). IEEE Software, 18(5):87--89, September 2001.
  5. C. Daly and J. Waldron, editors. Introductory Programming, Problem Solving and Computer Assisted Assessment, 2002.
  6. C. Douce, D. Livingstone, and J. Orwell. Automatic test-based assessment of programming: A review. Journal of Educational Resources in Computing, 5(3), 2005.
  7. S. H. Edwards. Rethinking computer science education from a test-first perspective. In Companion of the 18th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), pages 148--155. ACM, 2003.
  8. Extreme Programming: A gentle introduction. http://www.extremeprogramming.org/, 2009.
  9. M. Forišek. Security of programming contest systems. In Informatics in Secondary Schools: Evolution and Perspectives, pages 553--563, 2006.
  10. C. Gregorio, L. F. Llana, P. Palao, C. Pareja, R. Martínez, and J. Á. Velázquez. Exercita: Automatic web publishing of programming exercises. SIGCSE Bulletin, 33(3):161--164, September 2001.
  11. C. Gregorio-Rodríguez, L. F. Llana-Díaz, P. Palao-Gostanza, C. Pareja-Flores, R. Martínez-Unanue, and J. Á. Velázquez-Iturbide. Exercita: A system for archiving and publishing programming exercises. Computers and Education, pages 187--197, 2001.
  12. S. Gupta and S. K. Dubey. Automatic assessment of programming assignment. Computer Science & Engineering: An International Journal (CSEIJ), 2(1), February 2012.
  13. C. Higgins, T. Hegazy, P. Symeonidis, and A. Tsintsifas. The CourseMarker CBA system: Improvements over Ceilidh. Education and Information Technologies, 8(3):287--304, September 2003.
  14. F. Hunt, J. Moch, C. Nevison, S. Rodger, and J. Zelenski. How to develop and grade an exam for 20,000 students (or maybe just 200 or 20). In SIGCSE Technical Symposium on Computer Science Education, pages 285--286, 2002.
  15. P. Ihantola, T. Ahoniemi, V. Karavirta, and O. Seppälä. Review of recent systems for automatic assessment of programming assignments. In Proceedings of the 10th Koli Calling International Conference on Computing Education Research, pages 86--93. ACM, 2010.
  16. M. Joy, N. Griffiths, and R. Boyatt. The BOSS online submission and assessment system. Journal of Educational Resources in Computing, 5(3), September 2005.
  17. P. Kinnunen and B. Simon. My program is ok -- am I? Computing freshmen's experiences of doing programming assignments. Computer Science Education, 22(1):1--28, 2012.
  18. A. Kurnia, A. Lim, and B. Cheang. Online judge. Computers & Education, 36:299--315, 2001.
  19. J. P. Leal and F. Silva. Mooshak: A web-based multi-site programming contest system. Software -- Practice and Experience, 33:567--581, March 2003.
  20. R. Lister and J. Leaney. First year programming: Let all the flowers bloom. In T. Greening and R. Lister, editors, Proceedings of the 5th Australasian Conference on Computing Education, volume 20, pages 221--230, 2003.
  21. C. Pareja-Flores and J. Á. Velázquez-Iturbide. Testing-based automatic grading: A proposal from Bloom's taxonomy. In Proceedings of the 8th IEEE International Conference on Advanced Learning Technologies (ICALT), 2008.
  22. M. Rubio, B. Sáenz, N. Esteban, A. Pleite, and C. Pareja. Practical aspects of the use of automatic judges in teaching programming (in Spanish: Aspectos prácticos del uso de jueces automáticos en la enseñanza de la programación). In Actas del V Seminario de Investigación en Tecnologías de la Información Aplicadas a la Educación, pages 179--197. Dykinson, 2011.
  23. T. Shepard, M. Lamb, and D. Kelly. More testing should be taught. Communications of the ACM, 44(6), June 2001.

Published in

Koli Calling '12: Proceedings of the 12th Koli Calling International Conference on Computing Education Research
November 2012, 187 pages
ISBN: 9781450317955
DOI: 10.1145/2401796
Copyright © 2012 ACM

Publisher: Association for Computing Machinery, New York, NY, United States

Overall acceptance rate: 80 of 182 submissions, 44%
