Abstract
Assessing students’ programming exercises has become a challenging activity that most educators face nowadays. The activity essentially involves constructing questions and solution models for programming exercises, as well as evaluating students’ solutions. Existing studies, particularly in the area of programming assessment, offer limited discussion of current practices in conducting this activity. This paper reports a preliminary study conducted among educators who teach programming courses at higher learning institutions in the northern region of Malaysia. The study aims to gauge current practices in the construction and evaluation of programming exercise items among educators at the associated institutions. A questionnaire was used to gather the relevant data from the selected subjects. The results reveal that both negative and positive testing criteria are essential in constructing and evaluating programming exercises. The findings of this study will serve as input for identifying the criteria that should be included in developing a test-set schema for automatic programming assessment.
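To make the distinction concrete, the sketch below illustrates (it is not from the paper) how a test set mixing positive criteria (typical valid inputs) and negative criteria (boundary or extreme inputs) could automatically score a student's submission. All names here (`student_absolute`, `assess`, `TEST_SET`) are hypothetical, invented for illustration.

```python
def student_absolute(x):
    """Stand-in for a submitted solution to an 'absolute value' exercise."""
    return x if x >= 0 else -x

# Each test case: (input, expected output, category).
# Positive cases exercise typical valid behaviour; negative cases probe
# boundaries and extremes where weak solutions often fail.
TEST_SET = [
    (5, 5, "positive"),             # typical positive input
    (-3, 3, "positive"),            # typical negative input
    (0, 0, "negative"),             # boundary value
    (-(2**31), 2**31, "negative"),  # extreme boundary value
]

def assess(solution, test_set):
    """Run the test set and return the proportion of cases passed."""
    passed = sum(1 for arg, expected, _ in test_set
                 if solution(arg) == expected)
    return passed / len(test_set)

score = assess(student_absolute, TEST_SET)  # → 1.0 for this correct solution
```

A grading schema built this way lets the educator weight the two categories separately, e.g. awarding partial credit for passing only the positive cases.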
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Romli, R., Sulaiman, S., Zuhairi Zamli, K. (2011). Current Practices of Programming Assessment at Higher Learning Institutions. In: Mohamad Zain, J., Wan Mohd, W.M.b., El-Qawasmeh, E. (eds) Software Engineering and Computer Systems. ICSECS 2011. Communications in Computer and Information Science, vol 179. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-22170-5_41
DOI: https://doi.org/10.1007/978-3-642-22170-5_41
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-22169-9
Online ISBN: 978-3-642-22170-5
eBook Packages: Computer Science (R0)