
Improvised progressive model based on automatic calibration of difficulty level: A practical solution of competitive-based examination

  • Published in: Education and Information Technologies

Abstract

Online learning has grown with advances in technology and the flexibility it offers, and online examinations are now widely used to measure students' knowledge and skills. Traditionally prepared question papers, however, suffer from inconsistent difficulty levels, arbitrary question allocation, and unreliable grading. The proposed model calibrates question-paper difficulty against student performance to assess understanding more accurately. The proposed student assessment paradigm proceeds in three stages: determining question difficulty, generating the examination, and evaluating the student. Using the previously established relationship between a question's difficulty and the proportion of correct responses it receives, each question is scored and then assigned to a difficulty category. The model improves testing by adapting to the student's ability in real time, and it grades all students uniformly and fairly against pre-determined questions and criteria. The methodology can also reduce the time spent creating and administering examinations, freeing teachers and administrators to focus on other assessment tasks. Because it draws on richer evidence, such learner-centered assessment can help employers evaluate candidates more accurately and meaningfully. It may also boost academic productivity by letting assessors quickly produce high-quality papers, freeing time for deeper investigation and experimentation; this may in turn accelerate scientific progress. At the same time, automatic paper generation raises ethical questions about research validity and reliability.
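The calibration step described in the abstract, assigning each question a difficulty from the proportion of correct responses and bucketing it into a category, can be sketched as follows. This is a minimal illustration only: the function names, thresholds, and the three-category scheme are assumptions for exposition, not the paper's actual method.

```python
def difficulty_from_correct_rate(correct, attempts):
    """Estimate difficulty as the fraction of incorrect responses.

    With no response history we assume a medium difficulty of 0.5
    (an illustrative default, not taken from the paper).
    """
    if attempts == 0:
        return 0.5
    return 1.0 - correct / attempts


def categorize(difficulty):
    """Bucket a difficulty score in [0, 1] into easy / medium / hard.

    The cut-offs 0.33 and 0.66 are hypothetical; the paper's own
    categories would be calibrated from student performance data.
    """
    if difficulty < 0.33:
        return "easy"
    if difficulty < 0.66:
        return "medium"
    return "hard"


# (correct responses, total attempts) per question -- sample data
questions = {
    "Q1": (90, 100),
    "Q2": (50, 100),
    "Q3": (12, 100),
}

buckets = {qid: categorize(difficulty_from_correct_rate(c, n))
           for qid, (c, n) in questions.items()}
print(buckets)  # {'Q1': 'easy', 'Q2': 'medium', 'Q3': 'hard'}
```

An adaptive exam generator would then draw the next question from the bucket matching the student's current estimated ability, moving up or down a level as answers come in.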


Data availability

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

Abbreviations

AQBDMS: Adaptive Question Bank Development and Management System

AQG: Automatic Question Generator

AQP: Automatic Question Production

BN: Bayesian Network

CSP: Constraint Satisfaction Problem

DR: Derived Relationships

GMAT: Graduate Management Admission Test

GRE: Graduate Record Examination

IRT: Item Response Theory

KNN: K-Nearest Neighbour

LMKT: Language Models for Deep Knowledge Tracing

QPD: Question Paper Designer

QR: Quantitative Relationships

QUESTOURnament: A tool to find the real difficulty of a question

RCM: Regularised Competition Model

SR: Semantic Relationships


Acknowledgements

The authors thank the anonymous reviewers and the editors for the valuable time they spent on the manuscript.

The Introduction and Discussion sections were reworded using ChatGPT.

The authors sincerely thank Cerebranium and its founder, Omkar Pimple, for pioneering the initial version of the Progressive Model (https://cerebranium.com). We also thank Rishabh Singh, who played a pivotal role in structuring the initial version of the Progressive Model.

Funding

The authors confirm that no funding was received for this research work.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Ajay Devmane.

Ethics declarations

Conflict of interest

The authors of this research study declare that there is no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Shah, A., Devmane, A., Ranka, M. et al. Improvised progressive model based on automatic calibration of difficulty level: A practical solution of competitive-based examination. Educ Inf Technol 29, 6909–6946 (2024). https://doi.org/10.1007/s10639-023-12045-4

