FAHIM: Flexible, Insight-Driven, Gamified Assessment Hub for Higher Education

Authors

Ahmad Fahimi Amir

Universiti Pertahanan Nasional Malaysia (Malaysia)

Article Information

DOI: 10.47772/IJRISS.2025.925ILEIID000019

Subject Category: Education

Volume/Issue: 9/25 | Page No: 92-106

Publication Timeline

Submitted: 2025-09-23

Accepted: 2025-09-30

Published: 2025-11-04

Abstract

FAHIM is a web-based platform that helps higher education instructors design fair, outcomes-aligned assessments in minutes. Its core innovation is the tight coupling of a reusable Question Bank, in which every item is tagged by subject, question type, difficulty, Bloom’s level, vocabulary/CEFR tier, and marks, with an intuitive drag-and-drop Paper Builder and instant analytics. In-app guidance supports users in developing items and drafting question stems and options, while a scoreboard with badges gamifies contribution by tracking progress against peers and celebrating milestones. The app reduces preparation time, improves balance across difficulty and cognitive levels, sharpens alignment to outcomes, and increases transparency for moderation. The gamified scoreboard drives engagement and continuous improvement, while AI assistance lowers the learning curve. Together, these features turn exam design into a repeatable, data-informed practice rather than a last-minute craft, demonstrating how metadata, lightweight analytics, and an embedded AI assistant can advance accuracy, clarity, and consistency in higher education assessment.
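
To make the metadata coupling concrete, here is a minimal sketch of what a tagged Question Bank item and the paper-level "instant analytics" might look like. The schema, field names, and values below are illustrative assumptions for exposition, not FAHIM's actual data model.

```python
# Illustrative sketch only: the field names and tag values are assumptions,
# not FAHIM's real schema. It shows how per-item metadata makes paper-level
# balance checks a simple aggregation.
from collections import Counter
from dataclasses import dataclass

@dataclass
class QuestionItem:
    subject: str        # e.g. "Academic English"
    question_type: str  # e.g. "MCQ", "short-answer", "essay"
    difficulty: str     # e.g. "easy", "moderate", "hard"
    bloom_level: str    # e.g. "Remember" ... "Create"
    cefr_tier: str      # vocabulary tier, e.g. "B1"
    marks: int

def paper_profile(items: list[QuestionItem]) -> dict:
    """Summarize how a draft paper's marks are distributed across
    difficulty and Bloom's levels."""
    by_difficulty: Counter = Counter()
    by_bloom: Counter = Counter()
    for q in items:
        by_difficulty[q.difficulty] += q.marks
        by_bloom[q.bloom_level] += q.marks
    total = sum(q.marks for q in items)
    return {
        "total_marks": total,
        "difficulty_share": {k: v / total for k, v in by_difficulty.items()},
        "bloom_share": {k: v / total for k, v in by_bloom.items()},
    }

# Example: a two-item draft paper assembled in a Paper Builder
draft = [
    QuestionItem("Academic English", "MCQ", "easy", "Remember", "B1", 2),
    QuestionItem("Academic English", "essay", "hard", "Create", "C1", 8),
]
print(paper_profile(draft))
```

The design point is that once every item carries this metadata, checking balance across difficulty and cognitive levels becomes a cheap aggregation at assembly time rather than a manual post-hoc audit.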

Keywords

question bank, assessment design, educational analytics
