Usability Evaluation of Digital Learning Tools for Online Assessment: A Study of Malaysian Technical Education Institutions
Akhmal Khalis Mohd Isa¹, Mashanum Osman²*, Zuraini Othman³
¹Politeknik Sultan Idris Shah, Selangor, Malaysia
²,³Center for Advanced Computing Technology (C-ACT), Faculty of Information and Communication Technology (FTMK), Universiti Teknikal Malaysia Melaka, Malaysia
*Corresponding author
DOI: https://dx.doi.org/10.47772/IJRISS.2025.909000179
Received: 24 August 2025; Accepted: 30 August 2025; Published: 03 October 2025
ABSTRACT
The COVID-19 pandemic has accelerated the adoption of digital learning tools for online assessment in higher education institutions. This study evaluates the usability of digital learning tools used for online assessment submission in Malaysian technical education institutions. Using the USE (Usefulness, Satisfaction, and Ease of use) questionnaire, data were collected from 80 students and 21 lecturers from community colleges (kolej komuniti) and polytechnics in Johor, Malaysia. The findings reveal that Google Classroom is the most frequently used digital learning tool (79.2% usage rate), followed by Microsoft Teams (9.9%). While lecturers expressed high satisfaction with digital learning tools (mean scores of 4.43-4.71), students showed lower satisfaction levels (mean scores of 2.63-2.69), particularly regarding ease of use and immediate feedback mechanisms. The study identifies a significant gap between educator and student perceptions of digital learning tools, highlighting the need for improved user interface design and enhanced feedback systems for formative assessment. These findings contribute to the understanding of usability factors in educational technology and provide insights for improving online assessment tools in technical education contexts.
Keywords: digital learning tools, online assessment, usability evaluation, USE questionnaire, technical education, COVID-19, educational technology
INTRODUCTION
The landscape of higher education has undergone unprecedented transformation over the past decade, with online and blended learning becoming integral components of educational delivery. This shift has been dramatically accelerated by the COVID-19 pandemic, which necessitated an immediate transition to remote learning modalities across global educational institutions (Daniel, 2020). The pandemic has fundamentally altered how educational institutions approach teaching, learning, and particularly assessment practices.
Larreamendy-Joerns and Leinhardt (2006) identified two complementary movements in the educational landscape: the integration of online teaching and learning into mainstream university practices, and the increasingly prominent role of distance programs in higher education institutions. This observation has proven prophetic, as institutions worldwide have been compelled to rapidly digitize their educational processes. The U.S. Department of Education (2009) analysis of online learning suggests that online instruction can be more beneficial than traditional face-to-face instruction for various learner demographics, including both K-12 and adult learners.
Assessment represents a cornerstone of formal higher education, serving multiple critical functions in the learning process. Bransford, Brown, and Cocking (2000) emphasized that assessment is a fundamental component for effective learning, requiring teaching and learning processes to be assessment-centered to provide learners with opportunities to demonstrate their developing abilities and receive appropriate support. Assessment practices are deeply embedded in pedagogical frameworks and serve to inform students, educators, and institutions about learning outcomes and the achievement of syllabus objectives (Noradila et al., 2021).
In the Malaysian higher education context, assessment completion is mandatory for every subject undertaken by students. Narinasamy (2018) identified two primary assessment approaches: formative evaluations that occur during the learning process, allowing instructors to determine topic mastery, and summative assessments that take place at the completion of learning outcomes or semester end. The COVID-19 pandemic has necessitated a complete transition to online assessment modalities, presenting unique challenges for both educators and students.
The shift to remote assessment has introduced complexity in formative evaluation processes, as students are physically distant from instructors, reducing interaction possibilities and immediate feedback opportunities (Senel & Senel, 2021). The absence of physical classroom environments requires students to receive instant feedback from instructors to identify and correct their mistakes effectively. This creates a critical need for digital learning tools that can facilitate meaningful interaction and provide timely feedback mechanisms.
Remote assessments offer advantages in terms of location and timing flexibility (Khan & Khan, 2018), but student perceptions of online assessment tools significantly impact their learning experience and academic performance. Understanding these perceptions is crucial for educators to evaluate and improve their teaching methods and content delivery strategies (Amalia, 2018). Guangul et al. (2020) investigated remote assessment challenges in Middle East College, revealing institutional difficulties in addressing student willingness to submit assessments and time management issues that affect overall academic performance.
The technical and vocational education and training (TVET) sector, which includes community colleges and polytechnics in Malaysia, faces particular challenges in implementing effective online assessment systems. These institutions serve diverse student populations with varying levels of technological literacy and access to digital resources. Understanding the usability of digital learning tools in this context is essential for improving educational outcomes and student satisfaction.
This study aims to investigate the usability of digital learning tools used for online assessment submission in Malaysian technical education institutions, focusing on the perspectives of both students and lecturers. The research employs the USE (Usefulness, Satisfaction, and Ease of use) questionnaire to evaluate commonly used digital learning platforms and identify areas for improvement in online assessment delivery.
LITERATURE REVIEW
2.1 Digital Learning Tools in Higher Education
The proliferation of digital learning tools has transformed educational delivery methods, offering numerous advantages for both educators and students. Araka et al. (2020) identified several key benefits of using online tools for assessment, including ease of submission and response, robust access control mechanisms, and comprehensive online storage capabilities. These tools provide valuable statistical data regarding student participation rates, enabling educators to gain insights into engagement patterns and learning behaviors.
Modern learning management systems have evolved to provide integrated functionality encompassing communication, content management, and storage solutions. Kon and Kan (2020) highlighted Canvas, Moodle, Google Classroom, and Microsoft Teams as prominent platforms currently utilized in online learning environments. Each platform offers unique features and capabilities that cater to different educational needs and institutional requirements.
Moore et al. (2011) emphasized that most contemporary learning tools incorporate features enabling students to upload and submit assignments more efficiently and rapidly than traditional methods. These systems provide educators with comprehensive data including lists of missing and submitted assignments, submission timestamps, and student progress tracking capabilities. Such functionality enables more effective course management and allows for timely intervention when students encounter difficulties.
2.2 Mobile Learning and Student Motivation
The integration of mobile applications in educational contexts has demonstrated positive impacts on student motivation and engagement. Demir et al. (2018) found that students experienced joy, excitement, happiness, and a sense of value when mobile applications were incorporated into their learning environment, resulting in increased motivation levels. This emotional response to technology integration suggests that well-designed mobile learning solutions can significantly enhance the educational experience.
Evans (2008) identified multiple benefits of mobile learning, including providing students with rapid access to information, supporting various learning modalities, enabling contextual learning experiences, allowing student control over their learning pace, offering support and encouragement, increasing course participation, fostering desire to engage with materials, and creating positive meaningful differences in academic achievement. These benefits highlight the potential of mobile technologies to address diverse learning needs and preferences.
2.3 Usability Evaluation in Educational Technology
Usability evaluation has become increasingly important in educational technology development and implementation. Various instruments have been developed to assess mobile applications and digital learning tools based on user perceptions and experiences. The USE questionnaire, developed by Lund (2001), provides a comprehensive framework for evaluating system usability across multiple dimensions.
Hariyanto et al. (2020) successfully applied the USE questionnaire to evaluate personalized adaptive e-learning systems, demonstrating its effectiveness in educational contexts. The questionnaire’s structure, incorporating usefulness, ease of use, ease of learning, and satisfaction measures, provides a holistic view of user experience with digital learning tools.
Alternative usability evaluation instruments include the Software Usability Measurement Inventory (SUMI) by Kirakowski and Corbett (1993), the System Usability Scale (SUS) by Brooke (1995), and the Post-Study System Usability Questionnaire (PSSUQ) by Lewis (1995). Each instrument offers unique perspectives on usability assessment, with the USE questionnaire being particularly well-suited for educational technology evaluation due to its focus on learning-related usability factors.
2.4 Challenges in Online Assessment
The transition to online assessment has revealed several challenges that institutions must address to ensure effective learning outcomes. Singer-Freeman (2020) identified grand challenges for assessments in higher education, including maintaining academic integrity, ensuring equitable access to assessment tools, and providing meaningful feedback to students in digital environments.
Rawluysk (2018) emphasized the critical relationship between assessment practices and student learning outcomes in higher education, noting that ineffective assessment strategies can significantly impact educational quality. The challenge becomes more complex in online environments where traditional assessment methods may not translate effectively to digital formats.
Hashim et al. (2021) observed implementation challenges in classroom assessment within Technical and Vocational Education and Training (TVET) contexts, highlighting the need for specialized approaches that consider the practical and hands-on nature of technical education. These challenges are magnified in online environments where practical skill demonstration becomes more complex.
METHODOLOGY
3.1 Research Design
This study employed a quantitative research design using a cross-sectional survey approach to evaluate the usability of digital learning tools for online assessment. The research focused on understanding user perceptions and experiences with commonly used digital learning platforms in Malaysian technical education institutions.
3.2 Participants
The study involved 101 participants comprising 80 students (79.2%) and 21 lecturers (20.8%) from community colleges (kolej komuniti) and polytechnics located in Johor, Malaysia. Participants were selected through convenience sampling using broadcast messages distributed via social media platforms. All participants had direct access to and experience with digital learning applications used in their respective institutions.
Table 1. Demographic Results

| Characteristic | Category | Frequency | Percentage |
|----------------|----------|-----------|------------|
| Gender | Male | 37 | 36.6% |
| Gender | Female | 64 | 63.4% |
| Age | 18-20 years | 57 | 56.4% |
| Age | 21-30 years | 21 | 20.8% |
| Age | 31-40 years | 18 | 17.8% |
| Age | 41-50 years | 5 | 5.0% |

All participants owned smartphones and had regular access to digital learning applications.
3.3 Data Collection Instrument
The study utilized the USE (Usefulness, Satisfaction, and Ease of use) questionnaire developed by Lund (2001) to measure the usability of assessment submission modules in online learning applications. The USE questionnaire comprises four dimensions:
Table 2. Outline of the Questionnaire

| No | Variables | Item Numbers | Total Items |
|----|-----------|--------------|-------------|
| 1 | Usefulness | 1, 2, 3, 4, 5, 6 | 6 |
| 2 | Ease of Use | 7, 8, 9, 10, 11, 12 | 6 |
| 3 | Ease of Learning | 13, 14, 15, 16 | 4 |
| 4 | Satisfaction | 17, 18, 19, 20, 21 | 5 |
- Usefulness – measuring the degree to which users believe the system helps them accomplish their learning objectives
- Ease of Use – evaluating how easy users find the system to operate and navigate
- Ease of Learning – assessing how readily users can learn to use the system effectively
- Satisfaction – measuring overall user satisfaction with the system
The questionnaire contained 21 items distributed across the four dimensions, utilizing a 5-point Likert scale where 1 represents “strongly disagree,” 2 represents “disagree,” 3 represents “neutral,” 4 represents “agree,” and 5 represents “strongly agree.”
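To make the scoring scheme concrete, the sketch below shows one way the Table 2 item-to-dimension mapping and the 5-point Likert scale could be turned into per-dimension mean scores. This is an illustrative Python sketch only: the dictionary, function name, and sample responses are hypothetical and are not taken from the study instrument or data.

```python
# Illustrative scoring of one respondent's 21-item USE questionnaire
# (5-point Likert scale: 1 = strongly disagree ... 5 = strongly agree).
# The item-to-dimension mapping follows Table 2; all names and example
# values below are hypothetical.

USE_DIMENSIONS = {
    "Usefulness":       [1, 2, 3, 4, 5, 6],
    "Ease of Use":      [7, 8, 9, 10, 11, 12],
    "Ease of Learning": [13, 14, 15, 16],
    "Satisfaction":     [17, 18, 19, 20, 21],
}

def dimension_means(responses):
    """Average the Likert ratings (1-5) of the items in each USE dimension."""
    means = {}
    for dimension, items in USE_DIMENSIONS.items():
        scores = [responses[item] for item in items]
        means[dimension] = sum(scores) / len(scores)
    return means

if __name__ == "__main__":
    # Hypothetical respondent: 21 ratings keyed by item number (1-21).
    example_responses = {item: 3 for item in range(1, 22)}
    example_responses.update({1: 2, 7: 2, 17: 2})  # a few lower ratings
    for dimension, mean in dimension_means(example_responses).items():
        print(f"{dimension}: {mean:.2f}")
```

Aggregating such per-dimension means across the 80 student and 21 lecturer respondents yields the group-level scores reported in the Results and Discussion section.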
3.4 Data Collection Procedure
Data collection was conducted online using Google Forms to distribute the questionnaire to participants. The survey link was shared through social media platforms and institutional communication channels. Participants were provided with clear instructions regarding the study’s purpose and were assured of confidentiality and anonymity. Data collection occurred over a four-week period to ensure adequate response rates.
3.5 Data Analysis
Collected data were analyzed using descriptive statistical methods including frequencies, percentages, means, and standard deviations. Comparative analysis was conducted between student and lecturer responses to identify perception differences. Data analysis was performed using SPSS version 28.0, with results presented through tables, charts, and graphical representations.
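As a rough illustration of this workflow (the study itself used SPSS 28.0), the pandas sketch below computes group frequencies and per-dimension means and standard deviations for students and lecturers. The column names and toy records are hypothetical placeholders, not the study dataset.

```python
# Illustrative descriptive and comparative analysis (the study used SPSS 28.0).
# Column names and the toy rows below are hypothetical, not the study data.
import pandas as pd

df = pd.DataFrame([
    {"role": "student",  "usefulness": 2.7, "ease_of_use": 2.6,
     "ease_of_learning": 3.0, "satisfaction": 2.5},
    {"role": "student",  "usefulness": 2.5, "ease_of_use": 2.8,
     "ease_of_learning": 3.1, "satisfaction": 2.7},
    {"role": "lecturer", "usefulness": 4.5, "ease_of_use": 4.4,
     "ease_of_learning": 4.6, "satisfaction": 4.7},
    {"role": "lecturer", "usefulness": 4.7, "ease_of_use": 4.5,
     "ease_of_learning": 4.4, "satisfaction": 4.6},
])

# Frequencies and percentages by role.
print(df["role"].value_counts())
print(df["role"].value_counts(normalize=True).mul(100).round(1))

# Per-dimension means and standard deviations for each group, mirroring the
# student-versus-lecturer comparison reported in the Results and Discussion.
dimensions = ["usefulness", "ease_of_use", "ease_of_learning", "satisfaction"]
print(df.groupby("role")[dimensions].agg(["mean", "std"]).round(2))
```

On the real dataset, the same grouping applied to all 101 responses would produce the kind of descriptive statistics presented in the next section.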
RESULTS AND DISCUSSION
4.1 Digital Learning Tools Usage Patterns
The analysis of digital learning tool preferences revealed that Google Classroom dominated usage patterns among participants, with 80 respondents (79.2%) indicating it as their primary platform. Microsoft Teams ranked second with 10 users (9.9%), followed by CIDOS with 6 users (5.9%), institutional Learning Management Systems (LMS) with 4 users (4.0%), and Moodle LMS with 1 user (1.0%).
Fig. 1 Frequently Used Digital Learning Tools.
This finding aligns with previous research by Sahulata et al. (2022) and Davidson (2018), which identified Google Classroom as a preferred online learning tool due to its user-friendly interface, integration with other Google services, and comprehensive feature set for educational purposes. The dominance of Google Classroom in Malaysian technical education institutions suggests that educators and students value its accessibility, ease of use, and cost-effectiveness.
The limited adoption of more sophisticated LMS platforms like Moodle may indicate resource constraints or training limitations within the participating institutions. This finding has implications for institutional technology adoption strategies and professional development programs for educators.
4.2 Time Engagement Patterns
Analysis of time allocation for digital learning applications revealed diverse engagement patterns among participants. Seventeen respondents (16.8%) spent less than one hour daily using digital learning apps, while 59 respondents (58.4%) engaged for 1-3 hours daily. Eighteen respondents (17.8%) used applications for 4-5 hours daily, and 7 respondents (6.9%) spent more than 5 hours daily on digital learning activities.
The predominance of 1-3 hour daily usage suggests that most learners have moderate engagement levels with digital learning tools. This finding indicates that digital learning applications serve as supplementary rather than primary learning resources for many students. The relatively small percentage of high-usage participants (>5 hours daily) may reflect either highly engaged learners or students facing difficulties requiring extended time to complete tasks.
4.3 Usefulness Evaluation
The usefulness dimension revealed significant differences between student and lecturer perceptions. Students reported lower perceived usefulness with mean scores ranging from 2.63 to 2.69, indicating general disagreement with usefulness statements. In contrast, lecturers demonstrated high perceived usefulness with mean scores ranging from 4.43 to 4.71, showing strong agreement with usefulness measures.
This disparity suggests that current digital learning tools are better aligned with educator needs than student requirements. Students may require more intuitive interfaces, clearer navigation paths, and enhanced functionality to perceive these tools as truly useful for their learning objectives. The gap indicates opportunities for tool developers to focus on student-centric design improvements.
4.4 Ease of Use Assessment
Ease of use evaluation showed similar patterns to usefulness, with students reporting mean scores of 2.63-2.69, indicating difficulty in system operation and navigation. Students particularly struggled with user interaction elements, problem-solving steps, system flexibility, and help guidance features. Conversely, lecturers reported high ease of use with mean scores of 4.43-4.71, indicating satisfaction with system operability.
The contrasting perceptions suggest that lecturers, who may have received formal training on these platforms or have greater technical expertise, find the systems easier to navigate than students. This finding highlights the need for improved user onboarding, enhanced help systems, and more intuitive interface design to better serve student users.
4.5 Ease of Learning Evaluation
The ease of learning dimension showed the smallest gap between student and lecturer perceptions, with an overall mean score of 3.02, indicating neutral to slightly positive agreement. This finding suggests that while initial learning of these systems is manageable for most users, ongoing usability challenges persist once users attempt to accomplish more complex tasks.
Both students and lecturers agreed that the applications are relatively easy to learn initially, but the previously identified usability issues may emerge as users attempt to utilize more advanced features or accomplish specific learning objectives.
4.6 Satisfaction Levels
Satisfaction measurements revealed the most significant disparities between user groups. Students expressed dissatisfaction with current digital learning tools, indicating they would not recommend the applications to friends and disagreeing that the tools are enjoyable to use, work as expected, or encourage timely assessment submission.
Lecturers, conversely, expressed high satisfaction levels across all measured dimensions. This stark difference suggests that current digital learning tools are optimized for educator workflows rather than student learning experiences. The dissatisfaction among students has serious implications for learning engagement, academic performance, and long-term educational outcomes.
4.7 Implications for Educational Technology Design
The findings reveal critical gaps in current digital learning tool design that prioritize administrative and educator convenience over student user experience. Several key implications emerge from this analysis:
User-Centered Design Needs: Current tools require significant redesign to prioritize student user experience, incorporating more intuitive navigation, clearer feedback mechanisms, and enhanced accessibility features.
Training and Support Requirements: Institutions must invest in comprehensive training programs for students to improve their competency and confidence with digital learning tools.
Feedback Mechanism Enhancement: The identified lack of immediate feedback capabilities in formative assessment contexts requires technological solutions that can provide timely, meaningful feedback to students.
Platform Standardization Considerations: The dominance of Google Classroom suggests potential benefits of standardizing on widely accepted platforms while ensuring they meet diverse institutional needs.
The usability challenges identified in this study may exacerbate existing digital divides within TVET student populations. Students with limited technological backgrounds face additional barriers when learning tools are not intuitive or well-supported. This has implications for educational equity and student success rates in technical programs.
LIMITATIONS AND FUTURE RESEARCH
5.1 Study Limitations
This study has several limitations that should be considered when interpreting results. The research was conducted exclusively in Johor, Malaysia, limiting generalizability to other regions or educational contexts. The convenience sampling method may have introduced selection bias, as participants self-selected to participate in the study.
The cross-sectional design provides a snapshot of current perceptions but does not capture changes over time or the impact of ongoing training and system improvements. Additionally, the study focused on user perceptions rather than objective usability measures, which may be influenced by individual preferences and experiences.
5.2 Future Research Directions
Future research should consider longitudinal studies to track changes in user perceptions as digital learning tools evolve and as users gain greater experience with these systems. Comparative studies across different regions and educational contexts would enhance understanding of cultural and institutional factors influencing usability perceptions.
Mixed-methods research incorporating qualitative interviews or focus groups could provide deeper insights into specific usability challenges and potential solutions. Additionally, objective usability testing using metrics such as task completion time, error rates, and navigation efficiency would complement subjective perception data.
Investigation of specific design features that contribute to usability differences between student and lecturer experiences could inform targeted improvement efforts. Research on the effectiveness of different training approaches for improving student competency with digital learning tools would provide practical guidance for institutions.
CONCLUSION
This study provides valuable insights into the usability of digital learning tools for online assessment in Malaysian technical education institutions. The findings reveal significant disparities between student and lecturer perceptions, with students experiencing lower levels of usefulness, ease of use, and satisfaction compared to their educators. This suggests that current digital learning tools are optimized for administrative convenience and educator workflows rather than student learning experiences.
Google Classroom emerged as the dominant platform, used by nearly 80% of participants, suggesting its widespread acceptance in the technical education sector. However, the low satisfaction levels among students indicate that popularity does not necessarily translate to optimal user experience for all stakeholder groups.
The research highlights critical areas requiring attention from educational technology developers and institutional administrators. Priority should be given to enhancing user interface design, improving feedback mechanisms for formative assessment, and developing comprehensive training programs to support student success with digital learning tools.
The study contributes to the growing body of knowledge regarding educational technology usability and provides practical implications for improving online assessment delivery in technical education contexts. As educational institutions continue to integrate digital technologies into their teaching and learning processes, understanding user experiences and addressing identified gaps becomes essential for achieving desired educational outcomes.
The COVID-19 pandemic has permanently altered the educational landscape, making effective digital learning tools not just advantageous but essential for educational delivery. This research provides a foundation for ongoing efforts to improve these critical educational technologies and ensure they serve all users effectively.
Educational institutions must recognize that successful technology integration requires more than tool adoption; it demands careful attention to user experience, comprehensive training programs, and ongoing evaluation and improvement efforts. By addressing the identified gaps between educator and student experiences, institutions can work toward more equitable and effective digital learning environments that support the success of all learners.
ACKNOWLEDGMENTS
The authors would like to thank the Centre of Research and Innovation Management of Universiti Teknikal Malaysia Melaka (UTeM) for sponsoring the publication fees under the Tabung Penerbitan CRIM UTeM. We also extend our gratitude to all participants from community colleges and polytechnics in Johor who contributed their time and insights to this research.
REFERENCES
- Amalia, R. (2018). Students’ perception of online assessment use in Schoology in EFL classrooms. Undergraduate thesis, Universitas Islam Negeri Sunan Ampel Surabaya.
- Araka, E., Maina, E., Gitonga, R., & Oboko, R. (2020). Research trends in measurements and intervention tools for self-regulated learning for e-learning environments-systematic review (2008-2018). Research and Practice in Technology Enhanced Learning, 16(6), 1-21. https://doi.org/10.1186/s41039-020-00129-5
- Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How people learn: Brain, mind, experience, and school. National Academy Press.
- Brooke, J. (1995). SUS: A quick and dirty usability scale. In P. W. Jordan, B. Thomas, B. A. Weerdmeester, & A. L. McClelland (Eds.), Usability evaluation in industry (pp. 189-194). Taylor & Francis.
- Daniel, S. J. (2020). Education and the COVID-19 pandemic. Prospects, 49(1), 91-96. https://doi.org/10.1007/s11125-020-09464-3
- Davidson, L. K. (2018). Google Classroom: An examination of efficacy for teaching and learning. Doctoral dissertation, Northcentral University.
- Demir, K., Doğan, A., & Büyüköztürk, Ş. (2018). Adaptation of the mobile learning management system evaluation scale into Turkish. Bartin University Journal of Faculty of Education, 7(1), 156-177.
- Evans, C. (2008). The effectiveness of m-learning in the form of podcast revision lectures in higher education. Computers & Education, 50(2), 491-498.
- Guangul, F. M., Suhail, A. H., Khalit, M. I., & Khidhir, B. A. (2020). Challenges of remote assessment in higher education in the context of COVID-19: A case study of Middle East College. Educational Assessment, Evaluation and Accountability, 32, 519-535.
- Hariyanto, D., Triyono, M. B., & Köhler, T. (2020). Usability evaluation of personalized adaptive e-learning system using USE questionnaire. Knowledge Management & E-Learning, 12(1), 85-105. https://doi.org/10.34105/j.kmel.2020.12.005
- Hashim, S., Zakariah, S. H., Taufek, F. A., Zulkifli, N. N., Che Lah, N. H., & Murniati, D. E. (2021). An observation on implementation of classroom assessment in Technical and Vocational Education and Training (TVET) subject area. Journal of Technical Education and Training, 13(3), 190-200.
- Khan, S., & Khan, R. A. (2018). Online assessments: Exploring perspectives of university students. Education & Information Technologies, 24, 661-677.
- Kirakowski, J., & Corbett, M. (1993). SUMI: The Software Usability Measurement Inventory. British Journal of Educational Technology, 24(3), 210-212.
- Kon, S., & Kan, K. (2020). A comparative study of learning management systems for online learning. In Proceedings of the 2020 International Conference on Education Technology and Computer (pp. 89-94).
- Larreamendy-Joerns, J., & Leinhardt, G. (2006). Going the distance with online education. Review of Educational Research, 76(4), 567-605.
- Lewis, J. R. (1995). IBM computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use. International Journal of Human-Computer Interaction, 7(1), 57-78.
- Lund, A. M. (2001). Measuring usability with the USE questionnaire. Usability Interface, 8(2), 3-6.
- Moore, J. L., Dickson-Deane, C., & Galyen, K. (2011). E-learning, online learning and distance learning environments: Are they the same? The Internet and Higher Education, 14(2), 129-135. https://doi.org/10.1016/j.iheduc.2010.10.001
- Narinasamy, I. (2018). Implementing classroom assessment in Malaysia: An investigation. Retrieved from https://www.researchgate.net/publication/348355621
- Noradila, I., Ganesan, N., & Ahmad Maulana, N. S. E. (2021). Student’s perception towards the usage of online assessment in University Putra Malaysia amidst COVID-19 pandemic. Journal of Research in Humanities and Social Science, 9, 9-16.
- Rawluysk, P. E. (2018). Assessment in higher education and student learning. Journal of Instructional Pedagogies, 21, 1-13.
- Sahulata, D., Rahayu, S., & Osman, K. (2022). Google Classroom as a tool for active learning. In AIP Conference Proceedings (Vol. 2330, No. 1, p. 020015). AIP Publishing LLC.
- Senel, S., & Senel, H. C. (2021). Remote assessment in higher education during COVID-19 pandemic. International Journal of Assessment Tools in Education, 8(2), 181-199.
- Singer-Freeman, K. E. (2020). Grand challenges for assessments in higher education. Research & Practice in Assessment, 15, 1-10.
- U.S. Department of Education. (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Office of Planning, Evaluation, and Policy Development.