Comparatively, the lowest precision was observed for the phone-operation class at 0.91, which implies a slightly higher rate of false positives than for the other classes; this may stem from the visual similarity between operating a phone and ordinary hand movements. Nevertheless, the model achieved balanced classification across all conditions, including calling someone (precision: 1.00, recall: 0.95), confirming its strong performance. The training dynamics over the 30 epochs are shown in Figures 4 and 5. Training and validation accuracy rose rapidly during the first 10 epochs and converged at 0.94 with only a slight gap between the two curves, indicating minimal overfitting. The training loss decreased substantially, and the small difference between training and validation loss reflects strong generalization ability. These findings indicate that the model is efficient and well regularized, and can be expected to maintain its performance on new, unseen data. Although these performance metrics are encouraging, the broader value of this work lies in its low-resource application of video-based behavioral classification through a computationally efficient transfer-learning paradigm. In contrast to prior approaches, which require multi-camera arrangements or high-cost sensor integration, the proposed system takes a single video stream as input and, with frame-wise processing and an optimized CNN architecture, produces comparable or better results. The novelty of the work is nonetheless incremental, and future work should consider adding temporal models (e.g., LSTMs or transformers) to represent motion sequences better. Furthermore, enriching the set of captured behaviors beyond face-emotion cues or eye-gaze tracking could extend the system from binary classification toward finer-grained academic-integrity analytics.
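The per-class precision and recall figures discussed above follow directly from a confusion matrix. The sketch below shows the computation in plain Python; the class labels and counts are illustrative assumptions chosen only to roughly mirror the reported figures, not the study's actual data.

```python
# Per-class precision and recall from a confusion matrix.
# Rows = true class, columns = predicted class.
# Labels and counts are illustrative, not the study's actual data.

classes = ["normal", "phone_use", "calling", "looking_away", "second_person"]

confusion = [
    [95,  3,  0,  1,  1],   # true: normal
    [ 4, 91,  0,  2,  3],   # true: phone_use
    [ 0,  5, 95,  0,  0],   # true: calling
    [ 2,  1,  0, 96,  1],   # true: looking_away
    [ 1,  1,  0,  1, 97],   # true: second_person
]

def per_class_metrics(matrix):
    """Return {class_index: (precision, recall)} for a square confusion matrix."""
    n = len(matrix)
    metrics = {}
    for c in range(n):
        tp = matrix[c][c]
        fp = sum(matrix[r][c] for r in range(n)) - tp  # predicted c, but wrong
        fn = sum(matrix[c]) - tp                       # true c, but missed
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        metrics[c] = (precision, recall)
    return metrics

for i, (p, r) in per_class_metrics(confusion).items():
    print(f"{classes[i]:>14}: precision={p:.2f} recall={r:.2f}")
```

With these illustrative counts, the "calling" column contains no off-diagonal entries (precision 1.00) while 5 of its 100 true frames are missed (recall 0.95), matching the pattern discussed above.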
CONCLUSION
This research addressed the limitations of current e-assessment malpractice detection methods, which depend mainly on face detection and therefore cannot capture more complex acts of malpractice. To close this gap, a multimedia analytics framework was proposed to safeguard academic integrity in online assessments. The framework takes long proctored videos, subdivides them into manageable frames, and classifies the frames with fine-tuned convolutional neural networks. Malpractice behaviors are divided into four cheating classes and one normal class, and the results are summarized for instructors through a simple interface. The implementation used a state-of-the-art CNN, InceptionV3, to extract both low-level and significant high-level features from the captured video frames, such as gaze estimation, phone detection, and window activity. These features allow suspicious activity in temporal sequences to be detected correctly, making detection reliable. An experimental evaluation on a dataset of 24 test takers demonstrated the framework's ability to recognize 96 percent of cheating cases across various scenarios. The outcomes affirm that deep learning and transfer learning can detect malpractice in e-assessment in a scalable and automated manner. Future research directions include the integration of multimodal signals, real-time processing, and more responsive architectures to further improve the system's accuracy and dependability. These results mark a major milestone in maintaining credibility in online learning and building confidence in digital learning services.
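The pipeline summarized above — splitting a long proctored video into frames, classifying each frame, and presenting a per-session summary to instructors — can be sketched as follows. The frame classifier is stubbed out with a placeholder (the actual system uses a fine-tuned InceptionV3), and the label names and flagging threshold are illustrative assumptions rather than values from the paper.

```python
from collections import Counter

# Hypothetical label set: four malpractice classes plus one normal class,
# mirroring the paper's scheme (the actual label names are assumptions).
LABELS = ["normal", "phone_use", "calling", "looking_away", "second_person"]

def classify_frame(frame):
    """Placeholder for the fine-tuned InceptionV3 frame classifier.
    In the real system this would run CNN inference on one video frame;
    here the toy frames simply carry a precomputed label."""
    return frame["label"]

def summarize_session(frames, flag_threshold=0.2):
    """Aggregate frame-level predictions into an instructor-facing summary.

    A session is flagged when the fraction of non-normal frames exceeds
    flag_threshold (an illustrative value, not taken from the paper)."""
    counts = Counter(classify_frame(f) for f in frames)
    total = sum(counts.values())
    suspicious = total - counts.get("normal", 0)
    fraction = suspicious / total if total else 0.0
    return {
        "per_class_counts": dict(counts),
        "suspicious_fraction": fraction,
        "flagged": total > 0 and fraction > flag_threshold,
    }

# Usage: a toy session of 70 normal frames and 30 phone-use frames.
session = [{"label": "normal"}] * 70 + [{"label": "phone_use"}] * 30
report = summarize_session(session)
print(report["flagged"], report["suspicious_fraction"])
```

Aggregating frame-level decisions this way is what makes the per-frame classifier usable for whole-exam review; temporal models such as the LSTMs or transformers mentioned in the discussion would replace this simple counting step with sequence-aware reasoning.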
REFERENCES
1. Adeyemi, J., Ogunlere, S., & Akwaronwu, B. (2025). Real-Time Detection of Examination
Malpractices Using Convolutional Neural Networks and Video Surveillance: A Systematic Review
with Meta-Analysis. British Journal of Computer, Networking and Information Technology, 8, 15–50.
Retrieved from https://doi.org/10.52589/BJCNIT-QC5EELJE.
2. Al-Mutairi, A., & Al-Sahli, R. (2024). Secure Authentication System Based on Multi-Factor
Authentication. Social Science Research Network (SSRN). Retrieved from
https://doi.org/10.13140/RG.2.2.24880.74247.
3. Ghosh, S., Das, N., & Nasipuri, M. (2019). Reshaping Inputs for Convolutional Neural Networks—
Some Common and Uncommon Methods. Pattern Recognition, 93, 332–348. Retrieved from
https://doi.org/10.1016/j.patcog.2019.04.009.
4. Hussain, M., Qureshi, Z., & Malik, S. (2024). The Impact of Educational Technologies on Modern
Education: Navigating Opportunities and Challenges. Global Educational Studies Review, IX(3), 21–
30. Retrieved from https://doi.org/10.31703/gesr.2024(IX-III).03.
5. Jantos, A. (2021). Motives for Cheating in Summative E-Assessment in Higher Education - A
Quantitative Analysis. In J. Zaharia & M. Deac (Eds.), Proceedings of the 13th International