Framework for Designing Proctoring Systems for Online Language Assessments
*Syahirah Binti Ramli¹, Assoc. Prof. Dr. Azidah Binti Abu Ziden²
¹,²Universiti Sains Malaysia
¹Universiti Teknologi MARA Cawangan Pulau Pinang
DOI: https://dx.doi.org/10.47772/IJRISS.2025.903SEDU0296
Received: 20 May 2025; Accepted: 23 May 2025; Published: 26 June 2025
ABSTRACT
Language assessments are unique because they evaluate a wide range of skills that require careful and fair monitoring. However, many online proctoring systems are built for general tests and do not fully address the special needs of language testing. This paper presents a practical framework to guide the design of proctoring systems tailored specifically for language assessments. The framework was developed through a systematic literature review, which helped identify key challenges and best practices from existing research. It highlights the importance of reliability, practicality, ease of use, privacy, and fairness in proctoring online language assessments. By using this framework, developers and educators can build proctoring solutions that respect the complexities of language skills while maintaining academic integrity.
Keywords: Online Language Assessments; Online Proctoring System; Framework Development
INTRODUCTION
The education field has experienced a significant transformation with the rapid adoption of online learning, especially after the emergence of the COVID-19 pandemic (Hussein et al., 2020). This shift has created the need to explore innovative and modern approaches to assessment, including the use of proctoring tools (Bryson & Andres, 2020). Compared to traditional paper-and-pen language assessments, online assessments offer numerous advantages such as flexibility and accessibility (Cheng & Chau, 2016; Stojan et al., 2021). However, as virtual spaces replace the traditional face-to-face assessment setting, the demand for secure and reliable approaches to evaluating students’ abilities in online settings has become increasingly apparent (Luburić et al., 2021).
Language assessments frequently include interactive tasks such as speaking and listening exercises. These language skills are dynamic and require evaluators to consider various nuances, including pronunciation, fluency, and comprehension. Conventional online proctoring systems, primarily designed for objective assessments like multiple-choice questions, lack the ability to effectively monitor and assess complex language tasks. Isbell et al. (2023) emphasise that remote proctoring in language testing raises concerns about fairness and justice, especially when the proctoring methods are not tailored to the specific demands of language assessments. In addition, not all students have equal access to reliable internet, up-to-date devices, or quiet, private spaces for taking language tests. These challenges can affect their performance, especially in assessments that require speaking and listening in real time. Cele and Maphalala (2025) emphasise that automated proctoring systems can unintentionally put some students at a disadvantage. As a result, such tools may compromise the fairness of the assessment process and raise important concerns about social justice in education.
The study by Hodges et al. (2020) highlights how rapid technological adoption during the pandemic often overlooked educational best practices and teaching principles. As a result, significant concerns like fair assessment and academic honesty were treated more as technical issues than as educational priorities.
Most research articles on proctoring systems note that these tools are typically developed by commercial companies or third-party developers rather than educational institutions (Isbell et al., 2023). This leads to a tendency to prioritise the prevention of academic dishonesty over the preservation of pedagogical integrity, thus failing to accommodate the cognitive and contextual complexities of assessments, especially language testing.
Privacy is another major concern in online proctoring, as many students find the constant video and audio monitoring intrusive and stressful, which can negatively affect their performance (Dawson, 2020). As Balash et al. (2021) highlight, students often feel uneasy about the extent and type of personal data being collected, raising important ethical questions about these practices.
This paper addresses the urgent need for a framework to guide the design of online proctoring systems tailored for language assessments. Drawing on a systematic review and insights from multiple disciplines, the framework aims to help developers create proctoring systems that are educationally meaningful, ethically responsible, and technically reliable. It can benefit several groups by (a) providing clear guidelines for technology developers to create proctoring tools that meet the specific needs of language assessments, (b) ensuring test-takers experience a more transparent and unbiased testing environment, and (c) enabling educational institutions to implement reliable proctoring systems that improve both the security and fairness of online exams. Therefore, this study intends to address the following research questions:
- What essential elements of proctoring systems for language assessment are commonly identified in the literature?
- What are the current gaps and limitations in proctoring systems for language assessments that need further investigation and framework development?
- What recommendations have been made for future research on designing proctoring systems for language tests?
LITERATURE REVIEW
A General Overview of Online Proctoring
Online proctoring, also called digital proctoring, e-proctoring, remote proctoring, or virtual proctoring, is an automated process or method that helps prevent cheating and secure test administration on online platforms (Nguyen, 2023). The use of online proctoring has grown significantly in response to the increasing adoption of online assessments, particularly in the context of ensuring fairness and security in remote testing environments (King, Guyette, & Piotrowski, 2009). Over the years, this technology has evolved into various forms, each designed to address specific challenges in remote assessment (Al Najar & Ahmad, 2024; Jia & He, 2022). Online proctoring systems are typically categorised into three main types: live proctoring, automated proctoring, and hybrid systems (Jia & He, 2022).
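For developers, this three-way taxonomy can be represented directly in code. The following is a minimal sketch in Python; the class and field names are illustrative assumptions of ours, not drawn from any of the reviewed systems.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ProctoringMode(Enum):
    """The three commonly cited categories of online proctoring."""
    LIVE = auto()       # a human proctor monitors candidates in real time
    AUTOMATED = auto()  # software alone flags suspicious events
    HYBRID = auto()     # automated flagging combined with human review


@dataclass
class ProctoringSession:
    """Hypothetical session record; field names are our own."""
    candidate_id: str
    mode: ProctoringMode
    records_audio: bool  # speaking/listening tests may require audio capture
    records_video: bool


# Example: a hybrid session for a speaking test.
session = ProctoringSession("C-001", ProctoringMode.HYBRID,
                            records_audio=True, records_video=True)
print(session.mode.name)  # HYBRID
```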
Online Language Assessments
The rise of digital technology has significantly impacted language assessments. Online assessments offer greater accessibility and flexibility, allowing for self-paced learning and evaluation (Means et al., 2014). Despite their benefits, online assessments face issues such as ensuring academic integrity and accommodating various levels of technological proficiency (Alqurashi, 2019). In language assessments, different language skills require distinct evaluation methods (Alderson, 2005). For instance, reading skills are often assessed through comprehension questions, summarisation tasks, and multiple-choice questions that test understanding of texts (Weir, 2005). Writing skills are evaluated by analysing written compositions for coherence, grammar, vocabulary usage, and adherence to given prompts (Hyland, 2003). Listening skills are typically assessed using audio recordings followed by questions that measure the ability to comprehend and interpret spoken language (Buck, 2001). Speaking skills are usually evaluated through oral exams, interviews, and interactive tasks that gauge fluency, pronunciation, and communicative competence (Luoma, 2004).
Gaps in Current Proctoring Systems for Language Assessments
Even though online proctoring technologies have improved considerably, they still face major challenges when it comes to language assessments. Most traditional proctoring tools are designed for written tests or multiple-choice questions, and they do not handle the more interactive and spontaneous parts of language exams, such as conversational speaking or simulated dialogue, very well (Isbell et al., 2023).
In addition, factors such as test anxiety, self-consciousness, and the awareness of being watched while performing can significantly affect how well someone speaks or listens during a test (Young, 1991). The pressure of constant monitoring can heighten nervousness, which in turn impacts fluency and how smoothly test-takers express themselves.
Another problem is that many current proctoring systems are not flexible enough to handle different cultures and languages. AI tools often struggle with recognising different accents, switching between languages, or understanding cultural gestures, which can lead to misclassifications or false alarms (Koenecke et al., 2020). This is especially concerning in high-stakes international language tests, where fairness should be a top priority.
Overall, online proctoring has grown considerably and plays an important role in keeping tests fair and secure, especially in remote learning. Language tests, however, differ from regular exams because they involve speaking, listening, and other interactive skills that require specialised forms of assessment. At present, many proctoring systems struggle with these language tasks and do not always account for factors such as test anxiety or cultural differences. Therefore, this paper aims to address this problem by creating a clear framework to help design online proctoring systems specifically for language assessments.
METHODOLOGY
Systematic Literature Review
A systematic literature review was carried out to explore previous studies related to the research questions. The approach followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), which outlines the identification, screening, eligibility, and inclusion phases of a systematic review to ensure a comprehensive and transparent process (Page et al., 2021). In the identification phase, the literature on online language proctoring was searched in April 2025. Databases searched included Scopus, Web of Science, and Google Scholar. Keywords used in various combinations included: “online proctoring”, “digital proctoring”, “e-proctoring”, “automated proctoring”, “remote proctoring” and “language assessment”. Boolean operators ‘OR’ and ‘AND’ were employed to refine the search results. In screening for the most relevant studies, systematic inclusion and exclusion criteria were followed. Studies were included if they were: (a) published in journals or conference proceedings; (b) focused on online proctoring in the context of language assessments; (c) published in English; and (d) published between 2015 and 2025. Exclusion criteria encompassed: (a) studies not related to online proctoring; (b) articles on proctoring in other subjects; and (c) articles published before 2015. The PRISMA diagram of this study’s literature search and review process is shown in Figure 1.
Figure 1 – PRISMA Flowchart for the Sample Identification Process
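To illustrate how the keyword combinations and Boolean operators described above translate into an actual database query, the short sketch below assembles a Scopus-style search string. The TITLE-ABS-KEY field code and the exact syntax are assumptions made for illustration only; Web of Science and Google Scholar use different field labels.

```python
# Assemble a Boolean search string from the review's keyword sets.
# The TITLE-ABS-KEY field code is Scopus syntax, assumed here purely
# to illustrate the OR/AND combinations described in the methodology.
proctoring_terms = [
    "online proctoring", "digital proctoring", "e-proctoring",
    "automated proctoring", "remote proctoring",
]
context_terms = ["language assessment"]


def or_group(terms):
    """Join quoted terms with OR inside parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"


query = f"TITLE-ABS-KEY({or_group(proctoring_terms)} AND {or_group(context_terms)})"
print(query)
```

Grouping the synonyms for proctoring with OR before intersecting them with the language-assessment terms via AND keeps recall high while restricting results to the review's context.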
Critical Analysis and Synthesis
Thematic analysis was employed to identify recurring patterns and themes within the selected studies. Following the approach outlined by Thomas and Harden (2008), the process involved: (a) coding text line-by-line; (b) developing descriptive themes; and (c) generating analytical themes that provide deeper insights into the data. This method facilitates the synthesis of qualitative findings across multiple selected studies.
Development of the Framework
The framework was developed by carefully analysing and synthesising the key issues and findings from previous studies gathered through the systematic literature review. From this review, important components related to online proctoring for language assessments were identified. These components reflect technical aspects, such as audio-visual tools and AI features; educational concerns, such as communication skills and cultural sensitivity; and the challenges and issues reported in language assessment proctoring. Using these insights, the framework was designed to guide technology developers by highlighting the essential elements they need to consider when creating proctoring systems tailored for language testing. This process ensures the framework is grounded in existing research and addresses real challenges reported in the literature.
RESULTS
This section presents the main findings from the systematic literature review that contributed to the development of the framework for online proctoring in language assessments. Several important themes were identified across the selected studies. These findings have been organised to highlight the essential elements needed for the development of effective and fair proctoring systems tailored to language assessment.
Methodological Overview
The reviewed studies employed diverse methodologies. Quantitative approaches, including surveys, experimental studies, and comparative analyses, dominate the research field, with studies such as Bhuiyan and Islam (2023) and Milanovic et al. (2023) focusing on statistical analysis of learner outcomes and test score comparability. Qualitative methods like interviews, case studies, and essays offer deeper insights into perceptions and experiences, as demonstrated by Jalilzadeh et al. (2024) and Ozer (2024). Mixed-methods approaches, which combine surveys with qualitative data, have been used to explore complex issues related to fairness and technology use, as seen in the works of Coniam (2022) and Valkova (2024). Additionally, literature reviews and conceptual discussions provide broader perspectives on technological and ethical implications, illustrated by Rahmati and Sadeghi (2024) and Kassim (2021). Recent methodological trends show an increased use of mixed and qualitative designs to capture detailed user experiences alongside quantitative findings.
Thematic Analysis of Research Scope and Focus Areas
Table 1 presents the thematic mapping of the 31 studies reviewed in this systematic literature review. Each study was categorized based on its alignment with five key themes identified from the analysis: Online Assessments During COVID-19, Remote Proctoring and Cheating Prevention, AI Integration in Language Assessment, Technology Tools and Platforms, and Fairness, Integrity, and User Perceptions. This thematic classification highlights the distribution of research focus areas and illustrates how individual studies contribute to these critical domains. The table provides a comprehensive overview, enabling a clear understanding of thematic trends and gaps within the existing literature.
Table 1 – Themes Emerged in The Selected Studies
Online Assessments During COVID-19
Several studies (Bhuiyan & Islam, 2023; Kassim, 2021) highlighted the rapid transition to online assessments during the pandemic. Findings generally show positive acceptance of e-assessment systems supported by robust infrastructure but also reveal challenges related to digital equity and fairness. Hence, adaptations in assessment design and delivery were noted as necessary for maintaining validity.
Remote Proctoring and Cheating Prevention
Research on remote proctoring (Coniam et al., 2021; Purpura et al., 2021; Waluyo & Rofiah, 2024) highlights its effectiveness in reducing malpractice but also draws attention to privacy concerns and technical glitches. Some studies reveal an increased likelihood of cheating under remote conditions, prompting recommendations for randomised questions and active monitoring (Jalilzadeh et al., 2024). It was also found that AI-enhanced proctoring is promising but requires ethical safeguards (Isbell et al., 2023).
AI Integration in Language Assessment
AI’s role in language learning and assessment is expanding, especially for personalised learning and automated scoring (Amin, 2023; O’Sullivan, 2023). While AI tools improve grading consistency and learner confidence, concerns persist regarding equity, privacy, and ethical use. Therefore, clear guidelines are needed for integrating AI into online language assessments.
Technology Tools and Platforms
Platforms such as LMS (Blackboard, Moodle), video conferencing tools (Zoom, Microsoft Teams), and specialised proctoring software (Honorlock, ProctorTrack) are widely employed. These tools enhance accessibility and assessment scalability but also introduce challenges including user anxiety, technological reliability, and fairness perceptions (Christiansen, 2023; Coniam, 2021).
Fairness, Integrity, and User Perceptions
Fairness remains a central theme, with studies reporting mixed attitudes toward online assessments and proctoring. Anxiety about privacy and trust in technology is frequently mentioned (Coniam, 2021; Valkova, 2024). Gender and cultural factors also affect perceptions and performance, indicating a need for inclusive designs and policies.
Thematic Analysis of Key Findings
Recurring findings across studies concern: (a) effectiveness; (b) issues and challenges; (c) learner impact; and (d) academic integrity violations. Online assessments and remote proctoring are largely effective alternatives to traditional testing, with minor performance differences and growing acceptance among learners and educators. Issues of fairness, privacy, and technical reliability are prevalent, especially in relation to remote proctoring and AI use. In addition, from the learner perspective, online language assessment proctoring can affect anxiety and trust, influencing engagement and performance. However, despite technological solutions, cheating remains a concern, prompting calls for innovative test designs and enhanced proctoring methods.
Future Recommendations from the Selected Studies
Future recommendations from the selected studies focus on several key areas. First, there is a need for infrastructure improvements, emphasising investment in robust and equitable e-learning and assessment systems (Bhuiyan & Islam, 2023; Kassim, 2021). Second, faculty and candidate training are highlighted to enhance digital literacy and assessment skills among educators and learners, which can improve both acceptance and effectiveness of online proctoring (Al-Hashmi, 2023; Coniam, 2021). Third, policy development is recommended to establish clear ethical guidelines, fairness policies, and transparent proctoring protocols (Isbell et al., 2023; Ozer, 2024). Lastly, technology refinement is called for, with a focus on creating reliable, user-friendly proctoring and AI tools that include privacy safeguards and accessibility features (Purpura et al., 2021; Valkova, 2024). Additionally, further research is encouraged to investigate cultural factors, find a balance between access and integrity, and explore collaboration between AI and human proctors.
Framework for Developing Proctoring Systems for Online Language Assessments
Based on the findings from the systematic literature review, a framework was developed (Figure 2) to guide the creation of online proctoring systems specifically for language assessments. The framework highlights key areas that need attention to make proctoring systems reliable, fair, and user-friendly:
- Technological reliability and user experience: the software should be stable, easy to use, and provide real-time technical support to help reduce test-taker anxiety.
- Privacy, security, and ethics: strong data protection, transparency about data use, and ethical AI with human oversight must be ensured to build trust.
- Fairness and accessibility: systems should work well regardless of internet quality, support candidates with disabilities, and be regularly checked for bias.
- Integrity: multiple layers of authentication, randomised questions, and AI that spots suspicious behaviour should be combined with human review.
- Customisation for different language skills: proctoring tools should adapt their features to the language skill being tested to ensure the validity and reliability of the test.
- Training and support: both test-takers and administrators need guidance to feel comfortable with the system and manage it properly.
- Continuous evaluation: user feedback, performance monitoring, and collaboration with researchers should drive ongoing improvement.
This comprehensive framework aims to help developers build proctoring systems that truly meet the unique needs of language testing.
Figure 2 – Framework for Developing Proctoring Systems for Online Language Assessments
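As a concrete illustration of two of the integrity elements above, the sketch below shows per-candidate question randomisation and a human-in-the-loop queue for automated flags. It is a minimal sketch under our own assumptions: the function names, threshold, and flag structure are hypothetical and not taken from any system in the reviewed studies.

```python
import random
from dataclasses import dataclass


@dataclass
class Flag:
    """A hypothetical automated suspicion event awaiting human review."""
    candidate_id: str
    event: str          # e.g. "second voice detected"
    confidence: float   # score from the automated detector, 0.0-1.0
    reviewed: bool = False


def randomised_paper(question_bank: list[str], candidate_id: str, n: int) -> list[str]:
    """Draw a per-candidate random subset of questions.

    Seeding with the candidate ID keeps each paper reproducible for
    audits while still varying the questions between candidates.
    """
    rng = random.Random(candidate_id)
    return rng.sample(question_bank, n)


def review_queue(flags: list[Flag], threshold: float = 0.8) -> list[Flag]:
    """Route only high-confidence automated flags to a human proctor,
    so that no sanction is applied without human judgement."""
    return [f for f in flags if f.confidence >= threshold and not f.reviewed]


bank = [f"Q{i}" for i in range(1, 21)]
print(randomised_paper(bank, "C-001", n=5))

flags = [Flag("C-001", "second voice detected", 0.91),
         Flag("C-002", "gaze away from screen", 0.40)]
print([f.event for f in review_queue(flags)])  # only the high-confidence flag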
CONCLUSION AND RECOMMENDATIONS
This study has successfully developed a practical framework to guide technology developers in creating tailored proctoring systems specifically designed for language assessments. The framework synthesises key factors identified through a systematic literature review, emphasizing technological reliability, user experience, strong privacy and security measures, and system flexibility to adapt to different needs. This framework aims to make language testing fairer and more trustworthy, while also being easier to use for a wide range of test-takers. Overall, the proposed framework provides a clear guide for creating effective, trustworthy, and adaptable proctoring systems that are tailored for the unique demands of language assessments.
For future research, it would be valuable to test and refine this framework by applying it in real-life language assessment settings. Feedback from both test-takers and administrators will help ensure the system is user-friendly and meets everyone’s needs. It is also important to study emerging technologies, such as AI and biometric tools, which could make proctoring even more secure and reliable. Lastly, paying close attention to ethical concerns and privacy laws will help ensure these systems stay fair and respectful of users’ rights as the technology evolves.
REFERENCES
- Al Najar, G., & Ahmad, M. N. (2024). A Review on Factors Affecting the Success of Online Proctoring System. Journal of Information Systems Research and Practice, 2(2), 56–70. Retrieved from https://mjlis.um.edu.my/index.php/JISRP/article/view/53561
- Al-Hashmi, S., & Nizwa, O. (2023). Did they really Work? English Teachers’ Attitude towards the Effectiveness of Remote Online Exams in Times of Emergencies. Editorial Board, 16(12), 27.
- Alderson, J. C. (2005). Assessing reading. Cambridge University Press.
- Alqurashi, E. (2019). Predicting student satisfaction and perceived learning within online learning environments. Distance Education, 40(1), 133–148. https://doi.org/10.1080/01587919.2018.1553562
- Amin, M. Y. M. (2023). AI and Chat GPT in language teaching: Enhancing EFL classroom support and transforming assessment techniques. International Journal of Higher Education Pedagogies, 4(4), 1-15.
- Balash, D. G., Kim, D., Shaibekova, D., Fainchtein, R. A., Sherr, M., & Aviv, A. J. (2021). Examining the examiners: Students’ privacy and security perceptions of online proctoring services. In Proceedings of the Seventeenth Symposium on Usable Privacy and Security (SOUPS 2021). https://www.usenix.org/system/files/soups2021-balash.pdf
- Bhuiyan, A. A. M., & Islam, A. K. M. M. (2023). E-assessment during COVID-19 pandemic at a Saudi university: impact on assessment methods and course learning outcomes. Journal of Language and Cultural Education, 11(3).
- Bryson, J., & Andres, L. (2020). Covid-19 and rapid adoption and improvisation of online teaching: curating resources for extensive versus intensive online learning experiences. Journal of Geography in Higher Education, 44, 608 – 623. https://doi.org/10.1080/03098265.2020.1807478.
- Buck, G. (2001). Assessing listening. Cambridge University Press.
- Cele, S., & Maphalala, M. C. (2025). Examining Social Justice Implications of Proctoring Technologies in Online Assessments within Open and Distance e-Learning (ODeL) Environments: Privacy, Equity, and Access. International Journal of Educational Innovation and Research, 4(1), 125–143. https://doi.org/10.31949/ijeir.v4i1.12773
- Çelikbağ, M. A., & Delialioğlu, Ö. (2021, November). Proctored vs Unproctored Online Exams in Language Courses: A Comparative Study. In International Conference on Computers in Education.
- Cheng, G., & Chau, J. (2016). Exploring the relationships between learning styles, online participation, learning achievement and course satisfaction: An empirical study of a blended learning course. British Journal of Educational Technology, 47(2), 257- 278. doi:10.1111/bjet.12243
- Christiansen, T. (2023). Malpractice in Online Versus Onsite Computer Based Language Tests: Reflections from the COVID lockdown experience. Lingue e Linguaggi, 59, 39-63.
- Coniam, D. (2022). Online Invigilation of English Language Examinations: A Survey of China Candidates’ Attitudes and Perceptions. International Journal of TESOL Studies, 4(1).
- Coniam, D., Lampropoulou, L., & Cheilari, A. (2021). Online Proctoring of High-Stakes English Language Examinations: A Survey of Past Candidates’ Attitudes and Perceptions. English Language Teaching, 14(8), 58-72.
- Dawson, P. (2020). Defending assessment security in a digital world: Preventing e-cheating and supporting academic integrity in higher education. Routledge.
- Evenddy, S. S., & Hamer, W. (2023). Examining Technology Integration in Language Assessment. PROCEEDING AISELT, 8(1).
- Fitriani, F. (2022). Adapting Language Tests to Hybrid Learning: EFL Teachers’ Challenges and Their. IDEAS: Journal on English Language Teaching and Learning, Linguistics and Literature, 10(2), 1768-1777.
- Foster, D., & Layman, H. (2013, March). Online proctoring systems compared.
- Garcia Laborda, J., & Fernandez Alvarez, M. (2021). Multilevel language tests: Walking into the land of the unexplored.
- Gnambs, T., & Lenhard, W. (2024). Remote testing of reading comprehension in 8-year-old children: Mode and setting effects. Assessment, 31(2), 248-262.
- Hodges, C., Moore, S., Lockee, B., Trust, T., & Bond, A. (2020). The difference between emergency remote teaching and online learning. Educause Review, 27(1), 1-12. https://er.educause.edu/articles/2020/3/the-difference-between-emergency-remote-teaching-and-online-learning
- Hussein, M., Yusuf, J., Deb, A., Fong, L., & Naidu, S. (2020). An evaluation of online proctoring tools. Open Praxis, 12(4), 509. https://doi.org/10.5944/openpraxis.12.4.1113
- Hyland, K. (2003). Second language writing. Cambridge University Press.
- Isbell, D. R., Kremmel, B., & Kim, J. (2023). Remote Proctoring in Language Testing: Implications for Fairness and Justice. Language Assessment Quarterly, 20(4–5), 469–487. https://doi.org/10.1080/15434303.2023.2288251
- Jalilzadeh, K., Rashtchi, M., & Mirzapour, F. (2024). Cheating in online assessment: a qualitative study on reasons and coping strategies focusing on EFL teachers’ perceptions. Language Testing in Asia, 14(1), 29.
- Jia, J., & He, Y. (2022). The design, implementation and pilot application of an intelligent online proctoring system for online exams. Interactive Technology and Smart Education, 19(1), 112-120. https://doi.org/10.1108/ITSE-12-2020-0246
- Kassim, A. (2021). COVID-19 is Still Here: Discussing Home-Based Online Assessments. International Journal of Language Education and Applied Linguistics, 1-4.
- Kim, J. (2025, April 30). Proctoring in a Second Language: Exploring Fairness and Justice in Remote English Language Testing. Retrieved from osf.io/rph3t_v1
- King, C. G., Guyette, R. W., & Piotrowski, C. (2009). Online exams and cheating: An empirical analysis of business students’ views. Journal of Educators Online, 6(1), 1–11.
- Klimanova, L., Merrill, J., & Spasova, S. D. (2021). Emergency Remote Teaching, Online Instruction, and the Community. Russian Language Journal/Русский язык, 71(2), 1-22.
- Koenecke, A., Nam, A., Lake, E., Nudell, J., Quartey, M., Mengesha, Z., Toups, C., Rickford, J. R., Jurafsky, D., & Goel, S. (2020). Racial disparities in automated speech recognition. Proceedings of the National Academy of Sciences, 117(14), 7684–7689. https://doi.org/10.1073/pnas.1915768117
- Kucherova, O., & Ushakova, I. (2022). Effectiveness of online testing in General English University Course from teacher and student perspectives.
- Luburić, N., Slivka, J., Sladić, G., & Milosavljević, G. (2021). The challenges of migrating an active learning classroom online in a crisis. Computer Applications in Engineering Education, 29, 1617-1641.
- Luoma, S. (2004). Assessing speaking. Cambridge University Press.
- Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2014). The effectiveness of online and blended learning: A meta-analysis of the empirical literature. Teachers College Record, 115(3), 1–47.
- Milanovic, M., Lee, T., & Coniam, D. (2023). The Delivery of Speaking Tests in Traditional or Online Proctored Mode: A Comparability Study.
- Miles, R. (2023, November). Writing Assessment in the Laptop-mediated English Language Classroom: Rasch analysis, fairness and flexible delivery. In HCT International General Education Conference (HCT-IGEC 2023) (pp. 55-76). Atlantis Press.
- Nguyen, H. T. T. (2023). Unproctored assignment-based online assessment in higher education: Stakeholder evaluation of issues. Issues in Educational Research, 33(1), 207-226.
- Nguyen, T. Q. Y., Tran, T. T. H., Nguyen, T. N. Q., Nguyen, T. P. T., Nguyen, T. C., Sao Bui, T., & Nguyen, Q. H. (2022, November). Online Language Testing and Assessment in the Pandemic: Opinions from Test Administrators and Examiners. In Proceedings of the AsiaCALL International Conference (Vol. 1, pp. 30-45).
- O’Sullivan, B. (2023). Reflections on the application and validation of technology in language testing. Language Assessment Quarterly, 20(4-5), 501-511.
- Ozer, O. (2024). AI language models: A breach of academic integrity in online language learning? Studies in Language Assessment, 13(1), 237.
- Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71
- Poonpon, K. (2021). Test Takers’ Perceptions of Design and Implementation of an Online Language Testing System at a Thai University during the COVID-19 Pandemic. PASAA, 62, 1-28.
- Purpura, J. E., Davoodifard, M., & Voss, E. (2021). Conversion to remote proctoring of the community English language program online placement exam at Teachers College, Columbia University. Language Assessment Quarterly, 18(1), 42-50.
- Rahmati, T., & Sadeghi, K. (2024). USED IN RESEARCHING LANGUAGE ASSESSMENT.
- Rahmatillah, R., Fajrita, R., & Rahma, E. A. (2023). Understanding the implementation of an at-home language test: A case of an online version of TOEFL-PBT. Englisia: Journal of Language, Education, and Humanities, 10(2), 217-230.
- Resdiana, W., & Yulientinah, D. S. (2023). Designing English language testing using a web based monitoring platform. JEELL (Journal of English Education, Linguistics and Literature) English Department of STKIP PGRI Jombang, 9(2), 41-48.
- Sadiq, M., & Harrison, T. (2020). Remote exam integrity: Challenges and solutions with online proctoring. International Journal of E-Learning, 19(1), 21-38.
- Thomas, J., & Harden, A. (2008). Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology, 8(1), 45. https://doi.org/10.1186/1471-2288-8-45
- Valkova, R. B. (2024). Remote proctoring in language assessment: Exploring the impact on test-takers’ scores and perceptions. Studies in Language Assessment, 13(1), 141–173. https://doi.org/10.58379/FCZM2855
- Voss, E. (2023). Proctoring remote language assessments. In Fundamental considerations in technology mediated language assessment (pp. 186-200). Routledge.
- Waluyo, B., & Rofiah, N. L. (2024). The Likelihood of Cheating at Formative Vocabulary Tests: Before and During Online Remote Learning in English Courses. Journal of Language and Education, 10(1 (37)), 133-145.
- Weir, C. J. (2005). Language testing and validation: An evidence-based approach. Palgrave Macmillan.
- Whitelock, D., Edwards, C., & Okada, A. (2020). Can e-authentication raise the confidence of both students and teachers in qualifications granted through the e-assessment process? Journal of Learning for Development (JL4D), 7(1), 46-60.
- Young, D. J. (1991). Creating a low-anxiety classroom environment: What does language anxiety research suggest? The Modern Language Journal, 75(4), 426–439. https://doi.org/10.2307/329492
- Zhang, S., & Isaacs, T. (2022). Can Interactions Happen across the Screens?: The Use of Videoconferencing Technology in Assessing Second Language Pragmatic Competence. In Technology-Assisted Language Assessment in Diverse Contexts (pp. 196-211). Routledge.