INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS | Volume IX Issue XII December 2025
heavily on automated feedback could interfere with the recursive stage of writing known as "reviewing." If AI
takes control of the revision process, learners might fail to absorb techniques or identify their flaws, resulting in
what recent cognitive studies refer to as "cognitive offloading," a form of passivity that hinders the growth of
autonomous problem-solving skills (Gerlich, 2025).
Baek et al. (2024) and Ali et al. (2024) also claim that AI tools can help novice writers by reducing cognitive
load for grammar, structure, and organisation, which enables them to focus on higher-level thinking drawn from
long-term memory. However, this perceived benefit highlights a recurring tension. AI tools often bypass
generative phases such as drafting, evaluating meaning, and constructing arguments, which limits opportunities
for authentic knowledge building. Consequently, generative AI can encourage premature acceptance of machine
outputs and reduce learners’ monitoring and evaluative behaviours (Espartinez, 2024). This research has
observed a lack of drafting and revising among students, in addition to the reduction in self-regulated
engagement with meaning-making.
The “monitoring phase” component of Flower and Hayes’ (1981) model may also be compromised if AI is
introduced early in the writing process. Yan (2023) suggests that students employing AI tools exhibit reduced
awareness because the technology interferes with the organic writing flow by providing early answers. Research
on LLM quality and hallucinations also adds complexity to depending on AI. Surveys and technical studies
document hallucinations and factual errors in LLM outputs, resulting in fluently expressed paragraphs with
inaccurate statements or misleading references (Farquhar et al., 2024; Hwang et al., 2023). The evidence suggests
that although AI-generated text may appear rhetorically polished, it can be conceptually shallow and require
careful pedagogical mediation.
Artificial Intelligence in Education and the ESL Classroom
Rapid advancement in AI has significantly reshaped English as a Second Language instruction, particularly in
academic writing development. Modern adaptive learning tools can now provide personalized writing support
that adapts to individual student needs with real-time feedback on grammar, vocabulary, and sentence structure
(Barrot, 2024; Hwang et al., 2023; Espartinez, 2024). For many ESL learners, AI writing assistants serve as
always-available language tutors, providing instant error corrections and suggestions that support the learning
process (Espartinez, 2024; Barrot, 2023).
However, emerging research reveals concerning trends about overreliance on these technologies. AI tools can
excel at improving surface-level writing features, but they may unintentionally discourage deeper intellectual
engagement (Gerlich, 2025; Yan, 2023). Multiple studies document cases where students' dependence on AI-
generated content resulted in writing that is grammatically correct but lacks original thought and critical analysis
(Wang & Fan, 2025; Espartinez, 2024). The limitations become particularly evident in advanced academic
writing, where AI often fails to replicate subject-area writing styles or generate nuanced arguments (Jiang &
Hyland, 2025; Baek et al., 2024). Recent research also shows that overuse of tools such as ChatGPT may reduce
metacognitive involvement and awareness on the part of L2 writers (Freeman, 2025; Espartinez, 2024).
In response, educators now face the challenge of integrating these tools effectively while preserving essential
writing skills. Current best practices emphasize using AI as an assisting tool rather than a replacement for
traditional writing instruction (Espartinez, 2024; Cotton et al., 2023). Many experts support hybrid approaches
that combine AI's efficiency with human-guided instruction in critical thinking and genre conventions (Wang &
Fan, 2025; Jiang & Hyland, 2025). However, implementation remains uneven across institutional contexts.
Many universities lack clear policy frameworks, leaving educators to adopt ad hoc approaches to AI regulation
(Cotton et al., 2023). This challenge is particularly severe in Asian and Malaysian tertiary settings where human-
centred pedagogies and concerns about maintaining students’ authentic voices shape more cautious adoption
practices (Hu et al., 2025). Compounding these issues, teachers are increasingly expected to evaluate AI-
mediated writing without the necessary AI literacy training (Espartinez, 2024). Additional evidence suggests
that students using AI systems may show reduced self-editing tendencies compared to learners using automated
writing evaluation systems (Steiss et al., 2024). These results highlight the necessity of pedagogical approaches
that teach students when and how to use AI tools responsibly.