INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS | Volume IX Issue X October 2025
Integrating Artificial Intelligence into Military English for Specific
Purposes Education: A Case Study at a Military University in
Vietnam
To Thi Lien Ha*
Faculty of Foreign Languages, Vietnam ADAFA, Hanoi, Vietnam
*Corresponding author
DOI: https://dx.doi.org/10.47772/IJRISS.2025.910000804
Received: 01 November 2025; Accepted: 07 November 2025; Published: 24 November 2025
ABSTRACT
This study investigates how Artificial Intelligence (AI) can enhance two mission-critical linguistic competencies
in military English for Specific Purposes (ESP): (1) technical vocabulary accuracy and (2) spoken command
performance under time-pressured operational conditions. A mixed-method case study design was adopted to
accommodate the limited digital access and strict security requirements at ADAFA in Vietnam. The quantitative component
involved administering a 20-item questionnaire to 60 cadets, while qualitative insights were collected through
semi-structured interviews with five ESP instructors. The instruments were adapted from validated AI-in-
education scales and designed to capture four dimensions relevant to defense education: usefulness, motivation,
linguistic performance, and technological readiness. Findings reveal that intranet-based AI tools significantly
improve cadets’ accuracy in domain-specific terminology and increase confidence in delivering English
command phrases. AI-generated feedback reduces speaking anxiety, while adaptive modules provide targeted
practice aligned with operational tasks such as radar coordination and artillery command. However,
implementation remains constrained by teachers’ limited AI literacy, security restrictions, and insufficient
infrastructural support. To address these issues, the study proposes a three-layer model combining pedagogical
adaptation, secure technological mediation, and institutional capacity building. The results provide a
contextualized framework for integrating AI into ESP instruction in high-security military environments.
Keywords: Artificial Intelligence, English for Specific Purposes, Military English, Adaptive Learning, Defense
Education
INTRODUCTION
Artificial Intelligence (AI) has rapidly transformed global education by enabling adaptive learning, intelligent
feedback, and data-driven instruction (Edmett et al., 2023; He, Zhang, & Huang, 2025). In English for Specific
Purposes (ESP), AI supports personalized content delivery, enhances motivation, and simulates authentic
communication tasks. These capabilities are particularly relevant in military education, where officers must
master precise technical terminology and operational English to ensure mission clarity, multinational
coordination, and technological command readiness (Dudley-Evans & St John, 1998).
At ADAFA in Vietnam, English training plays a vital role in preparing cadets for air defense operations.
However, instruction remains largely teacher-centered and focused on grammar translation, with limited
opportunities for communicative practice or exposure to authentic operational discourse (Xuan Mai & Thanh
Thao, 2022). Security restrictions also limit cadets’ access to online learning resources and real-time language
support (Xu, 2024). These structural constraints hinder the development of two mission-critical linguistic
competencies in military English: (1) technical vocabulary accuracy and (2) spoken command performance
under time pressure. These competencies were selected as the focus of the present study because they directly
influence operational clarity during radar coordination, command issuing, and joint-force communication.
AI-based systems have the potential to address these barriers by offering immediate corrective feedback,
personalized vocabulary modules, and simulated operational scenarios tailored to military communication
(Akhter, 2022). Yet, despite increasing global adoption of AI in education, research on AI integration in military
ESP contexts remains scarce. Most existing studies focus on civilian universities, where learners have open
internet access and fewer institutional constraints (He et al., 2025). As a result, there is a lack of empirical
understanding of how AI can be implemented safely, effectively, and ethically within high-security defense
academies.
This study addresses this gap by examining how AI can enhance military ESP instruction at ADAFA, with a
specific focus on improving cadets’ mastery of technical terminology and spoken command performance. It
explores instructors’ and cadets’ perceptions of AI-based learning, evaluates its effects on motivation and
linguistic accuracy, and considers the institutional conditions needed to support secure implementation. By
contextualizing AI integration within the discipline, hierarchy, and digital restrictions of a military academy, this
research contributes both theoretical insights and practical implications for AI-enhanced ESP education in
defense environments.
LITERATURE REVIEW
AI has become a transformative force in language education, thanks to its ability to process learner data at scale,
deliver adaptive feedback, and simulate communicative contexts with high accuracy (Edmett et al., 2023). AI-
driven applications, such as intelligent tutoring systems, natural language processing (NLP) tools, automated
assessment engines, and speech recognition programs, provide fine-grained diagnostics that help learners
identify linguistic weaknesses and track progress over time. Akhter (2022) highlights that large language models
and AI-powered feedback systems strengthen both accuracy and fluency by offering immediate, individualized
correction, while Mizumoto (2023) emphasizes that AI enhances metacognitive awareness by enabling learners
to regulate strategies based on real-time performance data. Collectively, these functions align with learner-
centered pedagogy, where technology supports autonomy, reflection, and differentiated instruction.
Within English for Specific Purposes (ESP), scholars have documented the potential of AI in creating
domain-specific learning pathways. He, Zhang, and Huang (2025) argue that AI platforms generate customized
tasks targeting specialized terminology and communicative situations, allowing learners to engage with language
forms directly connected to their professional fields. Adaptive learning engines can detect varying proficiency
levels within the same class, delivering individualized tasks that prevent advanced learners from being held back
while supporting those who require remediation. Edmett et al. (2023) further note that automated writing
evaluation, speech analytics, and AI-assisted vocabulary trainers enable teachers to devote instructional time to
developing strategic communication skills rather than correcting mechanical errors. Such capabilities are
particularly relevant for ESP domains that demand high levels of precision, clarity, and discipline-specific
terminology.
However, the majority of AI-related ESP research is conducted in civilian educational settings, where learners
enjoy unrestricted access to digital infrastructure, flexible institutional governance, and open internet
connectivity. These contexts differ significantly from military ESP environments, in which communication
demands not only linguistic proficiency but also technical accuracy, operational clarity, and rapid
decision-making under pressure (Dudley-Evans & St John, 1998). Military English is recognized as a high-stakes
form of ESP due to its direct connection to mission execution, command coordination, and international defense
engagement. Studies on military ESP highlight that learners must master specialized lexical sets, such as
airdefense terminology, artillery commands, radar reporting structures, and perform spoken communication tasks
in real or simulated operational contexts, often under time pressure (Xuan Mai & Thanh Thao, 2022). These
linguistic demands go beyond general English or civilian ESP, as miscommunication may compromise
operational safety.
Despite the clear pedagogical relevance of AI for supporting accuracy and real-time performance, the
technological integration of AI in military ESP remains limited. In global defense systems, AI has been primarily
applied to tactical simulations, unmanned systems, and strategic decision-support algorithms (Rashid et al.,
2023). In contrast, applications focusing on linguistic competence, especially those aimed at improving spoken
commands, pronunciation of mission-critical terminology, or vocabulary precision, are seldom documented.
Chen (2024) notes that while AI-based communicative approaches have gained traction in higher education,
military institutions often maintain traditional, instructor-centered pedagogies due to concerns regarding
discipline, operational security, and the reliability of external technologies. Xu (2024) similarly observes that
digital transformation in education is constrained by limited access to secure networks and by institutional
hesitation in adopting AI tools that require data exchange with external servers.
These structural, cultural, and security constraints generate a unique research gap: AI has rarely been examined
as a tool for enhancing technical vocabulary acquisition and spoken command performance in high-security
military language classrooms. The existing literature provides valuable insights into AI-driven personalization
and feedback; however, it does not address how these features can be adapted to environments where internet
connectivity is restricted, device use is regulated, and instructional innovation must align with defense protocols.
Furthermore, studies have not explored how AI might mitigate common learning challenges among military
cadets, such as anxiety when delivering spoken commands, difficulty recalling complex terminology, or lack of
exposure to authentic operational communication.
Thus, while the broader literature demonstrates AI’s potential to individualize instruction, enhance performance
analytics, and support specialized vocabulary learning, little is known about its pedagogical, technological, and
institutional integration within military ESP contexts. The present study extends existing research by focusing
on ADAFA in Vietnam and examining how AI can address two mission-critical linguistic competencies: (1)
technical air-defense vocabulary accuracy, and (2) spoken command performance under time-pressured
conditions. By situating AI within the constraints of a secure military environment, this study offers a
contextualized understanding of how adaptive technologies can support operationally relevant language learning
while adhering to institutional discipline and security requirements.
RESEARCH METHODOLOGY
Research Design
This study employed a mixed-method case study design to investigate how AI can enhance two mission-critical
linguistic competencies in military English: technical air-defense vocabulary and spoken command performance.
A mixed-method approach was selected because it enables both the measurement of perceptual patterns and the
interpretation of contextualized experiences, an essential consideration in tightly regulated military environments
where quantitative data alone may not fully reflect institutional barriers or pedagogical constraints. As Creswell
and Plano Clark (2018) argue, combining quantitative and qualitative strands enables researchers to triangulate
findings, enhance validity, and gain a more comprehensive understanding of complex educational phenomena.
The case study design was chosen because ADAFA represents a unique instructional setting characterized by
restricted internet access, strict confidentiality protocols, and a highly structured learning culture. These features
require closer contextual examination than survey-based or experimental designs typically allow. The integration
of AI technologies within such constraints necessitates analyzing both learner responses and institutional
dynamics, justifying a context-embedded case study approach.
Participants
Participants included 60 cadets and five ESP instructors from the Faculty of Foreign Languages at ADAFA. The
sample of 60 cadets was selected because it comprises two intact training cohorts, reflecting the Academy’s
typical class size and ensuring an adequate level of representativeness while remaining feasible under military
scheduling restrictions. Cadets were second- and third-year students majoring in air-defense command and radar
operations, with self-reported English proficiency levels ranging from A2 to B1 on the CEFR scale.
The five instructors constituted the entire pool of teachers with a minimum of five years’ experience in military
ESP instruction. Their inclusion ensured maximum coverage of expert perspectives on both pedagogical
challenges and institutional constraints. Participation was voluntary for both groups, and anonymity was
maintained through the use of pseudonyms.
Instruments
To capture multiple dimensions of AI-assisted learning in the military context, the study employed two
complementary instruments: a structured questionnaire for cadets and a semi-structured interview guide for
instructors.
Cadet Questionnaire
A 20-item Likert-scale questionnaire (1 = Strongly Disagree; 5 = Strongly Agree) was administered offline to
comply with the Academy’s security regulations. The questionnaire items were adapted from validated scales
used in previous AI-in-education research (Edmett et al., 2023; Mizumoto, 2023). The decision to use 20 items
was intentional: fewer than 20 would not adequately cover the constructs relevant to military ESP, while more
items would extend testing time and disrupt cadets’ tightly scheduled training duties. The items measured four
constructs aligned with the study’s conceptual framework: perceived usefulness of AI tools, learning motivation
and psychological responses, linguistic performance in technical vocabulary and command speaking, and
technological readiness and ease of use.
Likert scaling was chosen because it allows efficient capture of attitudes and perceptions from groups operating
under strict time constraints. Reliability analysis yielded a Cronbach’s α of 0.89, indicating high internal
consistency.
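For readers who wish to check an internal-consistency figure against their own data, the computation is straightforward. The sketch below is a minimal Python implementation of Cronbach's α; the study itself used SPSS, and the 60 × 20 response matrix here is randomly generated purely for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    k = items.shape[1]                          # number of items (20 in this study)
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Stand-in data: 60 cadets x 20 items, scores 1-5. Random data yields alpha
# near zero; reproducing the reported 0.89 requires the actual responses.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(60, 20))
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```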
Instructor Interview Guide
A semi-structured interview protocol was developed to elicit deeper insights into instructional practices,
perceived benefits, institutional barriers, and security considerations. Semi-structured interviews were chosen
because they strike a balance between consistency, provided by guiding questions, and flexibility to probe
emergent themes unique to a military setting. Each interview lasted 30–40 minutes and was conducted privately
in faculty offices to ensure confidentiality. The guiding questions addressed uses of AI, observed
changes in cadet performance, constraints arising from restricted digital access, and recommendations for secure
AI implementation.
The rationale for including instructors was to complement cadets’ perceptions with expert interpretations
grounded in operational and pedagogical experience. This dual-instrument strategy strengthened data
triangulation, enabling a richer analysis of contextual factors that cannot be captured through surveys alone.
Data Collection Procedure and Data Analysis
Data collection took place over six weeks in May and June 2025. Due to security requirements, all survey
activities were conducted offline using printed forms distributed through the Academy’s internal channels. The
offline format ensured adherence to the “no external device” regulation applied to cadets during training periods.
Completed questionnaires were collected and manually anonymized before data entry.
Interviews were scheduled individually and recorded with consent using secure, offline recording devices
approved by the Academy. Ethical approval was granted by the ADAFA Research Ethics Committee (Ref. No.
ADAFA-EDU-AI-2025-04). Participants were informed that all data would be stored on the Academy’s internal
servers and would not involve any personal or operationally sensitive information.
Quantitative data were coded and analyzed using SPSS 26.0. Descriptive statistics (means, standard deviations)
were used to identify overall trends, while Pearson’s correlation tests explored relationships among key variables,
such as the link between AI usefulness and motivation. These analyses were selected because they yield
interpretable patterns and are well suited to the medium-sized samples typical of military cohorts.
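As an open-source analogue of the SPSS workflow, the following Python sketch computes the same descriptive statistics and a Pearson correlation; the column names and scores are hypothetical stand-ins for per-cadet construct averages, not the study's actual data file.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-cadet construct averages; the real study analyzed 60 cases
# in SPSS 26.0, and these names/values are illustrative only.
df = pd.DataFrame({
    "usefulness": [4.2, 3.8, 4.5, 4.0, 3.6, 4.3],
    "motivation": [4.0, 3.5, 4.6, 4.1, 3.4, 4.4],
})

# Descriptive statistics: mean and standard deviation per construct.
print(df.agg(["mean", "std"]))

# Pearson's correlation between perceived usefulness and motivation.
r, p = pearsonr(df["usefulness"], df["motivation"])
print(f"r = {r:.2f}, p = {p:.3f}")
```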
Qualitative data from interviews were transcribed and analyzed using Braun and Clarke’s (2023) six-phase
thematic analysis framework. This approach allowed inductive identification of themes reflecting instructors’
lived experiences, institutional obstacles, and pedagogical needs. Two independent coders reviewed the
transcripts to enhance inter-coder reliability, and discrepancies were resolved through discussion.
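The paper does not name the agreement statistic used by the two coders; Cohen's κ is a common choice for such checks, and a minimal sketch (with invented theme codes and excerpts) is shown below.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical theme codes assigned by two independent coders to ten
# interview excerpts; the labels are invented for illustration.
coder_a = ["benefit", "barrier", "benefit", "security", "barrier",
           "benefit", "security", "barrier", "benefit", "security"]
coder_b = ["benefit", "barrier", "security", "security", "barrier",
           "benefit", "security", "benefit", "benefit", "security"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # values above ~0.60 are often read as substantial agreement
```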
Research Validity, Reliability, and Ethical Considerations
Methodological rigor was ensured through data triangulation, member checking, and peer debriefing. Cadets’
survey responses were compared with instructors’ qualitative accounts to cross-validate interpretations. Member
checking involved providing each interviewee with a summary of their transcript for confirmation and
verification. Peer debriefing sessions among faculty researchers further reduced the risk of interpretive bias.
Given the sensitivity of the military environment, all data were stored on password-protected internal devices.
No external cloud services, mobile applications, or internet-connected AI tools were used at any stage of the
research process. Participation was voluntary, and cadets were informed that no academic or disciplinary
consequences would result from declining or withdrawing from involvement.
FINDINGS AND DISCUSSION
Findings
Analysis of the survey and interview data revealed four main findings related to the effects of AI integration
on military ESP learning at ADAFA. These findings concern (1) technical vocabulary development, (2) spoken
command performance, (3) learner engagement and perceived operational relevance, and (4) technological and
institutional constraints.
Enhanced Acquisition of Technical Military Vocabulary
Survey results indicate that AI substantially supported cadets’ mastery of air-defense terminology. A large
majority (83%) reported that AI exercises helped them identify and repeatedly review discipline-specific lexical
items such as “bearing”, “altitude”, and “acquisition”, along with terms related to radar or artillery operations. Error-tracking
functions within the AI platform highlighted words frequently misused or mispronounced, enabling learners to
focus on items central to operational communication.
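The study does not document the platform's internals, but the error-tracking behaviour described above can be pictured with a short, purely illustrative sketch in which per-term error counts drive the review queue.

```python
from collections import Counter

# Hypothetical log of terms a cadet got wrong across AI vocabulary exercises.
error_log = ["bearing", "acquisition", "bearing", "altitude",
             "acquisition", "bearing", "interception"]

# Rank terms by error frequency so review targets the weakest items first.
error_counts = Counter(error_log)
review_queue = [term for term, _ in error_counts.most_common()]
print(review_queue)  # ['bearing', 'acquisition', 'altitude', 'interception']
```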
Interview data further confirm that AI modules provided differentiated practice opportunities. Instructor 2 noted
that weaker cadets benefited from the adaptive difficulty, which “allowed them to review essential terms multiple
times without slowing down those with higher proficiency.” This pattern suggests that the adaptive mechanisms
described in He, Zhang, and Huang (2025) translated effectively into the controlled conditions of a military
language classroom.
Improvement in Spoken Command Performance
Findings also show that AI contributed to measurable gains in cadets’ spoken output, particularly in command-
giving tasks. Approximately 78% of cadets agreed that automated pronunciation and fluency feedback helped
them refine command phrases and improve clarity. Several cadets described feeling “less anxious” practicing
with AI before speaking in front of peers, which mirrors Mizumoto’s (2023) observation that AI can strengthen
metacognitive control and reduce performance pressure.
Instructors likewise noted improvements in accuracy and automaticity when cadets issued English commands
during drills. Instructor 4 reported that AI analytics revealed patterns of recurrent errors, such as consonant
clusters in “artillery” or “altitude”, which enabled more targeted follow-up practice. These observations suggest that
AI supported the development of both linguistic accuracy and procedural fluency, competencies central to
military communication.
Increased Engagement and Perceived Operational Relevance
AI-enabled simulations emerged as one of the most engaging components of the learning process, receiving the
highest average rating across the questionnaire (mean = 4.28/5). Cadets described these tasks as “realistic” and
“motivating” because they replicated radar alerts, tracking sequences, and rapid-response coordination. Such
scenarios required cadets to apply vocabulary and command structures under time-pressured conditions similar
to real operations.
Instructors emphasized that cadets retained terminology more effectively when it appeared in simulation-based
tasks rather than in isolated drills. Instructor 1 stated that “students remembered terms better when they
encountered them inside realistic mission settings.” This perception aligns with the view that ESP learning is
most effective when embedded in authentic task environments (Dudley-Evans & St John, 1998).
Structural, Technological, and Cultural Constraints
Despite positive learning outcomes, several constraints limited the depth of AI use. The most significant barrier
involved restricted internet access, which prevented the use of many cloud-based AI tools and compelled reliance
on limited intranet-based applications. Cadets frequently commented that external platforms were inaccessible
due to security regulations. At the same time, instructors expressed concerns regarding confidentiality and data
handling, a pattern consistent with security-related limitations noted by Rashid et al. (2023).
Additionally, instructors’ varying levels of AI literacy influenced the consistency of implementation. Only two
instructors reported prior experience with AI-based tools, while the others felt uncertain about integrating analytics
into lesson planning. Cultural factors also influenced initial reactions: some cadets were hesitant to rely on
automated feedback, as they viewed teacher-led correction as more familiar within the hierarchical military
environment. Over time, however, most cadets reported increasing comfort as they became accustomed to
AI-supported practice.
DISCUSSION
The findings of this study show that AI can significantly enhance military ESP instruction at ADAFA by
strengthening cadets’ mastery of technical vocabulary, improving spoken command performance, and increasing
engagement through simulation-based tasks. These outcomes support existing claims that AI enables adaptive,
feedback-driven, and context-specific learning (Edmett et al., 2023; He, Zhang, & Huang, 2025), while extending
the literature by demonstrating how these affordances operate within a tightly regulated military environment.
A key contribution of the study is the apparent improvement in the acquisition of discipline-specific terminology.
Air-defense communication relies on a narrow, high-precision lexicon, and errors in this domain can directly
compromise operational clarity. The adaptive vocabulary reinforcement enabled by AI confirms He, Zhang, and
Huang’s (2025) argument that AI can personalize domain-specific input; however, the present findings show that
personalization is not merely pedagogically beneficial but also operationally necessary. AI’s ability to highlight
recurring lexical errors and provide targeted review proved especially valuable in classes with wide proficiency
gaps, where traditional instruction struggles to differentiate effectively.
Improvements in spoken command performance further illustrate the role of AI in supporting high-stakes
communicative tasks. Military English relies on short, formulaic utterances delivered under time pressure; thus,
the opportunity to rehearse commands with immediate, private feedback was highly valued by cadets. This
finding aligns with Mizumoto’s (2023) notion of AI as a facilitator of metacognitive regulation: cadets monitored
their pronunciation accuracy, tracked progress, and self-corrected before speaking publicly. The psychological
effect is noteworthy. In a hierarchical, error-sensitive environment, AI provided a low-pressure space that helped
learners gain confidence, a factor rarely highlighted in civilian ESP literature but central to military
communication training.
The strong engagement generated by AI-supported simulations demonstrates the importance of embedding
English practice within operationally authentic contexts. While earlier studies emphasize AI’s motivational
benefits (Edmett et al., 2023), this study underscores the particular relevance of simulation-based learning for
military tasks. By replicating radar alerts and rapid-response sequences, simulations blurred the boundary
between language practice and tactical training, enabling cadets to transfer terminology and command structures
more effectively. This aligns with Dudley-Evans and St John (1998), who argue that ESP is most effective when
anchored in authentic communicative demands.
However, the findings also reveal institutional and technological constraints that shape AI adoption. Restricted
internet access, essential for military security, limits the use of commercial AI tools, mirroring concerns raised
by Rashid et al. (2023). Instructors’ apprehension about data handling shows that AI solutions must be
pedagogically effective while conforming to defense confidentiality standards. These requirements differentiate
military ESP from civilian contexts and highlight the need for local hosting, data sovereignty, and system
transparency.
Instructor readiness emerged as another mediating factor. Although instructors recognized the value of AI,
uneven levels of AI literacy led to inconsistent implementation. This confirms Edmett et al.’s (2023) observation
that teachers play a central role in AI-enhanced pedagogy; however, the military context amplifies this issue, as
instructional practices must be standardized across cohorts. Professional development is therefore not optional
but essential for sustainable integration.
Cultural expectations within military training further influenced perceptions of AI. Initial discomfort with
automated feedback reflected the hierarchical norms of the institution, where instructors traditionally hold
evaluative authority. Yet cadets’ increasing comfort over time suggests that these reservations can be mitigated
through guided exposure rather than posing a permanent barrier. This dynamic reinforces the importance of
institutional alignment, a key element of the study’s conceptual framework.
Overall, the findings reaffirm the pedagogical potential of AI while demonstrating that the unique demands of
military contexts mediate its effectiveness. AI clearly supports more precise vocabulary learning, more confident
spoken command performance, and more authentic engagement with operational tasks. Yet these benefits depend
on secure infrastructures, teacher readiness, and policy-level coordination to ensure that integration is both
pedagogically meaningful and institutionally feasible.
Pedagogical, Technological, and Institutional Implications
The findings offer several implications for military ESP instruction. Pedagogically, the improvements in
vocabulary mastery and spoken command performance suggest that AI should be deliberately integrated into
existing curricula rather than treated as optional support. Adaptive exercises can reinforce foundational
competencies before cadets engage in higher-stakes communicative tasks, while AI feedback can serve as a
preparatory tool to reduce anxiety in command-delivery contexts.
Technologically, the constraints identified underscore the need for secure, intranet-based AI systems tailored to
defense requirements. Although external platforms are inaccessible, the positive outcomes observed with limited
tools show that local systems, if equipped with adaptive feedback, vocabulary analytics, and simulation
capabilities, can still yield meaningful gains. Collaboration between ESP instructors, Academy IT units, and
system developers is crucial to ensure that AI platforms meet both pedagogical and security standards.
Institutionally, the variation in instructor readiness underscores the need for targeted professional development.
Training should focus on interpreting AI analytics, integrating feedback into lesson design, and managing learner
interactions with AI. Additionally, clear institutional messaging is needed so cadets and instructors understand
AI as a supportive tool rather than a replacement for human authority. Establishing policies on acceptable use,
data protection, and instructional expectations will help ensure consistent and responsible implementation.
Together, these implications show that effective AI adoption in military ESP requires strategic pedagogical
integration, robust technological safeguards, and sustained institutional support. When these conditions align, AI
can meaningfully enhance linguistic readiness and contribute to more mission-oriented language training.
Proposed Framework
Building on the patterns identified in the findings and the interpretive insights presented in the discussion, this
study proposes a three-layer framework for integrating AI into military ESP instruction at ADAFA. The
framework is designed to respond simultaneously to the pedagogical opportunities revealed by the data and the
institutional constraints inherent to a high-security military environment. Rather than treating these dimensions
as separate considerations, the model conceptualizes them as interdependent layers that must operate in
alignment to ensure both effectiveness and feasibility.
The first layer, pedagogical adaptation, addresses the core instructional needs identified in the findings:
strengthening technical vocabulary mastery and improving spoken command performance. AI demonstrated
clear advantages in these areas by providing adaptive reinforcement, individualized feedback, and
simulation-based practice. Accordingly, this layer positions AI as an integrated component of the curriculum
rather than a supplementary tool. Instructors can sequence AI modules in ways that mirror the communicative
demands of air-defense operations: diagnostic vocabulary analytics at the beginning of instructional cycles,
targeted pronunciation and command practice during the controlled practice phase, and simulation-based tasks
at the application stage. Such a structure ensures that AI is embedded meaningfully within ESP pedagogy,
enabling cadets to rehearse high-stakes communicative functions in a manner aligned with the authentic, task-
based principles described by Dudley-Evans and St John (1998). Importantly, the framework views instructors
as essential mediators, consistent with Edmett et al. (2023), who interpret AI-generated data, guide students
through feedback, and contextualize machine recommendations within operational communication norms.
The second layer, technological security and infrastructure, emerges directly from the institutional and
technological constraints identified in both the findings and the discussion. Given that ADAFA operates in a
restricted digital ecosystem, the framework emphasizes the development of locally hosted, intranet-based AI
systems capable of delivering adaptive learning without relying on external cloud services. This layer integrates
military security requirements with pedagogical needs by outlining a secure architecture involving encrypted
data storage, role-based access permissions, and stable offline functionality. Rather than limiting innovation,
these constraints shape a technology ecosystem purpose-built for military education, one that protects sensitive
information while still enabling features that cadets and instructors found most beneficial, such as pronunciation
analytics, vocabulary error tracking, and scenario-based simulation. This orientation reflects Rashid et al.’s
(2023) observation that effective AI integration in defense contexts requires striking a balance between
functionality and confidentiality, a balance that this framework formalizes.
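As a concrete illustration of the role-based access idea in this layer, a permission check might look like the sketch below; the roles, permissions, and function are entirely hypothetical, since the study does not describe the Academy's actual system design.

```python
# Hypothetical role-to-permission mapping for an intranet-hosted AI platform.
PERMISSIONS = {
    "cadet":      {"practice", "view_own_progress"},
    "instructor": {"practice", "view_own_progress", "view_cohort_analytics"},
    "admin":      {"practice", "view_own_progress", "view_cohort_analytics",
                   "manage_content"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role may perform the requested action."""
    return action in PERMISSIONS.get(role, set())

assert is_allowed("instructor", "view_cohort_analytics")
assert not is_allowed("cadet", "manage_content")
```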
The third layer, institutional capacity building, responds to the cultural and organizational patterns highlighted
in the discussion. Instructors’ uneven AI literacy, cadets’ initial hesitation, and the lack of clear institutional
guidance all indicate that technological solutions alone cannot sustain long-term integration. This layer,
therefore, prioritizes structured professional development, focusing on interpreting AI analytics, designing
AI-supported lessons, and guiding learners’ interaction with automated feedback. It also calls for the Academy to
articulate explicit policies on acceptable use, data handling, and pedagogical expectations so that AI adoption
does not depend on individual instructor initiative. Clear communication regarding the supportive, not
evaluative, role of AI can help mitigate cultural reservations associated with hierarchical military learning
environments. Finally, periodic pilot testing and iterative revisions ensure that AI tools evolve in tandem with
curricular demands and institutional priorities.
In short, these three layers form a cohesive framework that aligns pedagogical practice, technological design,
and institutional governance. AI integration at ADAFA will be most effective when these layers operate in
concert: when adaptive learning is embedded within instruction, when secure infrastructures enable reliable use,
and when institutional support fosters the confidence and competence needed for sustained innovation. The
framework, therefore, provides a structured pathway for adopting AI in a manner that is pedagogically
meaningful, technologically secure, and operationally compatible with the realities of military education.
CONCLUSION
This study examined the integration of AI into English for Specific Purposes (ESP) instruction at ADAFA in
Vietnam, with a targeted focus on enhancing cadets’ mastery of technical military vocabulary and improving
spoken command performance. Through a mixed-method case study involving cadets and ESP instructors, the
research demonstrated that AI-supported learning can meaningfully strengthen linguistic readiness in a highly
regulated military environment.
The findings highlight three key outcomes. First, adaptive AI tools effectively reinforced discipline-specific
terminology, enabling cadets to review and internalize core lexical items essential for operational
communication. Second, AI-driven pronunciation and fluency feedback contributed to greater accuracy and
confidence in spoken command delivery, helping cadets rehearse mission-critical utterances in a low-pressure
environment before applying them in classroom drills. Third, simulation-based tasks increased engagement by
embedding language practice within realistic operational scenarios, thereby narrowing the longstanding gap
between classroom instruction and field communication.
Beyond reporting these outcomes, the study contributes to current scholarship by demonstrating how
pedagogical affordances associated with AI, such as personalization, real-time feedback, and task authenticity,
operate under the constraints of a military institution. The proposed three-layer framework further offers a
structured model for aligning instructional design, secure technological infrastructure, and institutional policy,
emphasizing that sustainable innovation depends on the interaction of these interconnected domains.
Despite its contributions, the study has certain limitations. The research was conducted within a single military
academy, and the AI tools available were restricted to intranet-based systems with limited functionalities. The
findings, therefore, reflect a specific institutional context and may not fully capture the potential of more
advanced or cloud-based AI applications. Future studies could expand the sample across multiple military
branches, examine long-term proficiency development, or explore how AI can support additional ESP
competencies such as listening comprehension, written reporting, and cross-cultural communication in
multinational operations. Experimental or longitudinal designs may also yield more profound insights into the
developmental trajectory of AI-assisted learning over time.
In conclusion, the study affirms that AI represents a promising and feasible pathway for modernizing language
training in defense settings. When guided by secure infrastructure, informed pedagogical practice, and
institutional alignment, AI can play a strategic role in preparing cadets for the linguistic demands of
contemporary military cooperation and technological operations.
REFERENCES
1. Akhter, E. (2022). The role of large language models (LLMs) in personalized English language
instruction. International Journal of Scientific Interdisciplinary Research, 1(1), 97–128.
https://doi.org/10.63125/86jf4136
2. Braun, V., & Clarke, V. (2023). Thematic analysis. In H. Cooper, M. N. Coutanche, L. M. McMullen, A. T. Panter, D. Rindskopf, & K. J. Sher (Eds.), APA handbook of research methods in psychology: Research designs: Quantitative, qualitative, neuropsychological, and biological (2nd ed., pp. 65–81). American Psychological Association. https://doi.org/10.1037/0000319-004
3. Chen, H. (2024). Innovative approaches in English language teaching: Integrating communicative methods and technology for enhanced proficiency. Communications in Humanities Research, 32, 214–220. https://doi.org/10.54254/2753-7064/32/20240075
4. Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (3rd ed.). SAGE Publications.
5. Dudley-Evans, T., & St John, M. J. (1998). Developments in English for specific purposes. Cambridge University Press.
6. Edmett, A., Ichaporia, N., Crompton, H., & Crichton, R. (2023). Artificial intelligence and English language teaching: Preparing for the future. British Council.
7. He, L., Zhang, X., & Huang, Y. (2025). Integration of artificial intelligence in English for specific purposes education. Atlantis Press. https://doi.org/10.2991/978-2-38476-400-6_68
8. Mizumoto, A. (2023). Data-driven learning meets generative AI: Introducing the framework of metacognitive resource use. Applied Corpus Linguistics, 3(3), 100074. https://doi.org/10.1016/j.acorp.2023.100074
9. Rashid, A. B., Kausik, A. K., Sunny, A. A. H., & Bappy, M. H. (2023). Artificial intelligence in the military: An overview of the capabilities, applications, and challenges. International Journal of Intelligent Systems, 2023(1), Article 8676366. https://doi.org/10.1155/2023/8676366
10. Xu, H. (2024). Innovative research on college English teaching model based on artificial intelligence. Education Reform and Development, 6(9), 273–277. https://doi.org/10.26689/erd.v6i9.8313
11. Xuan Mai, L., & Thanh Thao, L. (2022). English language teaching pedagogical reforms in Vietnam: External factors in light of teachers’ backgrounds. Cogent Education, 9(1). https://doi.org/10.1080/2331186X.2022.2087457
APPENDICES
Appendix A. Survey Questionnaire for Cadets
Purpose: This survey aims to explore cadets’ perceptions of AI in ESP learning at ADAFA.
All responses are confidential and will be used solely for academic research purposes.
Instructions:
Please indicate your level of agreement with each statement below (1 = Strongly Disagree, 5 = Strongly Agree).
Statements (rate each from 1 to 5):
1. AI-assisted exercises help me improve my English proficiency.
2. AI feedback helps me correct my pronunciation and grammar effectively.
3. Learning through AI makes ESP lessons more interesting and engaging.
4. AI learning modules provide materials relevant to my military specialty.
5. I can study English more independently with the help of AI tools.
6. My motivation to learn English increases when using AI systems.
7. I am confident using AI platforms for English learning.
8. AI helps me apply English to professional (military) communication tasks.
9. The Academy provides sufficient access to technology for AI learning.
10. I want to continue learning English through AI-based systems.
Demographics:
1. Year of study: 1 / 2 / 3 / 4
2. Major: Anti-aircraft Command / Radar / Communications
3. English level (self-assessed): A1 / A2 / B1 / B2
Appendix B. Interview Protocol for ESP Instructors
Objective: To collect qualitative insights into instructors’ experiences, perceptions, and challenges in integrating AI into ESP teaching at ADAFA.
Interview Duration: 30–40 minutes
Format: Semi-structured, recorded with consent
Guiding Questions:
1. How do you currently use technology or AI tools in your ESP classes?
2. In your opinion, what are the main benefits of AI for ESP instruction?
3. What difficulties or institutional barriers have you encountered when applying AI in the Academy?
4. How do cadets respond to AI-assisted learning activities?
5. What kind of training or institutional support do teachers need to use AI effectively?
6. What recommendations would you make for developing a secure, military-specific AI platform?
7. How do you see the role of teachers changing in an AI-enhanced classroom?
Follow-up prompts:
Could you give an example?
How did you manage that situation?
What outcomes did you notice in students’ performance?
Appendix C. Ethical Approval Statement
Approval for this study was obtained from the Research Ethics Committee, ADAFA (Ref. No. ADAFA-EDU-AI-2025-04).
Participation was voluntary, and all respondents provided written informed consent.
No personal or sensitive military information was collected or disclosed.