INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS | Volume IX Issue X October 2025
Artificial Intelligence in Language Translation: Accuracy and
Limitations
Dr. Jimboy B. Pagalilauan
Instructor III at Cavite State University.
DOI: https://dx.doi.org/10.47772/IJRISS.2025.910000250
Received: 24 October 2025; Accepted: 30 October 2025; Published: 10 November 2025
ABSTRACT
This research explored the efficacy and limitations of Artificial Intelligence (AI) translation tools in relation to
human translation, with an emphasis on accuracy, reliability, and user perception. Using a descriptive research
design, data were collected from 39 respondents via a structured survey to identify how often AI translation is
utilized, the problems faced, and the level of dependence on it for academic and professional work. Findings
indicated that although AI translation software is widely used because of its convenience and accessibility, users consistently acknowledged its shortcomings, especially in handling idiomatic expressions, ambiguous words, cultural sensitivity, and technical or specialized texts. Findings also indicated that most respondents had greater confidence in human translation with respect to accuracy and dependability, with most highlighting the importance of human revision and proofreading for quality assurance. Despite these limitations, numerous participants still recommended AI translation software for academic use, provided that human oversight is maintained. The research concludes that AI translation can be a useful aid for translation work but cannot replace the richness, cultural awareness, and contextual knowledge of human translators. Based on these results, the study recommends pairing AI translation software with human supervision to achieve the highest possible efficiency and accuracy, particularly for educational and specialized purposes.
Keywords: artificial intelligence, machine translation, human translation, translation accuracy, descriptive
research design, academic use
INTRODUCTION
Artificial intelligence (AI) has made significant advancements in the field of language translation, transforming
the way people communicate across linguistic barriers. AI-powered translation tools, such as Google Translate,
DeepL, and ChatGPT, have gained widespread adoption due to their ability to process and generate text in
multiple languages within seconds. These technologies utilize neural machine translation (NMT) models, which
have surpassed traditional statistical and rule-based approaches in terms of fluency and contextual accuracy (Jiao
et al., 2023). Despite these advancements, AI-based translation tools still face several challenges, particularly in
handling idiomatic expressions, cultural nuances, and domain-specific terminologies (Lee, 2021).
The increasing reliance on AI for translation raises questions about its accuracy and limitations, particularly in
academic and professional settings. Studies by Klimova and Pikhart (2023) indicate that AI translation tools
have shown significant improvement in foreign language education, yet they still require human oversight to
ensure accuracy. Moreover, Moneus and Sahari (2024) argue that AI-based legal translations often struggle with
precision, leading to potential misinterpretations in critical contexts. This underscores the importance of
evaluating the effectiveness of AI translation, especially among students and professionals who rely on these
tools for their studies and work.
In the context of language learning, Al-Romany and Kadhim (2024) emphasize the role of AI in assisting
students with language comprehension and translation tasks. However, their study also highlights the limitations
of AI when translating complex syntactic structures and ambiguous phrases. Similarly, Ubhayawardhana and
Hansani (2023) conducted an analysis of AI translation performance in legal texts, revealing inconsistencies that
could affect the reliability of automated translations in specialized fields. These findings indicate that while AI
has revolutionized translation, it is not without flaws and still requires validation through comparative studies.
Despite the growing body of research on AI translation, there remains a gap in the literature regarding its practical
implications for students in language-related disciplines, such as English studies. Many existing studies focus
on professional translation contexts, leaving a need for research that examines how students perceive and utilize
AI translation tools in their academic endeavors. This study aims to address this gap by evaluating the accuracy
and limitations of AI translation among AB English (Bachelor of Arts in English Language Studies, or BAELS) students. By doing so, it seeks to provide insights into how
AI translation tools impact language learning and academic writing, as well as offer recommendations for their
effective use in educational settings.
Statement of the Problem
The growing use of Artificial Intelligence (AI) in language translation has led to both opportunities and
challenges in ensuring accurate, fluent, and contextually appropriate translations. While AI-powered translation
tools such as Google Translate, DeepL, and ChatGPT offer convenience, concerns remain about their ability to
maintain linguistic accuracy, grammatical correctness, and cultural sensitivity. This study aimed to assess the
perceived accuracy and limitations of AI translation tools among BAELS students.
Specifically, the study sought to answer the following questions:
1. What is the profile of the respondents in terms of:
a. Year Level
b. English Proficiency Level
c. Frequency of AI Translation Tool Usage
2. How do BAELS students perceive the accuracy of AI-generated translations in terms of:
a. Word-for-word accuracy
b. Grammar and sentence structure
c. Contextual meaning retention
d. Fluency and readability
3. What are the most common limitations of AI translation tools as experienced by AB English students,
particularly in translating:
a. Idiomatic expressions and slang
b. Complex or technical texts
c. Culturally specific language elements
d. Words with multiple meanings
4. How does AI translation compare to human translation based on students' preferences and trust in
accuracy?
5. To what extent do students rely on AI translation tools, and how often do they verify AI-generated
translations before use?
METHODOLOGY
Research Design
This study employed a quantitative descriptive research design to examine the perceived accuracy and
limitations of AI translation tools among AB English students. The study gathered numerical data through a
structured survey questionnaire to analyze patterns, trends, and relationships between students' proficiency
levels, usage frequency, and perceptions of AI translation accuracy. The quantitative approach ensures
objectivity and allows for statistical analysis to measure the reliability and effectiveness of AI translations. This
study focused primarily on quantitative data. Incorporating qualitative methods such as interviews or open-
ended responses could further illuminate the contextual and experiential dimensions of AI translation use.
Descriptive statistics were used to summarize trends and perceptions. However, inferential statistical tests such as correlation or ANOVA could be employed in future studies to determine significant relationships between variables (e.g., proficiency level and trust in AI translation).
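As an illustration of the analysis this design implies, the following Python sketch computes descriptive statistics for the Likert-scale responses and runs the kind of one-way ANOVA suggested above (e.g., perceived accuracy across proficiency levels). It is a minimal sketch only: the file name responses.csv and the column names proficiency and accuracy_mean are hypothetical placeholders, not the study's actual data set.

```python
# Minimal sketch of the descriptive and (future) inferential analysis described above.
# Assumes a hypothetical CSV with one row per respondent:
#   proficiency   - "Beginner", "Intermediate", or "Advanced"
#   accuracy_mean - that respondent's mean score on the 4-point accuracy items
import pandas as pd
from scipy import stats

df = pd.read_csv("responses.csv")  # hypothetical file name

# Descriptive statistics: percentage distribution of proficiency levels and
# mean/SD of perceived accuracy, mirroring the profile and perception tables.
print(df["proficiency"].value_counts(normalize=True).mul(100).round(1))
print(df["accuracy_mean"].agg(["mean", "std"]).round(2))

# One-way ANOVA: does perceived accuracy differ across proficiency groups?
groups = [g["accuracy_mean"].dropna() for _, g in df.groupby("proficiency")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```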
Participants
The respondents of this study were BAELS students from Saint Anthony’s College. A stratified random sampling
method was used to ensure equal representation of students from different year levels (1st to 4th year). The
target sample size was determined based on the total population of Bachelor of Arts in English Language Studies
students to achieve a 95% confidence level with a 5% margin of error. While the study obtained responses from
39 participants, this number may not fully represent the entire population of BAELS students. Future research
with a larger sample size is recommended to enhance statistical robustness and generalizability of findings.
Further, demographic data such as participants’ first language, exposure to translation tasks, and familiarity with
AI translation tools were not included in the present survey. These factors could have provided deeper insight
into variations in perception and accuracy evaluation.
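For readers who wish to reproduce the sampling computation, the sketch below applies Cochran's formula with a finite-population correction at a 95% confidence level and a 5% margin of error. This is one conventional approach, offered as an assumption rather than the authors' stated procedure, and the total BAELS enrolment (N = 150) is a hypothetical placeholder, since the actual population size is not reported in this paper.

```python
# Illustrative sample-size computation (Cochran's formula with a finite-population
# correction) for a 95% confidence level and a 5% margin of error. The population
# size N is a hypothetical placeholder; the paper does not report the actual
# number of enrolled BAELS students.
import math

def required_sample(N, z=1.96, margin=0.05, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / N)                 # finite-population correction
    return math.ceil(n)

print(required_sample(N=150))  # hypothetical N -> about 109 respondents
```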
Instrumentation
The study used a survey questionnaire as the primary data collection instrument. The questionnaire was divided into five sections, each designed to measure specific variables related to AI translation accuracy and limitations:
1. Respondent Profile: captures demographic data such as year level, English proficiency, and frequency of AI translation tool usage.
2. Perceived Accuracy of AI Translation: assesses AI translation performance in word-for-word accuracy, grammar, contextual meaning retention, and fluency.
3. Limitations of AI Translation: identifies common issues in AI translation, including difficulties with idiomatic expressions, ambiguous words, technical terms, and cultural nuances.
4. AI Translation vs. Human Translation: measures students' trust in and preference between AI-generated translations and human translations.
5. Extent of AI Translation Usage: evaluates how often students verify AI translations before use and their confidence in AI-generated outputs.
A 4-point Likert scale (1 = Strongly Disagree to 4 = Strongly Agree) was used to quantify participants' perceptions, allowing statistical analysis of responses. The questionnaire was validated by language and translation experts before distribution to ensure clarity, reliability, and relevance to the research objectives.
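To show how the 4-point Likert responses translate into the verbal interpretations reported in the results, the sketch below averages item scores and maps the mean onto interpretation bands. The cut-off points (e.g., 2.50 to 3.24 = Agree) are an assumed, equal-interval scheme consistent with the reported results (a mean of 2.49 interpreted as Disagree, 2.59 as Agree); the paper does not state its exact bands, and the sample responses are hypothetical.

```python
# Sketch of Likert-scale scoring: compute an item's mean and assign a verbal
# interpretation. The interval bands are an assumption (equal-width bands on a
# 4-point scale), not the authors' published cut-offs.
def verbal_interpretation(mean_score: float) -> str:
    if mean_score >= 3.25:
        return "Strongly Agree"
    if mean_score >= 2.50:
        return "Agree"
    if mean_score >= 1.75:
        return "Disagree"
    return "Strongly Disagree"

# Hypothetical responses to one item (1 = Strongly Disagree ... 4 = Strongly Agree)
responses = [3, 2, 3, 4, 2, 3, 3, 2]
mean_score = sum(responses) / len(responses)
print(round(mean_score, 2), verbal_interpretation(mean_score))  # 2.75 Agree
```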
RESULTS AND DISCUSSION
Table 1: Respondent Profile

Year Level | Frequency | Percentage
1st Year | 3 | 7.7
2nd Year | 11 | 28.2
3rd Year | 22 | 56.4
4th Year | 3 | 7.7
Total | 39 | 100

English Proficiency Level | Frequency | Percentage
Beginner | 21 | 53.8
Intermediate | 16 | 41.0
Advanced | 2 | 5.1
Total | 39 | 100

How often do you use AI translation tools? | Frequency | Percentage
Never | 1 | 2.6
Rarely (1-2 times a month) | 15 | 38.5
Occasionally (1-2 times a week) | 21 | 53.8
Frequently (Almost every day) | 2 | 5.1
Total | 39 | 100
This table presents the demographic and usage profile of the respondents in relation to their engagement with AI
translation tools. The majority of participants were 3rd-year students (56.4%), while only small proportions came
from 1st- and 4th-year levels (7.7% each). In terms of English proficiency, more than half identified as Beginners
(53.8%), followed by Intermediate (41.0%) and very few Advanced users (5.1%). The frequency of AI
translation tool usage shows that most students use them occasionally (53.8%) or rarely (38.5%), with only 5.1%
reporting daily use.
These findings are consistent with previous research highlighting that students at beginner and intermediate
proficiency levels often turn to machine translation (MT) as a support tool for comprehension and drafting, but
they rely less on it for daily or advanced writing tasks (Lee, 2021; Niño, 2020). The limited number of advanced
users suggests that most respondents may lack the linguistic competence required to detect and correct subtle
translation errors, a concern raised in studies emphasizing the importance of post-editing skills (Bowker, 2020;
Rico & Torrejón, 2022).
In terms of accuracy, recent evaluations of neural MT and large language models indicate that AI translation is
highly reliable in high-resource language pairs and routine, literal texts (Kocmi et al., 2024; Freitag et al., 2023).
This explains why the majority of students still use such tools at least occasionally: they effectively reduce
comprehension barriers and speed up basic academic tasks. However, limitations remain evident in the
translation of idiomatic expressions, cultural references, and domain-specific terminology, where AI systems
still tend to produce literal or misleading outputs (Lai et al., 2024; Georgakopoulou et al., 2023).
Furthermore, for low-resource languages, AI translation performance is significantly weaker. Findings from the
WMT 2024 shared tasks show that systems struggle with languages outside the high-resource spectrum,
producing unstable quality and inconsistent accuracy across domains (Kocmi et al., 2024). For students working
with regional or less-common languages, this can limit the usefulness of AI translation.
Another notable limitation is that AI translation tools sometimes produce fluent but semantically inaccurate
outputs, which can be misleading for learners with limited proficiency (Specia et al., 2021). Studies in health
and educational domains also caution that incorrect translations may lead to harmful misinterpretations if not
properly reviewed (Zeng et al., 2023).
Moreover, the profile suggests that AI translation is a helpful occasional aid for most students but cannot be fully
relied upon without human oversight. Given that most respondents are at beginner and intermediate levels, they
are more vulnerable to adopting errors without realizing them, which underscores the importance of training in
post-editing strategies and critical evaluation of outputs (Niño, 2020; Rico & Torrejón, 2022).
Table 2: Perception of Students on AI Translation Accuracy

Word-for-Word Accuracy | Verbal Interpretation
AI translation tools provide accurate word-for-word translations. | Agree
AI translations correctly translate individual words. | Agree
AI perfectly considers sentence meaning. | Agree
Total | Agree

Grammar and Sentence Structure | Verbal Interpretation
AI translation tools correctly apply grammar and sentence structures. | Agree
AI-generated translations follow grammatical rules. | Disagree
AI translations sound very natural. | Agree
Total | Agree

Contextual Meaning Retention | Verbal Interpretation
AI translation tools maintain the meaning of the original text. | Agree
AI translation struggles to maintain meaning when translating longer sentences. | Agree
Total | Agree

Fluency and Readability | Verbal Interpretation
AI-generated translations sound natural and fluent. | Agree
AI translations produce sentences that are easy to understand. | Agree
Total | Agree

Reliability for Academic or Professional Use | Verbal Interpretation
AI translations are reliable for academic or professional use. | Agree
I feel confident using AI-generated translations without further revision. | Disagree
Total | Agree
This table presents the perception of students on the accuracy of AI translation tools across five indicators: word-
for-word accuracy, grammar and sentence structure, contextual meaning retention, fluency and readability, and
reliability for academic or professional use. The overall means across categories (ranging from 2.59 to 2.66)
indicate that students generally “Agree” that AI translation tools are helpful and accurate, though nuances in the
data highlight areas of caution.
Word-for-word accuracy received a mean score of 2.59 (Agree), suggesting that students recognize the capacity
of AI translation tools to produce literal translations and render individual words correctly. This is consistent
with findings that neural MT performs strongly at the lexical level, particularly in high-resource language pairs
(Freitag et al., 2023). However, the agreement is only moderate, reflecting known limitations when literal
translation fails to capture idiomatic or figurative meanings (Lai et al., 2024).
In terms of grammar and sentence structure, the mean score (2.62, Agree) indicates that students generally trust
AI translations to follow basic grammatical rules. However, the item “AI-generated translations follow
grammatical rules” received a lower mean of 2.49 (Disagree), reflecting skepticism about consistency in
grammar application. Previous studies similarly note that although AI-generated texts often appear fluent, errors
in syntax, word order, and agreement are still noticeable, especially in longer or complex sentences (Specia et
al., 2021).
For contextual meaning retention, students agreed (M = 2.66) that AI translation tools often maintain the meaning
of the source text. Yet, the item addressing difficulty with longer sentences highlights a common limitation:
systems sometimes produce semantically plausible but inaccurate outputs when faced with complex structures
or ambiguous context (Bowker, 2020; Rico & Torrejón, 2022). This finding reflects the concern that learners, particularly those with lower proficiency, may fail to detect such subtle errors without proper post-editing.
On fluency and readability, students also agreed (M = 2.63) that AI translations generally sound natural and easy
to understand. This aligns with literature showing that neural MT excels at producing fluent output that often
resembles human language (Kocmi et al., 2024). However, researchers caution that high fluency does not always
equate to high adequacy, since translations may “sound right” but still contain semantic inaccuracies (Specia et
al., 2021).
Finally, students perceived AI translations as somewhat reliable for academic or professional use (M = 2.61,
Agree), but they disagreed on the item “I feel confident using AI-generated translations without further revision” (M =
2.49). This indicates an awareness among students that while AI tools are supportive, they cannot fully replace
human judgment in academic writing. This perception resonates with studies emphasizing the need for human
revision and post-editing when using AI in high-stakes contexts such as education or professional
communication (Niño, 2020; Zeng et al., 2023).
Moreover, the results suggest that students value AI translation tools for their fluency, readability, and general
accuracy, but they remain cautious about relying on them without revision. This cautious optimism aligns with
the broader academic consensus: while AI translation offers substantial benefits in accessibility and speed, it still
requires critical use and human oversight to ensure accuracy, especially in academic and professional settings
(Bowker, 2020; Rico & Torrejón, 2022).
Table 3: Limitations of AI Translation

Idioms & Slang | Verbal Interpretation
AI translation struggles with idiomatic expressions and slang. | Agree
AI-generated translations are often too literal and fail to convey informal meanings. | Agree
Total | Agree

Ambiguous Words | Verbal Interpretation
AI translation often misinterprets words with multiple meanings (e.g., "bank" as a financial institution vs. riverbank). | Agree
AI translation tools require context to choose the right meaning of a word. | Agree
Total | Agree

Technical or Complex Texts | Verbal Interpretation
AI translation is less effective for technical or complex texts. | Agree
AI translation struggles with subject-specific terms (e.g., medical, legal, or scientific terms). | Agree
Total | Agree

Cultural Nuances | Verbal Interpretation
AI translation does not fully consider cultural nuances. | Agree
AI translations fail to capture the tone and politeness levels of different languages. | Agree
Total | Agree

Need for Human Revision | Verbal Interpretation
AI translation often requires human revision to improve accuracy. | Disagree
AI-generated translations need additional proofreading to be fully reliable. | Agree
Total | Agree
As gleaned from the table, students perceived limitations of AI translation tools across five major
areas: idioms and slang, ambiguous words, technical or complex texts, cultural nuances, and the need for human
revision. Overall, the means (ranging from 2.57 to 2.88) show that respondents generally Agree that AI
translation tools face several important shortcomings that affect accuracy and usability.
On idioms and slang, students agreed (M = 2.59) that AI translation struggles with informal expressions and
tends to provide overly literal translations. This confirms long-standing findings in translation research:
idiomatic and figurative language remains one of the most challenging aspects for neural machine translation,
often leading to mistranslations or loss of meaning (Lai et al., 2024; Georgakopoulou et al., 2023). Such errors
can be problematic in contexts where cultural resonance or pragmatic appropriateness is key.
Regarding ambiguous words, the highest mean score was observed (M = 2.75, Agree), indicating that students
recognize AI translation’s difficulty in resolving polysemy (e.g., “bank” as a financial institution vs. riverbank).
This reflects findings from computational linguistics studies showing that AI systems require strong contextual
cues to disambiguate meaning, and in the absence of sufficient context, they often default to the most statistically
frequent sense (Specia et al., 2021). For learners, this can result in misinterpretations of texts when context is
complex or implicit.
For technical or complex texts, students also agreed (M = 2.60) that AI struggles with specialized vocabulary
such as medical, legal, or scientific terminology. This aligns with evaluations of AI translation in specialized
domains, which demonstrate that although neural MT has improved, subject-specific precision is still weaker
compared to general text translation (Freitag et al., 2023; Zeng et al., 2023). Domain errors are especially risky
in professional and safety-critical fields where mistranslations may cause serious consequences.
In terms of cultural nuances, the mean score (M = 2.57, Agree) indicates students’ awareness that AI translations
often fail to capture tone, politeness, and cultural subtleties. Research supports this view: cultural and pragmatic
features, such as politeness strategies in Asian languages, remain difficult for MT systems to render
appropriately, often resulting in culturally insensitive or awkward outputs (Bowker, 2020; Rico & Torrejón,
2022).
Finally, on the need for human revision, students agreed (M = 2.88) that AI translations require
proofreading and post-editing to ensure accuracy and reliability. This perception aligns with the consensus in
translation studies that AI translation should be used as a support tool, not a replacement for human expertise
(Niño, 2020; Rico & Torrejón, 2022). Post-editing is especially important for learners, as it not only corrects
errors but also helps develop critical awareness of linguistic features.
Overall, these findings emphasize that while AI translation tools are useful aids, they remain limited by
challenges in idiomatic language, ambiguity, specialized domains, and cultural sensitivity. The consensus among
students that human revision is necessary reflects a mature understanding of AI translation’s role: supportive,
but never fully autonomous.
Table 4: AI Translation vs. Human Translation

Which do you trust more for accurate translations? | Frequency | Percentage
AI Translation | 3 | 7.7
Human Translation | 13 | 33.3
Both Equally | 23 | 59.0
Total | 39 | 100

When using AI translation, do you double-check the output before using it? | Frequency | Percentage
Yes, always | 29 | 74.4
Sometimes | 9 | 23.1
No, I trust the translation completely | 1 | 2.6
Total | 39 | 100

Would you recommend AI translation tools for academic use? | Frequency | Percentage
Yes | 22 | 56.4
No | 4 | 10.3
Not sure | 13 | 33.3
Total | 39 | 100
This table shows the comparison between AI translation and human translation based on students’ perceptions
of trust, verification practices, and recommendations for academic use.
When asked which they trust more for accuracy, the majority of respondents (59%) answered “Both equally,” while
33.3% trusted human translation more, and only 7.7% favored AI translation alone. This indicates that while
students recognize the usefulness of AI tools, they still place considerable value on the reliability of human
translation. Prior studies similarly report that learners view human translation as superior in terms of nuance,
idiomaticity, and cultural appropriateness, while AI tools are seen as faster and more accessible (Bowker, 2020;
Georgakopoulou et al., 2023). The balance of “both equally” suggests that students perceive AI and human
translation as complementary rather than competing approaches, which is consistent with recent perspectives
advocating for humanAI collaboration in translation workflows (Rico & Torrejón, 2022).
In terms of verification practices, 74.4% of respondents reported that they “always” double-check AI outputs, while 23.1% do so “sometimes.” Only one student (2.6%) trusted the AI translation completely without revision. This strong tendency toward verification reflects students’ awareness of AI’s limitations and the need for human oversight, echoing research emphasizing that MT outputs should not be accepted at face value due to risks of
fluent but inaccurate translations (Specia et al., 2021; Lai et al., 2024). These findings resonate with Table 3,
where students emphasized the need for post-editing to ensure accuracy.
Finally, when asked whether they would recommend AI translation for academic use, a majority (56.4%)
answered Yes, while 10.3% said No and 33.3% were Not sure. This suggests a cautiously positive stance: most
students acknowledge the academic utility of AI translation, but a significant proportion remain uncertain,
reflecting ongoing debates in education about the proper role of AI in language learning. Studies in EFL contexts
confirm this ambivalence: learners appreciate AI tools for accessibility and speed, but they also worry about
accuracy, over-reliance, and possible impacts on skill development (Niño, 2020; Lee, 2021).
Taken together, the results highlight a measured trust in AI translation. Students view AI as a valuable tool for
academic support but not as a substitute for human expertise. Their preference for double-checking and post-
editing reflects an emerging literacy around AI translation use, consistent with pedagogical calls to integrate
MT critically in the classroom, where students learn not only to use AI tools but also to evaluate and refine their
outputs (Bowker, 2020; Rico & Torrejón, 2022).
Table 5: Extent of AI Translation Usage

How often do you verify AI-generated translations before using them? | Frequency | Percentage
Never | 2 | 5.1
Rarely (Only when unsure) | 12 | 30.8
Sometimes (For important translations) | 17 | 46.3
Always | 8 | 20.5
Total | 39 | 100

Do you believe AI translations should be used without human revision? | Frequency | Percentage
Yes, AI is sufficient | 6 | 15.4
No, human revision is necessary | 27 | 69.2
Only for simple texts | 6 | 15.4
Total | 39 | 100
This table presents the frequency of students’ AI translation usage, focusing on their verification practices and
perspectives on the need for human revision.
When asked how often they verify AI-generated translations, nearly half of the respondents (46.3%) indicated
they verify outputs sometimes for important translations, while 30.8% do so rarely, and 20.5% stated they
always verify. Only 5.1% reported that they never verify. These findings reflect a strong tendency toward critical
engagement with AI translations, with most students showing caution in fully relying on the technology. This
aligns with prior research suggesting that learners are aware of the risks of “fluent but incorrect” translations
generated by AI tools, necessitating user vigilance (Specia et al., 2021; Lai et al., 2024).
On the question of whether AI translations should be used without human revision, a large majority (69.2%)
responded “No, human revision is necessary,” while 15.4% believed AI is sufficient, and another 15.4% noted
that it is only reliable for simple texts. This overwhelming preference for post-editing emphasizes the continued
importance of human oversight in ensuring translation accuracy, particularly for complex, technical, or culturally
nuanced content (Bowker, 2020; Georgakopoulou et al., 2023). Students’ responses also confirm the findings
from Tables 2 and 3, where grammar, contextual meaning, and cultural nuances were cited as areas where AI
struggles.
Generally, these results indicate that students approach AI translation with cautious pragmatism: while they
acknowledge its usefulness as a support tool, they remain aware of its limitations and the need for human
intervention. This perspective aligns with current trends in translation studies, which view AI translation not as
a replacement for human translators but as a tool that should be integrated responsibly within human-guided
workflows (Rico & Torrejón, 2022; Lee, 2021).
CONCLUSION
The findings of this study reveal that AI translation tools are perceived by students as helpful and accessible aids
in understanding texts, particularly in word-for-word accuracy, fluency, and readability. However, their
limitations in grammar, contextual meaning, technical language, and cultural nuances indicate that they cannot
fully replace human translation. Students generally approach AI translations with caution, often verifying and
revising the outputs before use, which highlights their awareness of the technology’s strengths and weaknesses.
While AI is seen as beneficial for quick and simple translations, human revision remains necessary to ensure
accuracy and appropriateness, especially in academic and professional contexts. Moreover, AI translation serves
as a valuable supplementary tool that supports learning and communication, but its effectiveness is maximized
only when combined with critical evaluation, post-editing, and the linguistic skills of the user. Despite its
valuable insights, this study is limited by its small sample size and quantitative scope. The absence of detailed
demographic profiling and qualitative input constrains the interpretive depth of findings. Future research could
adopt a mixed-methods framework, combining larger-scale quantitative data with interviews or translation
quality evaluations using metrics such as BLEU or METEOR. Incorporating translation theory and exploring
domain-specific translation contexts would also provide a more comprehensive understanding of AI translation
efficacy and ethical implications.
RECOMMENDATIONS
Based on the conclusions, the following recommendations are made:
1. For Students
Use AI translation as a support tool rather than a replacement for personal learning.
Always practice post-editing to ensure accuracy, especially in academic tasks.
Engage in continuous language practice (reading, writing, speaking) to avoid over-reliance on AI.
2. For Educators
Integrate AI translation tools in the classroom as learning aids, teaching students how to critically
evaluate and revise outputs.
Provide training in post-editing strategies so students can maximize AI’s benefits while
minimizing errors.
Encourage the balance of AI use with traditional methods of language learning to foster long-
term proficiency.
Employ standardized translation quality metrics such as BLEU, METEOR, or COMET to empirically assess the accuracy of AI translations, allowing for an objective comparison between human and machine-generated outputs (a minimal scoring sketch is provided after this list).
3. For Researchers
Future studies may compare the effectiveness of AI vs. human-assisted translations in specific
academic disciplines (e.g., medicine, law, literature).
Longitudinal research is recommended to explore how AI translation impacts students’ language
development over time.
Further investigation into ethical and cultural implications of AI translation may enrich the
understanding of its role in education and communication.
Future researchers are encouraged to adopt a mixed-methods approach combining surveys with
interviews or focus groups to explore the subtleties of users’ attitudes and translation experiences.
Further investigations can focus on domain-specific contexts such as legal, academic, or medical
translation, where terminological precision and cultural nuance are particularly crucial.
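As a concrete starting point for the metric-based evaluation recommended above, the sketch below scores machine translations against human reference translations using the sacrebleu library (BLEU and chrF). The hypothesis and reference sentences are hypothetical placeholders rather than data from this study; METEOR and COMET would require separate packages and are omitted here.

```python
# Minimal sketch of automatic translation-quality scoring with sacrebleu.
# The hypothesis (AI output) and reference (human translation) sentences are
# hypothetical placeholders, not data from this study.
import sacrebleu

hypotheses = [
    "The bank by the river was crowded this morning.",
    "He kicked the bucket during the meeting.",
]
references = [
    "The riverbank was crowded this morning.",
    "He passed away during the meeting.",
]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])   # n-gram overlap with references
chrf = sacrebleu.corpus_chrf(hypotheses, [references])   # character n-gram F-score
print(f"BLEU = {bleu.score:.1f}, chrF = {chrf.score:.1f}")
```

Scores like these could then be compared across AI and human-assisted outputs, or across domains such as legal and medical texts, to complement the perception data gathered in this study.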
REFERENCES
1. Al-Romany, T. A. H., & Kadhim, M. J. (2024). Artificial Intelligence impact on human translation: Legal
texts as a case study. International Journal of Linguistics, Literature and Translation, 7(5), 89–97.
https://www.researchgate.net/publication/380745676_Artificial_Intelligence_Impact_on_Human_Tran
slation_Legal_Texts_as_a_Case_Study
2. Bowker, L. (2020). Machine translation literacy: Helping students and teachers understand and use MT.
Language Learning & Technology, 24(3), 115. https://doi.org/10.10125/44739
3. Freitag, M., Grangier, D., & Caswell, I. (2023). Machine translation performance across domains.
Transactions of the Association for Computational Linguistics, 11, 115.
https://doi.org/10.1162/tacl_a_00474
4. Georgakopoulou, P., Gaspari, F., & Ramos Pinto, S. (2023). Idiomaticity and cultural references in neural
machine translation. Translation, Cognition & Behavior, 6(2), 197–220.
https://doi.org/10.1075/tcb.00065.geo
5. Jiao, W., Wang, W., Huang, J., Wang, X., Shi, S., & Tu, Z. (2023). Is ChatGPT a good translator? Yes
with GPT-4 as the engine. arXiv preprint arXiv:2301.08745. https://arxiv.org/abs/2301.08745
6. Klimova, B., & Pikhart, M. (2023). The use of AI translation tools in foreign language teaching: A case
study. Education Sciences, 13(2), 123. https://www.mdpi.com/2227-7102/13/2/123
7. Kocmi, T., Federmann, C., & others. (2024). Findings of the WMT 2024 shared tasks. In Proceedings of
the Ninth Conference on Machine Translation (WMT 2024). Association for Computational
Linguistics. https://aclanthology.org
8. Lai, V., Chen, C., & Li, J. (2024). Challenges of idiom translation in neural MT: A case study. iScience,
27(3), 108765. https://doi.org/10.1016/j.isci.2024.108765
9. Lee, J. (2021). Challenges in neural machine translation: A case study on handling idiomatic expressions.
Journal of Artificial Intelligence Research, 70, 135–150.
https://www.jair.org/index.php/jair/article/view/12182
10. Lee, J. (2021). Learner perceptions of machine translation in EFL writing. CALICO Journal, 38(1), 91–111. https://doi.org/10.1558/cj.40415
11. Moneus, A. M., & Sahari, Y. (2024). Artificial intelligence and human translation: A contrastive study
based on legal texts. Heliyon, 10(6), e28106. https://doi.org/10.1016/j.heliyon.2024.e28106
12. Niño, A. (2020). Evaluating the use of machine translation in foreign language learning: A review of
literature. Computer Assisted Language Learning, 33(7), 789–812.
https://doi.org/10.1080/09588221.2019.1609570
13. Rico, C., & Torrejón, E. (2022). Machine translation post-editing competence: A key component in
translator education. The Interpreter and Translator Trainer, 16(1), 45–62.
https://doi.org/10.1080/1750399X.2021.1934709
14. Specia, L., Blain, F., & Scarton, C. (2021). Evaluating and estimating machine translation quality.
Natural Language Engineering, 27(1), 65–81. https://doi.org/10.1017/S1351324920000274
15. Ubhayawardhana, P., & Hansani, H. (2023). A study on the effectiveness of using Google Translate in
legal translation: With special reference to selected legal documents of the Registrar General's
Department. Journal of Language and Law, 12(1), 78–95.
https://www.researchgate.net/publication/372370077_A_Study_on_the_Effectiveness_of_Using_Goog
le_Translate_in_Legal_Translation_With_Special_Reference_to_Selected_Legal_Documents_of_the_
Registrar_General%27s_Department
16. Zeng, Z., Wang, S., & Sun, H. (2023). Accuracy and risks of AI translation in medical communication:
A comparative study. Journal of Medical Internet Research, 25, e43568. https://doi.org/10.2196/43568