INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)  
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS | Volume IX Issue X October 2025  
Awareness and Misconceptions of AI Among Educators  
Najwan Khambari*1, Wan Mohd Yaakob Wan Bejuri1, Nurul Azma Zakaria1, Aslinda Hassan1, Mas Nida Md Khambari2, Taqwan Thamrin3
1Fakulti Teknologi Maklumat dan Komunikasi, Universiti Teknikal Malaysia Melaka, Malaysia
2Fakulti Pengajian Pendidikan, Universiti Putra Malaysia, Malaysia
3Fakultas Ilmu Komputer, Universitas Bandar Lampung, Indonesia
Received: 20 November 2025; Accepted: 27 November 2025; Published: 11 December 2025  
ABSTRACT  
The rapid expansion of Artificial Intelligence (AI) in education has been accompanied by both opportunities and  
challenges, with its effective adoption being largely dependent on how well AI is understood by educators and  
how accurately its capabilities and limitations are perceived. Existing studies have shown that awareness levels  
are inconsistent and that widespread misconceptions are held across different educational levels and  
geographical regions, with greater familiarity being reported by higher-education lecturers compared to primary  
and secondary school teachers. Misconceptions such as the belief that AI will replace human teachers,  
assumptions that AI possesses human-like intelligence, concerns regarding the dehumanization of learning, and  
anxieties related to data privacy have been found to hinder meaningful AI integration in educational practice. In  
this study, contemporary literature on educators’ awareness and misconceptions of AI has been synthesised  
through a narrative review of work published between 2020 and 2025, and patterns of awareness,
dominant misconceptions, and factors influencing AI adoption have been analysed. Findings indicate that  
awareness remains highly variable, misconceptions persist across contexts, and institutional support, digital  
literacy, and access to professional development are significant determinants of educators’ readiness to use AI.  
Based on these insights, it is suggested that targeted AI literacy initiatives, structured professional development,  
and clear institutional policies are urgently required to dispel misconceptions and promote ethical, confident,  
and responsible use of AI in education. This review is expected to contribute to ongoing scholarly and policy  
discussions by providing evidence-based guidance for policymakers, institutions, and training providers to  
strengthen educators’ preparedness for AI-enhanced teaching and learning.  
Keywords: Artificial Intelligence; Educator Awareness; Misconceptions; Professional Development; Responsible AI Integration
INTRODUCTION  
Artificial Intelligence (AI) has rapidly become a defining force in contemporary education systems, offering new  
possibilities for personalised instruction, adaptive learning environments, automated assessment, and intelligent  
tutoring systems [3], [5], [19]. As AI-enabled tools become increasingly embedded in pedagogical and  
administrative processes, educators play a decisive role in determining whether such technologies are integrated  
meaningfully, cautiously, or in ways that undermine pedagogical intent [7], [12]. Research has shown that  
educators’ perceptions and conceptual understandings profoundly influence their willingness to adopt AI tools,  
their ability to evaluate AI outputs critically, and their confidence in navigating emerging ethical considerations  
[5], [8], [18].  
Despite AI’s growing visibility, educators’ awareness of AI remains uneven across educational levels and  
geographical regions. Higher-education lecturers typically exhibit greater familiarity with AI tools due to their  
exposure to plagiarism detection systems, academic analytics, and emerging generative AI platforms [6], [8],  
[20]. In contrast, teachers in K–12 settings often report fragmented or superficial awareness, with many
conflating AI with general digital automation or lacking clarity regarding its operational mechanisms [1], [2],  
[11], [15], [17]. Studies in Sweden, Turkey, and Northern Cyprus highlight that many teachers possess only  
partial conceptualisations of AI and struggle to differentiate between rule-based systems and machine-learning  
processes [1], [2], [11]. These disparities are compounded by infrastructural inequities and limited formal AI  
training, especially in developing contexts [10], [15], [23].  
Equally critical is the persistence of misconceptions about AI, which shape educators’ attitudes and behaviour.  
Common misconceptions include the belief that AI possesses human-like cognition, that AI systems operate  
with inherent neutrality or infallible accuracy, and that AI tools may replace human teachers entirely [11], [17],  
[18]. Such misconceptions are evident across both K–12 and higher-education sectors and are often reinforced
by media narratives, limited AI literacy, and the absence of scaffolded professional development [4], [7], [12].  
The emergence of generative AI since 2023 has introduced new layers of confusion and concern, with educators  
expressing uncertainty about academic integrity, hallucinated outputs, data privacy, and students’ potential  
overreliance on AI-generated content [8], [9], [22], [23].  
Although the literature addressing AI in education has expanded substantially, existing reviews tend to focus on  
broad pedagogical trends, technological affordances, or barriers to acceptance, rather than conducting a focused  
synthesis of educators’ awareness, misconceptions, and readiness [3], [4], [10], [12], [14], [19], [21]. Research  
on AI literacy is similarly fragmented, with several scoping reviews highlighting the absence of validated  
frameworks to guide educators’ conceptual understanding and pedagogical decision-making [12], [13], [14],  
[17]. Furthermore, empirical studies examining educators’ engagement with generative AI remain limited and  
geographically uneven, creating an urgent need for updated analyses reflecting post-2022 technological realities  
[8], [22], [23].  
In response to these gaps, this study conducts a narrative review of peer-reviewed literature published between  
2020 and 2025 to synthesise contemporary evidence on educators’ awareness and misconceptions of AI and to  
examine the individual, institutional, and contextual factors influencing AI adoption. Specifically, the review  
aims to: (i) map awareness patterns across K–12 and higher-education contexts; (ii) identify and categorise
dominant misconceptions; (iii) analyse determinants of AI readiness and adoption; and (iv) derive implications  
for policy, institutional strategy, professional development, and future research.  
The remainder of this paper is organised as follows. Section II reviews prior scholarship on educators’ awareness,  
perceptions, misconceptions, and adoption of AI, synthesising findings across multiple empirical and review  
studies. Section III presents the findings of the narrative analysis and discusses them across key thematic  
domains, including awareness patterns, misconceptions, determinants of adoption, and cross-contextual  
differences. Section IV outlines the implications of these findings for policy, institutional practice, teacher  
education, and classroom implementation. Section V concludes the paper by summarising the key contributions.  
Section VI identifies the limitations of the review, and Section VII proposes directions for future research.  
RELATED WORKS  
Artificial intelligence in education (AIED) has become a rapidly growing research area in recent years, especially  
since the acceleration of generative AI technologies in 2022. The literature reveals diverse perspectives on  
educators’ awareness, attitudes, misconceptions, and readiness toward AI across different educational contexts.  
This section synthesizes empirical studies and reviews from 2020–2025, covering K–12 teachers, higher-
education lecturers, and teacher education programs.  
Awareness of AI Across Educational Levels  
Recent studies consistently highlight that educators’ awareness of AI is unevenly distributed across educational  
sectors, disciplines, and regions. Case-based and survey-based research indicates that university lecturers,  
especially those in technology-related fields, tend to report higher awareness and familiarity with AI concepts
and tools than primary and secondary school teachers [1], [2], [7]. In higher education, instructors are more  
likely to have encountered AI through research analytics, learning management systems with AI features,  
plagiarism detection, or generative AI tools for writing and coding support [8], [9], [22]. This exposure  
contributes to a baseline awareness of AI terminology and functionality, even when deeper conceptual  
understanding remains limited.  
In contrast, findings from K–12 contexts suggest that awareness is often superficial and fragmented. Swedish
teachers, for example, demonstrated partial understanding of AI, frequently conflating AI with general digital  
technologies or automation; many struggled to articulate how AI differs from conventional software or what it  
means for teaching and learning [11]. Similarly, teachers in Northern Cyprus were aware of AI as a “buzzword”  
and could name some AI applications, yet their practical understanding of how AI operates or could be integrated  
pedagogically was modest [1]. These patterns are echoed in other K–12 settings, where teachers report hearing
about AI through social media or popular discourse rather than through formal professional development [15],  
[16].  
Teacher education and pre-service contexts present a mixed picture. Some programs are beginning to include AI  
literacy components, but pre-service teachers’ awareness is still heavily shaped by personal technology use and  
media narratives rather than structured coursework [15], [17]. Studies on teachers’ needs for AI education show  
that many pre-service and in-service teachers alike feel unprepared to explain AI concepts to students or to make  
informed decisions about AI tools [15]. Taken together, the literature indicates that while “AI awareness” is  
increasing nominally, it often reflects a surface-level recognition of AI’s existence rather than a robust,  
pedagogically grounded understanding.  
Positive Perceptions and Perceived Usefulness of AI  
Alongside awareness, a substantial body of research documents generally positive perceptions of AI’s potential  
in education. Across primary, secondary, and higher education, educators often identify AI as a promising means  
of enhancing instructional efficiency, personalizing learning, and supporting data-informed decision making [3],  
[4], [5], [19]. Teachers and lecturers report that AI-powered tools can automate repetitive tasks such as grading,  
item generation, or scheduling, thereby freeing time for more complex pedagogical work and interaction with  
students [4], [7], [10].  
Perceived usefulness also extends to AI’s capacity to support differentiated instruction and learner engagement.  
In language learning contexts, for instance, AI chatbots and intelligent tutoring systems are perceived to help  
learners practice speaking and writing, receive immediate feedback, and access resources tailored to their  
proficiency level [6], [9]. Similar findings appear in studies of AI in academic writing support, where AI tools  
are seen as helpful for scaffolding structure, suggesting vocabulary, and promoting academic conventions,  
particularly for second-language learners [8], [9].  
Several studies highlight that educators view AI as a way to foster higher-order skills. By offloading routine  
tasks to AI, teachers believe they can focus on designing inquiry-based activities, facilitating critical discussions,  
and mentoring students’ metacognitive development [4], [6], [19]. Some also see AI as a resource for inclusive  
education, for example by providing adaptive supports or alternative representations for learners with diverse  
needs [3], [19]. Importantly, these positive perceptions are typically stronger among educators who have hands-  
on experience with AI tools, reinforcing the link between exposure, perceived usefulness, and willingness to  
experiment [5], [8], [22].  
Negative Perceptions, Fears, and Ethical Concerns  
Despite acknowledging AI’s benefits, educators’ perceptions are frequently ambivalent, combining optimism  
with concern. A recurring theme is anxiety about job displacement: many teachers express fear that AI could  
eventually replace human educators or substantially reduce their role, particularly when policy narratives  
emphasize efficiency and automation [5], [18]. These fears are more pronounced where teachers feel excluded  
from decision-making about technology adoption or where AI is framed primarily as a cost-saving measure.  
Another cluster of concerns relates to data privacy, security, and surveillance. Educators worry about the  
collection and use of large volumes of student data required to power AI-driven analytics and adaptive systems  
[5], [7], [10]. Questions are raised about who controls this data, how it may be reused by vendors, and what  
safeguards exist against misuse or breaches [3], [19]. In some contexts, teachers are hesitant to adopt AI tools  
precisely because institutional policies and guidelines on data protection are either absent or not clearly  
communicated [10].  
Ethical and pedagogical issues also feature prominently in the literature. Teachers express unease about algorithmic
bias and fairness, particularly in systems that support assessment, selection, or recommendation [3], [7], [19].  
There is concern that AI might encode and amplify societal inequities if not critically scrutinized. On a  
pedagogical level, educators worry that AI could dehumanize learning by replacing rich interpersonal  
interactions with automated feedback, or by encouraging students to over-rely on AI-generated answers, leading
to reduced creativity and cognitive effort [5], [9], [11], [23]. In studies focusing on generative
AI and large language models, participants describe tensions between leveraging AI for productivity and  
preserving academic integrity and authentic learning [8], [23]. These negative perceptions and ethical concerns  
do not always translate into outright rejection, but they shape cautious, conditional, or selective adoption.  
These concerns are more pronounced among educators with lower digital literacy or from countries with weaker  
technological infrastructures.  
Persistent Misconceptions About AI
Beyond general concerns, the literature identifies a set of persistent misconceptions that distort educators’
understanding of AI and its implications. One widespread misconception is the belief that AI possesses general,  
human-like intelligence or even consciousness, leading some educators to anthropomorphize AI systems and  
attribute intentionality or emotions to them [1], [11], [18]. This can result in unrealistic expectations of what AI  
can do, or conversely in exaggerated fears about AI “taking over” human roles.  
Another common misconception is the assumption that AI “learns” or “thinks” in the same way humans do.  
Studies show that many teachers are unfamiliar with the basic principles of machine learning, such as pattern  
recognition, training data, or probabilistic outputs [1], [11], [17]. As a result, they may overestimate the accuracy  
and reliability of AI tools, treating outputs as objective or neutral, or underestimate the role of human judgment  
in interpreting AI-generated recommendations. Misunderstandings also extend to generative AI: some educators  
assume that large language models have access to real-time internet data or personal records when, in fact, they  
operate on trained statistical representations [8], [23].  
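To make this principle concrete, the minimal Python sketch below, constructed for this review rather than drawn from any cited study, trains a toy bigram “language model” on a fixed text and then generates output by sampling from the stored word-transition statistics. Nothing is retrieved at generation time, which mirrors, in miniature, how large language models produce probabilistic outputs from trained representations rather than from live data sources.

```python
import random
from collections import defaultdict

# Toy illustration: a bigram "language model" learns word-transition
# statistics from a fixed training text. Generation samples from those
# stored statistics only; no live data source is consulted.
training_text = (
    "ai tools support teachers ai tools assist learning "
    "teachers guide students teachers assess learning"
)

# "Training": count which word follows which (the model's entire knowledge).
transitions = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

# "Inference": sample each next word from the learned statistics.
random.seed(42)
word = "teachers"
output = [word]
for _ in range(5):
    candidates = transitions.get(word)
    if not candidates:  # no learned continuation; the model cannot "look up" more
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))  # plausible-sounding text derived solely from training data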
Giray’s work on “Ten Myths About AI in Education” synthesizes several of these misconceptions, including the  
ideas that AI will inevitably replace teachers, that AI can automatically personalize learning for all students  
without teacher mediation, and that AI can function as a fully independent tutor [18]. Empirical studies echo  
these myths, noting that teachers sometimes equate any sophisticated digital tool with AI, blurring distinctions  
between automation, rule-based systems, and learning algorithms [11], [17]. Such misconceptions can be double-  
edged: they can generate unwarranted enthusiasm (“AI will solve all problems”) or heightened resistance and  
anxiety (“AI is too dangerous to use”), both of which hinder balanced, evidence-based decision making.  
Factors Influencing Educators’ AI Adoption  
Several studies explicitly investigate the factors that influence educators’ willingness to adopt AI, often drawing  
on established technology acceptance frameworks such as the Technology Acceptance Model (TAM) and the  
Unified Theory of Acceptance and Use of Technology (UTAUT) [2], [7], [21]. At the individual level, perceived  
usefulness and perceived ease of use consistently emerge as strong predictors of behavioral intention to use AI  
tools [7], [21]. Educators who believe that AI can genuinely support their pedagogical goals and who find AI  
tools intuitive are more likely to experiment with and integrate them.  
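As a rough illustration of how such acceptance studies quantify these relationships, the sketch below fits an ordinary least-squares model relating perceived usefulness (PU) and perceived ease of use (PEOU) to behavioral intention (BI). The Likert-scale responses are invented for demonstration, and the plain regression is a simplified stand-in for the structural-equation models typically used in TAM/UTAUT research.

```python
import numpy as np

# Hypothetical Likert-scale responses (1-5) from six educators; the values
# are invented for illustration and do not come from any reviewed study.
PU   = np.array([5, 4, 2, 3, 5, 1])  # perceived usefulness
PEOU = np.array([4, 4, 3, 2, 5, 2])  # perceived ease of use
BI   = np.array([5, 4, 2, 2, 5, 1])  # behavioral intention to use AI

# Ordinary least squares: BI ~ b0 + b1*PU + b2*PEOU
X = np.column_stack([np.ones_like(PU), PU, PEOU])
coef, *_ = np.linalg.lstsq(X, BI, rcond=None)
b0, b1, b2 = coef
print(f"intercept={b0:.2f}, PU weight={b1:.2f}, PEOU weight={b2:.2f}")
```

A positive weight on PU or PEOU in such a model is what the reviewed studies report as these constructs “predicting” adoption intention.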
Digital literacy and self-efficacy are also central determinants. Teachers with higher confidence in their  
technology skills are more willing to explore AI-based applications, troubleshoot problems, and adapt their  
practices [7], [12], [15]. Conversely, low digital competence is associated with avoidance, anxiety, and reliance  
on traditional methods [10], [15]. Demographic factors such as age and teaching experience show mixed results:  
some studies report younger teachers as more open to AI, while others suggest that experienced teachers become  
positive adopters once supported through adequate training [2], [10], [12].  
At the institutional level, structural and cultural factors play a major role. Access to robust infrastructure, reliable  
internet connectivity, adequate devices, and supportive platforms is a prerequisite for meaningful AI use [3],  
[10], [19]. Equally important are leadership vision and organizational climate. When school or university leaders  
articulate a clear, pedagogically grounded strategy for AI, provide time and incentives for experimentation, and  
address ethical and policy issues transparently, educators are more likely to engage positively [19], [20].  
Professional development opportunities that are ongoing, context-sensitive, and aligned with local curricula  
further strengthen adoption [4], [7], [12].  
Regional and sociocultural contexts add another layer of complexity. Studies indicate that educators in  
developing countries face more pronounced infrastructure and resource constraints, which can overshadow  
pedagogical or ethical considerations [3], [10], [23]. In contrast, educators in better-resourced contexts may be  
more concerned with data protection, academic integrity, and long-term implications for professional identity  
[11], [18]. Bibliometric analyses of AI literacy and acceptance research highlight uneven global participation in  
the discourse, with certain regions overrepresented and others underexplored [14], [20]. These contextual factors  
underscore the importance of tailoring AI adoption strategies to local realities rather than assuming a one-size-  
fits-all model.  
Research Gaps Identified in Current Literature  
Although research on AI in education has grown rapidly since 2020, several critical gaps remain. First, K–12
teachers are under-represented in AI perception and awareness studies. Most empirical work focuses on higher  
education settings, where lecturers tend to have more digital exposure and institutional support [1], [2], [7], [20].  
As a result, current knowledge disproportionately reflects technologically rich environments, leaving early  
primary, rural, and under-resourced schools insufficiently examined [10], [15], [23].  
Second, there is a notable lack of comprehensive empirical studies on educators’ misconceptions of AI. While  
individual studies highlight fragmented or inaccurate understandings, such as anthropomorphizing AI,  
overestimating capabilities, or conflating AI with automation, few large-scale or cross-context studies  
systematically map these misconceptions or examine how they develop [11], [17], [18]. Much of the existing  
evidence originates from small samples, qualitative findings, or exploratory analyses. This limits the  
generalizability of insights into how misconceptions constrain adoption.  
Third, the literature reveals an absence of standardized, validated AI literacy frameworks for educators. Although  
frameworks and curricula exist at conceptual or theoretical levels, particularly in higher education, very few have
been empirically validated, adapted for K–12, or integrated into teacher education programs in a systematic way
[12], [14], [17]. Professional development programs for AI are often ad hoc, short-term, or not aligned with  
classroom needs, resulting in inconsistent outcomes [6], [15].  
Fourth, current research remains heavily centered on technology acceptance models (TAM, UTAUT) to explain  
educators’ adoption of AI [7], [21]. While useful, these models primarily capture intention rather than real  
classroom enactment. Consequently, there is limited understanding of how teachers implement AI tools, how  
they negotiate ethical issues, or how AI affects pedagogical decision-making in practice. Studies rarely follow  
educators longitudinally or examine sustained use over time.  
Finally, despite the global explosion of generative AI (e.g., ChatGPT, Bard, Gemini) since late 2022, empirical  
research on generative AI adoption among educators remains limited. Existing studies tend to be cross-sectional  
or descriptive, focusing on initial attitudes, anxieties, or intended use rather than long-term pedagogical  
integration or learning outcomes [8], [23]. The speed of technological change has outpaced formal research,  
creating a gap between classroom realities and scholarly evidence.  
Collectively, these gaps suggest the need for more diverse sampling, cross-context comparison, standardized AI  
literacy frameworks, longitudinal adoption studies, and deeper empirical investigation of misconceptions in the  
age of generative AI.  
Descriptive Statistics and Distribution of Reviewed Studies  
A quantitative synthesis of the 23 studies reviewed (2020–2025) reveals several patterns in research focus,
educator context, methodological distribution, and thematic emphasis. These descriptive statistics provide  
additional clarity regarding where scholarly attention has been concentrated and, more importantly, where  
notable gaps persist.  
Distribution by Education Level (Figure 1): A clear overrepresentation of higher education persists, with K–12 contexts under-researched, especially early primary and rural schools. Only two studies (9%) focused specifically on early primary education [4], [16].
Figure 1. Distribution by education level: Higher Education 52% ([5], [6], [8], [9], [12], [13], [14], [19], [20], [21], [22], [23]); K–12 Education 35% ([1], [2], [4], [5], [11], [15], [16], [17]); Mixed/All Levels 13% ([3], [10], [14]).
This gap limits understanding of how AI is perceived at foundational levels where misconceptions may form  
earliest.  
Methodological Type: There is a heavy reliance on surveys and literature reviews (Figure 2), with relatively  
few in-depth qualitative studies, longitudinal designs, or classroom intervention studies.  
Figure 2. Methodological type: Empirical Surveys 43% (10 studies: [1], [2], [5], [6], [8], [11], [15], [16], [20], [23]); Systematic Reviews/Meta-Analyses 35% (8 studies: [3], [4], [7], [10], [12], [13], [14], [19]); Qualitative Studies (Interviews/Case Studies) 13% (3 studies: [1], [11], [17]); Mixed Methods 9% (2 studies: [9], [23]).
This limits insights into how awareness, misconceptions, and adoption change over time or translate into  
classroom practice.  
Focus on Misconceptions (Figure 3): Despite widespread discussion of misconceptions in policy discourse, empirical
research on misconceptions is sparse, small-scale, and often geographically narrow.  
Figure 3. Focus on misconceptions: Direct Misconception Studies 22% (5 studies: [1], [11], [17], [18], [23]); Other Studies 78% (18 studies).
This represents a major research gap.  
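For transparency, the short Python sketch below reproduces the percentage arithmetic behind Figures 1 and 3. The category counts follow the study-to-category groupings recovered from the original charts; the snippet is purely illustrative of how the shares were computed, not part of the review methodology.

```python
# Illustrative tally of the 23 reviewed studies behind Figures 1 and 3.
# Category counts follow the groupings recovered from the original charts.
TOTAL = 23

figure_1 = {"Higher Education": 12, "K-12 Education": 8, "Mixed/All Levels": 3}
figure_3 = {"Direct Misconception Studies": 5, "Other Studies": 18}

for name, counts in (("Education level", figure_1), ("Misconception focus", figure_3)):
    print(name)
    for category, n in counts.items():
        # e.g. 12/23 = 52%, 8/23 = 35%, 3/23 = 13%; 5/23 = 22%, 18/23 = 78%
        print(f"  {category}: {n}/{TOTAL} = {n / TOTAL:.0%}")
```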
FINDINGS AND DISCUSSION  
This section synthesizes findings from the 23 reviewed studies by examining patterns of educators’ awareness  
of AI, dominant misconceptions, determinants of adoption, interrelationships among key constructs, cross-  
context differences, and alignment with theoretical frameworks commonly used to explain technology  
acceptance. The synthesis integrates empirical insights with conceptual interpretations to provide a coherent  
understanding of how educators perceive and approach AI in education, and how these perceptions influence  
adoption behaviors.  
Patterns of Awareness Across Contexts  
Across the reviewed literature, educators’ awareness of AI varies substantially by educational level, geographic  
region, and exposure to AI-enabled tools. A recurring pattern is the higher awareness reported by higher-  
education (HE) lecturers compared to K–12 teachers. Studies in HE contexts found that lecturers commonly
encounter AI-driven systems through plagiarism detection software, adaptive learning environments, and  
academic analytics, contributing to moderate to high awareness levels [5], [6], [8], [20]. Bibliometric analyses  
further show that HE institutions have been focal points for AI-related training and digital transformation  
initiatives, supporting greater conceptual familiarity [14], [20].  
In contrast, K–12 teachers generally exhibit lower and more fragmented awareness, particularly in early-primary
settings. Evidence from Sweden indicates that many teachers struggle to distinguish AI from automation or  
general ICT tools and hold only superficial notions of how AI processes information [11]. Similar issues were  
reported in Northern Cyprus, where K–12 teachers exhibited awareness of AI as a concept but demonstrated
limited understanding of its functionality or pedagogical relevance [1]. Studies in Turkey and Spain similarly  
reveal that teachers’ awareness tends to be shallow, with many educators unable to articulate the difference  
between rule-based systems and machine learning approaches [2], [4].  
Regional differences compound these disparities. Educators in well-resourced European or East Asian contexts  
often have moderate awareness but lack depth in conceptual knowledge [2], [11], [14], whereas teachers in  
developing regions, such as parts of the Middle East, Southeast Asia, and Africa, frequently report limited  
exposure to AI tools and minimal formal training opportunities [5], [15], [23]. These variations suggest that  
awareness is strongly mediated by digital access, institutional culture, and opportunities for hands-on  
engagement.  
A cross-comparison of awareness patterns reveals two systemic tendencies:  
1) Awareness does not equate to understanding. Educators may be familiar with AI as a concept but lack  
accurate mental models of how AI works.  
2) Awareness tends to be tool-driven rather than concept-driven. Educators often understand specific
applications (e.g., ChatGPT, Grammarly, Duolingo) without understanding the underlying AI principles.
Overall, the literature suggests that educators’ awareness of AI remains heterogeneous and inconsistent, with  
substantial gaps in foundational AI literacy across all educational levels.  
Dominant Misconceptions Among Educators  
A significant portion of empirical and review studies reveal widespread misconceptions that shape educators’  
perceptions of AI in education. These misconceptions arise from limited conceptual understanding, media  
narratives, and insufficient professional development.  
One pervasive misconception is the anthropomorphization of AI, where educators assume that AI systems  
possess human-like intelligence, emotions, or agency [11], [18]. Teachers often describe AI as “thinking” or  
“deciding,” attributing cognitive processes that do not reflect the probabilistic and statistical nature of AI models.  
This anthropomorphic framing leads to unrealistic expectations of AI capabilities and introduces unwarranted  
concerns about autonomy or control.  
Another common misconception involves the belief that AI will replace teachers, especially in tasks such as  
instruction delivery, assessment, or feedback [5], [18]. Although studies consistently show that teachers value  
human interaction, empathy, and contextual judgment, the fear of replacement persists, particularly among  
educators with limited digital self-efficacy or exposure to AI [1], [11], [17]. Misconceptions about role  
replacement can reduce openness to AI adoption and increase technostress.  
A third misconception concerns the overestimation of AI accuracy and objectivity. Several studies indicate that  
educators often assume AI-based systems are neutral or infallible, failing to recognize that AI outputs depend  
on training data quality and algorithmic design choices [5], [19], [23]. This misconception is especially  
problematic in contexts where teachers rely on automated grading tools or recommendation algorithms without  
critical evaluation.  
Additionally, educators frequently conflate automation with AI, categorizing non-AI tools as “AI” simply  
because they automate tasks [11], [17]. This conflation obscures meaningful distinctions between AI and  
traditional software, which in turn undermines educators’ ability to evaluate tools effectively.  
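To sharpen the distinction being blurred here, the brief Python sketch below contrasts a hand-written rule with a toy classifier whose behavior is induced from labeled examples. Both automate a flagging task, but only the second “learns” from data; the scenario and the training data are invented for illustration and do not represent any tool discussed in the reviewed studies.

```python
# Contrast sketch: a rule-based checker versus a learned classifier.
# Both "automate" a task, but only the second derives its behavior from data.

# (1) Rule-based: behavior fully specified by hand-written conditions.
def rule_based_flag(essay: str) -> bool:
    return len(essay.split()) < 50 or "copy" in essay.lower()

# (2) Learned: behavior induced from labeled examples (toy nearest-centroid).
def train_centroids(examples):
    # examples: list of (word_count, label) pairs; label 1 = flagged
    flagged = [n for n, y in examples if y == 1]
    ok = [n for n, y in examples if y == 0]
    return sum(flagged) / len(flagged), sum(ok) / len(ok)

def learned_flag(word_count, centroids):
    c_flagged, c_ok = centroids
    return abs(word_count - c_flagged) < abs(word_count - c_ok)

data = [(20, 1), (35, 1), (180, 0), (240, 0)]  # invented training data
centroids = train_centroids(data)
print(rule_based_flag("Short essay ..."), learned_flag(40, centroids))
```

Changing the rule requires editing code; changing the classifier's behavior requires different training data, which is the practical boundary between automation and machine learning.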
Finally, misconceptions surrounding generative AI are emerging, especially post-2023. Teachers may  
incorrectly assume that tools like ChatGPT access real-time information or student data, or they may  
misunderstand the risks of hallucinations and biased outputs [8], [9], [23]. This adds a new dimension to the  
misconception landscape that earlier studies did not address.  
Overall, misconceptions represent a crucial barrier to informed adoption. Their persistence across countries and  
education levels suggests that AI literacy interventions must explicitly address and correct inaccurate beliefs,  
not merely provide technical skills.  
Determinants of Educators’ AI Adoption  
Educators’ willingness to adopt AI is shaped by a combination of individual, institutional, and sociocultural  
factors. At the individual level, self-efficacy, digital literacy, and attitudes toward technology consistently  
emerge as key determinants. Teachers with higher confidence in their technological abilities are more likely to  
engage positively with AI tools and integrate them into practice [7], [12], [15]. Several studies highlight that  
perceived usefulness and ease of use, core constructs from the Technology Acceptance Model (TAM),  
significantly predict adoption intention in both K–12 and HE settings [5], [21], [23].
Institutional factors include the availability of infrastructure, such as reliable internet access, devices, and AI-  
enabled platforms. Studies across multiple regions show that insufficient infrastructure is one of the most  
common barriers to AI adoption, especially in developing countries [3], [10], [15]. Professional development  
(PD) is another critical determinant. Teachers consistently emphasize the need for ongoing, practical, and  
contextually grounded PD opportunities that focus not just on tool usage but also on pedagogical applications  
and ethical considerations [4], [7], [16], [17].  
Leadership and policy support also play significant roles. Institutions with clear AI integration strategies, ethical  
guidelines, and supportive leadership environments tend to foster greater educator confidence and willingness  
to experiment [19], [20]. Conversely, environments characterized by unclear policies or top-down mandates  
without teacher consultation may exacerbate resistance or anxiety.  
Sociocultural factors appear prominently in cross-regional studies. For example, teachers in high-income  
countries tend to be more concerned with ethics, privacy, and transparency, whereas teachers in low-income  
settings emphasize infrastructural barriers and the relevance of AI to local curricula [10], [15], [23]. These  
distinctions illustrate that adoption cannot be fully understood without considering contextual variables.  
Overall, the determinants of adoption are multifaceted, indicating that successful AI integration requires not only  
technological readiness but also supportive institutional ecosystems and culturally responsive professional  
learning opportunities.  
Interrelation Between Awareness, Misconceptions, and Adoption  
A cross-study synthesis reveals clear interrelationships among awareness, misconceptions, readiness, and  
adoption. Low awareness is strongly associated with higher prevalence of misconceptions, particularly regarding  
AI autonomy, accuracy, and pedagogical role [1], [11], [17]. Conversely, educators with higher awareness,
particularly those who understand AI principles rather than only tools, demonstrate fewer misconceptions
and greater confidence in using AI-based systems [5], [8], [20].  
Misconceptions act as cognitive filters that shape educators’ interpretation of AI and influence their adoption  
choices. For instance, teachers who believe AI can replace human educators may resist technology adoption,  
while those who view AI as supportive of differentiated instruction display greater openness [4], [18]. Similarly,  
educators who conflate automation with AI may misjudge the value or limitations of AI-enabled tools, resulting  
in inappropriate or suboptimal use.  
Awareness and misconceptions collectively influence readiness, defined as the degree to which educators feel  
prepared to integrate AI tools into their pedagogical practice. Studies repeatedly show that readiness is not simply  
a function of access or attitude, but is moderated by teachers’ conceptual understanding and beliefs [7], [12],  
[23]. Teachers with robust conceptual understanding and lower misconception levels are more likely to critically  
evaluate AI tools, align them with pedagogical goals, and engage in adaptive experimentation.  
This interrelationship can be summarized as follows:  
Low awareness → high misconceptions → low readiness → reduced adoption  
High awareness → low misconceptions → high readiness → increased adoption  
To visually summarize these interrelationships, Table 2 presents a structured overview of how the constructs  
intersect based on the reviewed literature.  
Table 2. Interrelations among awareness, misconceptions, readiness, and adoption

Construct | Influenced By | Leads To | Supported By Studies
Awareness | Exposure, PD, institutional support | Lower misconceptions | [1], [2], [5], [11], [20]
Misconceptions | Low awareness, media narratives, lack of AI literacy | Lower readiness, resistance | [11], [17], [18], [23]
Readiness | Awareness + accurate understanding + PD | Higher adoption intention | [4], [7], [12], [15], [23]
Adoption | Readiness, institutional context, perceived usefulness | Actual pedagogical integration | [5], [7], [19], [21], [23]
Cross-Context Differences (K–12 vs Higher Education)
The distinction between K–12 and higher education reveals meaningful divergences. Early primary teachers face
unique challenges due to limited exposure to AI, fewer institutional resources, and a stronger emphasis on child-  
centered pedagogy. Studies show that K–12 teachers are more likely to hold misconceptions about AI, express
anxiety about potential misuse, and feel unprepared for AI-related instruction [1], [11], [15], [16].  
In contrast, higher-education lecturers tend to adopt more pragmatic perspectives. They are often already using  
AI tools for academic writing, analytics, or content generation and exhibit higher levels of self-efficacy [6], [8],  
[20], [22]. However, HE educators also express concerns about academic integrity and the reliability of  
generative AI tools [8], [9].  
Regional differences add further complexity. Teachers in technologically advanced contexts focus on privacy,  
ethics, and fairness [11], [18], whereas those in resource-constrained settings emphasize infrastructural barriers  
and lack of training [10], [15], [23]. These cross-context differences reinforce the need for differentiated AI  
literacy strategies tailored to local realities.  
IMPLICATIONS  
The findings presented in this review hold several important implications for policymakers, educational  
institutions, teacher training systems, classroom practice, and the research community. Because awareness,  
misconceptions, readiness, and adoption are interdependent constructs, attempts to strengthen AI integration in  
education must recognize the systemic nature of these relationships. The implications outlined below highlight  
the multilevel interventions required to promote responsible, equitable, and pedagogically meaningful AI use  
across different educational contexts.  
Implications for Policy  
Policy frameworks governing AI in education must prioritize AI literacy, ethical safeguards, and contextualized  
integration strategies. Policymakers should articulate clear national or regional guidelines outlining expectations  
for AI implementation, including principles related to transparency, data protection, informed consent, and  
algorithmic fairness. The findings indicate that misconceptions are widespread and often reinforced by media  
narratives or inconsistent institutional communication; therefore, policies must include explicit public education  
components to correct inaccurate beliefs and promote informed discourse among educators [11], [17], [18].
Furthermore, policies should incorporate differentiated pathways for K–12 and higher education, acknowledging
the distinct pedagogical aims and infrastructural realities of each sector. In many developing regions,  
infrastructural inadequacies remain a major barrier to adoption [10], [15], [23]. National strategies should  
therefore invest in foundational digital infrastructure while also promoting locally relevant AI resources,  
particularly for rural and underserved schools. Finally, policies should mandate continuous data governance  
audits and require developers of educational AI to meet transparent reporting standards, ensuring the tools  
introduced into classrooms are empirically validated, ethically sound, and aligned with curricular goals.  
Implications for Educational Institutions  
At the institutional level, the findings emphasize the need for comprehensive, strategically integrated AI  
readiness plans. Institutions play a crucial role in shaping teachers’ awareness and correcting misconceptions;  
hence, internal communication must accurately represent AI’s capabilities and limitations. This includes  
providing educators with accessible documentation, exemplars of pedagogically aligned AI use, and guidelines  
for evaluating the appropriateness of AI tools for a particular learning context.  
Institutions should also invest in robust physical and digital infrastructure, such as reliable internet access,  
compatible devices, and secure AI-enabled learning platforms. Infrastructure is not merely a technical  
requirement but a determinant of equity. Teachers in resource-limited environments cannot develop or exercise  
readiness if they lack functional access to AI systems [10], [15]. Institutional leadership should further cultivate  
a culture of innovation by supporting experimentation and reducing perceived risks associated with using AI  
tools in teaching.  
Another essential implication is the need for institution-level ethical and academic integrity frameworks,  
especially given the rise of generative AI and its implications for assessment and student authorship [8], [9],  
[22]. Institutions must provide clear guidelines that balance responsible use with pedagogical innovation,  
ensuring that educators can incorporate AI safely without compromising academic standards.  
Implications for Teacher Training and Professional Development  
Professional development (PD) emerges across the literature as one of the strongest determinants of educators’
readiness to integrate AI [4], [7], [12], [16], [17], [23]. The implications are therefore substantial. PD programs
must move beyond tool-based training toward concept-driven AI literacy, helping educators develop accurate  
mental models of how AI systems function, what they can and cannot do, and how to critically evaluate their  
outputs. Effective PD should explicitly address misconceptions, using examples, counterexamples, and hands-  
on exploration of common AI systems.  
PD initiatives must also be ongoing rather than episodic, embedded into teachers’ workflow, and tailored to  
specific educational levels. Early primary educators, for instance, require PD that contextualizes AI within  
developmental learning theories and age-appropriate pedagogical approaches [16]. Higher education lecturers  
may require training focused on ethical issues, academic integrity, and AI-driven assessment design [8], [20],  
[22].  
Crucially, PD must equip teachers to integrate AI pedagogically, not just technically. Educators must learn how  
AI can support differentiation, assessment, collaboration, and personalized learning, and where AI’s limitations
require human oversight and professional judgment. PD programs should also develop teachers’ data literacy,  
equipping them to interrogate AI outputs and recognize issues related to bias, hallucination, or model limitations.  
Implications for Classroom Practice  
The findings highlight that AI integration should be anchored in pedagogical value, not technological novelty.  
Educators must critically evaluate when and how AI can enhance learning, considering student needs, task  
complexity, and curricular objectives. AI tools should be employed to support, rather than replace, core  
instructional processes such as formative assessment, scaffolding, and feedback.  
To reduce misconceptions and promote responsible use, teachers should model transparent AI usage in the
classroom, explaining how AI systems generate outputs, where they may fail, and why human judgment remains
essential. This not only improves pedagogical clarity but also promotes student AI literacy. Teachers can  
incorporate AI into collaborative learning activities, critical evaluation tasks, and inquiry-based learning, helping  
students engage with AI as both a tool and an object of inquiry.  
Classroom AI use must also uphold ethics, privacy, and inclusion. Educators should adopt AI tools that adhere  
to ethical guidelines, avoid tools requiring unnecessary student data, and ensure that AI does not reinforce  
inequities or marginalize learners with diverse needs. Teachers must also remain vigilant about the risk of  
overreliance, ensuring that students develop the capacity to think critically and independently rather than  
deferring to AI outputs uncritically.  
Implications for Researchers  
The findings reinforce the need for more empirically grounded and theoretically integrated research on  
educators’ awareness, misconceptions, readiness, and AI adoption. Researchers must expand the evidence base  
by designing studies that:  
- Investigate misconceptions using validated AI literacy frameworks;
- Conduct longitudinal analyses of adoption;
- Examine the pedagogical impacts of AI integration;
- Compare regional and national differences;
- Explore the emergence of generative AI in real classrooms post-2023.
Methodological diversity is also required. Survey-based research dominates the literature, but qualitative studies,
design-based research, and mixed-methods investigations are necessary to capture the complexity of classroom-  
level AI integration. Researchers must also engage interdisciplinary perspectives from computing, cognitive  
science, ethics, and educational psychology to develop more holistic models of AI adoption.  
CONCLUSION  
This narrative review synthesizes contemporary research on educators’ awareness, misconceptions, readiness,  
and adoption of AI in educational settings. The findings highlight substantial variability in awareness across  
educational levels and regions, with higher-education lecturers generally demonstrating greater familiarity than  
K–12 teachers. Misconceptions remain pervasive, shaping educators’ perceptions of AI’s role, accuracy, and  
pedagogical utility. Adoption is influenced by a dynamic interplay of individual attitudes, institutional support,  
professional development, and sociocultural context.  
The analysis underscores that improving educators’ readiness for AI-enhanced teaching requires more than  
access to technology; it demands accurate conceptual understanding, supportive policies, pedagogically aligned  
professional development, and institutional ecosystems that foster responsible innovation. The growing  
influence of generative AI further amplifies the need for critical evaluation skills and ethical guidelines across  
all education levels.  
By identifying research gaps and synthesizing findings across diverse contexts, this review contributes a  
structured understanding of the factors shaping educators’ engagement with AI. It further provides actionable  
insights for policymakers, institutions, teacher educators, and researchers seeking to promote informed,  
equitable, and effective AI integration in educational practice.  
LIMITATIONS  
While this review provides a comprehensive synthesis of relevant literature, several limitations must be  
acknowledged. First, the review is limited to studies published between 2020 and 2025, so earlier foundational
work that might offer additional context is excluded. Second, the review relies on studies indexed in Scopus and
published primarily in English, potentially excluding relevant research in non-English-speaking regions. This  
may underrepresent practices and perspectives from parts of Africa, Latin America, and Southeast Asia.  
Third, the heterogeneity of research designs, including surveys, qualitative studies, and systematic reviews, limits
direct comparability across studies. Many studies rely on self-reported measures of awareness or attitudes, which  
may not accurately reflect actual understanding or classroom behavior. Additionally, relatively few studies focus  
specifically on early primary teachers, generative AI adoption, or validated AI literacy frameworks.  
Finally, the rapid evolution of AI technologies means that scholarly discourse may lag behind classroom realities.  
Findings must therefore be interpreted as representing a dynamic and rapidly changing field rather than a stable  
or mature body of knowledge.  
RECOMMENDATIONS FOR FUTURE RESEARCH  
Based on the identified gaps, several avenues for future research are recommended. First, there is a need for  
large-scale empirical studies investigating misconceptions and foundational AI literacy among educators,  
especially in K–12 contexts. These studies should employ validated frameworks and robust measurement tools.
Second, future studies should examine longitudinal trajectories of AI adoption, tracking how awareness,  
misconceptions, and readiness evolve over time and following the implementation of professional development  
programs. Third, there is a pressing need for intervention-based research, including design-based studies  
evaluating the impact of specific AI literacy or PD interventions on teacher practice.  
Fourth, cross-national comparative research is needed to understand how sociocultural, infrastructural, and policy
differences shape educators’ AI perceptions and adoption behaviors. Fifth, given the rise of generative AI,
empirical studies should investigate its pedagogical implications, ethical challenges, and classroom integration  
strategies in real-world scenarios.  
Finally, researchers should explore opportunities to integrate multi-theoretical frameworks, combining TAM,  
UTAUT, and AI literacy perspectives to develop comprehensive models that more accurately capture the  
complexity of AI adoption in educational settings.  
ACKNOWLEDGMENT  
This study was supported by Universiti Teknikal Malaysia Melaka (UTeM). The authors would
like to express their sincere gratitude to colleagues at the Fakulti Teknologi Maklumat & Komunikasi, UTeM,  
for their valuable technical input and constructive feedback during the development of this work. The authorship  
of this article reflects equal contribution from all authors involved in the study. Special thanks are extended to  
individuals who assisted in formatting and proofreading the manuscript.  
REFERENCES  
1. A. Güneyli, N. S. Burgul, S. Dericioğlu, and H. Güneralp, “Exploring Teacher Awareness of Artificial Intelligence in Education: A Case Study from Northern Cyprus,” European Journal of Investigation in Health, Psychology and Education, 2024.
2. S. E. Öndünç, M. Özmutlu, S. Saraç, and S. G. Turan, “Illuminating teachers’ artificial intelligence insight in Turkish educational terrain,” in Generative Artificial Intelligence Applications: Holistic Reflections From The Educational Landscape, 2025.
3. J. J. G. Adil, “AI in Education: A Systematic Literature Review of Emerging Trends, Benefits, and Challenges,” Seminars in Medical Writing and Education, 2025.
4. O. Arranz-García, M. D. C. R. García, and V. Alonso-Secades, “Perceptions, Strategies, and Challenges of Teachers in the Integration of Artificial Intelligence in Primary Education: A Systematic Review,” Journal of Information Technology Education: Research, 2025.
5. M. Alwaqdani, “Investigating teachers’ perceptions of artificial intelligence tools in education: potential and difficulties,” Education and Information Technologies, 2025.
6. M. A. Alqaed, “AI in English Language Learning: Saudi Learners’ Perspectives and Usage,” Advanced
7. R. Taheri, N. Nazemi, S. E. Pennington, and F. Dadgostari, “Factors influencing educators’ AI adoption: A grounded meta-analysis review,” Computers and Education: Artificial Intelligence, 2025.
8. F. Kamoun, W. E. Ayeb, I. Jabri, and F. Iqbal, “Exploring Students’ and Faculty’s Knowledge, Attitudes, and Perceptions Towards ChatGPT: A Cross-Sectional Empirical Study,” Journal of Information Technology Education: Research, 2024.
9. D. A. Junio and A. A. Bandala, “Utilization of Artificial Intelligence in Academic Writing Class: L2 Learners Perspective,” in 2023 IEEE 15th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), 2023.
10. E. J. G. Eusebio, P. R. Baldera, A. M. C. Patiam, and A. L. Ribon, “AI in the Classroom: A Systematic Review of Barriers to Educator Acceptance,” International Journal of Learning, Teaching and Educational Research, 2025.
11. J. Velander, M. A. Taiye, N. Otero, and M. Milrad, “Artificial Intelligence in K–12 Education: eliciting and reflecting on Swedish teachers’ understanding of AI and its implications for teaching & learning,” Education and Information Technologies, 2024.
12. K. R. Srinivasan, N. H. A. Rahman, and S. D. Ravana, “Reskilling and upskilling future educators for the demands of artificial intelligence in the modern era of education,” in Pitfalls of AI Integration in Education: Skill Obsolescence, Misuse, and Bias, 2025.
13. L. Xia, “Artificial Intelligence Literacy Education: A Scoping Literature Review from 2020–2024,” in 2025 International Conference on Artificial Intelligence and Education (ICAIE 2025), 2025.
14. Y. Yang, Y. Zhang, D. Sun, and Y. Wei, “Navigating the landscape of AI literacy education: insights from a decade of research (2014–2024),” Humanities and Social Sciences Communications, 2025.
15. N. Bautista, J. Femiani, and D. Inclezan, “Understanding K–12 Teachers’ Needs for AI Education: A Survey-Based Study,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2025.
16. M. Vogt, V. Ferraioli, V. Abou-Khalil, and F. Vogt, “Teachers’ Perspectives on Using and Teaching Artificial Intelligence in Early Primary Education,” in Communications in Computer and Information Science.
17. J. Velander, N. Otero, F. Dobslaw, and M. Milrad, “Eliciting and Empowering Teachers’ AI Literacy: The Devil is in the Detail,” in Lecture Notes in Networks and Systems, 2024.
18. L. Giray, “Ten Myths About Artificial Intelligence in Education,” Higher Learning Research Communications, 2024.
19. S. Baranidharan, S. P. John, and C. Mohan, “AI-Driven Pedagogies and Learning Environments in Modern Education: A PRISMA-Based Systematic Review,” in Navigating Barriers to AI Implementation in the Classroom, 2025.
20. N. Wang, F. J. García-Peñalvo, and Á. F. Blanco, “A Bibliometric Analysis of AI Literacy and Educational Readiness Among University Students: A Study from 2020 to 2024,” in Lecture Notes in Educational Technology, 2025.
21. B. Gao, R. Liu, and J. Chu, “Exploring Trends of Acceptance of Artificial Intelligence in Education: A Systematic Literature Review,” in Lecture Notes in Computer Science, 2025.
22. S. M. Echols, “The Bot’s Got Your Back: Leveraging Generative AI to Boost Your Productivity,” Computers in Libraries, 2024.
23. M. A. Ayanwale, O. P. Adelana, N. B. Bamiro, and K. A. Adewale, “Large language models and GenAI in education: Insights from Nigerian in-service teachers through a hybrid ANN-PLS-SEM approach,” F1000Research, 2025.