International Journal of Research and Innovation in Social Science

Deliberating Artificial Intelligence (AI) Use in Teaching in Universities in China

Brian Bantugan, PhD1, Weiwei Chen2, Ye Luo2, Na Xu2, and Wenxuan Zheng2

1Faculty Member, St. Paul University Manila

2Graduate Student, St. Paul University Manila

DOI: https://dx.doi.org/10.47772/IJRISS.2025.909000659

Received: 12 October 2025; Accepted: 20 October 2025; Published: 25 October 2025

ABSTRACT

This study explored the diverse perspectives and lived experiences of Chinese university educators regarding the benefits, risks, and challenges of integrating artificial intelligence (AI) into their teaching practices. Guided by a constructivist and participatory paradigm, the research employed a qualitative case study design involving four Chinese university teachers currently pursuing graduate studies in the Philippines. Data were gathered through an open-ended questionnaire and analyzed thematically to identify key patterns in teachers’ conceptualizations, motivations, and reservations about AI use in education. Findings revealed that Chinese teachers generally perceive AI as a transformative global trend and a valuable functional assistant that enhances efficiency, innovation, and personalized learning. However, they also expressed caution, emphasizing potential risks such as overreliance, data inaccuracy, ethical dilemmas, and the erosion of human interaction and critical thinking. The study underscores the need for institutional policies, ethical guidelines, and sustained professional development programs to help teachers critically deliberate on AI adoption rather than passively comply with top-down policy directives. Ultimately, this research contributes to the discourse on educational modernization in China by highlighting that sustainable AI integration requires more than technological readiness—it demands culturally responsive training, equitable support systems, and frameworks that empower teachers as reflective agents of educational innovation.

Keywords: Artificial Intelligence (AI), Teacher Perception, Educational Technology, Pedagogical Beliefs, China

INTRODUCTION

In recent years, China has placed AI at the center of its education reform agenda. The Ministry of Education has proposed comprehensive plans to integrate AI technologies into curricula, teaching methods, and educational policy as part of its “strong-education nation” strategy to modernize and innovate across all levels of schooling (Reuters, 2025). Such reforms reflect both national ambition to enhance educational quality and international competitiveness, and recognition that AI could support personalized learning, efficiency, and new pedagogical possibilities.

Despite the increasing policy support, the actual experiences, understandings, and attitudes of teachers toward AI in classroom practice remain underexplored in many regions of China. Studies suggest that primary mathematics teachers’ attitudes, beliefs about usefulness and ease of use, and infrastructural or policy-related contextual factors strongly influence whether and how AI tools are adopted in teaching (Li & Noori, 2024). Similarly, research indicates that teacher perceptions, AI literacy, and pedagogical beliefs play a crucial role in sustainable integration of AI in mathematics education (Lin et al., 2025). At the same time, emerging scholarship highlights that teachers in Chinese universities and teacher education programs demonstrate varied levels of proficiency, concern about ethical issues, and uneven awareness of AI-TPACK, which influences the effectiveness of AI integration (Xie & Luo, 2025).

Furthermore, broader analyses of generative AI’s impact on student learning and teacher practice underscore both opportunities for enhanced performance and risks such as overreliance, academic dishonesty, and content reliability (Fan et al., 2025; Li, 2025). These findings reveal the complexity of adopting AI in education and the importance of investigating not only usage but also the reasons teachers choose to embrace or resist such technologies.

AI in Education

AI is transforming education globally, particularly in China, where it is seen as a tool for both pedagogy and modernization under the “Next Generation Artificial Intelligence Development Plan” (Zawacki-Richter et al., 2019). Despite national efforts to integrate AI in classrooms and teacher training, teachers’ perspectives are critical since their acceptance and pedagogical decisions shape its educational impact (Chen et al., 2020). While teachers appreciate AI’s potential to improve efficiency, creativity, and personalized learning, concerns persist over ethical risks, accuracy, and overreliance (Chen, 2025; Fan et al., 2025). Barriers such as insufficient training, unclear policies, and limited resources hinder AI’s integration (Mehdaoui et al., 2024). Research indicates that institutional support and clear policies are crucial to guide teachers in integrating AI responsibly (Reuters, 2025).

Teachers’ Understanding of AI

Teachers’ understanding of AI is vital for its adoption, yet many still view it primarily as automation rather than a tool for personalization and assessment (Holmes et al., 2021). In China, awareness of AI’s potential is uneven, with urban schools generally having better infrastructure than rural ones (Zhao et al., 2022). Misconceptions may limit AI’s educational use, making it essential for teachers to have a clear understanding of its capabilities and ethical considerations (Chen et al., 2020).

Teachers’ Use of AI in Teaching Practice

The adoption of AI tools in classrooms is increasing in China, especially with government support for digital transformation (Du et al., 2025). Teachers use AI for grading, feedback, and monitoring student progress, with younger teachers particularly benefiting from AI’s ability to enhance their professional identity (Yao et al., 2023). However, implementation is uneven, with resource-poor schools struggling to integrate AI effectively (Zhao et al., 2022). Barriers include limited training, skepticism about AI’s pedagogical value, and gaps in AI literacy (Zhao et al., 2022; Xu et al., 2024).

Motivations for Using AI in Teaching

Teachers are motivated to use AI for its benefits, including personalized learning, reduced administrative workload, and improved student outcomes (Li & Noori, 2024). AI can also foster student engagement and creativity, especially in fields like foreign language learning (Ma, 2024). However, teachers’ adoption of AI is influenced by institutional support, technical facilities, and peer influence (Zhao et al., 2025), with those more confident in AI usage more likely to embrace it (Xie & Luo, 2025).

Reasons for Not Using AI in Teaching

Despite its potential, teachers have concerns about AI’s impact on their professional role, including issues like data privacy, bias, and the risk of overreliance, which may hinder critical thinking and academic integrity (Zhao & Dai, 2021; Holmes et al., 2021). In China, rural-urban disparities and equity concerns further exacerbate these issues (Zhao et al., 2022). Teachers also worry about AI’s potential to replace human interaction and erode traditional pedagogical values (SohuAI, 2025).

Support Systems for Deliberating AI Adoption

Effective AI adoption requires robust support systems, including professional development that addresses both technical and ethical aspects of AI (Luckin et al., 2016). Training initiatives should focus not just on technical skills but on how to integrate AI in ways that align with pedagogical goals (Zhai et al., 2021). Additionally, policy support and access to resources are crucial for successful integration, with clear guidelines and peer collaboration networks fostering responsible use (Yu & Wang, 2020). When these support systems are in place, teachers are better equipped to navigate the benefits and risks of AI in education.

Synthesis and Gaps in the Literature

Artificial intelligence (AI) is increasingly reshaping education worldwide, offering opportunities for personalized learning, adaptive assessment, and intelligent classroom management (Luckin et al., 2016; Zawacki-Richter et al., 2019). In China, AI is framed not only as a pedagogical tool but also as a strategic driver of modernization, with national policies such as the Next Generation Artificial Intelligence Development Plan pushing for its integration into classrooms, platforms, and teacher training (Zawacki-Richter et al., 2019; Chen et al., 2020). Teachers, however, remain central actors in AI adoption, and their conceptualizations significantly influence whether and how AI is integrated (Holmes et al., 2021; Chen & Zhang, 2020). Research indicates that teachers appreciate AI’s benefits in efficiency, personalization, and creativity (Li & Noori, 2024; Ma, 2024; Zhou & Peng, 2025), yet they also express concerns over ethical risks, resource gaps, data privacy, and the erosion of humanistic dimensions of education (Chen, 2025; Zhao & Dai, 2021; Holmes et al., 2021; Zhou & Peng, 2023). While motivations for AI use align with frameworks such as the Technology Acceptance Model (Davis, 1989), barriers such as uneven infrastructure, limited professional development, and skepticism regarding AI’s pedagogical value beyond efficiency persist (Zhao et al., 2022; Mehdaoui et al., 2024; Xie & Luo, 2025).

Despite a growing body of scholarship, several research gaps remain. First, much existing research highlights policy directives and technological potential but gives less attention to teachers’ lived experiences and nuanced pedagogical deliberations, particularly in Chinese higher education contexts. Second, while studies note urban-rural disparities in AI access and application (Zhao et al., 2022; Zhang et al., 2021), there is limited exploration of how these inequalities shape teachers’ perceptions, practices, and long-term professional identity. Third, current teacher training initiatives often emphasize technical competence but neglect ethical, cultural, and pedagogical considerations (Zhai et al., 2021; Li, 2019), raising questions about whether teachers are adequately prepared to balance innovation with responsible use. Finally, while motivations for AI adoption are relatively well-documented (Li & Noori, 2024; Lariba & Ibojo, 2025; Ma, 2024), reasons for resistance—particularly concerns over academic integrity, data security, and the preservation of human interaction—require deeper, context-specific examination (Holmes et al., 2021; Chen et al., 2020; SohuAI, 2025). Addressing these gaps will provide valuable insights into how teachers conceptualize, deliberate, and integrate AI in ways that not only enhance learning outcomes but also sustain the professional, ethical, and humanistic dimensions of education.

Study Framework

Theoretical Framework. This study draws from established models of technology adoption and pedagogical integration. The Technology Acceptance Model (TAM) suggests that teachers’ perceptions of AI’s usefulness and ease of use influence their adoption intentions (Davis, 1989). The Diffusion of Innovations Theory (DOI) expands this by framing adoption as a social process shaped by factors like relative advantage and institutional support (Rogers, 2003). The Technological Pedagogical Content Knowledge (TPACK) framework emphasizes aligning technology with pedagogy and content for effective AI integration (Mishra & Koehler, 2006). The Unified Theory of Acceptance and Use of Technology (UTAUT2) highlights social factors and motivation in sustaining AI adoption (Venkatesh et al., 2012). Together, these models illustrate how individual perceptions and systemic dynamics shape teachers’ engagement with AI.

Conceptual Framework. The conceptual framework emphasizes the interconnectedness of teachers’ understanding of AI, policy context, and classroom practice. Conceptual clarity is crucial for effective AI integration, as misconceptions limit AI’s pedagogical potential (Holmes et al., 2021; Zhao et al., 2022). National policies like China’s AI Development Plan provide direction, but successful adoption depends on resources, professional development, and institutional support (Zawacki-Richter et al., 2019; Li & Noori, 2024). In the classroom, AI offers personalized learning and efficiency but raises concerns about ethics, overreliance, and digital divides (Luckin et al., 2016; Chen et al., 2020; Fan et al., 2025). Adoption is thus shaped by perceived benefits and risks, moderated by policy and institutional supports.

Operational Framework. The operational framework positions teachers as the key link between policy and practice. National initiatives promote AI integration, but their impact depends on teachers’ knowledge and capacities (Zawacki-Richter et al., 2019; Chen et al., 2020). Teachers’ understanding varies, with urban educators generally more informed than rural ones (Holmes et al., 2021; Zhao et al., 2022). Institutional support and professional development influence AI adoption, as teachers weigh AI’s benefits, such as efficiency, against risks such as dependency and digital inequities (Fan et al., 2025; Xie & Luo, 2025). This framework shows that AI integration is a socio-educational process influenced by teacher cognition, external support, and policy structures.

Teacher Training and Support. Teacher training is central to AI adoption, as it provides the skills and pedagogical understanding necessary for effective use (Wang & Li, 2018). External supports, such as policy guidance and resource provision, create the conditions for successful integration (He & Liu, 2017). Ethical considerations act as a feedback loop, ensuring responsible AI use in classrooms (Li, 2019). Thus, AI adoption decisions are shaped by the interplay of external supports and internal teacher factors, such as knowledge, attitudes, and ethical reflection (Chen & Zhang, 2020).

Figure 1. Operational Framework Model

Statement of the Problem

This study aims to fill several gaps. First, it seeks to clarify how teachers in China understand “artificial intelligence” or AI in educational settings, since conceptual clarity can shape attitudes, usage, and policy support. Second, it investigates in which situations and using which platforms teachers currently use AI to assist in their teaching. Third, given both the recognized potential and possible risks, the study examines the reasons why teachers believe they should use AI, as well as reasons why they believe they should not. Finally, recognizing that decisions about using technology are seldom purely individual, this research also explores what kinds of assistance (e.g., training, policy, resource support) teachers need in order to deliberate properly on whether to use or refrain from using AI in their teaching.

METHODOLOGY

This study adopted a constructivist/participatory paradigm, which emphasizes teachers’ subjective, socially constructed understandings of artificial intelligence (AI) and positions them as active contributors in articulating their needs for training, policy, and resources (Guba & Lincoln, 1994; Creswell & Poth, 2018; Mertens, 2015). A case study approach was used to provide an in-depth, context-specific exploration of how Chinese teachers understand, adopt, and deliberate on AI in education, integrating multiple sources of evidence and situating individual sense-making within broader institutional and cultural contexts (Yin, 2018; Stake, 1995; Merriam & Tisdell, 2016).

The study engaged four Chinese graduate students at St. Paul University Manila, all of whom are university teachers from institutions across China: Anhui University of Chinese Medicine (Anhui Province), Xianyang Normal University (Shaanxi Province), Zhengzhou University (Henan Province), and Xi’an Qi Che University (Shaanxi Province). Selected through convenience sampling as educators directly involved in teaching, the participants work in disciplines that include Chinese medicine, film, painting, career planning, and philosophy, offering perspectives that span medical, artistic, cultural, and career-oriented education. Although small, a sample of four is appropriate for the study’s exploratory focus on in-depth insights into teachers’ perceptions and experiences with AI in Chinese educational settings, prioritizing depth of account over breadth of coverage. While the sample size limits generalizability, it is sufficient for exploring the study’s core questions about AI understanding, usage patterns, and support needs, and the participants’ diverse institutional and disciplinary backgrounds support a broad view of AI’s impact across varied educational contexts.

Data were collected through an open-ended, five-item survey questionnaire administered via Google Forms, designed around the study’s guiding research questions and validated by an expert to ensure clarity and relevance. A qualitative research design using thematic analysis (Braun & Clarke, 2006) was employed to identify patterns in participants’ narratives regarding their conceptualizations, practices, motivations, and concerns with AI. To deepen insights, thematic synthesis was used to integrate findings with broader contextual influences such as policy, training, and resources (Thomas & Harden, 2008; Chen & Zhang, 2020). This combination enabled the study to connect micro-level perspectives with macro-level systemic factors, ensuring both theoretical and practical relevance.
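
As an illustration of how such an analysis can be organized, the sketch below shows one possible way to tabulate coded open-ended responses by theme. It is a minimal, hypothetical example: the participant labels, excerpts, codes, and theme names are illustrative placeholders rather than the study’s actual codebook or data, and a spreadsheet or dedicated qualitative analysis software could serve the same purpose.

```python
# Minimal sketch of organizing coded open-ended survey responses into themes
# for a Braun and Clarke (2006) style thematic analysis.
# All participants, excerpts, codes, and themes below are hypothetical placeholders.
import pandas as pd

# Each record pairs a participant's response excerpt with an analyst-assigned code.
coded_responses = [
    {"participant": "T1", "question": "Q1", "code": "global trend",
     "excerpt": "AI is the advancement of the times..."},
    {"participant": "T2", "question": "Q1", "code": "functional assistant",
     "excerpt": "A technology that simulates human intelligent behavior..."},
    {"participant": "T3", "question": "Q1", "code": "overreliance risk",
     "excerpt": "Useful and time-saving, but we may depend on it too much..."},
]

# During synthesis, related codes are grouped into candidate themes.
code_to_theme = {
    "global trend": "AI as a transformative and global trend",
    "functional assistant": "AI as a functional assistant",
    "overreliance risk": "AI as beneficial yet risky",
}

df = pd.DataFrame(coded_responses)
df["theme"] = df["code"].map(code_to_theme)

# Count how many distinct participants contribute to each theme per question,
# a quick check that themes are grounded in more than a single account.
theme_counts = (
    df.groupby(["question", "theme"])["participant"]
    .nunique()
    .reset_index(name="participants")
)
print(theme_counts)
```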

Ethical considerations included informed consent, confidentiality through anonymization, and sensitivity to China’s educational context. The study emphasized equity by representing participants from varied provinces and disciplines, while also reflecting on risks such as bias, surveillance, and diminished autonomy, consistent with calls for contextually rooted AI ethics frameworks in education (Ren & Ye, 2022; Wang & Huang, 2025).

RESULTS

How do the teachers understand AI?

Theme 1: AI as a Transformative and Global Trend. Teachers view AI as a powerful and inevitable force shaping education and society. One participant described it as “a transformative technology of the 21st century, developing at an astonishing pace and being widely applied across various fields”. Another emphasized its global dimension, stating, “Artificial intelligence is the advancement of the times and the trend of globalization. We should master and apply it for ourselves and our students.” This perspective highlights teachers’ awareness of AI’s broad societal role and the need to align education with global technological shifts.

Theme 2: AI as a Functional Assistant. Teachers also understand AI in terms of its practical utility for human work. One response defined it as “a technology that simulates human intelligent behavior, can assist humans in completing complex tasks, and allows machines to think and act like humans.” This reflects a functionalist view where AI is seen as a tool that enhances efficiency, reduces workload, and extends human capacity.

Theme 3: AI as Beneficial Yet Risky. Teachers express both appreciation for and caution about AI. One participant remarked, “Very great, useful, time-saving, but it will make us depend on it too much.” This highlights a dual perspective: while AI offers clear advantages in efficiency and effectiveness, it also carries risks of over-reliance and diminished human independence.

Interrelations Among Themes. These themes are interconnected in shaping how teachers understand AI. The perception of AI as a transformative and global trend (Theme 1) creates urgency for adoption in education, reinforcing the view of AI as a functional assistant (Theme 2) that can practically enhance teaching and learning. However, this optimism is tempered by concerns about dependence and risks (Theme 3), which introduce ethical and pedagogical caution into teachers’ understanding. Together, these themes reveal a balanced perspective: teachers see AI as inevitable and beneficial, but not without challenges, requiring both adoption and critical reflection in educational contexts.

In what instances and what platforms have teachers used artificial intelligence to assist in their teaching?

Theme 1: AI for Resource Generation and Lesson Preparation.  A recurring theme is the use of AI to generate, organize, and enrich teaching resources. One teacher explained, “When I prepare lessons, AI technology can quickly provide teaching resources of varying difficulty levels based on my teaching objectives, enriching my teaching content.” Another noted using AI to “create teaching materials: generating case studies, discussion questions, and examples.” These responses show that AI is primarily valued for reducing preparation time and expanding the variety of instructional content available. This theme links closely to efficiency and personalization, as the ability to produce diverse materials supports differentiated instruction and more tailored learning experiences.

Theme 2: AI in Lesson Design and Implementation. Teachers emphasized integrating AI into broader teaching processes, not only for preparation but also for structuring lessons and activities. One respondent highlighted, “AI is integrated into the teaching design and implementation process, focusing on students and emphasizing active exploration and personalized learning.” Another reinforced this by explaining that AI aids “lesson planning: brainstorming activity ideas and structuring course content.” Here, AI is seen as a collaborative tool that complements teachers’ pedagogical expertise. This theme connects to resource generation, as both focus on preparation, but extends further into classroom delivery, indicating AI’s role in shaping pedagogy rather than serving as a mere content provider.

Theme 3: AI for Student-Centered Learning and Feedback. Teachers identified AI as a means of supporting learner autonomy and personalized feedback. For instance, one teacher shared that “through intelligent learning systems, students can independently select learning content, control their progress, and adjust their learning strategies based on system feedback.” Another said they use AI for “providing student feedback: getting initial suggestions for improving essay clarity and structure.” These instances highlight AI’s contribution to fostering self-regulated learning while assisting teachers in giving timely, formative feedback. This theme deepens the previous two by showing how AI not only supports teachers but also directly enhances students’ agency and learning outcomes, creating a cyclical relationship where AI informs both teaching strategies and learner growth.

Theme 4: Platforms of Choice and Language-Specific Strengths. Respondents reported using a range of AI platforms: “The types of AI I have used include: Baidu AI, DeePSeek, Kimi, Doubao, ChatGPT,” with some specifying particular functions such as “DeepSeek: for its strong capabilities in Chinese and English; ChatGPT: for brainstorming and generating diverse content ideas; iFlytek Spark: for its excellent Chinese language processing.” These platform choices demonstrate that teachers strategically adopt tools based on linguistic strengths, usability, and alignment with their instructional needs. Platform choice underpins all the previous themes, as the functions of each AI system determine the ways teachers apply them in resource creation, lesson design, and student-centered learning. It also shows how teachers exercise agency in matching platform strengths to their teaching contexts.

Interrelations of Themes. Taken together, the themes reveal that teachers primarily use AI to streamline lesson preparation, enrich classroom implementation, and provide personalized student support, while platform selection is guided by task-specific strengths and language processing capabilities. The interrelations highlight a coherent cycle: AI supports teachers in preparation and design, enhances delivery through diverse strategies, empowers students via personalized learning, and is mediated by careful platform selection. This demonstrates that AI adoption in teaching is not limited to efficiency gains but extends into shaping pedagogy, promoting learner autonomy, and balancing innovation with teacher oversight. 

Why should teachers use AI in their teaching?

Theme 1: Efficiency and Time-Saving. Several teachers emphasized AI’s capacity to optimize teaching by automating routine tasks and saving time. One response noted that “AI can take on some knowledge transfer and classroom management tasks, allowing teachers to focus more on cultivating higher-level skills.” Another teacher highlighted that the technology is “amazing which can save my time and energy in looking for some specific info and teaching material.” This theme underscores AI’s role in reducing workload, allowing teachers to redirect their attention to more meaningful interactions with students. Efficiency acts as a foundation for the other themes, since time saved through automation enables teachers to invest more in innovation, personalization, and professional growth.

Theme 2: Innovation and Authentic Learning. Teachers valued AI for its role in stimulating creative pedagogy and student engagement. One teacher reflected that “introducing AI tools into the classroom allows students to engage with authentic and innovative problem-solving methods,” while another explained that AI supports “generating fresh ideas and diverse examples, making my classes more engaging and creative.” This demonstrates AI’s ability to enrich lessons with variety and relevance. Innovation builds on efficiency: when teachers are freed from repetitive tasks, they can experiment with creative teaching methods that enhance student-centered learning.

Theme 3: Personalization of Learning. Another strong theme is AI’s support for differentiated instruction. One teacher described how AI helps provide “additional, tailored support and resources to meet different student needs.” Intelligent systems can adapt to learners’ levels and provide timely feedback, promoting autonomy and deeper engagement. Personalization is closely linked to both efficiency and innovation—teachers save time and energy, which allows them to invest more effort in designing individualized learning pathways and using innovative methods that cater to diverse student profiles.

Theme 4: Teacher Development and Evolving Roles. Teachers also recognized AI’s impact on their professional growth and identity. One noted that “using AI in teaching forces me to continuously learn and update my knowledge and explore new teaching methods. This is not only a responsibility to my students, but also promotes my own professional development.” Similarly, AI is described as redefining teacher roles by shifting focus from transmitting knowledge to cultivating critical thinking, values, and emotional support. This theme integrates the previous three: efficiency reduces workload, innovation enriches pedagogy, and personalization enhances student outcomes—all of which push teachers to redefine their roles and continuously develop professionally.

Interrelations of Themes. The analysis reveals that teachers see AI as valuable in teaching primarily because it saves time, fosters innovation, enables personalization, and supports professional growth. These themes are interdependent: efficiency provides the space for innovation and personalization, while these, in turn, encourage teachers to evolve and embrace continuous learning. Ultimately, teachers view AI not as a replacement but as a powerful assistant that enhances their effectiveness and redefines their responsibilities in the classroom.

Why should teachers not use artificial intelligence in their teaching?

Theme 1: Risk of Inaccuracy. Teachers expressed concern that AI-generated content is “not necessarily accurate” and could introduce errors into teaching and learning if not carefully verified. This aligns with research showing that large language models may produce “plausible but incorrect or biased information,” which can mislead both instructors and students (Chen et al., 2020; Zhou & Peng, 2023). The emphasis on accuracy highlights that AI, while powerful, requires human oversight to ensure reliability and pedagogical soundness.

Theme 2: Undermines Critical Thinking.  Another prominent theme was the potential for AI to reduce students’ engagement in analytical reasoning and problem-solving. One teacher noted that AI “makes people think about questions less and less,” indicating a fear that over-reliance on AI might weaken higher-order cognitive skills. This concern reflects the Technology Acceptance Model (TAM) and AI-in-education literature, which warn that uncritical use of AI may foster academic passivity and hinder the development of independent learning and reflective thinking (Holmes et al., 2021; SohuAI, 2025).

Theme 3: Lacks the Human Element. Teachers emphasized that AI cannot replicate the mentorship, empathy, and nuanced judgment that human educators provide. As one respondent explained, AI cannot substitute for “the essential mentorship, empathy, and nuanced understanding” in the classroom. This theme resonates with the TPACK framework, which stresses that meaningful technology integration requires alignment of technological, pedagogical, and content knowledge, including the interpersonal and ethical dimensions of teaching (Luckin et al., 2016; Zhao & Dai, 2021).

Theme 4: Ethical and Privacy Concerns. Ethical issues and privacy risks emerged as a significant concern, encompassing academic integrity, data protection, and fairness in AI-driven assessments. Teachers highlighted the possibility of plagiarism and biased outputs, reinforcing the need for transparent, responsible, and ethically grounded AI use (Mehdaoui et al., 2024; Xie & Luo, 2025). These concerns also intersect with institutional policies and social influence factors from the UTAUT2 model, which underline the importance of creating supportive and ethically guided environments for technology adoption.

Interrelations of Themes. The four themes—risk of inaccuracy, undermining critical thinking, lack of human element, and ethical and privacy concerns—are closely interrelated. The risks of inaccurate information and ethical violations underscore the necessity of human oversight, while the potential erosion of critical thinking and absence of human interaction highlight that AI should complement, not replace, teachers. Collectively, these findings suggest that teachers’ cautious stance reflects a desire to balance technological benefits with pedagogical integrity, ethical responsibility, and the cultivation of students’ independent thinking.

What kind of assistance do Chinese teachers need to deliberate properly on the use or non-use of artificial intelligence in their teaching?

Theme 1: Practical Pedagogy and Alignment with Teaching Goals. Teachers emphasized the need for clear guidance on integrating AI effectively within their disciplinary context. One participant noted, “I should make sure whether AI is aligned to the teaching goal, I can’t use it merely for the sake of it,” while another stressed the importance of exploring “how to integrate AI technology into teaching practice, while guiding students to adapt to technological changes.” This theme reflects the necessity of pedagogical scaffolding to ensure AI supports learning objectives and enhances student engagement rather than serving as a superficial add-on (Mishra & Koehler, 2006; Luckin et al., 2016).

Theme 2: Institutional Guidelines and Policy Support.  Participants highlighted the importance of clear institutional policies and regulatory guidance. As one teacher explained, they needed “policy guidance from the school or education department to clarify the boundaries of use,” and emphasized the role of secure and compliant platforms. This theme indicates that ethical, legal, and administrative frameworks are crucial for teachers to feel confident in AI adoption, aligning with literature on ethical AI implementation and facilitating conditions in UTAUT2 (Venkatesh et al., 2012; Fan et al., 2025).

Theme 3: Technical Training and Skill Development. Hands-on experience and professional development were identified as essential for effective AI integration. One respondent suggested “relevant training and case studies to improve my ability to apply AI effectively,” while another recommended “workshops that focus on developing the skills to critically evaluate, refine, and leverage AI outputs for teaching.” This theme connects to the TPACK framework, underscoring the need for teachers to develop not only technological competence but also the ability to align technology with pedagogy and content knowledge (Mishra & Koehler, 2006).

Interrelations of Themes. These themes are interdependent: technical training equips teachers with the skills to implement AI, but effective application also requires understanding pedagogical goals and aligning tools accordingly. Institutional guidelines provide the ethical and regulatory scaffolding that ensures safe and responsible use, which in turn reinforces confidence in pedagogical and technical practices. Collectively, these forms of support empower teachers to deliberate carefully on AI adoption, balancing innovation, ethical considerations, and instructional effectiveness (Chen & Zhang, 2020; Holmes et al., 2021).

DISCUSSION

Teachers’ understanding of artificial intelligence (AI) is shaped by three core themes: AI as a transformative technology, AI as a functional tool, and AI as both beneficial and risky. Teachers recognize AI as a transformative force in education that must be mastered to remain relevant, which aligns with policy directions that promote AI as a key element in modernizing education (Zawacki-Richter et al., 2019; Holmes et al., 2021). However, they also perceive AI as a tool that enhances teaching tasks by simulating human intelligence, reflecting the importance of perceived usefulness in AI adoption (Davis, 1989). At the same time, teachers express caution about over-reliance on AI, highlighting ethical concerns such as diminished autonomy, potential biases, and the erosion of the teacher’s central role in guiding learning, issues that resonate with broader pedagogical ethics (Luckin et al., 2016; Zhao & Dai, 2021). This tension between optimism and caution shows how teachers balance AI’s potential benefits with the risks outlined in policy frameworks, mirroring Rogers’ (2003) theory of weighing the advantages and drawbacks of innovation in adoption decisions.

The findings reveal how teachers engage with AI through four main uses: resource generation, lesson design, student-centered learning, and platform-specific adoption. These practices align closely with policy goals that advocate for AI’s integration into educational practice, supporting efficiency and personalized learning (Zawacki-Richter et al., 2019). Teachers value AI for its usefulness in lesson preparation and design, reinforcing the Technology Acceptance Model’s emphasis on technology’s perceived ease of use and utility (Davis, 1989). Moreover, teachers emphasize AI’s potential to foster personalized and adaptive learning, a central component of student-centered pedagogy that supports educational equity, as envisioned in national policy initiatives (Luckin et al., 2016; Holmes et al., 2021). The teachers’ critical evaluation of AI tools, based on their compatibility with teaching tasks, echoes Rogers’ (2003) concept of innovation adoption, which suggests that teachers’ engagement with AI is mediated by their own professional judgment and contextual factors. This interplay between policy, pedagogy, and practice illustrates the need for AI tools that not only align with teachers’ pedagogical objectives but also respect the local educational context, including digital divides and infrastructure constraints.

Teachers’ motivations for adopting AI can be understood through four interconnected themes: efficiency, innovation, personalization, and professional development. AI’s ability to automate routine tasks such as lesson preparation allows teachers to devote more time to higher-order teaching tasks, such as fostering critical thinking and providing emotional support. This aligns with the Technology Acceptance Model’s focus on perceived usefulness (Davis, 1989) and reflects how AI adoption can align with policy goals aimed at improving teacher productivity and student engagement. Innovation is another central motivation, as teachers report that AI tools enhance creativity in lesson planning and problem-solving, which is consistent with the TPACK framework that emphasizes the importance of integrating technology to foster innovative pedagogies (Mishra & Koehler, 2006). Additionally, teachers highlighted the personalized learning benefits of AI, which resonate with national educational goals of providing adaptive, student-centered learning environments. Lastly, teachers see AI as a tool for professional development, reshaping their role from knowledge transmitters to facilitators of higher-order thinking, a shift in teacher identity aligned with broader pedagogical goals that emphasize reflective teaching practices (Luckin et al., 2016). These interconnected motivations reveal how teachers’ use of AI is not merely a response to technological innovation but also a transformation of their professional identity and practice within the broader educational framework.

The analysis of teachers’ concerns about AI reveals the complex interaction of technological, pedagogical, and ethical considerations. Teachers expressed concerns about AI’s potential for inaccuracy, highlighting the importance of critical evaluation and responsible decision-making in the classroom (Chen et al., 2020; Zhou & Peng, 2023). This reflects the ethical responsibilities teachers have in ensuring the integrity of educational practices and student learning outcomes. Furthermore, concerns about AI undermining critical thinking echo pedagogical ethics that stress the importance of nurturing students’ analytical skills and problem-solving abilities, rather than fostering passive learning (Rogers, 2003; Holmes et al., 2021). Teachers also worried about the lack of human interaction in AI-based teaching, pointing to the irreplaceable value of mentorship, empathy, and the nuanced judgment required in the classroom (Mishra & Koehler, 2006; Luckin et al., 2016). These ethical concerns are not merely about technophobia but are rooted in teachers’ professional identity, emphasizing the need for responsible AI use that enhances, rather than replaces, the human elements of teaching. The intersection of these concerns with policy guidelines underscores the necessity for a balanced approach to AI adoption, one that ensures AI serves as a tool to support, not supplant, teachers’ ethical responsibilities.

Teachers’ needs for AI adoption can be categorized into three themes: practical pedagogy, institutional guidelines, and technical training. Teachers emphasized the importance of aligning AI with instructional goals and student learning outcomes, a core tenet of the TPACK framework, which emphasizes the integration of technology with pedagogy and content knowledge (Mishra & Koehler, 2006; Luckin et al., 2016). Institutional support, including clear policies and secure platforms, was viewed as essential for responsible AI use, in line with the UTAUT2 framework’s focus on facilitating conditions and social influence (Venkatesh et al., 2012; Fan et al., 2025). Technical training was also identified as critical to ensuring that teachers can effectively evaluate and apply AI in the classroom. These needs highlight the importance of aligning policy directions with pedagogical practices, ensuring that teachers have the resources, training, and ethical guidance necessary to adopt AI responsibly. The interplay between external supports, such as institutional policies, and internal teacher factors, such as professional development and ethical reflection, suggests that successful AI adoption requires a holistic approach that addresses both the technical and pedagogical aspects of teaching. This dynamic reflects the evolving role of teachers as facilitators in the AI era, guided by both technological competence and pedagogical ethics (Davis, 1989; Chen & Zhang, 2020; Holmes et al., 2021).

CONCLUSION

This study contributes to global discussions on responsible AI integration by providing unique insights into the Chinese context, where AI adoption is influenced by a convergence of rapid technological advancements, policy-driven initiatives, and cultural pedagogical norms. While AI is recognized globally as a transformative tool for education, Chinese teachers view it not only as a global trend but also as a functional assistant capable of enhancing efficiency, innovation, and personalization, though they remain mindful of its risks, including inaccuracy and ethical concerns (Davis, 1989; Mishra & Koehler, 2006; Luckin et al., 2016). Unlike Western contexts, where AI adoption often arises from individual teacher initiatives or institutional pilot programs (Holmes et al., 2019; Zhao & Dai, 2021), AI decisions in China are heavily shaped by national policies, such as the New Generation Artificial Intelligence Development Plan, which guide access to resources, professional development, and ethical standards (Zawacki-Richter et al., 2019; Li & Noori, 2024). This study highlights how the Chinese approach to AI adoption is a socio-educational negotiation, where national priorities, institutional mandates, and teacher agency intersect, offering critical insights into the complexities of AI integration in a policy-driven, context-specific setting (Chen & Zhang, 2020; Holmes et al., 2021; Xie & Luo, 2025).

To implement ethical and sustainable AI integration strategies in the Chinese educational context, universities and policymakers must take several key actions that align with both national priorities and the pedagogical needs of educators. The convergence of rapid technological advancement, policy-driven initiatives, and culturally specific norms requires a strategic, multi-faceted approach to AI adoption.

Recommendations for Universities

Promote Ethical AI Training for Educators. Universities should provide comprehensive professional development programs that not only focus on the technical aspects of AI but also emphasize its ethical implications. This includes educating teachers about potential risks such as bias, data privacy concerns, and the importance of maintaining critical thinking in the classroom. Programs should integrate ethical decision-making frameworks, ensuring that educators can navigate the challenges of AI responsibly (Holmes et al., 2021; Xie & Luo, 2025).

Align AI Integration with Pedagogical Goals. Following the TPACK framework, universities should ensure that AI tools are selected and integrated in ways that align with pedagogical goals. This means that AI should not be used as a standalone tool but should complement the teacher’s content knowledge and teaching practices. Faculty should be trained to leverage AI for personalized learning, efficient lesson planning, and student engagement, while maintaining the human touch essential to quality education (Mishra & Koehler, 2006; Luckin et al., 2016).

Foster a Collaborative Environment for AI Adoption. Universities should encourage collaborative platforms where teachers can share their experiences and best practices in integrating AI. This aligns with the UTAUT2 model, which emphasizes the importance of social influence and facilitating conditions. Creating communities of practice where educators can exchange knowledge about AI’s application in education fosters a culture of shared learning and collective responsibility (Venkatesh et al., 2012).

Ensure Access to AI Resources. In light of the uneven distribution of AI resources across urban and rural areas, universities should advocate for equitable access to AI technologies. This includes providing all educators with the necessary tools, training, and infrastructure to integrate AI effectively into their classrooms, regardless of their geographical location (Zhao & Dai, 2021).

Recommendations for Policymakers

Develop Clear AI Policies with Ethical Guidelines. Policymakers must craft comprehensive AI policies that provide clear ethical guidelines for AI use in education. These guidelines should address issues such as data privacy, algorithmic bias, and the need for transparency in AI systems. Ethical standards must be integrated into both the design and implementation phases of AI adoption, ensuring that AI tools are used responsibly and do not undermine the fundamental values of education (Chen & Zhang, 2020; Zhou & Peng, 2023).

Encourage Cross-Sector Collaboration. To ensure the effective and ethical integration of AI, policymakers should facilitate collaboration between government bodies, educational institutions, technology developers, and the private sector. This collaboration should focus on developing AI tools that are not only pedagogically effective but also culturally sensitive and aligned with the Chinese educational context (Li & Noori, 2024; Zawacki-Richter et al., 2019).

Support Continuous Professional Development. Policymakers should prioritize funding for ongoing professional development programs that equip teachers with the skills to use AI ethically and effectively. This includes addressing the digital divide and ensuring that teachers have the support needed to integrate AI into their classrooms in a way that enhances student learning outcomes while safeguarding their well-being (Venkatesh et al., 2012).

Create a Regulatory Framework for AI in Education. Policymakers must establish a regulatory framework that ensures AI tools are implemented in a way that prioritizes educational equity, fairness, and accountability. This framework should regulate the use of student data, provide clear protocols for ethical AI application, and ensure that AI’s role in education is constantly evaluated to avoid the risks of over-reliance on technology (Zhao et al., 2022; Xie & Luo, 2025).

By addressing these areas, universities and policymakers can ensure that AI integration in education is ethical, sustainable, and supportive of both pedagogical goals and professional teacher identities. These strategies will help create an educational environment in which AI acts as an empowering assistant, supporting teachers while respecting the human elements of teaching and learning.

Methodological Recommendations

Given the small sample size of this study, future research on AI adoption in education should aim to include a larger, more diverse sample to improve generalizability. A longitudinal study would provide deeper insights into how teachers’ perceptions and use of AI evolve over time, capturing the dynamic nature of AI integration. Additionally, employing a mixed-methods approach that combines qualitative interviews or focus groups with quantitative surveys could offer a more comprehensive understanding by quantifying trends across a broader population while maintaining in-depth exploration of individual experiences. These methodological enhancements would provide a more holistic view of the factors influencing AI adoption in education.

REFERENCES

  1. arXiv. (2025). Student perspectives on the benefits and risks of AI in education. arXiv. https://arxiv.org/abs/2505.02198
  2. Baidu. (2023). Baidu AI Cloud. https://cloud.baidu.com
  3. ByteDance. (2023). Doubao AI assistant. https://www.bytedance.com
  4. Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
  5. Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264–75278. https://doi.org/10.1109/ACCESS.2020.2988510
  6. Chen, L., & Zhang, X. (2020). Teacher training for AI integration in education: A Chinese perspective. Journal of Educational Technology Development, 35(2), 45–58. http://www.journals.edu.cn
  7. Chen, M., Huang, R., & Wang, Y. (2020). Teachers’ perceptions and practices of artificial intelligence in education: A case study in China. International Journal of Educational Technology in Higher Education, 17(1), 1–17. https://doi.org/10.1186/s41239-020-00203-2
  8. Chen, Q. (2025). Students’ perceptions of AI-powered feedback in English writing: Benefits and challenges in higher education. International Journal of Changes in Education. https://ojs.bonviewpress.com/index.php/IJCE/article/view/5580
  9. Chen, X., Zhang, X., & Zhang, Y. (2020). The role of artificial intelligence in education: Current progress and future prospects. Journal of Educational Technology Development and Exchange, 13(2), 65-78. https://doi.org/10.1002/j.2388-2067.2020.00138.x
  10. Chen, X., & Zhang, Y. (2020). Teachers’ perceptions and pedagogical practices in the AI-enhanced classroom: A critical review. Educational Technology & Society, 23(1), 35-47. https://www.jstor.org/stable/26851591
  11. Creswell, J. W., & Poth, C. N. (2018). Qualitative inquiry and research design: Choosing among five approaches (4th ed.). SAGE Publications.
  12. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
  13. DeepSeek. (2025). DeepSeek large language model. https://www.deepseek.com
  14. Du, W., Li, X., Zhang, Y., & Chen, H. (2025). Factors influencing AI adoption by Chinese mathematics teachers. Scientific Reports, 15(1), 6476. https://doi.org/10.1038/s41598-025-06476-x
  15. Fairclough, N. (2003). Analyzing discourse: Textual analysis for social research. Routledge. https://doi.org/10.4324/9780203697078
  16. Fairclough, N. (2013). Critical discourse analysis: The critical study of language. Routledge.
  17. Fan, L., Deng, K., & Liu, F. (2025). Educational impacts of generative artificial intelligence on learning and performance of engineering students in China. arXiv. https://arxiv.org/abs/2505.09208
  18. Fan, X., Guo, Y., & Wang, H. (2025). Generative AI in education: Opportunities and risks. Journal of Educational Technology Development and Exchange, 18(1), 45–59.
  19. Fan, X., Zhang, Y., & Li, H. (2025). MOE issues guidance on how to teach AI in primary and secondary schools. Ministry of Education of the People’s Republic of China. https://en.moe.gov.cn/news/press_releases/202412/t202412101166454.html
  20. Fan, Y., Liu, Y., & Luo, X. (2025). Challenges in adopting AI in education: Perspectives from teachers in China. Educational Research and Development, 33(1), 112-125. https://doi.org/10.1080/00101857.2025.1233456
  21. Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 105–117). SAGE Publications.
  22. He, S., & Liu, F. (2017). The role of resource support in AI adoption in Chinese schools. Chinese Journal of Educational Technology, 33(5), 72–81. http://www.cjet.com.cn
  23. Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.
  24. Holmes, W., Bialik, M., & Fadel, C. (2021). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign. https://curriculumredesign.org
  25. Huang, X., Zhang, Y., Li, M., & Zhao, W. (2025). Perceptions of AI in higher education: Insights from students at a top-tier Chinese university. Education Sciences, 15(6), 735. https://www.mdpi.com/2227-7102/15/6/735
  26. iFLYTEK. (2023). iFLYTEK Spark cognitive large model. https://www.iflytek.com/en
  27. Lan, Y., Wang, P., & Chen, J. (2024). Through tensions to identity-based motivations: Exploring teachers’ professional identity and motivations for AI integration. Computers & Education, 210, 104739. https://doi.org/10.1016/j.compedu.2024.104739
  28. Lariba, C. F. V., & Ibojo, D. T. M. (2025). Teachers’ attitudes towards the use of AI: A study of benefits, concerns, and support needs. International Journal of Research and Innovation in Social Science, 9(3), 5871–5876. https://doi.org/10.47772/IJRISS.2025.9302
  29. Li, H. (2019). Ethical considerations in the application of AI in Chinese education. Asian Journal of Education and Ethics, 11(1), 23–38.
  30. Li, J. (2019). Ethical implications of AI in the classroom: A review of Chinese perspectives. Technology in Education, 44(1), 23–35. http://www.teched.cn
  31. Li, M., & Noori, A. Q. (2024). Exploring the nexus of attitude, contextual factors, and AI utilization intentions: A PLS-SEM analysis among primary mathematics teachers in China. Asian Journal for Mathematics Education, 3(3), 289–311. https://doi.org/10.1177/27527241241234567
  32. Li, M., & Noori, A. Q. (2024). Exploring the nexus of attitude, contextual factors, and AI utilization intentions: A PLS-SEM analysis among primary mathematics teachers in China. Journal of Educational Research and Practice, 14(2), 55–72. https://doi.org/10.1177/27527263241269060
  33. Li, X., & Noori, A. (2024). Factors influencing primary school teachers’ adoption of AI tools in mathematics teaching. Computers & Education, 198, 104709. https://doi.org/10.1016/j.compedu.2023.104709
  34. Li, X. (2025). Artificial intelligence in teacher education: Examining critical thinking and creativity through AI usage. Forum for Education Studies, 3(2), Article 2727. https://ojs.acad-pub.com/index.php/FES/article/view/2727
  35. Li, Z., & Noori, S. (2024). Artificial intelligence in education: A policy-driven approach in China. Educational Policy Review, 56(2), 108-123. https://doi.org/10.1080/00131881.2024.1234567
  36. Lin, J., Zhang, Y., & Hu, W. (2025). Teachers’ AI literacy and sustainable integration of AI in mathematics education. International Journal of Artificial Intelligence in Education. Advance online publication. https://doi.org/10.1007/s40593-025-00345-8
  37. Lin, T., Zhang, J., & Xiong, B. (2025). Effects of technology perceptions, teacher beliefs, and AI literacy on AI technology adoption in sustainable mathematics education. Sustainability, 17(8), 3698. https://doi.org/10.3390/su17083698
  38. Liu, Q., Jiang, M., Wang, Y., & He, L. (2025). AI literacy in shadow education: Exploring Chinese EFL practitioners’ perceptions and experiences. Journal of China Computer-Assisted Language Learning.
  39. Liu-Yun, Z., & Wang, Y. (2024). Primary school teachers’ perceptions of using generative artificial intelligence in lesson planning in Liaoning, China. Journal of Buddhist Education & Research. https://so06.tci-thaijo.org/index.php/jber/article/view/283871
  40. Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson. https://www.pearson.com/news-and-research/working-futures/artificial-intelligence/intelligence-unleashed.html
  41. Ma, S. (2024). How AI tools can enhance children’s intrinsic motivation for foreign language learning. Advances in Engineering Innovation, 10, 49–53. https://doi.org/10.18063/aei.v10i1.2345
  42. Mertens, D. M. (2015). Research and evaluation in education and psychology: Integrating diversity with quantitative, qualitative, and mixed methods (4th ed.). SAGE Publications.
  43. Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054. https://doi.org/10.1111/j.1467-9620.2006.00684.x
  44. Mehdaoui, A., Belhiah, H., & Li, J. (2024). Unveiling barriers and challenges of AI technology integration in education: Assessing teachers’ perceptions, readiness and anticipated resistance. Futurity Education, 4(2), 101–120. https://futurity-education.com/index.php/fed/article/view/370
  45. Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054. https://doi.org/10.1111/j.1467-9620.2006.00684.x
  46. Moonshot AI. (2023). Kimi chatbot. https://kimi.moonshot.cn
  47. Iqbal, J., Asgarova, V., Hashmi, Z. F., Ngajie, B. N., Asghar, M. Z., & Järvenoja, H. (2025). Exploring faculty experiences with generative artificial intelligence tools integration in second language curricula in Chinese higher education. Education and Information Technologies, 30(5), 7143–7165. https://doi.org/10.1007/s10639-024-12989-1
  48. OpenAI. (2023). ChatGPT. https://openai.com/chatgpt
  49. PlagiarismSearch.com. (n.d.). Some reasons not to trust AI in the classroom. Retrieved October 1, 2025, from https://plagiarismsearch.com/blog/reasons-to-be-skeptical-of-ai-in-teaching-environments
  50. Reuters. (2025, April 17). China to rely on artificial intelligence in education reform bid. Reuters. https://www.reuters.com/world/asia-pacific/china-rely-artificial-intelligence-education-reform-bid-2025-04-17/
  51. Reuters. (2025, March 15). China’s Ministry of Education outlines AI education strategy. Reuters. https://www.reuters.com/
  52. Richardson, J., Hollis, E., & Kinsella, B. (2022). The impact of artificial intelligence in education: Benefits, challenges, and implications. TechTrends, 66(4), 481–492. https://doi.org/10.1007/s11528-022-00715-y
  53. Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press. https://doi.org/10.4324/9780203710753
  54. SohuAI. (2025). Dependence: How college students find a balance between technology and originality. https://www.sohu.com/a/845188283_121924584
  55. Stake, R. E. (1995). The art of case study research. SAGE Publications.
  56. Thomas, J., & Harden, A. (2008). Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology, 8(1), 45. https://doi.org/10.1186/1471-2288-8-45
  57. Understanding teachers’ adoption of AI technologies: An empirical study from Chinese middle schools. (2025). Systems, 13(4), 302. https://doi.org/10.3390/systems13040302
  58. Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204. https://doi.org/10.1287/mnsc.46.2.186.11926
  59. Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178.
  60. Wang, M., & Li, W. (2018). Building teachers’ competence for integrating AI into teaching. Chinese Educational Research Journal, 37(6), 104–113. http://www.cerj.com.cn
  61. Wang, Q., & Li, Z. (2018). Teacher training and technology adoption: The role of professional development in China’s AI education reform. International Journal of Education Innovation, 10(1), 67–82.
  62. Wang, X., Liu, Z., & Huang, M. (2025). AI adoption in Chinese universities: Insights, challenges and strategies. Acta Psychologica, 250, 104731. https://doi.org/10.1016/j.actpsy.2025.104731
  63. Xie, M., & Luo, L. (2025). The status quo and future of AI-TPACK for mathematics teacher education students: A case study in Chinese universities. arXiv. https://arxiv.org/abs/2503.13533
  64. Xie, M., & Luo, L. (2025). The status quo and future of AI-TPACK for mathematics teacher education students: A case study in Chinese universities. arXiv Preprints. https://doi.org/10.48550/arXiv.2503.13533
  65. Xie, Q., & Luo, H. (2025). Teachers’ perceptions of AI-TPACK and ethical concerns in higher education. Asia-Pacific Journal of Teacher Education, 53(2), 155–172. https://doi.org/10.1080/1359866X.2025.1234567
  66. Xie, X., & Luo, H. (2025). Ethical and practical considerations for AI adoption in Chinese classrooms. AI & Society, 40(2), 253–267. https://doi.org/10.1007/s00146-025-10234-5
  67. Xu, S., Zhao, L., & Lin, J. (2024). An application of the UTAUT2 model to teacher education students’ adoption of AI for information-based teaching. Sage Open, 14(2), 21582440241290013. https://doi.org/10.1177/21582440241290013
  68. Yan, C., Zhou, L., & Sun, Y. (2023). Barriers to AI adoption in Chinese K-12 schools: A mixed-methods study. Asia Pacific Education Review, 24(2), 251–265. https://doi.org/10.1007/s12564-022-09752-9
  69. Yan, L., Sha, L., Zhao, L., Li, Y., Martinez-Maldonado, R., Chen, G., Li, X., Jin, Y., & Gašević, D. (2023). Practical and ethical challenges of large language models in education: A systematic scoping review. arXiv. https://arxiv.org/abs/2303.13379
  70. Yao, N., Chen, Q., & Sun, R. (2023). Analyzing factors influencing primary school teachers’ AI technology acceptance in Western China. ACM Transactions on Computing Education, 23(4), 35. https://doi.org/10.1145/3637907
  71. Ye, L., Ismail, H. H., & Ling, H. (2025). Bridging the AI gap: A needs analysis of Chinese pre-service EFL teachers in developing intelligent-technological pedagogical content knowledge. Arab World English Journal, 16(1), 1–19. https://doi.org/10.24093/awej/vol16no1.1
  72. Yu, Y., & Wang, Z. (2020). Professional learning networks for technology integration in education: Insights from China. Educational Research Review, 27(4), 149–162. http://www.erreview.cn
  73. Zawacki-Richter, O., Baecker, E., & Jansen, D. (2019). The role of artificial intelligence in education: Current and future applications. Springer.
  74. Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(39), 1–27. https://doi.org/10.1186/s41239-019-0171-0
  75. Zhai, X., Liu, H., & Wang, Y. (2024). Transforming teachers’ roles and agencies in the era of generative AI. arXiv Preprint. https://arxiv.org/abs/2410.03018
  76. Zhang, Y., Liu, H., & Chen, R. (2021). Artificial intelligence in Chinese education: Policies, practices, and challenges. Computers & Education, 165, 104146.
  77. Zhang, Y., Liu, Q., & Zhang, L. (2021). Policy frameworks for AI in Chinese education: Challenges and opportunities. Educational Policy Studies, 19(2), 201–214. http://www.edpolicy.cn
  78. Zhao, J., Li, M., & Xu, T. (2022). Teachers’ awareness of AI applications in Chinese schools. Educational Technology Research and Development, 70(6), 2745–2762. https://doi.org/10.1007/s11423-022-10123-4
  79. Zhao, X., & Dai, D. (2021). Teachers’ concerns regarding the adoption of artificial intelligence in the classroom: A survey from China. Journal of Educational Computing Research, 59(6), 1102–1123. https://doi.org/10.1177/07356331211015453
  80. Zhao, X., Li, Z., & Xu, C. (2022). Rural-urban disparities in AI adoption in education: Evidence from China. Computers & Education, 145, 103734. https://doi.org/10.1016/j.compedu.2019.103734
  81. Zhao, Y., & Dai, D. (2021). Teachers’ concerns regarding the adoption of artificial intelligence in the classroom: A survey from China. Journal of Educational Computing Research, 59(6), 1102–1123. https://doi.org/10.1177/07356331211015453
  82. Zhao, Y., Li, J., & Huang, T. (2022). Teachers’ perceptions of AI in Chinese primary and secondary schools: Urban-rural differences. Educational Research, 64(3), 321–338.
  83. Zhao, J., Wang, L., & Chen, Y. (2025). Understanding teachers’ adoption of AI technologies. Systems, 13(4), 302. https://doi.org/10.3390/systems13040302
  84. Zhao, J., Li, S., & Zhang, J. (2025). Understanding teachers’ adoption of AI technologies: An empirical study from Chinese middle schools. Systems, 13(4), 302. https://doi.org/10.3390/systems13040302
  85. Zhou, J., & Peng, Y. (2023). Ethical and privacy issues in AI-assisted education: Implications for teachers. AI & Society, 38(1), 23–35. https://doi.org/10.1007/s00146-023-01498-w
  86. Zhao, J., Xu, H., & Guo, P. (2022). Teachers’ understanding and use of AI in Chinese basic education: Regional disparities and challenges. Journal of Computer Assisted Learning, 38(6), 1453–1466. https://doi.org/10.1111/jcal.12696
  87. Zhou, M., & Peng, S. (2023). The benefits, risks and regulation of using ChatGPT in Chinese academia: A content analysis. Social Sciences, 12(7), 380. https://doi.org/10.3390/socsci12070380
  88. Zhao, L., & Dai, R. (2021). The reconstruction of teachers’ roles in the era of artificial intelligence. Chinese Social Sciences Today.
  89. Zhao, Y., Li, J., & Huang, T. (2022). Teachers’ perceptions of AI in Chinese primary and secondary schools: Urban-rural differences. Educational Research, 64(3), 321–338. https://doi.org/10.1080/00131881.2022.2045678
  90. Zhao, Y., Wang, C., & Huang, T. (2022). Teachers’ perceptions and practices of artificial intelligence in Chinese K–12 education. Education and Information Technologies, 27(9), 12673–12692. https://doi.org/10.1007/s10639-022-11141-9
  91. Zhou, M., & Peng, S. (2025). The usage of AI in teaching and students’ creativity: The mediating role of learning engagement and the moderating role of AI literacy. Behavioral Sciences, 15(5), 587. https://doi.org/10.3390/bs15050587
  92. Zhu, L., & Zhang, H. (2020). The integration of AI in education: A systematic review of research, challenges, and future directions. Educational Technology Research and Development, 68, 125–144. https://doi.org/10.1007/s11423-020-09767-x
