The Influence of Privacy, Bias, and Surveillance Concerns on Teachers’ Willingness to Use Artificial Intelligence in Education
- Joel V. Cubio, MST
- 3192-3208
- May 28, 2025
- Education
Division Research Coordinator, Purok Narra, Mabua, Tandag City. Surigao del Sur 8300, Philippines
DOI: https://dx.doi.org/10.47772/IJRISS.2025.903SEDU0240
Received: 20 April 2025; Accepted: 22 April 2025; Published: 28 May 2025
ABSTRACT
This study investigated how privacy, algorithmic bias, and surveillance concerns influenced teachers’ willingness to adopt Artificial Intelligence (AI) in educational settings. While the Department of Education (DepEd) in the Philippines had initiated several AI-related training programs and policy development efforts, ethical considerations remained inconsistently addressed across regions. Using a qualitative triangulation approach, the study analyzed data from document reviews, teacher-authored reflections, and Focus Group Discussions (FGDs) involving 60 teachers. Results revealed that privacy was a primary concern: 14 participants reported they would only use AI tools that guaranteed data protection, while others expressed hesitancy due to fears of data misuse. Concerns about algorithmic bias also emerged prominently, with 18 teachers citing AI’s reinforcement of stereotypes and questioning the fairness of AI-generated assessments. Regarding surveillance, 16 respondents viewed AI monitoring as a violation of classroom trust, citing its negative effects on student behavior and classroom dynamics. Furthermore, teachers expressed a need for clear institutional guidelines (cited by 20 informants), ethical support (8), and comprehensive training (10) to bridge the gap between AI innovation and classroom implementation. These concerns often interacted and compounded one another, forming a complex web of ethical challenges that shaped teacher attitudes. The study concluded that successful AI integration in education required more than technical deployment: it demanded transparent policies, ethical safeguards, inclusive stakeholder engagement, and professional development tailored to educators’ lived realities. The findings aimed to inform future policy directions and support the ethical and effective implementation of AI technologies in schools.
Keywords: DepEd AI Initiatives, Qualitative Triangulation, Thematic Analysis, Document Analysis, Tandag City Division, Philippines
INTRODUCTION AND RATIONALE
The integration of Artificial Intelligence (AI) in education is progressing rapidly and reshaping pedagogical practices. AI tools offer promising capabilities such as personalized learning pathways, streamlined administrative processes, and enhanced instructional delivery (Luckin et al., 2016; Holmes et al., 2022). These innovations are increasingly embedded into educational environments, positioning AI as a transformative force in teaching and learning. However, despite these potential benefits, the adoption of AI in classrooms has been met with caution and resistance. A growing body of research highlights that teachers—often the frontline users of educational technologies—harbor concerns about privacy, algorithmic bias, and surveillance, all of which may significantly influence their willingness to engage with AI tools (Zawacki-Richter et al., 2019; Regan & Jesse, 2019).
Privacy concerns are largely rooted in the extensive data collection AI systems require. Teachers worry about the security of sensitive student information and the potential misuse of data in environments that lack transparency (Cios & Zapala, 2021; Williamson & Eynon, 2020). Likewise, perceived algorithmic biases—where AI technologies may replicate or exacerbate social prejudices—have raised doubts about the fairness and objectivity of AI-generated outputs, especially in contexts such as student performance assessment or behavior monitoring (Binns, 2018; O’Neil, 2016). These biases risk undermining trust in AI tools and challenge the ethical grounding of their use in educational settings.
Surveillance-related concerns have also gained prominence, particularly with the rise of AI-powered proctoring tools and learning analytics that enable real-time tracking of student behavior. Teachers have expressed discomfort over the potential for these technologies to infringe on student autonomy and create environments characterized more by control than by collaboration and creativity (Andrejevic & Selwyn, 2020). In addition, variations in digital literacy and ethical awareness among educators further shape how AI is perceived and adopted. Teachers with more advanced understanding of AI and its ethical implications may either embrace the technology more confidently or scrutinize its implementation more critically (Howard et al., 2021; Holmes et al., 2021).
Given these intersecting issues, it becomes essential to understand the factors shaping teacher engagement with AI. Therefore, this study was guided by the key research question: How do concerns related to privacy, algorithmic bias, and surveillance collectively influence teachers’ willingness and readiness to adopt Artificial Intelligence (AI) technologies in educational settings? This inquiry aimed to unpack the nuanced interplay among these ethical concerns and to generate actionable insights for policymakers, school leaders, and educational technology developers. By grounding AI integration in ethical and pedagogical realities, the study aspired to support the development of educational technologies that are not only innovative but also trustworthy, fair, and responsive to the values of educators and learners alike.
Research Questions
- To what extent do privacy concerns affect teachers’ willingness to adopt AI tools in their teaching practice?
- How do perceived algorithmic biases in AI influence teachers’ trust and acceptance of AI technologies in the classroom?
- What are teachers’ perceptions of surveillance risks associated with AI, and how do these perceptions impact their usage decisions?
- How do privacy, bias, and surveillance concerns interact to shape overall teacher attitudes toward integrating AI in education?
LITERATURE REVIEW
Artificial Intelligence (AI) is increasingly being integrated into educational settings, offering personalized learning, automated assessment, and data-driven insights to enhance teaching practices (Luckin et al., 2016). AI systems such as intelligent tutoring systems, chatbots, and predictive analytics aim to support teachers and improve student outcomes (Holmes et al., 2019). However, while AI presents numerous opportunities, its implementation is often met with concerns regarding ethical use, particularly among educators.
Teachers’ willingness to adopt AI is influenced by their perception of its usefulness, ease of use, and ethical implications (Zawacki-Richter et al., 2019). Studies show that perceived value and training support adoption, but apprehension about privacy and control often delays integration (Chen et al., 2020). Moreover, digital literacy and awareness of AI ethics are critical determinants of teacher confidence in using AI tools (Chatterjee & Bhattacharjee, 2020).
Privacy is a major barrier to AI adoption. AI tools often require large amounts of personal data for functioning, raising concerns about student and teacher data confidentiality (Williamson & Eynon, 2020). Teachers worry that sensitive information may be misused, leaked, or accessed without proper authorization. These fears can lead to resistance against technology adoption (Zeide, 2019).
Algorithmic bias occurs when AI systems perpetuate inequalities by producing skewed or unfair outcomes based on race, gender, or socio-economic status (Noble, 2018). Teachers, especially in diverse classrooms, may hesitate to adopt AI tools if they perceive the systems as unreliable or biased against certain groups (Binns, 2018). These perceptions undermine trust and reduce the likelihood of use in instruction.
AI’s capabilities for real-time tracking and behavior prediction have prompted concerns about a surveillance culture in schools. Teachers fear that constant monitoring of both students and themselves leads to feelings of mistrust and reduced autonomy (Manolev, Sullivan, & Slee, 2019). This surveillance aspect can create a psychological barrier, deterring educators from embracing AI-powered platforms.
Teachers with higher digital literacy and ethical training are more likely to evaluate AI tools critically and adopt them responsibly (Kumar & Rose, 2021). Professional development that addresses data privacy laws, ethical AI usage, and responsible technology integration is key to alleviating fears and building confidence (Jobin, Ienca, & Vayena, 2019).
Privacy, bias, and surveillance concerns do not exist in isolation. Their interplay creates a complex web that affects educators’ attitudes and decisions. Understanding how these concerns intersect helps in designing AI systems and training programs that are aligned with teachers’ needs and ethical expectations (Cios & Zapala, 2021).
METHODOLOGY
Research Design
This study followed a qualitative research design using a triangulation approach to better understand how teachers’ concerns about privacy, bias, and surveillance shape their willingness to use Artificial Intelligence (AI) in education. By using different sources of information, the research aimed to capture a fuller picture of the various factors that influence teachers’ views and readiness to adopt AI in their teaching practices.
A qualitative approach was chosen because it allows for a deeper look into teachers’ personal experiences, the messages they receive from institutions, and the conversations they have with one another. To do this, the study brought together three methods of data collection: document analysis, review of qualitative feedback, and Focus Group Discussions (FGDs). This combination helped connect what is written in official policies with what teachers are actually feeling and experiencing in schools.
PARTICIPANTS AND SAMPLING
The participants in this study were selected through purposive sampling, targeting teachers across various educational levels within the division who were likely to have encountered discussions or training related to Artificial Intelligence (AI) in education. This method ensured that the individuals involved could provide rich, relevant insights aligned with the study’s focus on privacy, bias, and surveillance concerns. A total of 60 informants participated in the Focus Group Discussions (FGDs), representing a diverse mix of elementary, junior high, and senior high school teachers. This diversity allowed the study to capture a broad spectrum of experiences and perspectives across grade levels and subject areas. Selection criteria included active teaching status, prior exposure to ICT-related initiatives, and willingness to participate in open dialogue. In addition to the FGDs, qualitative feedback was gathered from five (5) selected documents authored by teachers themselves, such as reflective notes, training evaluations, and Learning Action Cell (LAC) reports, which further supplemented the study with organically produced insights. While institutional texts such as five (5) selected DepEd Memoranda and national advisories did not involve direct participants, they were integral in framing the broader context within which teachers operate. This combined participant pool—from both direct engagement (FGDs) and naturally occurring documents—ensured a well-rounded representation of voices and perspectives critical to understanding educators’ willingness to adopt AI in light of ethical and privacy-related concerns.
Data Collection
To investigate how concerns about privacy, bias, and surveillance influence teachers’ willingness to use Artificial Intelligence (AI) in education, this study adopted a triangulated data collection strategy. The approach integrated document analysis, qualitative feedback, and Focus Group Discussions (FGDs) to capture both institutional perspectives and personal experiences surrounding AI use in the classroom.
First, existing institutional documents—including DepEd Memoranda, policy guidelines on ICT and AI integration, and advisories from the National Privacy Commission—were critically analyzed to determine the official stance on AI implementation in education. These documents offered insight into how the Department of Education addresses ethical considerations, data privacy, and bias mitigation in relation to AI. To ensure consistency and depth, a Document Analysis Matrix was utilized. Each document was examined based on key criteria such as title, author, publication date, purpose, central themes, supporting evidence, possible bias, and its relevance to the core issues of privacy, bias, and surveillance.
Second, qualitative feedback was gathered from teacher-authored reports, reflections from training activities, and Learning Action Cell (LAC) session minutes. These sources provided authentic narratives reflecting educators’ evolving perceptions, personal concerns, and actual encounters with AI tools and policies within school contexts.
Third, FGDs were conducted with a diverse group of teachers from various education levels to probe deeper into their lived experiences and collective attitudes. The group setting encouraged open discussion about institutional influences, specific concerns over student data and surveillance, perceived or experienced algorithmic bias, and overall readiness to integrate AI in teaching practices.
All data sources were analyzed thematically, allowing the researcher to identify recurring patterns of concern, resistance, and endorsement. This multi-source strategy not only enriched the understanding of teachers’ willingness to adopt AI but also ensured the validity and reliability of findings by capturing multiple viewpoints across formal policy and grassroots-level experience, thus directly addressing the study’s core inquiry on how privacy, bias, and surveillance concerns shape educator engagement with AI.
Data Analysis
To make sense of the data collected through multiple qualitative sources, this study employed a thematic analysis approach designed to uncover recurring patterns and insights across institutional texts, personal narratives, and group discussions. The goal of the analysis was to explore how privacy, bias, and surveillance concerns shape teachers’ willingness to integrate Artificial Intelligence (AI) into their educational practices.
The first stage of analysis involved the document analysis of official materials such as DepEd Memoranda, policy guidelines, and advisories from the National Privacy Commission. These documents were examined using a Document Analysis Matrix, which helped categorize content by title, author, publication date, intent, central themes, institutional position, and relevance to key ethical concerns. Patterns were identified by comparing the language, tone, and framing used across these documents to assess how institutional narratives on AI, privacy, and bias were constructed and communicated.
In the second stage, teacher-generated qualitative feedback—including training reflections, Learning Action Cell (LAC) minutes, and internal reports—was reviewed and coded inductively. Initial codes were derived from repeated phrases or issues raised by teachers, which were then grouped into broader themes such as fear of surveillance, uncertainty about data privacy, trust in AI tools, and institutional pressure. This phase helped surface how teachers internalize and react to AI-related policies within the school setting.
The third stage involved analyzing transcripts from Focus Group Discussions (FGDs) with 60 teachers. These transcripts were also coded thematically, starting with open coding to capture emergent ideas and followed by axial coding to organize these ideas around central categories. Themes from the FGDs were triangulated with findings from the documents and teacher feedback to identify commonalities, contradictions, and contextual insights. Key thematic categories included perceived risks of AI, trust in institutional safeguards, equity and bias in AI tools, and teachers’ readiness or resistance to adopt such technologies.
Throughout the process, manual coding and memoing techniques were used to ensure depth and accuracy, with data managed through spreadsheet matrices and coded summaries. Triangulation among the three sources enhanced the credibility and validity of the findings, ensuring that interpretations were grounded in diverse yet interconnected forms of evidence. The outcome of this analysis was a set of rich, well-supported themes that directly addressed the core research questions and reflected both systemic influences and individual educator perspectives regarding AI integration in education.
Table 1.A Document Analysis Matrix
Document Title/Source | Author/Date | Type of Document | Purpose | Main Ideas | Evidence/Quotes (Privacy, Bias, and Surveillance) | Perspective |
Document 1 | | | | | | |
Document 2 | | | | | | |
Document 3 | | | | | | |
Table 1.B FGD Analysis Matrix
Research Question | Informant Code | Response |
1 | | |
2 | | |
3 | | |
4 | | |
Total | | |
Ethical Considerations
This study strictly adhered to ethical research standards to protect the rights, dignity, and well-being of all participants. Prior to data collection, informed consent was obtained from all teacher-participants involved in the Focus Group Discussions (FGDs) to ensure that they were fully aware of the study’s purpose, procedures, and their right to withdraw at any point without penalty. Anonymity and confidentiality were maintained throughout the research process by removing identifiable information from transcripts, documents, and reports. Data from teacher-authored reflections, Learning Action Cell (LAC) minutes, and training feedback were used only with prior permission and were treated with utmost sensitivity. The study also respected institutional boundaries by using publicly available or officially granted documents such as DepEd Memoranda and policy advisories. Given the sensitive nature of the topics—privacy, bias, and surveillance—particular care was taken to foster a respectful and non-judgmental environment during FGDs, ensuring that participants felt safe to express their opinions. Ethical clearance was secured from the relevant academic or institutional review board, affirming the study’s commitment to integrity, transparency, and the ethical treatment of human subjects.
RESULTS AND DISCUSSION
DepEd AI Initiatives: Training, Policy, and Ethics
Table 2. Document Analysis Matrix on Department of Education Initiatives Related to AI Integration in Education.
Document Title/Source | Type of Document | Purpose | Main Ideas | Evidence/Quotes (Privacy, Bias, and Surveillance) | Perspective |
DepEd CAR Regional Memorandum No. 865, s. 2024 | Official Government Memorandum (Training Implementation Document) | To announce and provide guidelines for the Conduct of the AI Immersion Day for Teachers on December 9, 2024 and ensure alignment of participation with teachers’ professional development needs. | Promote understanding of AI in education; explore classroom applications; equip teachers with practical knowledge; address data privacy, bias, and equity; promote collaboration and ethical use. | …ethical considerations related to the use of AI in education, including data privacy, algorithmic bias, and equity issues. | Balanced and proactive; acknowledges both potential and ethical responsibilities of AI in education. |
Fostering the ‘I’ in AI Webinar Series | Memorandum Series RM No. 641, s. 2024; Memorandum OM-OUOPS-2024-13-04807 | To promote awareness and understanding of AIED (AI in Education) in Southeast Asia. | Webinars on AI integration, ethical implications, policy discussions, use of tools. | AI in education, ethical concerns, teacher competencies, regional collaboration. | Supportive and informative; promotes inclusive AI integration in education while acknowledging challenges and ethical dimensions. |
Fostering the ‘I’ in AI Webinar Series by SEAMEO INNOTECH | Memoranda, Activity Notes, Webinar Schedules RM No. 651, S.2024 | To promote and disseminate webinar series on AI in education targeting Southeast Asian educators and policymakers. | Enhance AI literacy; address ethical concerns; integrate AIED in classrooms; share regional practices; explore tools like Google Gemini; align with human-centered learning goals. | AI Governance in Education: Navigating Ethical Implications and Policy Challenges mentions on ethics, misconceptions, and need for teacher competencies. | Educational and forward-looking; focuses on awareness and responsible integration of AIED. Highlights potential but balances it with policy and ethical guidance, reflecting proactive concern. |
Let’s Be Future Ready: Strategic Integration of AI in Education for the Digital Future | Memorandum and Concept Note, RM ORD-2024: 1289 | To increase educators’ understanding of AI’s transformative role in education, focusing on personalized learning and human-centered approaches while addressing ethical issues such as data privacy and emotional intelligence. | Promotes tailored AI learning experiences; emphasizes ethical issues including data privacy and emotional understanding; includes discussions from industry and teacher perspectives. | Addresses the ethical challenges related to AI, particularly around data privacy, security, and the limitations of AI in replicating human interaction and emotional understanding. | Cautiously optimistic; highlights the benefits of AI while openly discussing potential drawbacks, particularly ethical concerns, and encourages thoughtful integration into educational systems. |
Workshop on the Development of Policy Guidelines on the Utilization of Generative Artificial Intelligence (AI) for Teaching and Learning | DepEd Region VIII Memorandum No. 925, s. 2024 | To announce the participants and details of workshops for developing policy guidelines on the utilization of Generative AI for teaching and learning. | Workshops focused on creating, validating, and finalizing AI policy guidelines; conducted both online and in-person; participation by teachers from various divisions. | No direct discussion on ethical concerns; focused on policy formulation logistics. | Practical and procedural; focused on organizing and implementing policy development workshops rather than engaging with critical ethical debates. |
Table 2 reflected the Philippine Department of Education’s growing efforts to integrate Artificial Intelligence (AI) into education, particularly through teacher training, policy development, and regional collaboration. These initiatives emphasized both the potential of AI and its ethical implications. For instance, DepEd CAR’s Regional Memorandum No. 865 (2024) equipped teachers with practical AI knowledge while addressing privacy, bias, and equity—responding to calls for critical AI literacy (Zawacki-Richter et al., 2019). The SEAMEO INNOTECH-led webinar series also promoted ethical awareness, teacher competencies, and regional dialogue (DepEd, 2024a; Holmes et al., 2022). In contrast, Region VIII’s memorandum (DepEd, 2024b) focused on organizing policy workshops but lacked explicit attention to privacy or surveillance concerns, which are essential in AI education discourse (Williamson & Eynon, 2020). Meanwhile, the “Let’s Be Future Ready” initiative (DepEd, 2024c) addressed these gaps by highlighting emotional and ethical dimensions in AI use, including data privacy and the limits of machine-human interaction (Luckin, 2017). Overall, these documents showed that DepEd pursued both ethical and procedural paths toward AI integration, aligning with international recommendations for responsible and inclusive educational technologies.
Emerging Teacher Concerns on AI Integration
Table 3: Document Analysis Matrix of AI-Related Educational Materials.
Document Title/Source | Type of Document | Purpose | Main Ideas | Evidence/Quotes (Privacy, Bias, and Surveillance) | Perspective |
LAC Session at Lutucan Central School | Session Guide | To promote awareness and share insights from a Learning Action Cell (LAC) session focused on the strategic integration of AI in education. | AI as a transformative tool in education; potential benefits in teaching strategies, personalized learning, and classroom efficiency; importance of ethical and responsible use. | ‘…while ensuring ethical and responsible use of technology.’ | Optimistic and promotional; highlights benefits of AI with brief ethical acknowledgment, but lacks discussion on potential risks like privacy breaches or algorithmic bias. |
UbD Template – When Technology May Byte | Lesson Plan | To teach students about Artificial Intelligence liability, its ethical implications, and programming through hands-on chatbot activities. | AI liability; responsible use of AI; legal and ethical implications of AI behavior; student exploration of programming and critical thinking around AI systems. | Students will explore the process of determining who is at fault, such as what happens in the legal system; Tricking the bot will allow them to see the flaws in technology; Turing Test explanation. | Cautiously optimistic; presents AI as a powerful tool with potential liabilities, encouraging critical reflection rather than blind acceptance. |
Exploring Computer Science – Human Computer Interaction, Lesson: Day 19 – Artificial Intelligence | Lesson Plan | To help students understand the concept of artificial intelligence, distinguish between human and computer intelligence, and explore the ethical and functional aspects of AI through interactive activities. | Differences between human and machine intelligence; natural language understanding; machine learning models; AI’s ability to ‘learn’; ethical reflections on whether computers are truly intelligent. | Questions such as ‘What does it mean for a machine to learn?’ and ‘Are computers intelligent, or do they only behave intelligently?’ hint at deeper issues of autonomy and trust in AI systems, though privacy and surveillance are not explicitly addressed. | Analytical and educational; focused on inquiry-based learning and student exploration rather than promoting or critiquing AI adoption; lacks engagement with critical perspectives such as bias or surveillance. |
Introduction to Artificial Intelligence – COSC 4142 | Academic Lecture/Module | To introduce undergraduate students to the foundational concepts, applications, and philosophical perspectives of AI. | Definition and types of intelligence; history and evolution of AI; necessity, goals, and applications of AI; approaches (think/act like a human/ rationally); philosophical, mathematical foundations. | Mentions ‘Security and Surveillance’ under applications of AI in everyday life and public safety (e.g., smart infrastructure, facial recognition). | Informative and comprehensive; leans toward promoting AI as a transformative tool but briefly touches on ethical concerns such as surveillance; minimal coverage of bias or privacy risks. |
Nueva Ecija LAC Session Guide | LAC Session Guide | To train teachers on ethical and effective AI use in education. | Educator awareness of AI’s impact, ethics, and practical | Discuss ethical considerations including data privacy, | Balanced, with critical reflection on ethics and implementation. |
Table 3 illustrated how Artificial Intelligence (AI) had been integrated into educational contexts, showcasing its perceived potential alongside growing ethical concerns. Various materials—including lesson plans, modules, and session guides—generally framed AI as a tool to enhance teaching strategies, support personalized learning, and improve efficiency (Brady, n.d.; District of Columbia Public Schools, 2012–2013; Guevarra et al., 2024). The LAC session at Lutucan Central School promoted AI positively, briefly noting the importance of ethical use (DepEd Tayo, 2025). Likewise, Brady’s UbD lesson plan had encouraged students to examine AI liability through practical activities, fostering ethical reflection. The curriculum from the District of Columbia Public Schools focused on understanding human vs. computer intelligence but lacked attention to surveillance and bias—issues prominent in AI ethics literature (O’Neil, 2016; Binns, 2018). Meanwhile, the Ambo University module (COSC 4142) introduced AI’s philosophical and societal roles and referenced surveillance applications such as facial recognition (Ambo University, n.d.). The Nueva Ecija LAC Session Guide provided the most balanced approach, addressing concerns like data privacy, algorithmic bias, and equity in AI use, echoing Regan and Jesse’s (2019) call for ethical AI adoption. Overall, while the documents supported AI integration, they varied in how deeply they engaged with ethical challenges. Most emphasized responsible use, yet few directly addressed issues like privacy, bias, or surveillance—key concerns outlined by Zawacki-Richter et al. (2019), suggesting a need for stronger digital ethics integration in teacher education.
Privacy and AI Adoption
Table 4: Themes and Interpretations of Privacy Concerns in AI Adoption – Frequency counts, quotes, and interpretations on privacy barriers.
Theme | Frequency Count | Exemplar Quote | Interpretation |
Data Security and Trust in AI Tools | 14 | I would consider using AI, but only if I’m assured that student data is fully encrypted and protected. | Data protection is a prerequisite for AI adoption. Without secure systems, educators hesitate to use AI tools. |
Privacy Concerns as a Barrier to Adoption | 12 | AI might make my job easier, but not if it means my students’ private information is at risk. | Privacy concerns hinder adoption, emphasizing the importance of safeguarding personal data in educational contexts. |
Perceptions of Invasiveness and Surveillance | 12 | It feels like the AI is watching every move my students make—this isn’t just teaching, it’s surveillance. | AI’s perceived invasiveness, particularly in terms of surveillance, deters educators from full AI adoption. |
Institutional Trust and Ethical Assurance | 11 | If DepEd guarantees privacy protections, I would trust AI tools more. | Institutional trust and regulatory assurances from bodies like DepEd can mitigate privacy concerns and facilitate AI adoption. |
Concerns Over Misuse of Student Data | 11 | I’m worried that AI tools could end up using my students’ data for purposes other than education. | Concerns over misuse of student data highlight the need for strong ethical frameworks and data protection measures. |
Based on responses from 60 informants, several key themes emerged regarding privacy and AI adoption in education. The most common concern, raised by 14 informants, was the need for AI tools to guarantee data protection, emphasizing the importance of secure systems for gaining teachers’ trust (Zawacki-Richter et al., 2019). Twelve respondents cited privacy concerns as a major barrier to adoption, aligning with Holmes et al. (2022) on how privacy anxieties deter digital innovation. Another 12 found AI tools overly invasive due to excessive data collection, reflecting concerns about digital surveillance (Williamson & Eynon, 2020). Eleven informants would be more willing to try AI if the Department of Education (DepEd) ensured privacy, suggesting that institutional trust influences adoption (Luckin, 2017). Similarly, eleven respondents expressed concerns about the misuse of student data, highlighting the need for ethical AI frameworks (Cios & Zapala, 2021). These findings underscore the importance of trust, data ethics, and institutional responsibility in AI adoption.
Teacher Perceptions of AI Bias and Trust in Education
Table 5: Teacher Perceptions of AI Bias and Trust – Frequency counts, quotes, and interpretations on algorithmic bias and trust in AI tools.
Theme | Frequency Count | Exemplar Quote | Interpretation |
Reinforcement of Stereotypes | 18 | I’ve seen AI programs suggest materials that favor certain groups over others. | AI tools may perpetuate existing social biases, undermining fairness in education. |
Distrust of Recommendations and Assessments | 13 | When AI makes decisions about my students, I’m not sure if they’re truly based on merit or biased data. | Algorithmic outputs may reinforce biases, affecting educational recommendations and assessments. |
Hesitation in Full AI Integration | 13 | I don’t want to fully rely on AI for teaching tasks if it’s going to be biased against certain groups of students. | Concerns over fairness discourage full AI integration into teaching practices. |
Concerns About Fairness for Diverse Learners | 11 | I’m unsure if AI treats all students equally—it might work for some but not for others. | Skepticism about AI’s ability to serve diverse learners fairly, particularly in terms of equity. |
Cautious Use in Student Evaluation | 11 | I don’t fully trust AI with assessments—there’s too much at stake with my students’ grades. | Teachers are cautious in using AI for student evaluations due to potential bias and fairness issues. |
Based on responses from 60 informants, concerns about algorithmic bias heavily influenced trust and AI adoption in education. Eighteen informants expressed that “some AI tools reinforce stereotypes,” which harmed their confidence in using AI for teaching. Thirteen noted that “bias in AI makes it hard to trust recommendations or assessments,” reflecting fears that AI might amplify social biases (Eubanks, 2018; Noble, 2018). Another 13 teachers mentioned that “algorithmic bias discourages me from fully integrating AI,” showing hesitation due to fairness concerns. Eleven informants expressed uncertainty about AI’s fairness, questioning its ability to serve diverse learners equitably (Williamson & Eynon, 2020). Lastly, eleven teachers stated, “I use AI cautiously, especially in student evaluation,” indicating a wary but not dismissive stance. These responses highlight how perceived bias shapes educators’ cautious adoption of AI, reinforcing calls for transparent and fair AI systems in education (Holmes et al., 2022).
Teachers’ Concerns About AI Surveillance and Its Impact on Classroom Dynamics
Table 6: Teacher Concerns About AI Surveillance – Frequency counts, quotes, and interpretations on AI surveillance in education
Theme | Frequency Count | Exemplar Quote | Interpretation |
Violation of Classroom Trust | 16 | AI’s constant monitoring undermines the safe space we try to create in class. | AI surveillance violates the trust essential for maintaining a safe and respectful learning environment. |
Unintended Surveillance and Data Overreach | 15 | The AI might track things it wasn’t designed to, raising concerns about what data is collected and who has access to it. | Teachers are concerned about AI tools tracking unintended data, which raises issues around data privacy and misuse. |
Impact on Teacher Comfort and Student Participation | 13 | The feeling of being constantly watched affects how freely my students participate in class discussions. | Surveillance affects both teachers’ comfort and student participation, creating a less open learning environment. |
Changes in Student Behavior Due to Constant Monitoring | 10 | Students become more focused on ‘behaving well’ in front of the AI, rather than authentic engagement. | Continuous monitoring may lead students to behave in ways that are more performative rather than authentically engaged. |
Privacy, Ethics, and Wellbeing Concerns | 14 | AI surveillance could compromise the wellbeing of my students, and I can’t ignore those risks. | Concerns about privacy, ethics, and student wellbeing deter educators from fully embracing AI surveillance technologies. |
Based on responses from 60 informants, surveillance-related concerns were a major factor in teachers’ hesitation to adopt AI in the classroom. The most common issue, cited by 16 informants, was that “surveillance features feel like a violation of classroom trust,” which teachers saw as undermining a safe learning environment (Selwyn, 2019). Fifteen informants expressed concerns that “AI tools may track more than intended,” reflecting anxieties over data overreach and opaque AI data collection practices (Zuboff, 2019). Thirteen teachers noted that “AI monitoring makes students uncomfortable and affects participation,” suggesting a negative impact on classroom dynamics (Regan & Jesse, 2019). Ten informants mentioned that “constant monitoring changes how students behave,” indicating performative behavior rather than authentic engagement. Lastly, 14 informants stated that “surveillance risks discourage them from using AI tools,” highlighting privacy, ethics, and student wellbeing concerns (Williamson & Hogan, 2020). These concerns emphasize the need for transparent, human-centered AI design in education.
Teachers’ Concerns on AI: Ethics, Training, and Institutional Gaps
Table 7: Teacher Concerns on AI Ethics, Training, and Institutional Gaps – Frequency counts, quotes, and interpretations on AI adoption challenges.
Theme | Frequency Count | Exemplar Quote | Interpretation |
Demand for Clear Ethical Guidelines and Institutional Support | 20 | I need clear ethical guidelines before implementing AI in my classroom. | Teachers need formal support and clear guidelines to feel confident in adopting AI responsibly in the classroom. |
Interconnectedness of Privacy, Bias, and Surveillance Concerns | 19 | Privacy, bias, and surveillance are all part of the same problem—they make me hesitant to trust AI in the classroom. | Teachers perceive privacy, bias, and surveillance as interconnected, which heightens their ethical concerns about AI. |
Need for Training and Professional Development | 10 | I need training to understand how to use AI responsibly in the classroom. | Professional development is seen as critical for AI integration, as educators feel unprepared to use AI effectively. |
Balancing Ethical Concerns with Practical Teaching Needs | 8 | It’s hard to balance ethical AI use with the real demands of teaching, especially when the guidelines are unclear. | Teachers struggle to balance ethical AI use with practical classroom demands, highlighting the need for institutional clarity. |
Fear of Technological and Cultural Barriers | 7 | I’m afraid that AI might be too complex for me to use effectively, and the students might not accept it. | Fear of complexity and resistance to AI adoption reflect both emotional and technical barriers to AI integration. |
Based on responses from 60 informants, concerns about privacy, bias, and surveillance emerged as major barriers to AI adoption in education. The most common concern, cited by 20 informants, was the need for clear guidelines and institutional support before adopting AI. Nineteen respondents noted that privacy, bias, and surveillance are interconnected, creating overlapping ethical risks that complicate trust in AI (Cummings & Ferris, 2020). Ten teachers emphasized the need for training, highlighting professional development as critical for AI integration (Zawacki-Richter et al., 2019). Eight informants noted the difficulty in balancing ethics with teaching demands without institutional support, and seven teachers expressed fear of using AI due to emotional and cultural barriers (Popenici & Kerr, 2017). These concerns underscore the need for robust policy frameworks and comprehensive training to ensure ethical, informed AI use in schools.
SUMMARY OF FINDINGS, CONCLUSION, AND RECOMMENDATIONS
Findings
This study examined the Department of Education’s (DepEd) initiatives on Artificial Intelligence (AI) and explored teacher concerns around privacy, bias, and surveillance impacting AI adoption. Document analysis revealed DepEd’s growing efforts through training and policy development, though ethical issues like surveillance and bias were unevenly addressed (DepEd, 2024a; Holmes et al., 2022). Among 60 informants, 14 would only use AI tools with guaranteed data protection, and 12 cited privacy as a key barrier. Institutional trust in DepEd’s privacy assurances also shaped adoption decisions (Zawacki-Richter et al., 2019; Luckin, 2017). Regarding algorithmic bias, 18 noted AI tools reinforce stereotypes, and 11 questioned fairness in AI assessments (Eubanks, 2018; Noble, 2018). Surveillance concerns were also strong, with 16 teachers stating it violated classroom trust and others reporting negative impacts on student behavior (Selwyn, 2019; Zuboff, 2019). The most common call—raised by 20 teachers—was for clear institutional guidelines. Teachers also stressed the need for training (10) and ethical support (8) (Cummings & Ferris, 2020; Popenici & Kerr, 2017). These findings highlight the need for ethical guidance, policy clarity, and teacher readiness to ensure responsible AI integration in schools.
Conclusion
Teachers in the Philippine educational context are open to AI integration but remain cautious due to unresolved concerns surrounding privacy, bias, and surveillance. While DepEd has begun proactive steps through training and policy formulation, many educators feel unprepared and unsupported when it comes to ethical implementation. This study highlighted that successful AI adoption in education depends not only on access to technology but also on systemic trust, institutional clarity, and a strong ethical framework. Without these, AI risks exacerbating inequalities and creating discomfort in learning environments rather than enhancing them.
Recommendations
To support the ethical integration of Artificial Intelligence (AI) in education, several recommendations are proposed. First, the Department of Education (DepEd) should develop comprehensive, national-level AI guidelines that explicitly address ethical concerns related to privacy, algorithmic bias, and surveillance. Second, professional development programs must include mandatory training on AI ethics and digital privacy to enhance teacher competence and confidence in responsible AI use. Third, transparency must be prioritized by ensuring that AI tools approved for school use clearly disclose their data collection, storage, and usage processes, and implement robust consent protocols for both students and educators. Fourth, inclusive policy development should be pursued by involving teachers, students, and key stakeholders in co-creating AI policies that reflect classroom experiences and promote shared responsibility. Finally, DepEd should support ethical innovation through pilot projects that integrate feedback loops, ethical audits, and safeguards to explore AI’s potential while minimizing risks and reinforcing accountability.
Future Research
Future studies should investigate the long-term impact of AI integration on teaching practices, student learning outcomes, and classroom dynamics across various educational levels. Emphasis should be placed on examining how teachers’ digital literacy, ethical training, and institutional support influence their adaptation to AI tools. Further research should also focus on student perceptions of AI—particularly in relation to privacy, bias, and surveillance—to complement teacher perspectives and provide a more comprehensive view. Longitudinal research is encouraged to assess how educator attitudes and adoption patterns evolve over time as policy and technology shift. Comparative studies across regions or school types could also yield insights into contextual factors affecting AI implementation in education.
Translational Research
Translational research should focus on converting empirical findings into practical solutions that directly inform educational practice and policy. This includes developing AI ethics toolkits, teacher-friendly guidelines, decision-making frameworks, and classroom audit checklists. Such tools can help educators navigate ethical challenges and use AI responsibly. Partnerships among policymakers, teachers, and AI developers are vital in designing scalable, culturally responsive, and human-centered AI systems tailored to the unique needs of schools. Integrating feedback from pilot implementations into policy development can further ensure that ethical standards are upheld while supporting innovation in real-world educational environments.
ACKNOWLEDGMENT
The author sincerely extends heartfelt gratitude to Dr. Gregoria T. Su, Schools Division Superintendent of Tandag City Division, for her unwavering support and encouragement throughout the conduct of this study. Special thanks are also given to Jasmin R. Lacuna, CESE – Assistant Schools Division Superintendent, for her valuable insights and continued guidance. Appreciation is likewise extended to Dr. Jeanette R. Isidro, Chief of the Curriculum Implementation Division (CID), and Dr. Gregorio C. Labrado, Chief of the School Governance and Operations Division (SGOD), for their support and leadership that made this research possible. To the 60 Focus Group Discussion (FGD) participants and the 31 School Heads who willingly shared their time, experiences, and perspectives—your contributions were vital to the success and depth of this study. Lastly, sincere thanks to the Planning and Research Section of the Tandag City Division for their technical assistance and collaborative efforts throughout the research process.
REFERENCES
- Ambo University. (n.d.). Introduction to Artificial Intelligence – COSC 4142. Ambo University Institute of Technology.
- Andrejevic, M., & Selwyn, N. (2020). Facial recognition technology in schools: Critical questions and concerns. Learning, Media and Technology, 45(2), 115–128. https://doi.org/10.1080/17439884.2020.1686014
- Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of the 2018 Conference on Fairness, Accountability and Transparency, 149–159. https://doi.org/10.1145/3287560.3287598
- Brady, B. (n.d.). UbD Template: When Technology May Byte. Virginia Beach City Public Schools.
- Chatterjee, S., & Bhattacharjee, K. K. (2020). Adoption of artificial intelligence in higher education: A quantitative analysis using structural equation modeling. Education and Information Technologies, 25, 3443–3463.
- Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264–75278.
- Cios, K. J., & Zapala, A. (2020). Ethical implications of artificial intelligence in education. Journal of Educational Computing Research, 58(6), 1105–1121. https://doi.org/10.1177/0735633120930640
- Cios, K. J., & Zapala, R. (2021). Ethical and privacy concerns in artificial intelligence-based educational systems. AI & Society, 36, 357–366. https://doi.org/10.1007/s00146-020-01048-3
- Cummings, C., & Ferris, H. (2020). Ethical implications of AI in education. Journal of Educational Technology Ethics, 8(2), 101–118.
- DepEd. (2024a). Fostering the ‘I’ in AI Webinar Series. Department of Education, SEAMEO INNOTECH.
- DepEd. (2024b). Region VIII Participants to the Workshop on the Development of Policy Guidelines on the Utilization of Generative Artificial Intelligence (AI) for Teaching and Learning. Department of Education, Region VIII.
- DepEd. (2024c). Let’s Be Future Ready: Strategic Integration of AI in Education for the Digital Future. Department of Education, National Capital Region.
- DepEd Cordillera Administrative Region. (2024). Regional Memorandum No. 865, s. 2024: Conduct of AI Immersion Day for Teachers.
- DepEd Tayo, Lutucan Central School – Sariaya West District – Quezon. (2025, March 7). LAC Session at Lutucan Central School: Embracing the Digital Future! [Facebook post].
- District of Columbia Public Schools. (2012–2013). Exploring Computer Science – Human Computer Interaction, Lesson: Day 19 – Artificial Intelligence. https://wisccomputmhs.wikispaces.com
- Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
- Guevarra, E. G., Miranda, F., & Lachica, P. G. V. (2024, February 23). LAC Session Guide: Impact of Artificial Intelligence in Education. Department of Education – Nueva Ecija.
- Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.
- Holmes, W., Bialik, M., & Fadel, C. (2022). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.
- Holmes, W., Perez-Paredes, P., & O’Reilly, T. (2022). Ethics in artificial intelligence in education: Issues and debate. British Journal of Educational Technology, 53(2), 291–310. https://doi.org/10.1111/bjet.13105
- Howard, S. K., Tondeur, J., & Ma, J. (2021). What to teach? Strategies for developing digital competence in teacher education. Computers & Education, 165, 104149. https://doi.org/10.1016/j.compedu.2021.104149
- Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399.
- Kumar, H., & Rose, C. P. (2021). Ethics-aware AI teacher training: Lessons from digital pedagogy. Computers & Education, 160, 104028.
- Luckin, R. (2017). Towards artificial intelligence-based assessment systems. Nature Human Behaviour, 1(3), 0028. https://doi.org/10.1038/s41562-017-0028
- Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson Education.
- Manolev, J., Sullivan, A., & Slee, R. (2019). The datafication of discipline: ClassDojo, surveillance and a performative classroom culture. Learning, Media and Technology, 44(1), 36–51.
- Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
- O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group.
- Popenici, S. A. D., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12(1), 1–13.
- Regan, P. M., & Jesse, J. (2019). Ethical challenges of edtech, big data and personalized learning: Twenty first century student sorting and tracking. Ethics and Information Technology, 21(3), 167–179. https://doi.org/10.1007/s10676-019-09519-1
- Selwyn, N. (2019). Should robots replace teachers? AI and the future of education. Polity Press.
- Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3), 223–235. https://doi.org/10.1080/17439884.2020.1798995
- Williamson, B., & Hogan, A. (2020). Commercialisation and privatisation in/of education in the context of COVID-19. Education International. https://ei-ie.org
- Zeide, E. (2019). Artificial intelligence in education: The importance of teacher and student rights. Arizona Law Review, 61(3), 637–669.
- Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education: Challenges and opportunities. International Journal of Educational Technology in Higher Education, 16(1), 1–27. https://doi.org/10.1186/s41239-019-0171-0
- Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
APPENDIX 1. ISSUED DEPED MEMORANDA
Table 2. List of Select Policies by Department of Education Initiatives Related to AI Integration in Education.
Document Title/Source | Author/Date | Type of Document | Purpose | Main Ideas | Evidence/Quotes (Privacy, Bias, and Surveillance) | Perspective |
DepEd CAR Regional Memorandum No. 865, s. 2024 | DepEd Cordillera Administrative Region (CAR); November 28, 2024 | Official Government Memorandum (Training Implementation Document) | To announce and provide guidelines for the conduct of the AI Immersion Day for Teachers on December 9, 2024 and ensure alignment of participation with teachers’ professional development needs. | Promote understanding of AI in education; explore classroom applications; equip teachers with practical knowledge; address data privacy, bias, and equity; promote collaboration and ethical use. | “…ethical considerations related to the use of AI in education, including data privacy, algorithmic bias, and equity issues.” | Balanced and proactive; acknowledges both potential and ethical responsibilities of AI in education. |
Fostering the ‘I’ in AI Webinar Series | Department of Education, SEAMEO INNOTECH; June 24, 2024 and June 10, 2024 | Memorandum Series (RM No. 641, s. 2024) | To promote awareness and understanding of AIED (AI in Education) in Southeast Asia. | Webinars on AI integration, ethical implications, policy discussions, use of tools. | AI in education, ethical concerns, teacher competencies, regional collaboration. | Supportive and informative; promotes inclusive AI integration in education while acknowledging challenges and ethical dimensions. |
Fostering the ‘I’ in AI Webinar Series by SEAMEO INNOTECH | DepEd National Capital Region (NCR); June 27, 2024 | Memoranda, Activity Notes, Webinar Schedules (RM No. 651, s. 2024) | To promote and disseminate a webinar series on AI in education targeting Southeast Asian educators and policymakers. | Enhance AI literacy; address ethical concerns; integrate AIED in classrooms; share regional practices; explore tools like Google Gemini; align with human-centered learning goals. | “AI Governance in Education: Navigating Ethical Implications and Policy Challenges” mentions ethics, misconceptions, and the need for teacher competencies. | Educational and forward-looking; focuses on awareness and responsible integration of AIED. Highlights potential but balances it with policy and ethical guidance, reflecting proactive concern. |
Let’s Be Future Ready: Strategic Integration of AI in Education for the Digital Future | DepEd NCR; December 3, 2024 | Memorandum and Concept Note (RM ORD-2024: 1289) | To increase educators’ understanding of AI’s transformative role in education, focusing on personalized learning and human-centered approaches while addressing ethical issues such as data privacy and emotional intelligence. | Promotes tailored AI learning experiences; emphasizes ethical issues including data privacy and emotional understanding; includes discussions from industry and teacher perspectives. | Addresses the ethical challenges related to AI, particularly around data privacy, security, and the limitations of AI in replicating human interaction and emotional understanding. | Cautiously optimistic; highlights the benefits of AI while openly discussing potential drawbacks, particularly ethical concerns, and encourages thoughtful integration into educational systems. |
Workshop on the Development of Policy Guidelines on the Utilization of Generative Artificial Intelligence (AI) for Teaching and Learning | DepEd Region VIII-Eastern Visayas; August 3, 2024 | DepEd Region VIII Memorandum No. 925, s. 2024 | To announce the participants and details of workshops for developing policy guidelines on the utilization of Generative AI for teaching and learning. | Workshops focused on creating, validating, and finalizing AI policy guidelines; conducted both online and in-person; participation by teachers from various divisions. | No direct discussion on ethical concerns; focused on policy formulation logistics. | Practical and procedural; focused on organizing and implementing policy development workshops rather than engaging with critical ethical debates. |
APPENDIX 2. SELECT SLAC, LESSON PLANS/MODULE.
Table 3. Select documents on Learning Action Cell (LAC) minutes, and Lesson Plan/Module.
Document Title/Source | Author/Date | Type of Document | Purpose | Main Ideas | Evidence/Quotes (Privacy, Bias, and Surveillance) | Perspective |
LAC Session at Lutucan Central School | DepEd Tayo, Lutucan Central School – Sariaya West District – Quezon; March 7, 2025 | DepEd News Article | To promote awareness and share insights from a Learning Action Cell (LAC) session focused on the strategic integration of AI in education. | AI as a transformative tool in education; potential benefits in teaching strategies, personalized learning, and classroom efficiency; importance of ethical and responsible use. | “…while ensuring ethical and responsible use of technology.” | Optimistic and promotional; highlights benefits of AI with brief ethical acknowledgment, but lacks discussion of potential risks like privacy breaches or algorithmic bias. |
UbD Template – When Technology May Byte | Brooke Brady; Undated (Curriculum Document) | Lesson Plan | To teach students about Artificial Intelligence liability, its ethical implications, and programming through hands-on chatbot activities. | AI liability; responsible use of AI; legal and ethical implications of AI behavior; student exploration of programming and critical thinking around AI systems. | Students will explore the process of determining who is at fault, such as what happens in the legal system; tricking the bot will allow them to see the flaws in technology; Turing Test explanation. | Cautiously optimistic; presents AI as a powerful tool with potential liabilities, encouraging critical reflection rather than blind acceptance. |
Exploring Computer Science – Human Computer Interaction, Lesson: Day 19 – Artificial Intelligence | District of Columbia Public Schools (2012–2013) | Lesson Plan | To help students understand the concept of artificial intelligence, distinguish between human and computer intelligence, and explore the ethical and functional aspects of AI through interactive activities. | Differences between human and machine intelligence; natural language understanding; machine learning models; AI’s ability to ‘learn’; ethical reflections on whether computers are truly intelligent. | Questions such as ‘What does it mean for a machine to learn?’ and ‘Are computers intelligent, or do they only behave intelligently?’ hint at deeper issues of autonomy and trust in AI systems, though privacy and surveillance are not explicitly addressed. | Analytical and educational; focused on inquiry-based learning and student exploration rather than promoting or critiquing AI adoption; lacks engagement with critical perspectives such as bias or surveillance. |
Introduction to Artificial Intelligence – COSC 4142 | Ambo University Institute of Technology / Undated | Academic Lecture/Module | To introduce undergraduate students to the foundational concepts, applications, and philosophical perspectives of AI. | Definition and types of intelligence; history and evolution of AI; necessity, goals, and applications of AI; approaches (think/act like a human/rationally); philosophical, mathematical foundations. | Mentions ‘Security and Surveillance’ under applications of AI in everyday life and public safety (e.g., smart infrastructure, facial recognition). | Informative and comprehensive; leans toward promoting AI as a transformative tool but briefly touches on ethical concerns such as surveillance; minimal coverage of bias or privacy risks. |
Nueva Ecija LAC Session Guide | DepEd Nueva Ecija, Feb 23, 2024 | LAC Session Guide | To train teachers on ethical and effective AI use in education. | Educator awareness of AI’s impact, ethics, and practical | Discuss ethical considerations including data privacy, | Balanced, with critical reflection on ethics and implementation. |