Beyond the Algorithm: Towards a Human-Centered AI Pedagogy
Purnomo M Antara1*, Raja Mayang Delima Mohd Beta2, Nadhrathul Ain Ibrahim3, Adila Talip4
1,2,3Faculty of Business and Management, Universiti Teknologi MARA (UiTM) Cawangan Negeri Sembilan, Kampus Rembau
4Faculty of Business and Management, Universiti Teknologi MARA (UiTM) Cawangan Kedah, Kampus Sungai Petani
*Corresponding Author
DOI: https://dx.doi.org/10.47772/IJRISS.2025.903SEDU0570
Received: 18 September 2025; Accepted: 24 September 2025; Published: 23 October 2025
ABSTRACT
The integration of Artificial Intelligence (AI) in education has prompted significant excitement and rapid innovation, yet it also raises critical concerns regarding techno-centrism, where the emphasis on algorithmic efficiency often overshadows humanistic educational goals. This paper proposes a Human-Centered AI Pedagogy (HCAIP) framework, which advocates for a reimagined approach to AI in education, focusing on augmentation, literacy, and connection. By positioning AI as a supportive scaffold that enhances the teacher-student relationship, this framework emphasizes the importance of educators acting as pedagogical designers, ethical guides, and community builders. Within the HCAIP model, students transition from passive consumers of technology to empowered critical agents who engage thoughtfully with AI tools and explore their ethical implications. Furthermore, it challenges the narrative of individualized learning by advocating for collaborative inquiry that fosters creativity and problem-solving skills. The contributions of this conceptual work offer a theoretical roadmap for educators, policymakers, and technologists, guiding the ethical and pedagogically sound integration of AI in educational contexts. Empirical studies are encouraged to test the HCAIP principles in real-world settings and develop specific AI tools that align with its vision. Additionally, research should focus on long-term outcomes regarding student development within this framework. This paper posits that the path of AI in education is not predetermined; by consciously making value-driven decisions, stakeholders can ensure that technology enhances our shared humanity, enriching the educational landscape and promoting holistic growth for all learners.
Keywords: Human-Centered AI Pedagogy, Educational Technology, Critical AI Literacy
INTRODUCTION
The integration of Artificial Intelligence (AI) in education has garnered significant attention as it promises to create innovative and personalized learning experiences. Tools such as adaptive learning platforms, intelligent tutoring systems, and automated assessment mechanisms are being rapidly incorporated into educational settings, showcasing the potential to transform traditional teaching approaches. The capabilities of AI facilitate personalized learning paths that cater to diverse student needs, enhancing engagement and comprehension. Furthermore, these technologies can increase teacher efficiency by automating routine tasks, enabling educators to concentrate more on instruction and interaction. Data-driven instructional insights provided by AI tools can aid in tailoring pedagogical strategies to optimize student learning outcomes (Selwyn, 2022; Chen et al., 2020).
Despite these promising advantages, concerns arise regarding the dominance of techno-centrism in educational discourse. The design and deployment of educational AI often prioritize algorithmic efficiency, potentially overshadowing critical humanistic educational goals (Selwyn, 2022; Holmes et al., 2021). This techno-centric approach risks dehumanizing the educational experience by reducing students to mere data points, recasting learning as a process of optimization rather than personal growth. Such a paradigm may also diminish the role of teachers, leading to deskilling that minimizes professional judgment, creativity, and essential relational skills in pedagogical practice (Selwyn, 2022; Borenstein & Howard, 2020). Moreover, because AI tends to excel at delivering procedural knowledge, its pedagogical use may come at the expense of fostering critical thinking, creativity, and socio-emotional competencies, narrowing educational practice. This concern is articulated in studies highlighting the importance of maintaining a balanced curriculum that does not rely excessively on AI's strengths in procedural automation but instead supports a holistic educational experience (Park & Kwon, 2023). To counter these pitfalls, educational stakeholders must advocate for human-centered design and pedagogical frameworks that emphasize the depth of learning and the richness of human interaction in educational contexts (Holmes et al., 2021; Park & Kwon, 2023).
While the rise of AI in education brings transformative potential, the risks associated with a techno-centric focus must be carefully navigated. Educational policies and practices should strive to balance AI's efficiencies with an enriching, humanistic learning environment. This paper therefore argues for a paradigm shift from a techno-centric model to a Human-Centered AI Pedagogy (HCAIP). The HCAIP framework emphasizes the teacher-student relationship, critical inquiry, and holistic development as the primary drivers of AI integration in educational contexts. As educational landscapes become increasingly infused with AI-driven technologies, it is imperative that the human aspects of teaching and learning remain at the forefront. This shift aims to mitigate the risks of a purely techno-centric approach that neglects intrinsic educational values such as empathy, creativity, and interpersonal communication (Singh et al., 2022; Park & Kwon, 2023).
This paper presents the HCAIP framework, outlining its core principles and how it re-centers educational practices on humanistic elements. In discussing the implications of this framework, the discourse will encompass how it can transform educational practices, enhance the role of educators, and ultimately promote deeper learning experiences. Lastly, the paper will conclude with recommendations for educators, policymakers, and AI developers to collaboratively nurture this shift toward a more human-centered approach in educational AI applications (Park & Kwon, 2023; Mayer et al., 2006). This paper aims to contribute to the evolving conversation on AI in education by advocating for an alignment of technological advancements with fundamental educational values and goals. Emphasizing the critical need for a balance between technology and human values will further ensure that AI supports rather than supplants the rich, complex processes inherent in teaching and learning (Park & Kwon, 2023; Mayer et al., 2006).
LITERATURE REVIEW
Artificial Intelligence (AI) in Educational Practice
The current state of AI in educational practice is marked by the prevalence of several dominant forms, namely Adaptive Learning Systems, Intelligent Tutoring Systems (ITS), and Learning Analytics Dashboards. Each of these technologies embodies specific pedagogical assumptions and approaches to learning that significantly shape educational experiences.
Adaptive Learning Systems use algorithms to tailor educational content to individual learner needs, aiming to optimize learning outcomes by adjusting complexity and support levels based on real-time performance data. This model is rooted in behaviorist principles, focusing on measurable outcomes and reinforcement strategies (Kesgin, 2025). Through their emphasis on personalized content delivery, these systems promote efficient learning pathways and operationalize cognitive load theory by managing the amount of information presented to students at any one time.
Intelligent Tutoring Systems (ITS) are arguably among the most sophisticated AI technologies in education, providing real-time feedback and personalized instruction similar to the interactions one might expect from a human tutor. The design of ITSs typically draws on behavioral and cognitive theories of learning, incorporating strategies that engage students through problem-solving and practice (Laaziri et al., 2018; Ji & Yuan, 2022). The underlying cognitive approach is evident in how these systems assess student performance and adapt instructional methods accordingly, often prioritizing procedural skills over deeper conceptual understanding.
Learning Analytics Dashboards provide educators and institutions with insights generated from student interactions and data analytics. These dashboards are powerful tools for monitoring performance trends and informing instructional strategies; however, they also reflect a technocratic perspective that may prioritize quantifiable data over qualitative educational experiences (Bozkurt et al., 2021). As such, they often perpetuate a behaviorist framework by reducing the complex nature of learning to statistics and metrics that can be readily analyzed and acted upon.
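To make the adaptive mechanism concrete, the sketch below illustrates, in deliberately simplified form, the kind of rule such systems embody: item difficulty is raised or lowered from a rolling window of recent responses. The class name, thresholds, and five-point difficulty scale are hypothetical choices made for illustration and are not drawn from any of the systems cited above.

```python
from collections import deque

class AdaptiveItemSelector:
    """Toy illustration of an adaptive learning rule: adjust item
    difficulty from a rolling window of recent student responses."""

    def __init__(self, window_size: int = 5, start_level: int = 3):
        self.recent = deque(maxlen=window_size)  # 1 = correct, 0 = incorrect
        self.level = start_level                 # difficulty on a 1-5 scale

    def record_response(self, correct: bool) -> None:
        self.recent.append(1 if correct else 0)

    def next_level(self) -> int:
        if len(self.recent) < self.recent.maxlen:
            return self.level                    # not enough evidence yet
        accuracy = sum(self.recent) / len(self.recent)
        if accuracy >= 0.8:                      # sustained success: harder items
            self.level = min(self.level + 1, 5)
        elif accuracy <= 0.4:                    # sustained struggle: more support
            self.level = max(self.level - 1, 1)
        return self.level

selector = AdaptiveItemSelector()
for answer in [True, True, False, True, True, True]:
    selector.record_response(answer)
    print("next difficulty:", selector.next_level())
```

Even this toy rule makes the behaviorist orientation visible: what counts is observable correctness over a short horizon, not the quality of the reasoning behind an answer.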
The pedagogical assumptions underlying these technologies often reflect a blend of behaviorism and cognitivism. While behaviorism focuses on observable behavior and reinforcement, cognitivism emphasizes the internal processes of thinking and understanding (Ning et al., 2024). Because AI systems in education arise primarily from these behavioral and cognitive theories, they predominantly advance a model of education that prioritizes rote learning, skill acquisition, and efficiency over creativity and critical inquiry. This gives rise to concerns about the superficiality of learning such systems might engender if they are not complemented by human-centered pedagogical practices that include opportunities for exploration and self-directed learning (Zhang & Shang, 2015). While the integration of AI technologies in education offers powerful tools for enhancing personalized learning experiences, the pedagogical frameworks underlying these technologies require examination and critique. An over-reliance on behaviorist and cognitivist models risks narrowing the educational landscape, potentially sidelining essential elements such as creativity, critical thinking, and the relational dynamics of teaching and learning.
The discourse surrounding the integration of technology in education often assumes a neutral stance toward the tools and systems employed. Drawing on critical theory and educational philosophy, however, this assumption of neutrality must be challenged. Technological solutionism, the belief that technology can adequately resolve complex educational problems, raises significant concerns regarding algorithmic bias, the ethics of data collection, and the historical record of educational technology's purported transformative impact.
Algorithmic Bias and Societal Inequities
One of the central critiques of technological solutionism in education is the prevalence of algorithmic bias. Educational algorithms, designed to personalize learning experiences, can reflect and amplify existing societal biases, thereby perpetuating inequities among different student demographics. Research indicates that biases in algorithms often stem from the data used to train them, which may lack representation of marginalized groups (Baker & Hawn, 2021; Dieterle et al., 2022). According to Baker and Hawn, racial, ethnic, and socioeconomic biases are particularly concerning, and without rigorous scrutiny, these biases can lead to systematic disadvantages for underrepresented students in educational settings (Baker & Hawn, 2021). Thus, the belief that AI and technology serve as neutral tools oversimplifies the complexities surrounding their deployment and overlooks the social and ethical implications inherent in their design and application.
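As a concrete, purely hypothetical illustration of the kind of scrutiny such findings call for, the sketch below compares the false-positive rate of an "at-risk" prediction across two student groups; the records, group labels, and decision rule are invented for the example and stand in for whatever audit data an institution would actually hold.

```python
# Hypothetical audit: compare false-positive rates of an "at-risk"
# prediction across two student groups.
records = [
    # (group, predicted_at_risk, actually_at_risk)
    ("A", True,  False), ("A", False, False), ("A", True,  True),
    ("A", False, True),  ("B", True,  False), ("B", True,  False),
    ("B", False, False), ("B", True,  True),
]

def false_positive_rate(group: str) -> float:
    """Share of students in the group who were not at risk but were
    flagged as at risk by the model."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives) if negatives else 0.0

for g in ("A", "B"):
    print(f"Group {g}: false-positive rate = {false_positive_rate(g):.2f}")
# A persistent gap between the two rates would be a signal to pause and
# examine the training data and model before acting on its predictions.
```

A check of this kind does not remove bias, but it makes visible the disparities that an unexamined claim of algorithmic neutrality would conceal.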
Ethics of Student Data Collection and Surveillance
Another critical issue relates to the ethics of data collection and surveillance inherent in many AI educational technologies. The systematic gathering of student data, while intended to enhance learning experiences, raises significant privacy concerns. The vast amounts of data collected can lead to surveillance-like practices that undermine student autonomy and trust (Dieterle et al., 2022). Ethical dimensions of data use have often been inadequately examined in many educational contexts, leading to potential exploitation of student information without their informed consent (Dieterle et al., 2022). This surveillance aspect challenges the notion of technology as merely a facilitator, highlighting instead a regulatory role that could infringe upon student rights and dignity.
Historically, the advent of new educational technologies has often been accompanied by hyperbolic claims regarding their transformative potential. While the intent behind integrating technology is usually to enhance learning experiences and outcomes, such assertions frequently overlook the fundamental role of human relationships and pedagogical practices in education. Scholars such as Katsaris and Vidakis observe that, despite advances in adaptive e-learning systems, these technologies offer only limited insight into the holistic educational experience, which is inherently relational and complex (Katsaris & Vidakis, 2021). An emphasis on technological solutions can lead to the neglect of critical pedagogical approaches that foster deep learning, critical inquiry, and emotional engagement; when dominant narratives surrounding technology are prioritized, the essential human factors pivotal to meaningful education risk being undermined (Sahlgren, 2021). Critiques grounded in critical theory and educational philosophy thus expose the shortcomings of assuming neutrality in technology within educational settings: algorithmic biases can perpetuate societal inequities, ethical concerns regarding data surveillance challenge privacy norms, and the historical tendency to overestimate the impact of technology risks diminishing the role of humanistic pedagogy. Navigating these complexities requires a more nuanced approach, one that emphasizes collaboration between technology and the essential elements of teaching and learning that fundamentally shape educational outcomes.
Foundations for a Human-Centered Approach
To ground a Human-Centered Approach in education, it is essential to engage with foundational learning theories that emphasize the social, cultural, and relational aspects of learning. Three key theoretical anchors highlight the significance of these dimensions in educational contexts: Vygotsky's Sociocultural Theory, Dewey's Experiential Learning, and Freire's Critical Pedagogy.
Vygotsky’s Sociocultural Theory
Vygotsky’s Sociocultural Theory posits that social interaction plays a fundamental role in cognitive development, particularly through the concept of the Zone of Proximal Development (ZPD). This concept emphasizes the potential growth a learner can achieve with guidance from a more knowledgeable other, such as a teacher or peer. Vygotsky argued that learning occurs within social contexts, whereby cultural tools, including language, facilitate cognitive growth. This perspective underscores the relational aspect of learning, demonstrating that knowledge is co-constructed through interactions. Thus, this theory challenges the individualistic views often associated with technology, asserting that educational success relies heavily on the teacher-student relationship and collaborative learning environments.
Dewey’s Experiential Learning
Dewey’s philosophy of education centers on experiential learning, where learners actively engage with their environments and face real-world problems. Dewey stressed the importance of inquiry-based processes, whereby learners reflect on their experiences to construct meaning and knowledge. His approach promotes the notion that knowledge is not a static entity but a dynamic process shaped through interaction with the world. Learning technologies that support experiential learning must therefore facilitate exploration, inquiry, and critical thinking rather than simply delivering predetermined content. By embedding real-world relevance into the learning process, educators can create opportunities for students to interact meaningfully with their social and cultural contexts.
Freire’s Critical Pedagogy
Freire’s Critical Pedagogy further expands the relational aspect of education by emphasizing dialogue and problem-posing education. He argued against the “banking model” of education, where students are viewed as passive recipients of knowledge. Instead, Freire advocated for an educational framework that empowers students to question and challenge the status quo, fostering critical consciousness about their sociocultural realities. Freire’s approach asserts that education should be a collaborative process, driven by dialogue that respects diverse perspectives and experiences. This emphasis on dialogue highlights the importance of community and relational dynamics in education, challenging the notion that technology can replace the nuanced interactions that occur in a traditional classroom setting.
The application of Vygotsky’s, Dewey’s, and Freire’s theories in the context of a Human-Centered Approach inscribes social, cultural, and relational dimensions into the fabric of educational practices. Recognizing these foundations enables educators and technologists to cultivate learning environments that prioritize meaningful interactions among students and between students and teachers, ultimately supporting holistic development and critical engagement.
Conceptual Gap
Drawing the analysis to a conclusion, it is evident that while significant research has been conducted on the technical capabilities of Artificial Intelligence (AI) in education, there remains a considerable gap in the literature regarding the synthesis of technological advancements with humanistic pedagogical approaches. This paper aims to bridge that gap by theorizing how AI can serve more humanistic educational ends rather than merely technical or mechanistic goals.
Existing scholarship has extensively explored the functionalities of AI technologies, such as adaptive learning systems and intelligent tutoring systems, detailing their potential to enhance engagement, personalization, and efficiency in educational settings (Göksel & Bozkurt, 2019; Zandri, 2025). However, the discussions surrounding these technologies often fail to adequately address the ethical implications, human relational dynamics, and pedagogical frameworks essential for genuine learning experiences. For example, while AI tools can drive personalized education, they need to be implemented in ways that honor the social and cultural contexts of learners, as emphasized by Vygotsky, Dewey, and Freire (Göksel & Bozkurt, 2019; Zandri, 2025). Moreover, the discourse surrounding humanistic pedagogy tends to remain separate from discussions on educational technology, leading to a disconnect that neglects fundamental aspects of the teaching and learning relationship (Adeleye et al., 2024). As such, there is an urgent need for conceptual work that integrates these strands, effectively theorizing a Human-Centered AI Pedagogy that emphasizes the importance of a collaborative, inquiry-driven learning environment. This paper seeks to fill the conceptual void by proposing a framework that illustrates how AI technologies can be used not simply as tools for efficiency but as broader facilitators for meaningful, humanistic educational experiences. It will explore the intersections of technology and pedagogy, aiming to construct a coherent vision that leverages AI for the holistic development of learners.
THE CONCEPTUAL FRAMEWORK: HUMAN-CENTERED AI PEDAGOGY (HCAIP)
Introduction to the HCAIP Framework
The core goal of the Human-Centered AI Pedagogy (HCAIP) framework is to ensure that AI is implemented as a tool that augments human intelligence and fosters connections among educators and students, rather than attempting to replace these essential human elements. This vision reframes AI from being seen as a mechanistic solution and instead presents it as a “scaffold”, a supportive structure that enhances teaching and learning processes. Visualizing AI in this manner signifies that it should serve as a supportive platform for teachers, acting as a “Socratic partner” that stimulates inquiry and promotes interactive dialogues, rather than a mere resource on an automated assembly line. Consequently, the HCAIP framework seeks to cultivate an educational environment where AI empowers individuals, nurturing their capabilities and facilitating collaboration in the learning journey.
Pillar 1: Teacher / Educator as Pedagogical Designer (AI for Augmentation)
Within the HCAIP framework, one of the foundational pillars emphasizes the role of teachers as pedagogical designers empowered through AI for augmentation. AI technologies should be designed to handle rote, administrative, or data-processing tasks, such as the initial grading of factual quizzes or the curation of educational resources, thereby freeing up valuable teacher time for high-impact human activities. When teachers are liberated from routine tasks, they can focus on mentoring students, facilitating complex discussions, providing nuanced emotional support, and designing creative learning experiences that engage students more profoundly. In this model, the teacher remains the "human-in-the-loop," essential for making final pedagogical decisions based on AI-provided insights. This balance leverages the efficiency of AI while ensuring that educational decisions remain rooted in the professional wisdom and relational understanding that human educators bring to the classroom (Akgün & Greenhow, 2021; Onesi-Ozigagun et al., 2024). Ultimately, this empowers educators to enrich their teaching practices through thoughtful integration of AI, cultivating deeper connections with their students and fostering collaborative learning environments.
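A minimal sketch of this "human-in-the-loop" arrangement, under the assumption of some upstream automated scorer whose details are not specified here, is shown below: the system may pre-fill a proposed score for a factual quiz item, but low-confidence cases are routed directly to the teacher, and the teacher's decision is final in every case. The names and the confidence threshold are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    student_id: str
    proposed_score: float   # 0.0-1.0, produced by some automated scorer
    confidence: float       # the scorer's own confidence estimate

def route(suggestion: Suggestion, confidence_floor: float = 0.85) -> str:
    """Decide whether an AI-proposed grade may be pre-filled or must go
    straight to the teacher; the teacher can override either outcome."""
    if suggestion.confidence < confidence_floor:
        return "teacher_review"                       # uncertain: human judgment first
    return "pre_filled_pending_teacher_confirmation"  # routine: teacher confirms

batch = [
    Suggestion("s01", proposed_score=0.9, confidence=0.95),
    Suggestion("s02", proposed_score=0.4, confidence=0.55),  # ambiguous answer
]
for s in batch:
    print(s.student_id, "->", route(s))
```

The point of the sketch is the division of labour rather than the scoring itself: the automated step only filters and proposes, while pedagogical authority stays with the educator.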
Pillar 2: Student as Critical Agent (AI for Literacy)
The second pillar of the HCAIP framework centers on empowering students as critical agents rather than passive users of AI. To fully harness AI’s potential, the curriculum must incorporate “Critical AI Literacy,” which involves helping students understand how algorithms work, where data comes from, and the existence of bias that can influence AI outputs. By cultivating this literacy, educators can empower students to approach AI tools responsibly and with a critical mindset, questioning outputs and understanding the broader ethical implications of AI in society. Through this emphasis on critical literacy, students engage not only with technological tools but also with the ethical dimensions of their applications. This initiative prepares them to navigate a digital world shaped by AI while also challenging them to be informed consumers and potentially creators of intelligent systems (Winckelmann, 2023; Sahlgren, 2021). Central to this pillar is the goal of fostering a generation that actively questions and collaborates with AI, ensuring that they are equipped to address the ethical dilemmas posed by technology in their communities and beyond.
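One possible classroom exercise for building this literacy, offered purely as a hypothetical illustration rather than a prescribed activity, is to let students watch how the composition of training data drives an algorithm's output. The toy "model" below simply predicts the label it has seen most often, so changing the sample changes the prediction and opens discussion about where data comes from and whom it represents.

```python
from collections import Counter

def majority_label_predictor(training_labels):
    """A deliberately simplistic 'model': it always predicts the label it
    has seen most often, so its output mirrors its training sample."""
    return Counter(training_labels).most_common(1)[0][0]

broad_sample  = ["pass", "fail", "fail", "pass", "fail", "pass", "fail"]
skewed_sample = ["pass"] * 9 + ["fail"]   # one outcome is barely represented

print("broad sample predicts: ", majority_label_predictor(broad_sample))
print("skewed sample predicts:", majority_label_predictor(skewed_sample))
# Discussion prompts: who collected the data, who is missing from it, and
# what happens to the students the "model" has effectively never seen?
```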
Pillar 3: Learning as Collaborative Inquiry (AI for Connection)
The third pillar of the HCAIP framework redefines the narrative surrounding AI from being solely a tool for individualized learning to one that facilitates collaborative inquiry. AI technologies can serve as dynamic information resources for group projects or simulations of complex systems that students can analyze collectively. This collaborative dimension challenges the typical application of AI in education and invites students from diverse backgrounds to engage in shared experiences, fostering creativity and critical thinking. Furthermore, AI can facilitate global connections among classrooms, enabling students to collaboratively address complex, open-ended problems that require diverse perspectives and teamwork. By positioning AI as a catalyst for collaboration, the HCAIP framework emphasizes the importance of human interaction and creativity, reinforcing that technology should enhance, rather than detract from, the richness of social learning experiences (Ray & Ray, 2024; Animashaun et al., 2024). This collaborative inquiry not only enriches the learning experience but also prepares students for complex, real-world challenges requiring dynamic teamwork and innovative problem-solving abilities.
The Human-Centered AI Pedagogy framework presents a multi-faceted approach that seeks to harmonize AI technology with essential human elements in education. By focusing on teachers as designers, students as critical agents, and learning as collaborative inquiry, this framework aims to redefine the educational landscape through a lens that prioritizes human-centric values and connections, ultimately paving the way for a more equitable and innovative future in education.
DISCUSSION AND EXPECTED OUTCOMES
The Redefined Role of the Educator
The implementation of the Human-Centered AI Pedagogy (HCAIP) framework signifies a transformative shift in the role of educators from being mere "information deliverers" to becoming "learning architects," "ethical guides," and "community builders." This evolution recognizes that the integration of AI and educational technology is not meant to minimize the teacher's impact but rather to enhance it. Educators are now positioned to orchestrate learning experiences that are dynamic and reflect the diverse needs of their students. As "learning architects," teachers design learning environments that facilitate inquiry and foster critical thinking, rather than simply transmitting knowledge. They become "ethical guides," ensuring that the use of AI in the classroom adheres to ethical practices that prioritize students' well-being and privacy. Moreover, as "community builders," teachers foster collaborative learning cultures where students engage with one another in meaningful dialogue and cooperative problem-solving. This shift professionalizes, rather than de-skills, the role of the teacher by emphasizing their expertise in pedagogy, their ethical responsibilities, and the importance of fostering community (Jabsheh, 2024).
The Empowered Student Experience
Within the HCAIP model, the student experience is reimagined with a focus on increasing agency, emphasizing higher-order thinking, and fostering a nuanced understanding of the digital world they inhabit. The curriculum is designed to empower students to engage actively in their learning processes rather than passively consuming information. Given access to AI tools and platforms, students are encouraged to contribute to their own education actively, cultivating critical, collaborative, and creative competencies essential for their future endeavors. Students develop these competencies through engagement with meaningful and authentic tasks that go beyond rote memorization. The focus shifts to inquiry-based and project-driven learning, where students can explore real-world issues, collaborate with peers, and engage in complex problem-solving tasks (Bi, 2023; Cong‐Lem, 2022). By fostering critical reflection on their use of digital tools, students not only gain content knowledge but also become aware of the ethical implications of technology, preparing them to navigate the complexities of a digitized society (Nawab, 2023; Bi, 2023).
Implications for AI Technology Design and Policy
The successful implementation of the HCAIP framework necessitates a paradigm shift in the design philosophy of educational technology developers, moving from automation to augmentation. This involves creating AI-driven tools that support and enhance human actions rather than attempting to replace them. Collaboration between educators and technology developers is essential, as co-designing tools can ensure that they address real pedagogical needs, align with ethical standards, and enhance learning experiences (Meguid & Collins, 2017; Liaw et al., 2019). Moreover, educational policymakers must prioritize robust professional development for educators, focusing not solely on technical skills but also on pedagogy and ethical considerations. Such training should equip teachers to effectively integrate AI in their classrooms, allowing them to leverage these technologies to enhance their pedagogical practices and student outcomes (Newman, 2018; Erbil, 2020). This policy direction stresses the importance of preparing educators to navigate the ethical landscape of AI in education while ensuring equitable access and meaningful use of technology across diverse learning contexts.
Potential Challenges
While the HCAIP framework outlines a promising path forward, several potential barriers to its implementation must be acknowledged. First, significant investment in teacher training is essential to ensure that educators are adequately prepared to utilize AI tools effectively. Without proper training, the potential advantages of AI integration could be compromised. Second, issues of equitable access to technology remain a critical concern. Disparities in access can exacerbate existing inequalities among students, hindering the effectiveness of AI-enhanced learning environments. Schools and policymakers need to address these access disparities by investing in infrastructure and resources that ensure all students can benefit from AI technologies (Cong‐Lem, 2022; Wang, 2024). Data privacy and security pose another challenge, particularly in light of the increasing surveillance and data collection associated with educational tools. Ensuring that student data is handled ethically and transparently is essential to foster trust among educators, students, and parents (Wang, 2024; Kakai, 2024). Lastly, the commercial interests of EdTech companies may not always align with pedagogical goals, creating conflicts that could detract from the educational mission. It is crucial for stakeholders to prioritize educational values over profit when it comes to the design and deployment of AI in education (Liaw et al., 2019; Zhou, 2024). While the HCAIP framework offers a compelling vision for integrating AI into education, addressing these implementation challenges will require collaborative efforts from educators, policymakers, and technology developers to ensure that the shift remains truly centered around human values in learning.
CONCLUSION AND RECOMMENDATIONS
This paper has articulated the pressing challenges associated with techno-centrism within the education sector, where educational technologies often prioritize technical efficiencies at the expense of humanistic values. The Human-Centered AI Pedagogy (HCAIP) framework offers a viable alternative by proposing a model that integrates AI for augmentation, literacy, and connection. In emphasizing the collaborative and relational aspects of teaching and learning, the HCAIP framework seeks to re-envision the role of technology as a supportive partner in education rather than a replacement for the human elements that are essential for meaningful learning experiences (Bibi, 2024; Popenici & Kerr, 2017).
Building on this, the primary contribution of this conceptual paper is to provide a much-needed theoretical roadmap for educators, policymakers, and technologists. This roadmap is designed to guide the ethical and pedagogically sound integration of AI into educational contexts. By foregrounding the principles of human-centered design, critical literacy, and collaborative inquiry, this paper serves as a foundational resource for stakeholders invested in developing educational practices that honor both technological advancements and the inherent human elements of teaching and learning (Donkoh & Amoakwah, 2024; Shneiderman, 2020).
To further advance the HCAIP framework, it is essential to call for empirical research to test its principles in real-world classroom settings. Such research will help contextualize the theoretical claims made in this paper and provide a clearer understanding of how the HCAIP framework operates in practice. Additionally, the development and study of specific AI tools designed explicitly to support the HCAIP model would provide invaluable insights into making technology an effective ally in education. Finally, longitudinal studies examining the long-term impacts of this approach on student development, engagement, and learning outcomes would help to substantiate the benefits of integrating AI within a human-centered framework (Liao & Varshney, 2021; Qian, 2023).
Ultimately, the path of AI in education is not predetermined. By making conscious, value-driven choices, stakeholders can shape a future where technology serves to enhance our shared humanity and deepen our capacity for learning, rather than diminish it. As we embrace the capabilities of AI, it is vital that we ensure these tools foster connections, promote critical agency, and reflect ethical practices, ultimately enriching the educational landscape for all learners (Botella, 2023; Balta, 2023).
REFERENCES
- Adeleye, O., Eden, C., & Adeniyi, I. (2024). Innovative teaching methodologies in the era of artificial intelligence: a review of inclusive educational practices. World Journal of Advanced Engineering Technology and Sciences, 11(2), 069-079. https://doi.org/10.30574/wjaets.2024.11.2.0091
- Akgün, S. and Greenhow, C. (2021). Artificial intelligence in education: addressing ethical challenges in k-12 settings. Ai and Ethics, 2(3), 431-440. https://doi.org/10.1007/s43681-021-00096-7
- Animashaun, E., Familoni, B., & Onyebuchi, N. (2024). Advanced machine learning techniques for personalising technology education. Computer Science & It Research Journal, 5(6), 1300-1313. https://doi.org/10.51594/csitrj.v5i6.1198
- Baker, R. and Hawn, A. (2021). Algorithmic bias in education. International Journal of Artificial Intelligence in Education, 32(4), 1052-1092. https://doi.org/10.1007/s40593-021-00285-9
- Balta, N. (2023). Ethical considerations in using ai in educational research, 2(1), 14205. https://doi.org/10.51853/jorids/14205
- Bi, L. (2023). Application of Vygotsky's SCT in Chinese EFL classroom, 891-900. https://doi.org/10.2991/978-2-494069-97-8_113
- Bibi, A. (2024). Navigating the ethical landscape: ai integration in education. EATP, 1579-1585. https://doi.org/10.53555/kuey.v30i6.5546
- Borenstein, J. and Howard, A. (2020). Emerging challenges in ai and the need for ai ethics education. Ai and Ethics, 1(1), 61-65. https://doi.org/10.1007/s43681-020-00002-7
- Botella, C. (2023). Prospects for creating an ai system to enhance trumpet learning. IJMP, 1(1), 40-48. https://doi.org/10.61629/ijmp.v1i1.33
- Bozkurt, A., Karadeniz, A., Bañeres, D., Guerrero-Roldán, A., & Rodríguez, M. (2021). Artificial intelligence and reflections from educational landscape: a review of ai studies in half a century. Sustainability, 13(2), 800. https://doi.org/10.3390/su13020800
- Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: a review. Ieee Access, 8, 75264-75278. https://doi.org/10.1109/access.2020.2988510
- Cong-Lem, N. (2022). Emotion and its relation to cognition from Vygotsky's perspective. European Journal of Psychology of Education, 38(2), 865-880. https://doi.org/10.1007/s10212-022-00624-x
- Dieterle, E., Dede, C., & Walker, M. (2022). The cyclical ethical effects of using artificial intelligence in education. Ai & Society, 39(2), 633-643. https://doi.org/10.1007/s00146-022-01497-w
- Donkoh, S. and Amoakwah, A. (2024). Use and challenges of learner-centered pedagogy: basic school teachers’ perspective. European Journal of Education and Pedagogy, 5(1), 66-71. https://doi.org/10.24018/ejedu.2024.5.1.774
- Erbil, D. (2020). A review of flipped classroom and cooperative learning method within the context of Vygotsky theory. Frontiers in Psychology, 11. https://doi.org/10.3389/fpsyg.2020.01157
- Göksel, N. and Bozkurt, A. (2019). Artificial intelligence in education, 224-236. https://doi.org/10.4018/978-1-5225-8431-5.ch014
- Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S., … & Koedinger, K. (2021). Ethics of ai in education: towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(3), 504-526. https://doi.org/10.1007/s40593-021-00239-1
- Jabsheh, A. (2024). Behaviorism, cognitivism, and constructivism as the theoretical bases for instructional design. Technium Education and Humanities, 7, 10-28. https://doi.org/10.47577/teh.v7i.10576
- Jabsheh, A. (2024). Relevancy and outlook of the technology-enhanced education within digital contents, resources and tools. ijmer, 3(1), 24-34. https://doi.org/10.32996/ijmer.2024.3.1.4
- Ji, S. and Yuan, T. (2022). Conversational intelligent tutoring systems for online learning: what do students and tutors say? 292-298. https://doi.org/10.1109/educon52537.2022.9766567
- Kakai, H. (2024). The mixed methods treasure hunt: reflecting on the legacy of Dr. Michael D. Fetters in teaching mixed methods research. Journal of Mixed Methods Research, 18(3), 404-414. https://doi.org/10.1177/15586898241252203
- Katsaris, I. and Vidakis, N. (2021). Adaptive e-learning systems through learning styles: a review of the literature. Advances in Mobile Learning Educational Research, 1(2), 124-145. https://doi.org/10.25082/amler.2021.02.007
- Kesgin, K. (2025). Mapping and modeling the role of artificial intelligence in science education: from bibliometrics to classroom integration. https://doi.org/10.21203/rs.3.rs-6542160/v1
- Laaziri, M., Khoulji, S., Benmoussa, K., & Kerkeb, M. (2018). Outlining an intelligent tutoring system for a university cooperation information system. Engineering Technology & Applied Science Research, 8(5), 3427-3431. https://doi.org/10.48084/etasr.2158
- Liao, Q. and Varshney, K. (2021). Human-centered explainable ai (xai): from algorithms to user experiences. https://doi.org/10.48550/arxiv.2110.10790
- Liaw, S., Tan, K., Wu, L., Tan, S., Choo, H., Yap, J., … & Ignacio, J. (2019). Finding the right blend of technologically enhanced learning environments: randomized controlled study of the effect of instructional sequences on interprofessional learning. Journal of Medical Internet Research, 21(5), e12537. https://doi.org/10.2196/12537
- Mayer, R., Johnson, W., Shaw, E., & Sandhu, S. (2006). Constructing computer-based tutors that are socially sensitive: politeness in educational software. International Journal of Human-Computer Studies, 64(1), 36-42. https://doi.org/10.1016/j.ijhcs.2005.07.001
- Meguid, E. and Collins, M. (2017). Students’ perceptions of lecturing approaches: traditional versus interactive teaching. Advances in Medical Education and Practice, Volume 8, 229-241. https://doi.org/10.2147/amep.s131851
- Nawab, A. (2023). Exploring the dilemmas and their influence on teacher identity development during practicum: implications for initial teacher education. International Social Science Journal, 74(251), 53-68. https://doi.org/10.1111/issj.12441
- Newman, S. (2018). Vygotsky, Wittgenstein, and sociocultural theory. Journal for the Theory of Social Behaviour, 48(3), 350-368. https://doi.org/10.1111/jtsb.12174
- Ning, Y., Zhang, C., Xu, B., Zhou, Y., & Wijaya, T. (2024). Teachers’ ai-tpack: exploring the relationship between knowledge elements. Sustainability, 16(3), 978. https://doi.org/10.3390/su16030978
- Onesi-Ozigagun, O., Ololade, Y., Eyo-Udo, N., & Ogundipe, D. (2024). Revolutionizing education through ai: a comprehensive review of enhancing learning experiences. International Journal of Applied Research in Social Sciences, 6(4), 589-607. https://doi.org/10.51594/ijarss.v6i4.1011
- Park, W. and Kwon, H. (2023). Implementing artificial intelligence education for middle school technology education in Republic of Korea. International Journal of Technology and Design Education, 34(1), 109-135. https://doi.org/10.1007/s10798-023-09812-2
- Popenici, S. and Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12(1). https://doi.org/10.1186/s41039-017-0062-8
- Qian, C. (2023). Research on human-centered design in college music education to improve student experience of artificial intelligence-based information systems. Journal of Information Systems Engineering & Management, 8(3), 23761. https://doi.org/10.55267/iadt.07.13854
- Ray, S. and Ray, D. (2024). Artificial intelligence in education: navigating the nexus of innovation and ethics for future learning landscapes. International Journal of Research -Granthaalayah, 11(12). https://doi.org/10.29121/granthaalayah.v11.i12.2023.5464
- Sahlgren, O. (2021). The politics and reciprocal (re)configuration of accountability and fairness in data-driven education. Learning Media and Technology, 48(1), 95-108. https://doi.org/10.1080/17439884.2021.1986065
- Selwyn, N. (2022). The future of ai and education: some cautionary notes. European Journal of Education, 57(4), 620-631. https://doi.org/10.1111/ejed.12532
- Shneiderman, B. (2020). Human-centered artificial intelligence: three fresh ideas. Ais Transactions on Human-Computer Interaction, 109-124. https://doi.org/10.17705/1thci.00131
- Singh, N., Gunjan, V., Mishra, A., Mishra, R., & Nawaz, N. (2022). Seistutor: a custom-tailored intelligent tutoring system and sustainable education. Sustainability, 14(7), 4167. https://doi.org/10.3390/su14074167
- Wang, Z. (2024). Effects of teachers’ roles as scaffolding in classroom instruction. Advances in Vocational and Technical Education, 6(2). https://doi.org/10.23977/avte.2024.060229
- Winckelmann, S. (2023). Predictive algorithms and racial bias: a qualitative descriptive study on the perceptions of algorithm accuracy in higher education. Information and Learning Sciences, 124(9/10), 349-371. https://doi.org/10.1108/ils-05-2023-0045
- Zandri, S. (2025). Innovative digital pedagogies in mathematics and science learning. Jurnal Penelitian Pendidikan Ipa, 11(5), 68-72. https://doi.org/10.29303/jppipa.v11i5.11390
- Zhang, Z. and Shang, X. (2015). Interrogative sentence generation and dialogue management in intelligent tutoring system. https://doi.org/10.2991/ameii-15.2015.11
- Zhou, X. (2024). Sociocultural theory in early childhood education. Lecture Notes in Education Psychology and Public Media, 51(1), 190-196. https://doi.org/10.54254/2753-7048/51/20240981