INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)  
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS | Volume IX Issue XI November 2025  
Reason–Emotion Integration in the Context of Artificial Intelligence:  
The Implications of Aristotle’s Conception of Happiness for Youth  
Moral Education  
Du Yuqian1 & Xin Yuan2*  
1Minzu University of China  
2Universiti Sains Malaysia  
*Corresponding Author  
Received: 01 December 2025; Accepted: 08 December 2025; Published: 09 December 2025  
ABSTRACT  
With rapid advances in AI, modern education faces the challenge of ensuring that independent thinking is not
overshadowed by AI’s rational logic. As AI shapes learning and interaction, students may show less emotional
expression, moral concern, and well-being. Drawing on Aristotle’s eudaimonia, this study explores the integration
of rationality and emotion in the AI context for youth moral education. Aristotle regarded rationality as central
yet valued the role of emotion, holding that practical wisdom leads to virtue and happiness. Education should
therefore foster independent thought beyond instrumental rationality by cultivating emotional experience and
moral sensitivity, balancing reason and emotion. Viewed through Aristotle’s rational-emotional unity, this study
examines AI’s dual impact, which can erode rationality and blunt emotional engagement, and proposes a moral
education model rooted in practical wisdom to revitalize youth virtue education in the AI age.
Keywords: Artificial intelligence; reason and emotion; Aristotle; happiness; youth moral education  
INTRODUCTION  
Artificial intelligence has become a central force in shaping how young people interpret the world, influencing  
not only their access to knowledge but also the ways they form judgments and relate to others emotionally.  
Recent national statistics (CNNIC, 2024) show that a substantial majority of individuals in late adolescence  
depend on algorithmic systems for learning-related decisions, and many now encounter information primarily  
through automated recommendation models. The increasing normalization of such practices suggests that  
computational systems are gradually absorbing responsibilities once reserved for human deliberation, especially  
among younger populations whose capacities for independent assessment are still emerging. This transformation  
evokes the critical reflections of Horkheimer (1947), who warned that when rationality becomes tied to  
optimization and productivity, its moral dimensions risk being sidelined.  
Emotional development has evolved in parallel with these cognitive shifts. Digital platforms designed for rapid  
engagement, especially short-form video services, have fragmented emotional attention and introduced patterns
of interaction that prioritize momentary affective stimulation over meaningful interpersonal resonance. Turkle  
(2011) notes that such environments create a paradox in which constant digital proximity does little to cultivate  
empathy or intimate understanding. As a result, many young individuals come to associate well-being with  
algorithmically triggered novelty rather than with sustained forms of self-realization or intrinsically motivated  
fulfillment (Couldry & Hepp, 2017; Berridge & Kringelbach, 2015).  
A diverse body of scholarship has explored these transformations through different disciplinary lenses. Media  
theorists point out that personalization infrastructures may reduce intellectual openness. Pariser (2011) argues  
that such systems confine users to carefully curated informational micro-worlds. Sunstein (2017) similarly  
suggests that excessive algorithmic filtering diminishes the variety of viewpoints to which individuals are  
exposed, reducing the cognitive friction necessary for critical thinking. Psychological investigations likewise  
associate heightened digital immersion with emotional instability and social withdrawal (Twenge, 2017;  
Valkenburg et al., 2022). Research in technology ethics deepens these concerns by emphasizing how algorithmic  
governance reconfigures moral agency. Floridi (2014) contends that digital environments encourage the  
outsourcing of evaluative judgment to intelligent systems, whereas O’Neil (2016) observes that algorithmic  
decision models often amplify efficiency at the expense of moral nuance. In response, contemporary virtue  
ethicists argue for the continuing relevance of classical moral resources. Annas (2011) frames virtue cultivation  
as a counterweight to technologically induced passivity, while Hursthouse and Pettigrove (2018) highlight  
phronesis as the integrative element that unites rational discernment with emotional intelligence.  
Against this background, Aristotelian ethics offers an alternative paradigm for understanding moral formation  
in technologically saturated contexts. Aristotle proposes that human flourishing (eudaimonia) depends on the  
alignment of thought and emotion through the development of practical wisdom. Phronesis enables individuals  
to respond to ethically significant situations with both insight and emotional appropriateness, suggesting that  
reasoning and affect are mutually reinforcing rather than antagonistic (Kraut, 2018; Nussbaum, 2001). In contrast  
to the fragmented attention economy fostered by algorithmic systems, Aristotelian ethics promotes continuity,  
self-reflection, and the cultivation of character as the foundations of durable well-being.  
The implications for moral education are significant. In Politics, Aristotle (1984) maintains that individual virtue  
is inseparable from collective well-being; thus, educational efforts should aim not only at personal ethical  
development but also at sustaining communal life. His view reframes happiness as something gradually achieved  
through meaningful action, rather than as the fleeting satisfaction delivered by algorithmic recommendation  
loops (Sandel, 2020; Seligman, 2018). Approaching contemporary challenges from this angle suggests that  
rebuilding moral education in the AI era requires reinforcing those capacities—emotional steadiness, reflective  
agency, and responsible judgment—that allow young people to navigate technological pressures without being  
subsumed by them.  
Guided by these considerations, the present study employs Aristotelian eudaimonia as a conceptual lens for  
reconstructing the relationship between reasoning and emotion in the context of AI-mediated moral  
development. The inquiry proceeds by integrating conceptual analysis with contextual interpretation and  
educational reflection. Through an examination of Nicomachean Ethics and Politics, the research identifies key  
Aristotelian insights concerning the interplay of cognitive and emotional capacities. These insights are then  
juxtaposed with contemporary concerns about algorithmic dependency to generate an educational model  
oriented toward practical wisdom. By combining textual interpretation with examples drawn from AI-related  
educational contexts, the study aims to outline pedagogical strategies capable of supporting reflective autonomy  
and emotional resilience in technologically immersed youth (Creswell & Poth, 2018; MacIntyre, 2007). To  
strengthen the practical relevance of the argument, the study also incorporates empirical observations from  
classroom settings and narratives from students engaged in pilot educational interventions, illustrating  
how phronesis can be enacted in AI-saturated learning environments.  
The Rational-Emotional Integration Logic of Aristotle’s Conception of Happiness  
In an era where artificial intelligence is rapidly permeating various aspects of social life, discussions surrounding  
happiness and rationality have regained philosophical urgency. As algorithmic decision-making assumes an  
increasingly significant role in education, governance, and social interactions, the status of human rationality,  
the function of emotions, and the essence of happiness are all confronted with novel challenges. In contrast to  
contemporary technological rationality, Aristotle's understanding of happiness not only emphasizes the harmony  
between reason and emotion within the individual but also underscores the integral connection between virtuous  
practice and public life. His proposed concept of happiness offers profound theoretical resources for current  
discussions on artificial intelligence ethics, affective computing, and moral education. Therefore, based on  
Aristotle's structure of the soul, system of virtues, and view of the political community, this article explores how  
happiness is generated through the integration of reason and emotion, and further analyzes the implications of  
this classical framework for the contemporary artificial intelligence era.  
The Unity of Happiness, Intellectual Virtue, and Moral Virtue  
The rationality embodied by artificial intelligence is fundamentally distinct from Aristotle’s conception of human
rationality. AI systems operate through algorithmic deduction and computational formalism—forms of reasoning  
that are procedural, instrumental, and indifferent to moral purpose. By contrast, Aristotelian rationality is  
teleological and virtue-oriented: it addresses not only how one should act but also why one ought to act toward  
the good. This distinction underscores the central risk of AI rationality, namely its detachment from ethical
ends, and highlights the irreplaceable role of what Aristotle calls “rational activity in accordance with virtue”
(Nussbaum, 1986; MacIntyre, 2007).  
In the Nicomachean Ethics, Aristotle argues that happiness (eudaimonia) is the highest human good and consists  
in the excellent activity of the soul expressed through reason. To clarify how such activity becomes possible, he  
distinguishes three components of the soul: the nutritive aspect, which sustains biological life; the sensitive  
aspect, which generates desires and emotions; and the rational aspect, which alone enables human beings to  
deliberate, judge, and understand (Shields, 2014). Within the rational soul, Aristotle identifies both a part that  
possesses reason and a part that merely responds to it. The first corresponds to intellectual virtue, which can be  
divided into theoretical and practical wisdom, whereas the second provides the basis for moral virtue, which is  
shaped through habit and guided by rational principles.  
Aristotle’s argument for the unity of reason, virtue, and happiness is articulated most explicitly in Ethics I.7,  
where he writes that “human good turns out to be activity of the soul in accordance with virtue, and if there are  
several virtues, in accordance with the best and most complete” (Aristotle, trans. 1925). Reason is thus not a  
static capacity but an active, purposive engagement that directs one toward right action. When emotions such as  
anger, fear, or desire arise, the rational part of the soul functions like a charioteer guiding the impulses of the  
horse, steering them toward temperance, courage, and other excellences of character.  
Emotion, however, is not treated as an adversary to reason. Aristotle stresses that the highest good cannot be  
separated from rightly ordered pleasure. Human actions are deeply shaped by pleasure and pain, which influence  
whether individuals incline toward what is noble or ignoble. As he notes, experiencing emotions “at the right  
time, toward the right objects, for the right reason, and in the right manner” constitutes the moral mean that  
marks virtue (Aristotle, trans. 1925, II.6). Modern virtue ethicists similarly argue that emotion contributes  
motivational force and evaluative insight essential to moral perception (Kristjánsson, 2007).  
Thus, in Aristotle’s framework, happiness is not a cold, purely rational condition; rather, it emerges through the  
harmonious alignment of reason and desire. The virtuous person not only judges correctly but also takes pleasure  
in what is noble. Pleasure becomes evidence that the soul’s elements are ordered toward their proper ends.  
Happiness, therefore, arises from the dynamic interplay of rational discernment and emotionally enriched moral  
activity.  
Practical Wisdom and the Cultivation of Virtue  
AI’s moral reasoning, built upon statistical inference or rule-driven procedures, differs sharply from Aristotle’s
account of phronesis, or practical wisdom. In dilemmas such as the autonomous-vehicle variation of the trolley  
problem, AI systems identify choices that minimize overall harm based on probabilistic assessment. Yet such  
computation cannot replicate the moral sensitivity or context-specific discernment that characterize human  
practical wisdom (Vallor, 2016; Coeckelbergh, 2020). The absence of emotional understanding further prevents  
AI from embodying morally responsible agency.  
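To make the contrast concrete, the following minimal Python sketch illustrates the kind of probabilistic expected-harm minimization described above. It is not drawn from any deployed system; the option names, probabilities, and harm scores are hypothetical, chosen only to show that the decision reduces to ranking options by a single numeric criterion, the step at which the contextual and emotional discernment of phronesis is absent.

```python
# Minimal illustrative sketch of expected-harm minimization; the options,
# probabilities, and harm scores are hypothetical, not from any real system.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    outcomes: dict[str, tuple[float, float]]  # outcome -> (probability, harm score)

    def expected_harm(self) -> float:
        # Weigh each outcome's harm by its estimated probability and sum.
        return sum(p * h for p, h in self.outcomes.values())

options = [
    Option("swerve_left", {"hits_barrier": (0.9, 3.0), "clears_obstacle": (0.1, 0.0)}),
    Option("stay_in_lane", {"hits_pedestrian": (0.4, 9.0), "brakes_in_time": (0.6, 0.0)}),
]

# The "decision" is simply the option with the lowest expected harm; nothing in
# the calculation registers who is affected, in what context, or why it matters.
best = min(options, key=lambda opt: opt.expected_harm())
print(best.name, round(best.expected_harm(), 2))
```

On these invented numbers the sketch selects the swerving option because its expected harm is lower; what it cannot do is perceive the moral salience of the situation, which is precisely what the discussion above identifies as the province of practical wisdom.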
Aristotle views practical wisdom as the cognitive virtue that enables a person to judge what is good in particular  
situations. It bridges universal ethical principles with the concrete circumstances of action. As he explains, the  
practically wise individual must grasp both general norms and the specific features of a situation, just as a  
physician must know not only general dietary principles but also which foods aid the health of a particular patient  
(Aristotle, trans. 1925, VI.7). Practical wisdom thus transforms knowledge into action, making reason operative  
within the unpredictable flux of human life.  
In Aristotle’s moral psychology, practical wisdom also coordinates reason with emotion. Through repeated  
habituation, individuals learn to feel the appropriate amount of fear, anger, or desire, internalizing virtuous  
emotional patterns that support correct judgment (Lear, 2014). In this way, emotion does not obstruct rationality;  
instead, virtuous emotions reinforce rational aims by motivating individuals to pursue the good. Happiness,  
understood as a form of active flourishing (energeia), becomes attainable only when rational insight and  
emotional disposition work together in harmony.  
Aristotle further argues that virtue is neither innate nor contrary to human nature; rather, human beings are  
naturally capable of receiving virtue but must perfect it through habituation. Thus, rationality sets the direction  
of moral development, while emotional discipline consolidates it into stable character (Hursthouse & Pettigrove,  
2018). Virtue, cultivated over time through practice and guided by reason, becomes the foundation of a  
flourishing life.  
The Conception of Happiness from Individual Perfection to the Common Good  
Aristotle’s account of happiness also extends beyond individual excellence to encompass the well-being of the  
political community. Human beings, he famously asserts, are “political animals” who can realize their good only  
within a structured civic order (Politics 1253a). Contemporary scholarship likewise emphasizes that virtues  
flourish within supportive social institutions and interpersonal networks (Miller, 2011). Thus, the achievement  
of personal happiness presupposes a community that cultivates and sustains the conditions for virtue.  
Individual virtues develop through action in relational contexts; justice, courage, and temperance require  
situations involving others. As Aristotle remarks, “we become just by doing just acts” (Aristotle, 1999, Ethics  
1103b). Such moral interactions presuppose a political structure that protects individuals, promotes civic  
friendship, and fosters opportunities for meaningful contribution. A well-ordered state therefore becomes a  
prerequisite for both moral development and the attainment of happiness. As Aristotle concludes, the state exists  
for the sake of “living well,” not merely living (Politics 1252b). Education functions as the principal social  
pathway for creating this environment. Aristotle insists that public education must be directed toward cultivating  
the virtues that enable citizens to pursue the good life (Politics 1337a). Because the character of a political regime  
is reflected in the character of its citizens, education becomes essential not only for personal flourishing but also  
for the preservation and improvement of the constitution. Modern educational theorists similarly contend that  
moral and civic formation are indispensable for democratic stability and human development (Carr, 2021;  
Curren, 2013).  
Therefore, Aristotle’s conception of happiness is simultaneously personal and civic: individuals can attain their  
fullest good only within a community that provides moral guidance, emotional cultivation, and institutional  
support. In the age of AI—where algorithmic systems risk narrowing moral agency—Aristotle’s integrated  
model of reason, emotion, virtue, and community offers an important reminder that flourishing must remain a  
fundamentally human, relational, and ethical endeavor.  
Artificial Intelligence, Human Reason, and the Fragmentation of Emotional Life: A Reconstructed  
Framework  
The rapid advancement of contemporary artificial intelligence technology not only transforms social structures  
and knowledge production methods but also profoundly impacts the operational logic of human rational  
activities and emotional lives. As algorithm-driven prediction, optimization, and quantification increasingly  
permeate education, public governance, and daily interactions, humanity’s pre-existing moral judgment patterns  
and emotional experiences face unprecedented reshaping. In this context, the computational rationality  
represented by AI is constantly expanding, while the space for humanistic reason, emotional complexity, and  
moral deliberation appears to be gradually shrinking. Especially among young people, the emotional  
environment and interaction methods constructed by algorithmic systems further exacerbate the risks of  
emotional fragmentation, weakened self-understanding, and decreased moral sensitivity. The field of education  
has not been spared either: institutional logic that emphasizes quantitative evaluation and technical efficiency  
can easily reinforce algorithmic rationality, leading to the marginalization of emotional cultivation and moral
practice. Therefore, in an era when artificial intelligence deeply permeates social life, it is
urgent to re-examine the impact of computational rationality on the structure of human values and to explore a  
reconstructive framework that can integrate technological critique, emotional literacy, and practical wisdom to  
ensure that moral subjectivity and emotional depth can be maintained and developed in a rapidly changing  
technological world.  
Algorithmic Governance and the Contraction of Humanistic Reasoning  
The rapid expansion of artificial intelligence has ushered in a mode of reasoning that diverges sharply from the  
texture of human judgment. Instead of enhancing individuals’ capacities for autonomous reflection, AI often  
reshapes the very conditions under which decisions are made by placing users within architectures governed by  
computational priorities. These systems elevate prediction, optimization, and numerical evaluation as the  
primary markers of “good” reasoning, subtly steering individuals toward instrumental aims. As Habermas  
(1984/1987) cautions, when technological rationality migrates into cultural and interpersonal spheres, it can  
disrupt the domains in which meaning, ethical discourse, and mutual understanding typically unfold. This  
dynamic becomes increasingly visible as algorithmic systems filter information flows, guide communication  
patterns, and silently structure the range of actions individuals perceive as available.  
What is most transformative, however, is not that algorithms offer assistance, but that they tacitly redefine what  
counts as rational inquiry. Computational models generate outputs through correlations rather than experiential  
insight, ethical reflection, or contextual interpretation. Over time, this may cultivate the belief that what is  
calculable is inherently more objective or reasonable (Zuboff, 2019). Such a shift constricts the epistemic  
landscape, diminishing the cultural and educational significance of narrative, emotional intelligence, imaginative  
exploration, and other forms of humanistic understanding (Nussbaum, 2010).  
The practical consequences of this reorientation are increasingly evident. Automated service platforms often  
produce decisions that satisfy formal efficiency criteria yet neglect compassion or social vulnerability. During  
emergencies, algorithmic distribution systems have at times marginalized those whose data profiles do not align  
with optimization goals, exposing the ethical limitations embedded within rule-based computation (O’Neil,  
2016). As these systems become normalized, they risk habituating the public to a technocratic worldview in  
which speed, standardization, and quantifiability overshadow considerations of justice, relational care, or dignity.  
This drift fosters an evaluative hierarchy in which unmeasurable values—such as empathy, trust, or reciprocal  
responsibility—appear secondary or even irrational (Bauman, 2000).  
Moreover, the dominance of algorithmic logic encourages individuals to interpret social issues through the lens  
of efficiency rather than moral significance. When humanistic practices are dismissed as slow or imprecise,  
emotional nuance and ethical reasoning gradually lose their cultural legitimacy. In such an environment, the  
space for reflective moral judgment contracts, raising urgent questions about how societies can preserve the  
depth and richness of human rationality in the midst of pervasive computational governance.  
Emotional Dislocation and the Youth Experience in AI-Mediated Environments  
University students today encounter artificial intelligence not simply as a set of digital tools but as an affective  
and relational environment that shapes how they experience themselves and others. Social media platforms,  
entertainment algorithms, and academic technologies subtly reconfigure emotional life by structuring attention,  
communication patterns, and expectations of interpersonal interaction. A substantial body of psychological and  
sociological research shows that technologically mediated communication often compresses emotional  
expression, streamlines responses, and narrows opportunities for rich, embodied engagement (Turkle, 2011;  
Valkenburg et al., 2022). These shifts recalibrate the emotional texture of daily experience in ways that are rarely  
noticed but deeply consequential.  
Because digital spaces allow finely curated self-presentation and effortless withdrawal, they frequently dilute  
the formative experiences that contribute to emotional maturity. Situations involving ambiguity, interpersonal  
friction, or sustained vulnerability—conditions essential for developing resilience and empathy—are often  
avoided or rendered less intense. Bauman’s (2000) account of “liquid connections” captures this dynamic:  
relationships in online environments emerge quickly, dissolve easily, and lack the enduring commitments forged  
through shared physical presence.  
Students’ pursuit of happiness is also reshaped in this environment. AI systems are adept at producing rapid,  
low-effort stimuli—momentary pleasures that resemble satisfaction but do not cultivate meaning or long-term  
well-being. As Seligman’s (2011) positive psychology framework suggests, genuine flourishing requires
engagement, purpose, and supportive relationships, none of which are automatically sustained by algorithmically  
curated entertainment. As students move between streams of micro-gratification, their emotional lives may  
become fragmented, weakening their capacity for deep reflection or sustained attention. Over time, such  
platforms can recalibrate reward thresholds, making slower, effortful, and intrinsically motivated activities feel  
less appealing.  
At the same time, the moral dimensions of students’ development face new pressures. As more ethical or  
evaluative judgments are delegated to recommendation systems, automated decision tools, and AI-generated  
interpretations, students encounter fewer moments that demand careful thinking about human consequences or  
moral complexities. Floridi’s (2014) notion of a redistribution of moral labor describes how individuals gradually  
relinquish aspects of agency to intelligent systems, normalizing reliance on automated guidance. This process  
risks fostering emotional detachment, reducing empathetic responsiveness, and eroding one’s sense of  
accountability.  
These tendencies are particularly concerning because university years traditionally provide a crucial period for  
cultivating emotional presence, moral imagination, and relational stability. When AI-mediated environments  
dominate students’ daily routines, the development of these capacities can become uneven or stunted. Without  
intentional counterbalances—such as sustained interpersonal dialogue, experiential learning, or reflective  
practice—students may enter adulthood with a diminished ability to navigate complex relationships or assume  
moral responsibility in uncertain situations. In this sense, the emotional and ethical implications of AI mediation  
extend far beyond technological convenience; they shape the core capacities required for mature participation in  
shared social and moral life.  
Risks to Moral Formation in AI-Saturated Educational Contexts  
Within contemporary educational environments, the growing imbalance between the cultivation of rationality  
and the development of emotional intelligence presents a significant systemic challenge to effective moral  
formation. The increasing integration of AI-based technologies into schooling has the potential to amplify  
tendencies that are already present within the structure of the modern university: a curricular prioritization of  
analytical achievement and measurable outcomes, evaluative structures that are heavily centered on quantifiable  
performance metrics, and a limited institutional emphasis on the crucial importance of emotional or relational  
development among students (Noddings, 2013). When these existing tendencies intersect and become  
intertwined with sophisticated algorithmic systems, students are increasingly likely to encounter a version of  
rationality that is effectively stripped of its inherent moral texture and human context. This can lead to a situation  
where ethical considerations are treated as secondary to efficiency and optimization. Furthermore, an  
overreliance on computational tools and data-driven analysis may inadvertently encourage students to perceive  
complex ethical problems as being readily solvable through the application of standardized procedural rules or  
objective data interpretations, leading to a reduction in their sensitivity to the subtle contextual nuances and  
human complexities that are inherent in many real-world ethical dilemmas. This gradual shift in perspective  
increases the likelihood of what might be termed moral minimalism—a problematic dependence on simplistic  
heuristics and readily available algorithms rather than sustained, critical ethical reflection and thoughtful  
deliberation (MacIntyre, 2007).  
One of the key risks associated with this trend is the algorithmic shaping of value horizons. Personalized  
information ecosystems, driven by sophisticated algorithms, subtly construct normative environments that can  
significantly influence students' perceptions of right and wrong. As these algorithms continuously reinforce  
existing preferences and biases, the richness and diversity of value pluralism tends to diminish, and students may  
find themselves increasingly exposed to narrower and more homogenous moral vocabularies. This form of  
intellectual and ethical confinement can significantly impede the formation of independent ethical perspectives  
and reduce students' awareness of alternative moral frameworks and ways of thinking about complex issues.  
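As a purely illustrative sketch of the preference-reinforcing loop described above (the content categories, starting weights, and one-percent reinforcement rule are hypothetical, not a description of any real platform), the short Python loop below shows how a recommender that slightly up-weights whatever a user has just been shown tends to drift toward an ever-narrower slice of content, which is the mechanism of value-horizon narrowing discussed here.

```python
# Purely illustrative sketch of a preference-reinforcing recommendation loop;
# the categories, starting weights, and 1% reinforcement rule are hypothetical.

import random

# Current preference weights over content categories, with a mild initial tilt.
weights = {"civic": 1.0, "entertainment": 1.2, "consumer": 1.0, "ethics": 1.0}

def recommend(weights: dict[str, float]) -> str:
    """Sample one category in proportion to the current weights."""
    return random.choices(list(weights), list(weights.values()))[0]

for _ in range(1000):
    shown = recommend(weights)
    # Engagement feedback: whatever was just shown becomes slightly more
    # likely to be shown again (a rich-get-richer dynamic).
    weights[shown] *= 1.01

total = sum(weights.values())
shares = {category: round(w / total, 3) for category, w in weights.items()}
# After many iterations the distribution typically concentrates on whichever
# category happened to be reinforced early, crowding out the others.
print(shares)
```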
Another significant concern is the potential for emotional hollowing of moral judgment. Genuine moral  
understanding requires a certain degree of emotional attunement—including empathy, compassion, and the  
capacity to perceive and understand the human stakes involved in a given situation. When decision-making  
processes become heavily mediated by technological systems that are inherently indifferent to emotional  
meaning and human suffering, students may gradually internalize a similar sense of indifference, leading to a  
detachment from the emotional consequences of their actions. Over time, the cognitive structure of moral  
reasoning may remain superficially intact, yet its emotional foundation becomes progressively eroded, resulting  
in a troubling sense of moral hollowing.  
Addressing these multifaceted risks and challenges requires a fundamentally re-envisioned model of moral  
education that is firmly grounded in a triadic framework. This framework should encompass: first, the  
development of critical awareness regarding the inherent limits of algorithmic rationality and the potential biases  
embedded within technological systems; second, the cultivation of emotional literacy and the ability to engage  
in empathetic presence, fostering a deeper understanding of the human experience; and third, the ongoing  
development of practical wisdom through active participation in lived moral practice, providing students with  
opportunities to apply their ethical understanding in real-world situations. Such a comprehensive approach  
recognizes that sound moral judgment arises not from rationality alone, but from the dynamic and synergistic  
interplay between cognitive understanding, emotional intelligence, and embodied experience, allowing students  
to develop a more nuanced and ethical approach to navigating the complexities of the modern world.  
Reconstruction of Moral Education Based on Aristotle’s Practical Wisdom  
As artificial intelligence becomes deeply integrated into education, governance, and social life, traditional moral  
education faces unprecedented challenges. The widespread adoption of algorithmic decision-making not only  
reshapes human thought processes but also subtly influences the emotional structure and values of the younger  
generation. As technological systems continuously reinforce efficiency, computability, and external  
performance, the emotional dimension, value judgments, and character development in education are  
increasingly weakened. In this context, maintaining the depth of moral judgment, the integrity of emotional  
experience, and the coherence of personality development in a rapidly changing technological environment has  
become a highly relevant issue. Aristotle's theory of practical wisdom provides vital resources for this,  
emphasizing the synergistic operation of reason and emotion, the mutual accomplishment of virtue and action,  
and the central role of education in shaping good character. Therefore, reconstructing a moral education  
framework centered on practical wisdom can not only respond to the ethical dilemmas of the artificial  
intelligence era but also help guide students to achieve integrated growth of reason, emotion, and virtue in a  
complex world.  
Educational Significance of Practical Wisdom: A Mediating Framework for Reason and Emotion  
Aristotle’s idea of phronesis, or practical wisdom, maintains striking relevance for contemporary education,  
particularly as digital technologies and data analytics increasingly steer human judgment. Far from being a purely  
intellectual ability, practical wisdom is a cultivated disposition that enables individuals to deliberate effectively  
about issues central to a meaningful life. It helps them navigate ambiguity, evaluate competing considerations,  
and act in ways that promote genuine well-being. Aristotle describes it as a disposition that allows a person to  
choose what is truly good for human beings (Aristotle, trans. 1925, 1140b20-22). Modern virtue-ethics scholars
likewise stress that this form of judgment brings together moral values, situational awareness, and emotional  
insight in the process of ethical decision-making (Kristjánsson, 2020; Schwartz & Sharpe, 2010).  
For example, in a pilot intervention conducted in a high school ethics course, students used AI tools to analyze  
case studies on distributive justice. While the AI provided data-driven scenarios, teachers guided students to  
reflect not only on the algorithmic output but also on the emotional narratives of affected individuals through
role-play and empathetic writing exercises. This process illustrated how phronesis mediates between  
computational analysis and human emotional understanding, leading to more nuanced moral judgments.  
In today’s algorithmically curated environments, where digital platforms continuously filter information and  
reinforce simplified value frames, young people often struggle to maintain independent moral judgment. Their
perspectives may be shaped by dominant online narratives or automated recommendations rather than thoughtful  
reflection. Education oriented around practical wisdom therefore helps students develop the capacity to question  
these influences, interpret dilemmas from multiple angles, and avoid relying uncritically on external cues.  
Such growth requires direct human engagement rather than heavily mediated interaction. Participatory  
activities such as community service, peer collaboration, and conflict resolution work immerse students in emotionally
textured situations that demand empathy, patience, and attentive listening. Interpersonal signals such as facial  
expression, tone of voice, or bodily presence cannot be meaningfully substituted by digital simulations (Turkle,  
2011). These experiential encounters rebuild emotional nuance and strengthen students’ ability to respond  
compassionately to others.  
Moreover, practical wisdom provides a needed counterweight to the procedural mindset reinforced by AI  
systems. It teaches learners to view reasoning as a means toward ethical ends rather than mechanical problem-  
solving. When students use technological tools for drafting or analysis, educators can encourage them to  
reinterpret the output through personal goals, human consequences, and contextual judgment. This reflective  
step nurtures a more discerning form of rationality (Benhabib, 1992), enabling individuals to engage with AI  
critically rather than dependently.  
By integrating experiential learning, emotional presence, and ethical reflection, practical wisdom equips students  
to navigate a technologically saturated world without losing moral depth or human sensitivity. It supports the  
development of grounded, autonomous judgment, qualities essential for responsible citizenship in the digital
age.  
Reorientation of Educational Aims: From Cognitive Efficiency to Character Flourishing  
In many contemporary educational settings increasingly shaped by artificial intelligence, there is a noticeable
tendency to prioritize logical training, analytical problem-solving skills, and the pursuit of optimized  
performance metrics. While the development of such competencies undoubtedly strengthens instrumental  
reasoning abilities, there is a significant risk that this emphasis can inadvertently divorce rationality from deeper
considerations of purpose, personal meaning, and a clear sense of moral direction. This can lead to a situation  
where individuals are highly skilled at achieving goals, but lack a strong understanding of why those goals are  
worthwhile or ethically sound. In contrast to this narrow focus, Aristotle situates the cultivation of rationality  
within a much broader ethical framework, emphasizing the importance of shaping character in such a way that  
reason, emotion, and desire work together harmoniously to contribute to a good, meaningful, and fulfilling life  
(Broadie, 1991; Annas, 2011). This holistic approach recognizes that true flourishing comes not just from  
intellectual prowess, but from the integration of all aspects of the human experience.  
Educational goals in the AI era, therefore, urgently require a process of rebalancing and recalibration. Instead of  
focusing solely on cognitive development and the acquisition of technical skills, education should actively  
facilitate the joint growth and integration of cognitive insight, emotional maturity, and a strong sense of moral
commitment. Each of these elements plays a vital and distinct role in shaping well-rounded individuals.  
Cognition provides clarity of understanding, enabling individuals to analyze situations and make informed  
decisions. Emotion infuses action with depth, meaning, and a sense of humanity, ensuring that our actions are  
motivated by genuine care and concern for others. And morality directs learners toward worthwhile ends,  
providing a compass for navigating complex ethical dilemmas and ensuring that their actions are aligned with  
their values. The harmonious unity of these three aspects closely reflects Aristotle’s profound conception of  
eudaimonia, which he understood as the integration of intellectual virtues (such as wisdom and understanding)  
and moral virtues (such as courage, justice, and compassion).  
Guiding students toward this more holistic orientation requires a fundamental repositioning of education,  
viewing it not merely as a process of knowledge transfer or skill acquisition, but as a comprehensive process of  
personality development and character formation. This shift in perspective places greater emphasis on the role  
of educators as mentors and guides, rather than simply as instructors. When educators consistently act as positive  
role models, displaying genuine sincerity, profound empathy, and principled judgment in their interactions with
students, they effectively embody the very interplay of reason and emotion that students are meant to learn and
internalize. Through such lived exemplarity, students are able to develop a stable and reliable moral compass,  
guiding them in their decisions and actions, and inspiring them to strive toward well-rounded personal growth  
and ethical excellence (Carr, 2021). This approach recognizes that true education is not just about filling minds  
with information, but about nurturing the whole person and helping them to become the best version of  
themselves.  
Innovations in Moral Education: Emotional Resonance and Value-Embedded Experience  
Modern moral education must address the affective challenges amplified by AI-driven communication,  
particularly emotional detachment and diminished interpersonal presence. Renewed methods should cultivate  
moral experience and emotional resonance rather than relying solely on rule-based instruction.  
First, emotional education should be strengthened through processes of experiencing, empathizing, expressing,  
and acting. Literature, aesthetic cultivation, and community participation serve as effective channels for  
nurturing emotional depth. Instead of analyzing texts only for structure or themes, students may reinterpret  
narratives through performance, collaborative storytelling, or role-based writing. These immersive encounters  
encourage sincere emotional articulation and enrich affective awareness (Nussbaum, 2001).  
Second, moral intuition and empathy can be enhanced through situational learning. Role-play, case-based  
dilemmas, or cooperative projects immerse students in environments where cognitive understanding and  
emotional sensitivity must be jointly applied. Emotional engagement, when tied to authentic contexts,  
strengthens the internalization of moral norms and nurtures intrinsic motivation.  
In a university course on AI ethics, students participated in a semester-long ‘Moral Labs’ project. They worked  
with local community organizations to identify a real-world issue impacted by algorithms (e.g., access to public  
services). Through field visits, interviews with stakeholders, and collaborative design of ethical guidelines,  
students practiced phronesis by balancing data analysis with empathetic listening and value-based deliberation.  
Post-project reflections highlighted growth in their ability to integrate reason and emotion in complex decision-  
making.  
Third, educators’ personal conduct is indispensable. As Buber (1970) argues through the “I–Thou” relation,  
moral understanding flourishes when learners encounter others as fully present subjects rather than objects. This  
resonates with Turkle’s (2011) critique that digital communication often fosters “connection without  
conversation.” Thus, moral education should re-establish spaces of direct human encounter, such as dialogue circles,
reflective discussions, and collaborative creation, where moral meaning is experienced relationally rather than
conveyed abstractly. By integrating emotional connection with lived moral practice, educators can build a  
continuum of “emotional resonance → moral experience → behavioral recognition,” anchoring ethical growth  
in authentic human interactions.  
Reconstructing the Moral Education System: A Triadic Model of Reason, Emotion, and Virtue  
A robust moral education framework necessitates the harmonious and coordinated development of three  
interconnected dimensions: reason, emotion, and virtue. While these spheres can be analytically distinguished  
for clarity, they function interdependently in practice and ultimately converge through phronesis (practical
wisdom), which Aristotle identifies as the faculty that orchestrates sound moral judgment (Aristotle, trans. 1925;
Kristjánsson, 2015). Each dimension thus plays a crucial role in shaping ethical behavior and moral character.  
Reason education focuses on cultivating students’ cognitive comprehension of fundamental moral concepts,  
understanding causal relationships within ethical dilemmas, and developing judgment grounded in rational  
clarity. Without this foundation, emotional responses may become impulsive or misdirected, and moral behavior  
may lack purpose or direction. Contemporary educational psychology also emphasizes that moral cognition  
develops progressively and requires scaffolding appropriate to students’ developmental stages (Eisenberg et al.,  
2016; Kohlberg, 1981). For younger learners, hands-on, concrete experiences aligned with their cognitive  
development facilitate their ability to grasp moral rules. As students mature, they can engage in more  
sophisticated reasoning tasks, such as case-based moral analysis, argument mapping, and structured ethical
deliberation, which support their ability to navigate complex moral scenarios (Nucci, 2014).
Emotion education serves as the motivational bridge connecting moral cognition with moral action. Once  
students understand moral norms, empathy enables them to internalize these norms and transform abstract  
“knowing” into genuine “caring.” This aligns with contemporary moral psychology, which underscores empathy  
as a decisive predictor of prosocial and ethical behavior (Hoffman, 2000; Decety & Cowell, 2014). For younger  
students, empathy can be fostered through warm relational interactions, collaborative tasks, and perspective-  
taking role-play. Older learners may benefit from narrative arts, dramatic exploration of moral conflicts, and  
structured emotional dialogues that help them articulate and regulate their affective experiences. Emotional  
literacy education, which teaches students to recognize, understand, and manage emotions, further supports
balanced moral judgment, preventing affective impulsivity from overshadowing rational guidance (Brackett et  
al., 2019).  
Virtue education represents the culmination of reason and emotion, expressed through stable moral habits that  
guide consistent ethical behavior. Aristotle stresses that virtue arises through habituation, in which repeated  
practice shapes character over time (Aristotle, trans. 1925; Kristjánsson, 2022). Practical wisdom functions as  
the mediator that integrates thought and feeling into deliberate, purposive action. For children, virtue cultivation  
can be embedded in gamified routines connected with positive emotional experiences, reinforcing moral habits.  
For adolescents, real-world moral tasks that require deliberation, collaboration, and authentic emotional
engagement provide opportunities to enact and refine virtuous behavior. Post-activity reflection, guided by
questions such as “What did I choose? Why? With what consequences?”, helps transform episodic moral choices  
into enduring dispositions (Narvaez, 2016).  
Together, this triadic framework addresses the foundational questions of moral education: what it is (principles  
rooted in practical wisdom), why it matters (the pursuit of happiness, character, and flourishing), how it is  
practiced (through emotional resonance and experiential learning), and how it can be institutionalized (via an  
integrated reason-emotion-virtue structure). In the age of AI, where algorithmic rationality risks
overshadowing emotional depth and ethical reflection, this coherent structure offers a compelling logic for
reconstructing moral education and cultivating morally grounded, emotionally attuned, practically wise  
individuals (Seligman, 2011; Turkle, 2011).  
Implementation Guidelines for the Triadic Model  
To put the Reason-Emotion-Virtue framework into practice, educators and institutions should follow several
actionable guidelines that bridge theoretical foundations and classroom application.

Teacher competencies. Educators require more than subject-matter expertise; they must develop a specialized skill
set tailored to moral education in the AI era. This includes phronetic facilitation: the capacity to guide
contextually nuanced moral dialogues, supporting students in balancing algorithmic outputs with humanistic values,
ethical principles, and real-world consequences (Kristjánsson, 2015). It also includes emotional mentorship:
proficiency in identifying students’ emotional responses to AI-related ethical dilemmas, fostering emotional
literacy, and nurturing empathy and resilience, key assets for navigating technological complexity (Goleman &
Boyatzis, 2017). Finally, it includes virtue modeling: the consistent demonstration of core virtues such as
integrity, compassion, and reflective judgment in daily teaching, since educators’ behaviors serve as powerful
moral exemplars for students (Narvaez, 2010).

Curriculum design logic. Courses should be organized around integrative modules that break down silos between
technical and ethical learning, following three core design principles: integrating technical AI literacy (e.g.,
understanding algorithmic decision-making) with humanities-driven ethical inquiry (e.g., exploring philosophical
debates about autonomy and justice in AI; Floridi & Chiriatti, 2020); embedding “moral labs” or service-learning
initiatives that require students to apply ethical judgment to real-world AI scenarios, such as evaluating bias in
hiring algorithms or designing AI tools for community good (Breslin, 2021); and adopting blended learning
approaches in which AI tools handle data analysis or skill practice while face-to-face deliberative circles provide
space to debrief ethical implications, share perspectives, and build consensus (Garrison & Vaughan, 2011).
Assessment strategies aligned with Aristotelian virtue theory. Moving beyond overreliance on quantitative metrics,
assessment should prioritize tracking students’ moral growth and practical wisdom. Useful instruments include
narrative portfolios that document students’ evolving moral reasoning and emotional awareness across semesters,
highlighting how they respond to increasingly complex AI ethical dilemmas (Moon, 2013); scenario-based evaluations
that present context-rich AI challenges (e.g., a healthcare algorithm prioritizing cost over patient care) to
assess students’ contextual judgment and empathetic responsiveness (Beauchamp & Childress, 2019); peer and
community feedback mechanisms that evaluate students’ demonstration of relational virtues such as collaboration,
honesty, and respect in group AI ethics projects (Berk, 2005); and self-reflection journals in which students
articulate how they apply practical wisdom to both personal and digital interactions, connecting classroom
learning to everyday ethical choices (Rodgers, 2002).
Collectively, this triadic framework addresses moral education’s foundational questions: its core nature (rooted  
in practical wisdom), its purpose (fostering human flourishing, character development, and eudaimonia), its  
implementation (through emotional resonance and experiential learning), and its institutionalization (via an  
integrated reason-emotion-virtue structure). In an era where algorithmic rationality often overshadows
emotional depth and ethical reflection, this cohesive framework provides a compelling roadmap for  
reconstructing moral education and cultivating individuals who are morally grounded, emotionally attuned, and  
practically wise (Seligman, 2011; Turkle, 2011).  
Cross-Cultural Perspectives on Moral Education and AI Ethics  
The challenges and solutions outlined above are not universally applicable; cultural and regional traditions shape  
approaches to moral education and AI ethics, offering valuable comparative insights that inform framework  
adaptation:  
East Asian models (e.g., China, Singapore) emphasize collective harmony, social responsibility, and the  
integration of moral education into national curricular frameworks. AI ethics guidelines in these regions often  
prioritize data security, social stability, and alignment with cultural or ideological values, such as the Confucian
virtues of benevolence (ren) and propriety (li) or core socialist principles (Xu & Chen, 2020). Moral education
typically follows a structured, directive approach, with clear learning objectives tied to community and societal  
well-being (Tan, 2016).  
Nordic models (e.g., Finland, Sweden) center on individual autonomy, democratic participation, and holistic  
well-being. Their AI ethics frameworks emphasize transparency, human oversight, and equitable access to  
technological benefits (Boström & Sandberg, 2011). Moral education is integrated across subjects rather than  
taught as a standalone course, with a strong focus on critical thinking, inclusive dialogue, and experiential  
learning that connects ethics to students’ daily lives (Halinen, 2020).  
North American models (e.g., USA, Canada) reflect a pluralistic, often fragmented landscape, with strong  
emphases on individual rights, entrepreneurial innovation, and procedural fairness. AI ethics debates here  
frequently revolve around mitigating algorithmic bias, establishing accountability mechanisms, and protecting  
privacy (Noble, 2018). Moral education varies widely across districts and institutions but commonly includes  
character education programs in K-12 settings and specialized ethics courses in higher education, particularly in  
STEM fields (Nucci, 2017).  
These cultural differences underscore that reconstructing moral education in the AI age requires contextual  
sensitivity. However, the Aristotelian triadic model, with its focus on universal human capacities for reason,
emotion, and virtue, offers a flexible foundation that can be adapted to diverse cultural priorities: emphasizing
community (East Asia), autonomy (Nordics), or pluralism (North America). This cross-cultural adaptability  
enhances the model’s global relevance and practical applicability (Kristjánsson, 2018).  
CONCLUSION  
The accelerating integration of artificial intelligence into the daily lives of young people presents both  
unprecedented opportunities and profound challenges for moral education. As algorithmic systems increasingly  
shape patterns of reasoning, emotional experience, and value formation, the risk emerges that instrumental  
rationality will eclipse humanistic reflection, emotional depth, and moral agency. This study demonstrates that  
Aristotle’s conception of happiness, grounded in the unity of reason, emotion, and virtue, offers a powerful
framework for responding to these contemporary dilemmas. By highlighting the interdependence of intellectual  
and moral virtues, Aristotle reminds us that flourishing requires not only correct reasoning but also the  
cultivation of appropriate emotions and stable dispositions.  
Reconstructing moral education in the AI era therefore demands a renewed emphasis on phronesis as the  
mediating force that harmonizes analytic cognition with emotional insight. Practical wisdom equips students to  
evaluate algorithmic outputs critically, navigate interpersonal complexity, and act responsibly within morally  
significant contexts. Furthermore, the Aristotelian view that individual flourishing is inseparable from the well-  
being of the community underscores the social responsibility of education: it must cultivate emotional resonance,  
empathetic engagement, and civic virtue, rather than confining itself to the pursuit of cognitive performance.  
The proposed triadic model, integrating reason, emotion, and virtue, provides an operational pathway for
reorienting educational aims, innovating pedagogical methods, and reconstructing moral education structures.  
In practice, this model encourages both reflective autonomy and emotional maturity, enabling young people to  
resist the reductive appeals of algorithmic personalization and fragmented digital attention. Ultimately, the study  
affirms that moral education in the age of AI must remain steadfastly committed to a holistic vision of human  
excellence, ensuring that technological progress serves rather than supplants the deeper aims of human  
flourishing.  
Table 1: Dimensions of the Triadic Reason-Emotion-Virtue Model

Key Dimension | Core Content
AI-Induced Imbalance of Reason and Emotion | Algorithmic rationality expansion; emotional weakening
Aristotelian Foundations of Happiness | Integration of reason, emotion, and virtue
Practical Wisdom (Phronesis) as Reconciliation Mechanism | Mediating action-oriented judgment
Transformation of Educational Goals and Pedagogies | From knowledge transmission to holistic personality development
Reconstruction of a Triadic Moral Education Model | Reason-Emotion-Virtue integrated system
As Table 1 shows, the five-part analytical framework highlights how the rise of algorithmic systems fundamentally
reshapes young people’s cognitive and emotional development. First, as algorithmic environments increasingly  
mediate information, decision-making, and social interaction, they reshape patterns of attention, reasoning, and  
affective experience. This dynamic reduces individuals’ capacity for independent moral judgment and weakens  
moral autonomy, a concern echoed by scholars such as Sunstein (2017) and Pariser (2011), who argue that
algorithmic curation narrows cognitive diversity and restricts deliberative agency. The resulting imbalance  
between external computational rationality and internal emotional understanding underscores the urgency of a  
renewed moral framework.  
Aristotle’s theory of eudaimonia offers a compelling philosophical remedy to this imbalance. As highlighted in  
the second row, Aristotle situates happiness not in momentary pleasure or efficient reasoning alone but in the  
interdependent functioning of reason, emotion, and virtue (Nussbaum, 2001; Kraut, 2018). This holistic account  
stands in stark contrast to the fragmented rationality produced by AI-driven efficiency, reminding educators that  
true flourishing requires the alignment of cognition, affect, and character.  
The third row identifies phronesis, or practical wisdom, as the mechanism capable of restoring this balance.  
Unlike algorithmic decision-making, which applies rules to data without emotional understanding or moral
intent, phronesis involves context-sensitive reasoning shaped by lived experience and moral insight
(MacIntyre, 2007). It transforms abstract moral knowledge into responsible action, integrating emotional  
resonance with rational deliberation. In the AI era, this Aristotelian capacity becomes increasingly indispensable,  
as it cultivates the type of situated judgment that automated systems cannot replicate.  
The fourth row shifts from theory to pedagogy, emphasizing that moral education cannot remain confined to the  
transmission of knowledge or procedural logic. Instead, it must cultivate empathy, emotional intelligence, and  
moral sensitivity, capacities that scholars such as Turkle (2011) and Seligman (2011) argue are eroding under
digital mediation. This broader educational orientation reframes schooling as a site for cultivating human  
flourishing rather than algorithmic conformity. By foregrounding emotional resonance and ethical reflection,  
educators can challenge the reductive rationality embedded in AI-driven learning environments.  
Finally, the fifth row synthesizes these insights by proposing a triadic moral education model that integrates  
rational judgment, emotional resonance, and virtuous practice into a coherent framework for the AI era. This  
structure aligns with contemporary virtue ethics research (Kristjánsson, 2015) and demonstrates how reason,  
emotion, and virtue reinforce one another through practical wisdom. The model therefore provides both a  
conceptual foundation and an implementable pathway for reconstructing moral education in ways that address  
the psychological, ethical, and civic challenges posed by intelligent technologies.  
Taken together, these five components illustrate not only the risks posed by AI-induced rational-emotional
imbalance but also the transformative potential of an Aristotelian response. By grounding moral education in the  
integrative power of phronesis, educators can foster the development of morally autonomous, emotionally  
attuned, and rationally reflective individuals who are capable of navigating an increasingly algorithmic world.  
REFERENCES  
1. Annas, J. (2011). Intelligent virtue. Oxford University Press.  
2. Aristotle. (1984). The complete works of Aristotle (J. Barnes, Ed.). Princeton University Press.  
3. Aristotle. (1925). Nicomachean ethics (W. D. Ross, Trans.). Oxford University Press.  
4. Aristotle. (1984). The politics (C. Lord, Trans.). University of Chicago Press.  
5. Aristotle. (1999). Nicomachean ethics (T. Irwin, Trans., 2nd ed.). Hackett.  
6. Bauman, Z. (2000). Liquid modernity. Polity Press.  
7. Beauchamp, T. L., & Childress, J. F. (2019). Principles of biomedical ethics (8th ed.). Oxford University  
Press.  
8. Berk, R. A. (2005). Thirteen ways to measure teacher quality. Educational Researcher, 34(3), 48–56.
9. Bostrom, N., & Sandberg, A. (2011). The ethics of artificial intelligence: Mapping the debate. Minds and Machines.
10. Benhabib, S. (1992). Situating the self. Routledge.  
11. Broadie, S. (1991). Ethics with Aristotle. Oxford University Press.  
12. Brackett, M. A., Bailey, C. S., Hoffmann, J. D., & Simmons, D. N. (2019). RULER… Educational  
Psychologist, 54(3), 144–161.
13. Buber, M. (1970). I and Thou (W. Kaufmann, Trans.). Scribner.  
14. Breslin, J. G. (2021). Service-learning and AI ethics: Preparing students for responsible innovation.  
Journal of Educational Technology & Society, 24(3), 203214.  
15. Berridge, K. C., & Kringelbach, M. L. (2015). Pleasure systems in the brain. Neuron, 86(3), 646–664.
CNNIC. (2024). Statistical report on China’s internet development. China Internet Network Information  
Center.  
16. Couldry, N., & Hepp, A. (2017). The mediated construction of reality. Polity Press.  
17. Creswell, J. W., & Poth, C. N. (2018). Qualitative inquiry and research design (4th ed.). SAGE.  
18. Carr, D. (2021). Virtue ethics and education. Routledge.  
19. Coeckelbergh, M. (2020). AI ethics. MIT Press.  
20. Curren, R. (2013). Philosophy of education: An introduction. Wiley-Blackwell.  
21. Decety, J., & Cowell, J. M. (2014). The complex relation between morality and empathy. Trends in  
Cognitive Sciences, 18(7), 337–339.
22. Eisenberg, N., Spinrad, T. L., & Knafo-Noam, A. (2016). Prosocial development. In Handbook of child  
psychology (7th ed.). Wiley.  
23. Floridi, L., & Chiriatti, M. (2020). GPT-3: Its nature, scope, limits, and consequences. Minds and Machines.
24. Garrison, D. R., & Vaughan, N. D. (2011). Blended learning in higher education: Framework, principles,  
and guidelines. Jossey-Bass.  
25. Goleman, D., & Boyatzis, R. E. (2017). Emotional intelligence: A theory of performance. Organizational  
26. Halinen, A. (2020). Nordic moral education: Between individual autonomy and collective responsibility.  
Journal of Moral Education, 49(3), 365–379. https://doi.org/10.1080/03057240.2020.1766338
27. Hursthouse, R., & Pettigrove, G. (2018). Virtue ethics. In E. N. Zalta (Ed.), The Stanford encyclopedia  
of philosophy.  
28. Kristjánsson, K. (2007). Aristotle, emotion, and education. Ashgate.  
29. Kristjánsson, K. (2018). Cross-cultural virtue ethics: A pluralistic framework. Journal of Religious  
30. Kohlberg, L. (1981). Essays on moral development: Vol. 1. Harper & Row.  
31. Kristjánsson, K. (2022). Virtuous emotions. Oxford University Press.  
32. Kristjánsson, K. (2020). Aristotelian character education: Reconsidered. Routledge.  
33. Kraut, R. (2018). Aristotle’s ethics. Princeton University Press.
Lear, J. (2014). Aristotle: The desire to understand. Cambridge University Press.
34. MacIntyre, A. (2007). After virtue (3rd ed.). University of Notre Dame Press.  
35. Miller, D. (2011). Justice for earthlings: Essays in political philosophy. Cambridge University Press.  
36. Nussbaum, M. C. (1986). The fragility of goodness. Cambridge University Press.  
37. Rodgers, C. R. (2002). Becoming a reflective practitioner: A look at the process. Journal of Continuing  
Education in the Health Professions, 22(4), 228–236. https://doi.org/10.1002/chp.1005
38. Seligman, M. E. P. (2011). Flourish: A visionary new understanding of happiness and well-being. Free  
Press.  
39. Shields, C. (2014). Aristotle. Routledge.  
40. Vallor, S. (2016). Technology and the virtues. Oxford University Press.  
41. Floridi, L. (2014). The fourth revolution: How the infosphere is reshaping human reality. Oxford  
University Press.  
42. Horkheimer, M. (1947). Eclipse of reason. Oxford University Press.  
43. Kraut, R. (2018). Aristotle’s ethics. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy.  
44. Habermas, J. (1984/1987). The theory of communicative action (T. McCarthy, Trans.). Beacon Press.  
45. Hoffman, M. L. (2000). Empathy and moral development. Cambridge University Press.  
46. Moon, J. D. (2013). Using portfolios to assess thinking and learning (2nd ed.). Taylor & Francis.  
47. Narvaez, D. (2010). The neurobiology of moral development and character education. In B. Murray &  
K. Rich (Eds.), Character education: Perspectives and practices for the twenty-first century (pp. 39–60).
Information Age Publishing.  
48. Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism (2nd ed.). New  
York University Press.  
49. Nucci, L. P. (2017). Moral education in the United States: An overview. Journal of Moral Education,  
50. Nussbaum, M. C. (2001). Upheavals of thought: The intelligence of emotions. Cambridge University  
Press.  
Narvaez, D. (2016). Embodied morality. Journal of Moral Education, 45(3), 291–307.
Nucci, L. (2014). Education in the moral domain. Cambridge University Press.  
Noddings, N. (2013). Caring (2nd ed.). University of California Press.
Nussbaum, M. C. (2010). Not for profit. Princeton University Press.
Nussbaum, M. C. (2001). The fragility of goodness (Rev. ed.). Cambridge University Press.  
51. O’Neil, C. (2016). Weapons of math destruction. Crown.  
52. Pariser, E. (2011). The filter bubble. Penguin.  
53. Sandel, M. (2020). The tyranny of merit. Farrar, Straus and Giroux.  
54. Seligman, M. E. P. (2018). The hope circuit. PublicAffairs.  
55. Sunstein, C. R. (2017). #Republic: Divided democracy in the age of social media. Princeton University Press.
57. Schwartz, B., & Sharpe, K. (2010). Practical wisdom. Riverhead Books.  
59. Twenge, J. (2017). iGen. Atria Books.  
60. Tan, C. (2016). Confucian moral education in East Asia: Tradition and modernity. Journal of Moral Education.
61. Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. Basic  
Books.  
62. Valkenburg, P. M., Meier, A., & Beyens, I. (2022). The effects of social media on well-being. Current  
Opinion in Psychology, 45, 101107.  
63. Xu, Y., & Chen, H. (2020). AI ethics in China: Between cultural tradition and technological innovation.  
Journal of Information, Communication and Ethics in Society, 18(4), 503–518.
64. Zuboff, S. (2019). The age of surveillance capitalism. PublicAffairs.