INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)  
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS | Volume IX Issue X October 2025  
Between Cognitive Overload and Dehumanization: Exploring the  
Dimensions of Consumer Fatigue with Artificial Intelligence  
Molka Triki1, Amal Makni Turki2  
1Research Laboratory in Marketing (LRM-FSEG Sfax), University of Sfax, Tunisia  
2Interdisciplinary Laboratory of University-Business Management (LIGUE), Institute of Advanced  
Commercial Studies of Carthage, University of Carthage  
Received: 28 October 2025; Accepted: 04 November 2025; Published: 19 November 2025  
ABSTRACT  
Artificial intelligence (AI) is now a central player in interactions between brands and consumers, but its intensive use can generate cognitive, emotional, and relational fatigue, a phenomenon that remains little explored in the marketing literature. This research aims to understand AI fatigue and identify its constituent dimensions through consumers'
experiences. An exploratory qualitative approach was adopted, based on 22 semi-structured interviews with  
regular users of AI, including chatbots, voice assistants, and recommendation systems. Thematic analysis  
revealed that AI fatigue unfolds along cognitive, emotional, relational, and ethical dimensions, manifesting in particular as information overload, feelings of dehumanization, and changes in strategies for interacting with
technologies. This study makes a theoretical contribution by proposing an integrative conceptualization of AI  
fatigue and offers practical insights for designing more balanced and sustainable interactions between consumers  
and intelligent technologies.  
Keywords: Artificial intelligence (AI); AI fatigue; Cognitive overload; Emotional dimension;  
Consumer behavior; Human-machine interaction  
INTRODUCTION  
Artificial intelligence (AI) is now a central player in interactions between brands and consumers, redefining  
behaviors, decision-making processes, and consumer experiences (Zarantonello et al., 2024). From voice  
assistants to recommendation systems, chatbots, and smart applications, AI is integrated into all spheres of daily  
life (professional, personal, and emotional), profoundly transforming the way individuals interact with their
technological environment (Panetta, 2018; Shalu et al., 2025).  
While these technologies promote increased personalization and unprecedented operational efficiency (Kotler et  
al., 2021; Sahebi et al., 2022; Gao and Liu, 2023), they also have paradoxical effects: increased cognitive overload, anxiety, disengagement, and mental fatigue (Bright and Logan, 2018; Peake et al., 2018; Marsh et al., 2024).
This phenomenon reflects a gradual shift in the scientific debate from a techno-optimistic view to an exploration  
of the "dark side" of technology adoption.  
In this context, artificial intelligence fatigue (AI fatigue) is emerging as a new form of negative reaction to the  
intensive use of intelligent technologies (Marsh et al., 2024). It reflects a range of emotions and perceptions such  
as weariness, loss of control, perceived dehumanization, and moral and ethical fatigue in the face of the growing  
presence of algorithms in everyday decisions (Fan et al., 2024; Nguyen et al., 2024; Wang et al., 2025). However, despite the proliferation of AI devices in consumer environments, marketing literature remains primarily focused
on their functional and emotional benefits, neglecting the psychological and social impacts of overexposure to  
smart technologies (Islam et al., 2020; da Silva et al., 2024; Fernandes and Oliveira, 2024).  
To date, little research has sought to conceptualize the internal dimensions of AI fatigue or describe the diversity  
of forms it can take in consumers’ lived experiences. However, understanding these dimensions, whether  
cognitive, emotional, relational, or ethical, appears essential to proposing an integrative view of the phenomenon  
and to helping organizations design more balanced and sustainable interactions with their audiences (Hang et
al., 2022; da Silva et al., 2024; Zarantonello et al., 2024).  
This research therefore aims to deepen our understanding of the concept of AI fatigue by adopting an exploratory
qualitative approach based on semi-structured interviews with consumers who are regular users of AI. It has  
three main objectives: (1) to identify the different forms of negative experiences associated with AI use, (2) to  
explore the emotions, cognitions, and behaviors that reflect perceived fatigue, and (3) to propose an integrative  
conceptualization of the dimensions of AI fatigue.  
On a theoretical level, this study draws on Cognitive Load Theory (Sweller, 1988) to explain the effects of  
information overload, as well as contributions from the Stress–Strain–Outcome Model (Tarafdar et al., 2019)  
and Conservation of Resources Theory (Hobfoll, 1989) to understand how repeated interaction with intelligent  
systems depletes users' cognitive, emotional, and ethical resources.  
The central question guiding this research is: what are the constituent dimensions of artificial intelligence fatigue,  
and how do they manifest themselves in the emotions, cognitions, and behaviors of regular AI users?  
This research is of dual interest. On a theoretical level, it enriches the literature on human-technology interactions  
by conceptualizing artificial intelligence fatigue as a multidimensional, cognitive, emotional, and relational  
experience that has been little explored in the field of marketing. It thus provides a better understanding of  
negative reactions to AI, beyond the already established concepts of technostress, technological anxiety, and  
information overload. In this sense, it contributes to the emergence of an original explanatory framework for the  
"dark side" of human-AI interaction, while promoting interdisciplinary dialogue between marketing, cognitive  
psychology, and information systems research. From a managerial and societal perspective, this study offers  
organizations avenues for designing more balanced and ethical technological experiences, preventing overload,  
disengagement, and user fatigue in the face of intelligent technologies.  
LITERATURE REVIEW  
Artificial intelligence (AI) now plays a central role in transforming marketing practices and consumer  
experiences. By enabling brands to analyze data on a massive scale, anticipate behavior, and personalize  
interactions, AI has profoundly changed the relationship between businesses and consumers (Kotler et al., 2021;  
Gao and Liu, 2023). Voice assistants, chatbots, recommendation systems, and intelligent platforms promise a
seamless, responsive, and individualized experience, helping to build a continuous and contextual relationship  
with the consumer (Sahebi et al., 2022; Zarantonello et al., 2024).  
However, this hyper-personalization has ambivalent effects. The constant and intensive use of smart technologies  
generates cognitive overload and psychological fatigue linked to the proliferation of notifications, interactions,  
and digital solicitation (Bright et al., 2022; Lee et al., 2023; Marsh et al., 2024). This phenomenon is consistent
with research on technostress (Tarafdar et al., 2019; Islam et al., 2020), which highlights the tensions and  
imbalances resulting from excessive use of digital technologies.  
From techno-fatigue to AI-related fatigue  
Techno-fatigue refers to the state of mental, emotional, and cognitive exhaustion experienced by individuals due  
to prolonged exposure to digital technologies (Ayyagari et al., 2011; Lyu et al., 2022). This fatigue stems from  
information overload, feelings of intrusion, and a perceived loss of control in the use of technological devices  
(Peake et al., 2018; Bright et al., 2022).  
However, AI fatigue is distinguished by its intelligent, adaptive, and algorithmic nature: it emerges not only from  
the quantity of interactions, but also from the quality of the human-machine relationship (Nguyen et al., 2024;  
Wang, 2025). AI fatigue reflects a set of negative reactions (frustration, mistrust, disengagement) to systems
perceived as overly powerful, impersonal, or cognitively demanding (Ragolane and Patel, 2025).  
While social media fatigue (Fernandes and Oliveira, 2024) and information fatigue (Borges et al., 2021) have  
been widely documented, AI fatigue remains an emerging field that still lacks a clear conceptual framework.  
Existing research often addresses cognitive overload or technological stress without distinguishing the effects specific to artificial intelligence: machine learning, decision-making autonomy, and non-human interaction (Lițan, 2025).
Theoretical foundations  
Artificial intelligence fatigue (AI fatigue) can be understood as a multidimensional phenomenon affecting the  
cognitive, emotional, relational, and ethical spheres. The combination of the following theories allows for a  
rigorous analysis of these dimensions.  
Cognitive Load Theory (Sweller, 1988)  
Cognitive Load Theory posits that information overload and the complexity of interactions place excessive  
demands on individuals' cognitive capacities, leading to a decrease in processing capacity, attention, and mental  
fatigue. In the context of AI, this theory helps us understand how the design of systems (chatbots, voice assistants,
recommendation systems) influences cognitive load and generates effects such as overload, frustration, or mental  
exhaustion. This theory thus specifically captures the cognitive mechanism of fatigue, where mental overload  
and continuous attentional effort lead to decreased cognitive efficiency and performance.  
Recent studies confirm that the perceived complexity of interfaces and the amount of information processed  
simultaneously increase cognitive fatigue and decrease user performance (Logan et al., 2018; Lyu et al., 2022;  
Marsh et al., 2024).
Stress–Strain–Outcome Model (Tarafdar et al., 2019)  
The Stress–Strain–Outcome Model proposes that technological demands create technological stress, which  
generates emotional responses (strain) and subsequently influences user behavior (outcome). This model is  
particularly relevant for explaining emotional fatigue, such as frustration, weariness, or anxiety, experienced  
during repetitive interactions with automated systems. It therefore highlights the emotional mechanism of  
fatigue, by linking exposure to AI-related stressors with negative affective states such as irritation, anxiety, or  
loss of motivation.  
Research shows that frequent and intrusive interactions with AI systems can cause persistent negative emotions  
and impair the user experience (Tarafdar et al., 2019; Bright et al., 2022; Marsh et al., 2024; Lițan, 2025).
Conservation of Resources Theory (Hobfoll, 1989)  
Conservation of Resources Theory argues that individuals seek to preserve their cognitive, emotional, and social  
resources. Intensive and prolonged use of AI can cause depletion of these resources, leading to mental fatigue,  
emotional stress, disengagement, and deterioration of social connection. This framework illuminates the ethical  
and relational mechanisms of fatigue, showing how the perceived dehumanization of interactions, loss of  
autonomy, and erosion of moral comfort deplete users’ psychological and social resources.  
Thus, this theory helps to understand not only cognitive and emotional overload, but also relational fatigue,  
resulting from a feeling of dehumanization or social distance induced by interactions with automated systems  
(Hobfoll, 1989; Nguyen et al., 2024; Ragolane and Patel, 2025).  
This multidimensional approach paves the way for an integrative conceptualization of AI-related fatigue,  
articulating the contributions of cognitive psychology, the sociology of technology, and experiential marketing.  
Such a comprehensive understanding is essential for developing more sustainable, responsible, and human-  
centered strategies for the design and use of AI.  
METHODOLOGY  
Data collection method  
In order to meet the objectives of this research, an exploratory qualitative approach was favored, based on semi-  
structured individual interviews. In accordance with the purposive sampling method (Marshall, 1996),  
participants were selected from among regular consumers of artificial intelligence, i.e., individuals who  
frequently use devices such as chatbots, voice assistants, recommendation platforms, or mobile applications. The  
challenge was to recruit participants with recent, frequent, and varied experience with AI.  
Data collection continued until semantic saturation was reached (Delacroix et al., 2021), ensuring the depth and redundancy necessary for analysis. The interviews lasted 30 to 40 minutes on average, a duration corresponding to the minimum threshold generally recognized in qualitative marketing research (Evrard et al., 2009); they were conducted face-to-face, recorded with the consent of the interviewees, then translated from Arabic dialect into French and transcribed in full. The final corpus consists of 198 pages of verbatim transcripts, offering rich and nuanced material.
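As an illustration of how semantic saturation can be monitored in practice, the short sketch below (a minimal illustration, not the authors' actual procedure; the code labels are hypothetical) counts how many previously unseen codes each successive interview contributes, saturation being approached when this count stays at zero.

```python
# Illustrative sketch: semantic saturation tracked as the number of
# previously unseen codes contributed by each successive interview.
# The code labels are hypothetical placeholders, not the study's codebook.
codes_per_interview = [
    {"cognitive_overload", "privacy_concern"},   # interview 1
    {"cognitive_overload", "dehumanization"},    # interview 2
    {"loss_of_meaning", "privacy_concern"},      # interview 3
    {"dehumanization", "cognitive_overload"},    # interview 4: nothing new
]

seen = set()
for i, codes in enumerate(codes_per_interview, start=1):
    new_codes = codes - seen
    seen |= codes
    print(f"Interview {i}: {len(new_codes)} new code(s) {sorted(new_codes)}")
# Saturation is typically judged reached when several consecutive
# interviews contribute no new codes.
```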
A total of 22 Tunisian consumers were interviewed, which is above the standards recommended in the literature  
(Glaser and Strauss, 1967; Thompson and Haytko, 1997; Fournier, 1998; Bonsu and Belk, 2003). This sample,
composed of 12 women and 10 men, was constructed to reflect a diversity of ages, profiles, and  
sociodemographic situations (see Table 1).
Table 1. Profile of participants recruited for individual interviews

| Participant number | Gender | Age | Occupation/Education | Type of AI mainly used |
| --- | --- | --- | --- | --- |
| P1 | Female | 23 | Marketing student | Generative AI (ChatGPT) |
| P2 | Male | 34 | Computer developer | AI coding tools (Copilot) |
| P3 | Female | 29 | Graphic designer | Creative AI (Midjourney, Canva AI) |
| P4 | Male | 40 | Strategy Consultant | Analytical AI |
| P5 | Female | 31 | Journalist | Chatbots / article summarization |
| P6 | Male | 28 | Entrepreneur | Automated marketing AI |
| P7 | Female | 36 | Computer engineer | Virtual assistants / chatbots |
| P8 | Male | 45 | Administrative officer | Administrative chatbots |
| P9 | Female | 27 | Engineering student | Voice assistants (Siri, Alexa) |
| P10 | Male | 52 | Doctor | Medical AI |
| P11 | Female | 33 | University professor | ChatGPT / Educational AI |
| P12 | Male | 30 | Software Engineer | Recommendation AI |
| P13 | Female | 48 | Architect | AI in architecture / 3D simulation |
| P14 | Male | 39 | Sales | Sales AI / Predictive CRM |
| P15 | Female | 26 | UX/UI Designer | Design AI (Figma AI, DALL·E) |
| P16 | Male | 42 | Financial Analyst | Financial AI |
| P17 | Female | 35 | PhD student in Marketing | Conversational AI / Research |
| P18 | Male | 31 | Data scientist | Machine learning / data analysis |
| P19 | Female | 50 | Medical specialist | Medical AI |
| P20 | Male | 29 | Digital entrepreneur | E-commerce / advertising AI |
| P21 | Female | 38 | Marketing Manager | AI marketing / segmentation |
| P22 | Female | 44 | Teacher | ChatGPT / educational tools |
The interview guide was developed in advance around six main themes in order to structure the discussions with  
participants. These themes focus on: (1) the general relationship with artificial intelligence, (2) positive  
experiences and perceived benefits, (3) manifestations of artificial intelligence fatigue, (4) cognitive and  
emotional reactions to artificial intelligence, (5) coping strategies and behaviors in response to fatigue, and (6)  
ethical perceptions and the relationship to dehumanization. This thematic organization allows for an in-depth  
and systematic exploration of the different dimensions of the experience of regular AI users.  
Data analysis method  
Data analysis followed a thematic approach (Giannelloni and Vernette, 2001), based on a grid inspired by  
Grounded Theory and implemented using QDA Miner. The coding was carried out jointly by the authors and a  
research assistant (Masmoudi and El Aoud, 2021), ensuring the robustness and reliability of the results. Adopting
an inductive and comparative approach (Glaser and Strauss, 1967), we performed double coding of four  
randomly selected interviews, assessing consistency using the inter-rater agreement rate (Ronan and Latham,  
1974), Scott's Pi (Scott, 1955), and Krippendorff's Alpha (Krippendorff, 1980) coefficients. The satisfactory results of these tests confirm the robustness of our analysis and allow us to interpret the data with confidence.
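As an illustration of the agreement indices cited above, the following sketch (a minimal illustration under assumed data, not the authors' code; the coded segments are hypothetical) computes the observed inter-rater agreement rate and Scott's Pi for two coders assigning nominal theme codes. Krippendorff's Alpha follows the same logic while adding a small-sample correction and support for missing data.

```python
# Illustrative sketch: observed agreement and Scott's Pi for two coders
# assigning one nominal theme code per text segment (hypothetical data).
from collections import Counter

coder_a = ["cognitive", "emotional", "relational", "ethical", "cognitive", "emotional"]
coder_b = ["cognitive", "emotional", "relational", "cognitive", "cognitive", "emotional"]

def observed_agreement(a, b):
    """Share of segments on which both coders assigned the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def scotts_pi(a, b):
    """Scott's Pi: agreement corrected for chance, using pooled category proportions."""
    p_o = observed_agreement(a, b)
    pooled = Counter(a) + Counter(b)
    n = len(a) + len(b)
    p_e = sum((count / n) ** 2 for count in pooled.values())
    return (p_o - p_e) / (1 - p_e)

print(f"Observed agreement: {observed_agreement(coder_a, coder_b):.2f}")  # 0.83
print(f"Scott's Pi: {scotts_pi(coder_a, coder_b):.2f}")                   # 0.76
```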
RESULTS  
The data analysis reveals a diversity of perceptions regarding artificial intelligence (see Table 2). On the one
hand, the coexistence of benefits and risks is clear: almost all participants recognize the practical advantages of  
artificial intelligence, while expressing growing mistrust of it. As one participant points out: "AI saves me a lot  
of time sorting through my emails, but I'm still afraid of what happens to my data" (P2, Male, 34). Another  
participant added: "Even when I know it’s supposed to help me, I can’t shake the feeling that there’s always  
something hidden behind it. It’s like I’m being watched, even when I’m just using a simple app" (P3, Female,  
29).  
On the other hand, individual variations emerge depending on age, usage habits, and the purpose of use  
(professional or personal). Contextual factors also play an important role. Fatigue is more pronounced when AI  
is used in sensitive areas such as health, finance, or parenting decisions. In this regard, one user said: "When I  
consult medical recommendations generated by AI, I feel additional stress. I find it difficult to know if I can really  
trust them" (P10, Male, 52). He went on to explain: "I start questioning everything — even things I would
normally trust. It’s like my own judgment is blurred because of all this automation. That’s mentally draining"  
(P10, Male, 52).
Finally, a close link between cognitive overload and ethical concerns emerges: the more abundant and difficult  
to process the information is, the more individuals feel a loss of control, fueling their concerns about the ethics  
and reliability of these technologies. As one participant noted: "Too many suggestions, too much data to analyze...  
I feel overwhelmed and I doubt the reliability of it all" (P18, Male, 31).  
The testimonials collected also highlight an ambivalent movement, oscillating between fascination with the  
practicality of artificial intelligence and a growing rejection of its coldness and omnipresence. This paradoxical  
relationship is expressed through three main dynamics.  
First, there is a quest for rehumanization: users express a desire for more sensitive, less automated interactions,  
seeking to restore an emotional and authentic dimension to their relationship with technology. As one participant  
said: "Even though AI can respond quickly, I still prefer face-to-face exchanges to feel that I am truly understood"  
(P7, Female, 36). Another respondent elaborated: "When I talk to a real person, there’s empathy, a tone, a look.
With AI, I get efficiency but no soul. After a while, it’s like everything feels empty" (P22, Female, 44).  
Secondly, a clear need for transparency emerges: understanding how AI systems work and how personal data is  
used is becoming essential for building trust. One respondent illustrates this point: "I need to understand what  
these tools are doing with my personal information. Without that, I feel vulnerable" (P11, Female, 33).  
Finally, a desire to regain control is becoming apparent: faced with the growing power and influence of smart  
devices, individuals are developing a heightened critical awareness and adapting their usage in order to preserve  
their psychological, ethical, and identity balance. One participant explains: "I disable automatic recommendations and filter notifications to stay in control of my choices" (P15, Female, 26). She continued: "It took me months to realize how much these systems were deciding for me. Now I’ve learned to slow down, to pause before clicking. That’s the only way I can feel like my choices are mine again" (P15, Female, 26).
Thus, the relationship with AI is built on a tension between efficiency and humanity, between technical performance and the need for meaning, reflecting a collective effort to restore a more human place for technology in everyday life.
Faced with this tension, consumers are not content to simply accept it: they are experimenting with forms of  
resistance and adjustment, seeking to reaffirm their individuality in a world increasingly mediated by algorithms.  
Some choose to voluntarily limit their use, imposing regular digital breaks on themselves to reduce mental  
overload. Others favor increased control over settings, disabling notifications or restricting automatic  
recommendations to regain control of their digital environment.  
At the same time, seeking human contact appears to be a preferred alternative: many users prefer to seek human  
advice or expertise when it comes to important decisions. Finally, external verification of information,  
particularly by cross-referencing different sources, is a way of restoring confidence in the face of the perceived  
limitations of AI systems. As one participant points out: "Whenever possible, I always check information with  
several sources before relying on AI" (P5, Female, 31).  
Dimensions of artificial intelligence fatigue: Qualitative analysis of the interviews highlights several new forms  
of artificial intelligence fatigue, reflecting a complex experience that intertwines psychological, ethical, and  
identity dimensions. Far from being simple cognitive exhaustion, this fatigue reflects a gradual erosion of  
meaning, relationships, and feelings of autonomy in the face of ubiquitous technologies.  
Four main types of fatigue emerge: fatigue from loss of meaning, relational fatigue with the machine, moral and  
ethical fatigue, and identity and autonomy fatigue.  
Fatigue from loss of meaning: saturation with algorithmic logic: The first form of weariness observed concerns  
the loss of meaning felt by users when faced with the predictive and standardized logic of AI devices. Automatic  
recommendations, although intended to simplify decision-making, paradoxically create a feeling of boredom and  
personal disconnection. As one participant put it: "I feel like everything is already decided for me, that my choices  
are no longer really my own" (P1, Female, 23).  
This saturation with algorithmic repetition leads to existential fatigue: several participants say they no longer  
find the spontaneity and creativity that previously characterized their digital interactions. Some even say they  
deliberately limit their use of smart tools in order to reintroduce randomness and "feel like they are in control"  
of their decisions again.  
Relational fatigue: the erosion of the human-machine bond: Another form of fatigue stems from the artificial  
relationship maintained with smart systems. Behind the efficiency and constant availability of chatbots and  
virtual assistants, users perceive a "soulless" form of communication. This feeling is summed up by one evocative  
testimony: "Sometimes talking to a chatbot is like talking to a brick wall: it responds, but it doesn't understand"  
(P5, Female, 31).  
This observation reflects a gradual disillusionment: AI, initially perceived as a facilitating presence, is becoming  
a source of frustration and even irritation. The interviews reveal a growing emotional disengagement from these  
devices. To compensate for this perceived dehumanization, some participants develop coping strategies, such as  
reintroducing human contact into their interactions (preferring a real advisor, a phone call, or face-to-face  
interaction).  
Moral and ethical fatigue: the unease of invisible surveillance: Moral fatigue stems from the feeling of constant  
technological presence and massive data collection perceived as intrusive. Participants mention a growing unease  
about digital surveillance and the commodification of their behavior, as well as the loss of privacy: "I know that  
everything I do is recorded, analyzed, sold... It's exhausting to feel constantly watched" (P12, Male, 30).
This constant vigilance leads to psychological wear and tear, fueled by mistrust and guilt about using tools whose abuses are well documented. Several users therefore develop defensive strategies: restricting access permissions,
creating multiple digital identities, or disabling certain smart features deemed too intrusive. These behaviors  
reflect a desire to regain control over a technological space that has become anxiety-provoking.  
Identity and autonomy fatigue: the dilution of the "digital self": Finally, identity fatigue is rooted in the tension  
between autonomy and dependence. Individuals feel that algorithms are having a growing influence on the  
formation of their tastes, opinions, and behaviors. One of the quotes illustrates this loss of bearings: "After a  
while, I no longer know if I like something because it's me, or because an algorithm suggested it to me" (P20,  
Male, 29).  
This feeling of losing oneself is accompanied by questions about freedom of choice and the development of  
personal judgment in the age of automation. Some participants, aware of this trend, adopt selective distancing  
strategies: temporarily uninstalling certain applications, diversifying their sources of information, or favoring  
"offline" experiences to preserve their cognitive and emotional autonomy.  
These results show that fatigue associated with the use of artificial intelligence is a multidimensional  
phenomenon, situated at the intersection of the cognitive, emotional, and social spheres. It reflects not only  
exhaustion in the face of technological complexity, but also the search for identity balance in a world increasingly  
shaped and mediated by algorithms.  
Table 2. Themes and sub-themes emerging from the qualitative analysis

| Themes | Sub-themes |
| --- | --- |
| Perceptions of artificial intelligence | Coexistence of benefits and risks; individual and contextual variations; cognitive overload and ethical concerns; ambivalent relationship with technology; quest for rehumanization; demand for transparency; desire to regain control |
| Adaptation strategies for coping with AI fatigue | Voluntary limitation of use; increased control over system parameters; seeking human contact; external verification of information |
| Dimensions of AI-related fatigue | Fatigue from loss of meaning; relational fatigue (erosion of the human–machine bond); moral and ethical fatigue; identity and autonomy fatigue |
DISCUSSION  
The results of this research highlight a profound ambivalence in individuals' relationship with artificial  
intelligence (AI), oscillating between attraction and concern, efficiency and humanity. This tension, which has  
been widely documented in the literature (Cave and Dignum, 2019; Longoni and Cian, 2022), is particularly  
evident here, revealing the emotional, cognitive, and ethical complexity of interactions with intelligent  
technologies. Our results can be interpreted in light of several relevant theoretical frameworks, including  
Cognitive Load Theory (Sweller, 1988), the Stress–Strain–Outcome Model (Tarafdar et al., 2019), and  
Conservation of Resources Theory (Hobfoll, 1989).  
Benefit/risk ambivalence and cognitive overload: Duality and cognitive fatigue in the face of AI  
The coexistence of perceived benefits and risks confirms previous findings that AI arouses both fascination and  
fear (Granulo et al., 2019). Participants overwhelmingly recognize the practical and functional advantages of  
these technologies, such as time savings, accessibility, and decision support, while expressing growing mistrust  
of their autonomy and opacity. This duality illustrates what Cave and Dignum (2019) describe as an "ambivalent  
relationship of critical dependence" on AI.  
Our results also highlight the cognitive and emotional fatigue associated with prolonged use of AI, particularly  
in sensitive contexts (health, parenting, finance). This cognitive overload can be interpreted in light of Cognitive  
Load Theory (Sweller, 1988): the abundance of information and the complexity of interactions increase cognitive  
load, limiting individuals' processing capacity and generating stress and frustration. The Stress–Strain–Outcome  
Model (Tarafdar et al., 2019) explains how this technological stress can cause psychological strain and  
adjustment behaviors. Furthermore, according to Conservation of Resources Theory (Hobfoll, 1989), the  
depletion of cognitive and emotional resources pushes individuals to adopt strategies aimed at protecting their  
well-being and autonomy.  
Contextual and individual sensitivity to AI fatigue  
The variations observed according to age, technological familiarity, or intended use confirm the classic models  
of technology adoption (Parasuraman, 2000; Venkatesh et al., 2003). However, our results nuance these  
approaches by emphasizing the importance of the context of use. AI is perceived as more intrusive and anxiety-provoking in situations involving high moral or emotional responsibility. This contextual sensitivity illustrates the
interaction between cognitive overload, technological stress, and resource availability, consistent with previous  
theories, and is in line with work on the ethical contextualization of AI (Floridi and Cowls, 2022).  
From mistrust to resistance: Critical agency and strategies for regulating AI fatigue  
Our findings reveal that individuals adopt strategies to regulate and protect resources: voluntarily limiting  
exposure time, disabling features, critically selecting sources, or resorting to human interlocutors. These  
behaviors are consistent with Conservation of Resources Theory, which posits that individuals protect and restore  
their resources in the face of environmental pressures. The Stress–Strain–Outcome model also helps to  
understand how AI-related fatigue and stress lead to these proactive adjustments, reflecting a form of digital
empowerment and identity preservation. Users thus appear as reflective actors, capable of developing forms of  
symbolic and behavioral resistance to maintain their autonomy in an algorithmic environment.  
Transparency and rehumanization: Towards relational ethics and reduced AI fatigue  
The requirements for transparency and understanding of AI systems confirm the widely shared ethical concerns  
about "trustworthy AI" (Floridi and Cowls, 2019; European Commission, 2021). Our data show that the search  
for more human interactions is a strategy for reducing stress and protecting emotional resources, in line with  
Cognitive Load Theory: intuitive and emotionally rich interactions reduce cognitive load and improve the  
perception of control. This quest for rehumanization extends the work on social robotics (Flandorfer, 2012;  
Vlachos and Schärfe, 2014; Sundar et al., 2022) and introduces an affective dimension that is often overlooked  
in ethical debates focused on regulation. It invites us to conceive of AI ethics not only as a normative framework,
but as a relational ethic, based on mutual recognition and the restoration of human connection.  
Thus, the relationship with AI appears as a space of constant negotiation between technical efficiency and  
humanity, where users mobilize their cognitive, emotional, and social resources to maintain psychological and  
moral balance. Confrontation with Cognitive Load, Stress–Strain–Outcome, and Conservation of Resources  
theories helps us understand how cognitive overload and technological stress influence the perception of risks  
and benefits, motivating strategies of adjustment and resistance. AI thus becomes a trigger for adaptive learning  
and identity co-construction, highlighting that technological adoption is inseparable from emotional, ethical, and  
contextual dimensions.  
Fatigue and AI: Interactions between cognition, ethics, and identity  
The results of this study reveal that fatigue related to the use of artificial intelligence (AI) goes beyond the  
traditional framework of cognitive exhaustion to become a multidimensional experience involving  
psychological, ethical, and identity dimensions. This approach enriches the existing literature by offering an  
integrative and nuanced view of digital fatigue.  
The fatigue of loss of meaning, as observed in participants, is consistent with the work of Susskind and Susskind  
(2015), who emphasize that the automation of decisions can reduce active engagement and generate a sense of  
alienation. However, our results extend this perspective by revealing an existential dimension: algorithmic  
repetition and the predictability of recommendations lead to a loss of creativity and subjective autonomy, which  
traditional models of cognitive overload (Sweller, 1988) do not fully capture. The proactive attitude of  
participants, who voluntarily limit their use of AI to restore a sense of control, echoes Garcia's (2012)  
observations on the quest for spontaneity and reappropriation in digital interactions.  
Relational fatigue illustrates the gradual disillusionment with intelligent systems. While the literature on the  
uncanny valley and the humanization of interfaces (Reeves and Nass, 1996; Nass and Moon, 2000; Zhang et al.,  
2020) shows that human-machine interactions can generate emotional tension, our data specifies that this  
disillusionment is accompanied by lasting emotional disengagement and compensation through human  
interactions. This observation qualifies the idea that simply humanizing devices is sufficient to maintain a  
satisfactory emotional connection.  
Moral and ethical fatigue, linked to constant surveillance and the commodification of behavior, is consistent with  
Zuboff's (2023) work on surveillance capitalism and with studies on "privacy fatigue" (Choi et al., 2018; Lyu et  
al., 2024; Wang et al., 2025). Nevertheless, our results highlight active defensive strategies employed by users,  
such as restricting permissions and diversifying digital identities, which demonstrate an attempt to reclaim and  
regulate the digital space, going beyond the simple passive resignation described in the literature.  
Finally, identity and autonomy fatigue reveals the tension between algorithmic influence and self-construction.  
While the effects of filter bubbles on individuals' preferences and opinions have already been documented  
(Pariser, 2011), our results highlight that this influence can lead to a dilution of digital identity and a questioning  
of personal choices. Strategies of selective distancing and seeking disconnected experiences illustrate a proactive  
reaffirmation of autonomy and freedom of judgment, extending Turkle's (2015) work on identity in the digital  
age.  
The central contribution of this study lies in the multidimensionality and interdependence of fatigue. Unlike  
previous work that addresses cognitive, emotional, or moral fatigue separately (Kahneman, 1973; Dabbish and  
Kraut, 2006), our results suggest that these dimensions reinforce each other, forming a global phenomenon where  
loss of meaning, relational disillusionment, moral vigilance, and identity dilution combine to generate complex  
and lasting fatigue. This perspective invites us to consider AI-related fatigue not only as functional exhaustion,  
but also as a socio-technical and identity signal, revealing tensions between automation, autonomy, and self-  
construction in the digital age.  
Research Contributions  
This study makes several major contributions, theoretical, methodological, and managerial, helping to enrich our understanding of the phenomenon of fatigue linked to artificial intelligence.
On a theoretical level, this research makes a significant contribution to the literature on digital fatigue by  
extending the concept to the specific context of artificial intelligence. While previous work has dealt separately  
with cognitive overload, relational disillusionment, and ethical mistrust, this study shows that AI fatigue is a
multidimensional phenomenon resulting from the intertwining of the cognitive, emotional, ethical, and identity  
spheres.  
It therefore introduces an original typology of AI fatigue, based on four dimensions:
- Fatigue from loss of meaning (algorithmic saturation and existential disengagement);
- Relational fatigue (weariness with human-machine connections);
- Moral and ethical fatigue (unease with surveillance and the commodification of data);
- Identity and autonomy fatigue (dilution of the digital self).
This conceptualization enriches existing theoretical frameworks, such as Cognitive Load Theory, Stress–Strain–  
Outcome Model, and Conservation of Resources Theory, by integrating them into a holistic and sociotechnical  
perspective that takes into account subjectivity, meaning, and human values. It thus allows fatigue to be  
interpreted not simply as a consequence of technological complexity, but as an indicator of identity and ethical  
tension in the human-machine relationship.  
Beyond marketing and cognitive psychology approaches, it offers a sociological and existential reading of the  
relationship with artificial intelligence, where fatigue appears as a sociotechnical warning signal, revealing  
tensions between automation, autonomy, and identity construction in hyperconnected societies. It thus invites
us to rethink technological adoption not as a simple question of efficiency, but as a process of balancing  
humanity, meaning, and performance.  
Methodologically, this study adopts an exploratory qualitative approach based on semi-structured interviews,  
which is innovative in a field still dominated by quantitative and instrumental approaches to digital fatigue. This  
approach gives voice to users and reveals the diversity of experiences and regulation strategies (limiting use,  
seeking human contact, increased control), paving the way for a phenomenological and embodied understanding  
of the relationship with AI.  
Finally, from a managerial and practical standpoint, the research invites designers, companies, and institutions  
to integrate AI-related fatigue as a key indicator of digital well-being. It recommends promoting more human,  
transparent, and ethical interfaces that reduce cognitive overload and restore trust, while fostering a sustainable  
balance between technological efficiency, autonomy, and humanity.  
Limitations And Future Avenues Of Research  
Despite the significant contributions of this research, several limitations are worth highlighting, opening up  
prospects for future work.  
First, the number of interviews (22 participants) corresponds to the methodological standards recommended for  
exploratory qualitative studies and allowed semantic saturation to be achieved. Nevertheless, to strengthen the  
robustness and generalizability of the results, it would be relevant to conduct complementary qualitative studies,  
for example, using focus groups or digital ethnography. Such approaches could allow for a more in-depth  
exploration of the diversity of experiences and enrich our understanding of the dimensions of AI fatigue in  
various contexts of use.  
Moreover, this study was conducted within a specific cultural and geographical context, which, while allowing  
for in-depth exploration of meanings and experiences, limits the cross-cultural generalization of the findings.  
Future research should therefore extend the investigation to participants from diverse regions and cultural  
backgrounds to allow for cross-cultural validation of the proposed framework. Such comparative studies could  
explore how cultural orientations, such as individualism versus collectivism, uncertainty avoidance, or power  
distance, shape the cognitive, emotional, and ethical responses to AI. Broadening the cultural scope would  
enhance the robustness and generalizability of the findings and reveal potential cultural variations in how users  
perceive and cope with AI fatigue.  
Second, although the qualitative approach adopted provides a rich and nuanced analysis, it does not allow for  
quantifying the extent or intensity of the different forms of fatigue. Future research could therefore be  
supplemented by quantitative or mixed studies, incorporating scales measuring cognitive, emotional, relational,  
and ethical fatigue, in order to test the validity of the identified dimensions and assess their prevalence within  
larger populations.  
Furthermore, this study focused on regular AI users, neglecting the perspectives of occasional users or non-users.  
It would be relevant to examine these groups in future research to determine the extent to which the frequency  
and intensity of exposure to smart technologies influence fatigue and to identify the user profiles most likely to  
experience negative effects.  
Finally, although this research highlights the psychological, emotional, and relational dimensions of AI-related  
fatigue, it does not address in depth the long-term impacts on consumer behavior, such as the abandonment of  
certain devices, changes in purchasing habits, or shifts in trust in technology. Future studies could adopt a  
longitudinal perspective to track the evolution of fatigue over time and assess its consequences on consumer  
behavior and decisions.  
REFERENCES  
1. Ayyagari, R., Grover, V., & Purvis, R. (2011). Technostress: Technological antecedents and implications. MIS Quarterly, 35(4), 831-858.
2. Bonsu, S. K., & Belk, R. W. (2003). Do not go cheaply into that good night: Death-ritual consumption in Asante, Ghana. Journal of Consumer Research, 30(1), 41-55.
3. Borges, A. F., Laurindo, F. J., Spínola, M. M., Gonçalves, R. F., & Mattos, C. A. (2021). The strategic  
use of artificial intelligence in the digital era: Systematic literature review and future research directions.  
International Journal of Information Management, 57, 102225.  
4. Bright, L. F., & Logan, K. (2018). Is my fear of missing out (FOMO) causing fatigue? Advertising, social media fatigue, and the implications for consumers and brands. Internet Research, 28(5), 1213-1227.
5. Cave, S., & Dignum, V. (2019). The role of AI in achieving sustainable development goals. AI & Society, 34(4), 527-533.
6. Choi, H., Park, J., & Jung, Y. (2018). The role of privacy fatigue in online privacy behavior. Computers  
in Human Behavior, 81, 42-51.  
7. da Silva, F. P., Jerónimo, H. M., Henriques, P. L., & Ribeiro, J. (2024). Impact of digital burnout on the  
use of digital consumer platforms. Technological Forecasting and Social Change, 200, 123172.  
8. Dabbish, L. A., & Kraut, R. E. (2006). Email overload at work: An analysis of factors associated with email strain. Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work, 431-440.
9. Delacroix, É., Jolibert, A., Monnot, E., Jourdan, P. (2021). Marketing Research: Méthodes de recherche  
et d'études en marketing. France: Dunod.  
10. Evrard, Y., Pras, B., & Roux, E. (2009). Market: Fondements et méthodes des recherches en marketing (4th ed.). Paris: Dunod.
11. Fan, W., Osman, S., Zainudin, N., & Yao, P. (2024). How information and communication overload affect  
consumers’ platform switching behavior in social commerce. Heliyon, 10(10).  
12. Fernandes, T., & Oliveira, R. (2024). Brands as drivers of social media fatigue and its effects on users’  
disengagement: The perspective of young consumers. Young Consumers, 25(5), 625-644.  
13. Flandorfer, P. (2012). Population ageing and socially assistive robots for elderly persons: The importance of sociodemographic factors for user acceptance. International Journal of Population Research, 2012(1), 829835.
14. Floridi, L., & Cowls, J. (2022). A unified framework of five principles for AI in society. Machine Learning and the City: Applications in Architecture and Urban Design, 535-545.
15. Fournier, S. (1998). Consumers and their brands: Developing relationship theory in consumer research. Journal of Consumer Research, 24(4), 343-373.
16. Gao, Y., & Liu, H. (2023). Artificial intelligence-enabled personalization in interactive marketing: a  
customer journey perspective. Journal of Research in Interactive Marketing, 17(5), 663–680.  
17. Garcia, P. (2012). Alone Together: Why We Expect More from Technology and Less from Each Other by  
Sherry Turkle. InterActions: UCLA Journal of Education and Information Studies, 8(1).  
18. Giannelloni, J. L., & Vernette, E. (2001). Études de marché. Paris: Vuibert.
19. Glaser, B. G., & Strauss, A. L. (1967). The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago: Aldine.
20. Granulo, A., Fuchs, C., & Puntoni, S. (2019). Psychological reactions to human versus robotic job replacement. Nature Human Behaviour, 3(10), 1062-1069.
21. Hobfoll, S. E. (1989). Conservation of resources: A new attempt at conceptualizing stress. American Psychologist, 44(3), 513-524.
22. Islam, A. N., Laato, S., Talukder, S., & Sutinen, E. (2020). Misinformation sharing and social media fatigue during COVID-19: An affordance and cognitive load perspective. Technological Forecasting and Social Change, 159, 120201.
23. Kahneman, D. (1973). Attention and Effort. Englewood Cliffs, NJ: Prentice-Hall.
24. Kotler, P., Kartajaya, H., & Setiawan, I. (2021). Marketing 5.0: Technology for Humanity. Hoboken, NJ: John Wiley & Sons.
25. Krippendorff, K. (1980). Content Analysis: An Introduction to Its Methodology. Beverly Hills, CA: Sage.
26. Lee, S., Erdem, M., Anlamlier, E., Chen, C. C., Bai, B., & Putney, L. (2023). Technostress and hotel guests: A mere hurdle or a major friction point? Journal of Hospitality and Tourism Management, 55, 307-317.
27. Lițan, D. E. (2025). The impact of technostress generated by artificial intelligence on the quality of life: The mediating role of positive and negative affect. Behavioral Sciences, 15(4), 552.
28. Longoni, C., & Cian, L. (2022). Artificial intelligence in utilitarian vs. hedonic contexts: The "word-of-machine" effect. Journal of Marketing, 86(1), 91-108.
29. Lyu, T., Guo, Y., & Chen, H. (2024). Understanding the privacy protection disengagement behaviour of contactless digital service users: The roles of privacy fatigue and privacy literacy. Behaviour & Information Technology, 43(10), 2007-2023.
30. Marsh, E., Perez Vallejos, E., & Spence, A. (2024). Overloaded by information or worried about missing out on it: A quantitative study of stress, burnout, and mental health implications in the digital workplace. Sage Open, 14(3), 21582440241268830.
31. Marshall, M. N. (1996). Sampling for qualitative research. Family Practice, 13(6), 522-526.
32. Masmoudi, M. H., & El Aoud, N. (2021). Le style d'achat hybride : conceptualisation et proposition d'un instrument de mesure. Recherches en Sciences de Gestion, 143(2), 87-111.
33. Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103.
34. Nguyen, K. M., Nguyen, N. T., Ngo, N. T. Q., Tran, N. T. H., & Nguyen, H. T. T. (2024). Investigating consumers' purchase resistance behavior to AI-based content recommendations on short-video platforms: A study of greedy and biased recommendations. Journal of Internet Commerce, 23(3), 284-327.
35. Panetta, F. (2018). Fintech and banking: Today and tomorrow. Speech of the Deputy Governor of the Bank of Italy, Rome, 12 May. Banca-Italia-Panetta-Intervento Fintech.pdf
36. Parasuraman, A. (2000). Technology Readiness Index (TRI): A multiple-item scale to measure readiness to embrace new technologies. Journal of Service Research, 2(4), 307-320.
37. Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. Penguin Press.
38. Peake, J. M., Kerr, G., & Sullivan, J. P. (2018). A critical review of consumer wearables, mobile applications, and equipment for providing biofeedback, monitoring stress, and sleep in physically active populations. Frontiers in Physiology, 9, 743.
39. Ragolane, M., & Patel, S. (2025). Too much, too fast: Understanding AI fatigue in the digital acceleration era. International Journal of Arts, Humanities and Social Sciences, 6(8).
40. Reeves, B., & Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People. Cambridge University Press.
41. Ronan, W. W., & Latham, G. P. (1974). The reliability and validity of the critical incident technique: A closer look. Studies in Personnel Psychology, 6(1), 53-64.
42. Sahebi, A. G., Kordheydari, R., & Aghaei, M. (2022). A new approach in marketing research: Identifying the customer expected value through machine learning and big data analysis in the tourism industry. Asia-Pacific Journal of Management and Technology (AJMT), 2(3), 26-42.
43. Scott, W. A. (1955). Reliability of content analysis: The case of nominal scale coding. Public Opinion Quarterly, 19(3), 321-325.
44. Shalu, Verma, N., Dev, K., Bhardwaj, A. B., & Kumar, K. (2025). The cognitive cost of AI: How AI anxiety and attitudes influence decision fatigue in daily technology use. Annals of Neurosciences, 09727531251359872.
45. Sundar, S. S., Jia, H., Bellur, S., Oh, J., & Kim, H. S. (2022). News informatics: Engaging individuals with data-rich news content through interactivity in source, medium, and message. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (pp. 1-17).
46. Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257-285.
47. Tarafdar, M., Cooper, C. L., & Stich, J. F. (2019). The technostress trifecta - techno eustress, techno distress and design: Theoretical directions and an agenda for research. Information Systems Journal, 29(1), 6-42.
48. Tarafdar, M., Pullins, E. B., & Ragu-Nathan, T. S. (2015). Technostress: Negative effect on performance and possible mitigations. Information Systems Journal, 25(2), 103-132.
49. Thompson, C. J., & Haytko, D. L. (1997). Speaking of fashion: Consumers' uses of fashion discourses and the appropriation of countervailing cultural meanings. Journal of Consumer Research, 24(1), 15-42.
50. Turkle, S. (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books.
51. Turkle, S. (2015). Reclaiming Conversation: The Power of Talk in a Digital Age. Penguin Press.
52. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478.
53. Vlachos, E., & Schärfe, H. (2014). Social robots as persuasive agents. In International Conference on Social Computing and Social Media (pp. 277-284). Cham: Springer International Publishing.
54. Wang, S. (2025). The influence of AI in marketing. Proceedings of the 3rd International Conference on Financial Technology and Business Analysis. DOI: 10.54254/2754-1169/151/2024.19326
55. Wang, W., Wu, Q., Li, D., & Tian, X. (2025). An exploration of the influencing factors of privacy fatigue among mobile social media users from the configuration perspective. Scientific Reports, 15(1), 427.
56. Zarantonello, L., Grappi, S., & Formisano, M. (2024). How technological and natural consumption experiences impact consumer well-being: The role of consumer mindfulness and fatigue. Psychology & Marketing, 41(3), 465-491.
57. Zhang, J., Li, S., Zhang, J. Y., Du, F., Qi, Y., & Liu, X. (2020). A literature review of the research on the uncanny valley. In International Conference on Human-Computer Interaction (pp. 255-268). Cham: Springer International Publishing.
58. Zuboff, S. (2023). The age of surveillance capitalism. In Social Theory Re-Wired (pp. 203-213). Routledge.