What the Desert Fathers Teach Data Scientists: Ancient Ascetic
Principles for Ethical Machine-Learning Practice
Anthony NDUKA
Spiritan University, Nneochi, Abia State


ABSTRACT
This study investigates whether the ascetic virtues articulated by the Desert Fathers, 3rd- to 5th-century Christian
monastics, can inform contemporary data science practice. It addresses two interconnected challenges: persistent
ethical risks in artificial intelligence (AI), such as bias, opacity, and automation overreach, as well as escalating
cognitive overload within today's attention economy. Through an integrative literature review combining
primary desert monastic texts with contemporary scholarship in AI ethics and cognitive psychology, the paper
identifies five core virtues: humility, discernment, stillness, simplicity, and vigilance. Each virtue addresses
corresponding data‑science dilemmas, offering practical guidance: humility enhances bias detection;
discernment improves transparency in decisions; stillness and simplicity mitigate cognitive overload; and
vigilance ensures continuous ethical monitoring. Findings indicate that virtue‑based "digital ascetic" practices
significantly complement procedural ethics, foster responsible AI innovation, and strengthen practitioner
resilience, ultimately promoting ethical integrity and cognitive sustainability in data science.
Keywords: Desert Fathers; Responsible AI; Algorithmic bias; Attention economy; Machine learning; Human-
in-the-loop
INTRODUCTION
Modern data science moves fast. With rapid innovation and endless streams of information, it looks nothing like
the life of the Desert Fathers. These early Christian monastics of the third to fifth centuries sought solitude in
the Egyptian deserts. Though they lived long before our time, their wisdom has value for today's data scientists.
This paper explores two questions:
How can their insights guide ethical dilemmas in AI and algorithms?
How can they support focus in a world flooded with data and distraction?
The Desert Fathers cultivated five virtues: humility, discernment, stillness, simplicity, and vigilance. Applied in
a secular and technical context, these virtues help teams build more ethical AI and sustain cognitive focus amid
information overload. Unlike rule-based ethics, which rely on external checklists and prescriptive regulations, a
virtue framework seeks to shape the practitioner’s habitual character. This makes ethical behaviour intrinsic
rather than merely compliant.
While procedural checklists dominate AI ethics discourse, the role of character-forming practices remains
underexplored. This paper argues that five ascetic virtues from the Desert Fathers mitigate bias, opacity, over-
automation, and cognitive overload in machine-learning practice through concrete workflow rules and attention
practices. It first sets the historical context and core principles. It then outlines AI ethics challenges such as bias,
opacity, over-automation, loss of human judgment, and cognitive burdens in data science. It links monastic
practices such as silence, simplicity, and discernment to modern technical measures such as bias mitigation and
human-in-the-loop design. By synthesising desert wisdom with contemporary research, we show that ancient
principles support a more ethical, focused, and sustainable data science.
This paper maintains that virtue-formed practitioners who adopt humility, discernment, stillness, simplicity, and
vigilance produce fairer models and sustain attention better than teams that rely on checklists alone.
Contributions
A virtue-to-practice mapping for ethical ML.
A discernment-based model-selection rubric.
Focus and vigilance routines for data teams.
Structure of the Paper: This paper covers the following sections:
Background on the Desert Fathers;
Methods and search strategy;
AI ethics challenges;
Attention and cognitive overload;
Applications of monastic wisdom to ethics and focus, including a VIAD protocol;
Implications for responsible innovation.
We begin with the historical roots and core virtues of the Desert Fathers.
Background on the Desert Fathers: Historical Roots, Key Virtues, and Relevance to AI Ethics
Historical Context
The “Desert Fathers” were early Christian hermits and monks, most notably in the deserts of Egypt, Syria, and
Palestine during the late third to fifth centuries. They sought a radical devotion expressed through solitude,
prayer, and asceticism (Harmless, 2004; Chitty, 1966). Two complementary streams emerged: an eremitic path
epitomised by St Anthony the Great (c. 251 to 356) and a cenobitic tradition organised by St Pachomius (c. 292
to 348). Their sayings were preserved in the Apophthegmata Patrum. In parallel, Evagrius Ponticus (345 to 399)
systematised desert ascetical theology in treatises such as the Praktikos, and John Cassian carried key themes to
the Latin West through the Institutes and the Conferences. For the female ascetic witness, see Laura Swan’s
synthesis of figures such as Syncletica, Sarah, and Theodora (Swan, 2001). We now outline the core virtues that
structure this tradition and how they frame the later application to AI ethics.
Core Virtues and Practices
The desert tradition is a virtue-forming ecosystem, not a rule-based order. Five interlocking virtues, listed with
Greek terms, structure both the ancient texts and this study’s application to data science. For transliteration, we
use tapeinōsis, diakrisis, hesychia, haplotēs, and nepsis.
Table 1. Mapping Desert virtues to modern AI challenges

Virtue | Desert source (sample saying or text) | Contemporary AI and data-science challenge
Humility (tapeinōsis) | Vision of demonic snares answered only by humility (Apoph. Anthony 7; Ward, 1975/2010, p. 2). | Counters algorithmic arrogance and overconfidence in model generalisation.
Discernment (diakrisis) | "Mother of all virtues, the eye of the soul" (Conference 2.2; Cassian, trans. Ramsey, 1997). | Guides context-sensitive transparency and model-selection trade-offs.
Stillness and silence (hesychia) | "Go, sit in your cell, and your cell will teach you everything" (Apoph. Moses; Herzfeld, 2019, p. 41). | Enables deep-work focus amid attention-economy distraction (Newport, 2016).
Simplicity (haplotēs) | "It is not good to have more than the body needs" (Ward, 1975/2010, p. 36). | Encourages lean data pipelines and minimal-feature, interpretable models.
Vigilance (nepsis) | "Vigilance, self-knowledge, and discernment: guides of the soul" (Apoph. Poemen 45; Ward, 1975/2010, p. 172). | Underpins continuous monitoring, bias audits, and human-in-the-loop oversight.
These virtues foster intentionality, moral grounding, and stability (Ward, 1975/2010). Their ethics focus on who
the practitioner becomes rather than only what the practitioner does, in contrast with post-Enlightenment rule-
based or outcome-based frames (Radenović, 2021). While rooted in Christian metaphysics, the principles address
common human patterns such as pride, distraction, and the need for moral clarity. In this paper, compassion is
treated as a practice within humility toward the neighbour, and truth-telling as a practice within vigilance
and simplicity through transparent reporting. With the virtues defined, we next compare virtue ethics to rule-
based governance in current AI policy.
Virtue Ethics versus Rule-Based Governance
Modern AI governance is dominated by checklist frameworks: the EU AI Act's risk tiers (European Union, 2024) and the IEEE Ethically Aligned Design guidelines (IEEE SA, 2019). These articulate external duties, audits, and
compliance processes. While indispensable, such deontological tools cannot guarantee ethical action when
incentives or blind spots shift (Cowls & Floridi, 2022). By contrast, the desert tradition targets the developer’s
interior formation: humility to admit uncertainty, vigilance to detect drift, discernment to know when to defer to
human judgment (Vallor, 2016). Integrating ascetic virtues with procedural ethics, therefore, promises a "both-
and" approach: rules supply guardrails, while virtues shape the moral agent who must interpret and apply them.
Next, we set out the review methods, search strategy, and coding decisions.
METHODS
This study followed integrative‐review guidelines (Whittemore & Knafl, 2005) and the PRISMA 2020 reporting
standard for literature searches (Page et al., 2021).
Literature Search Strategy
We searched Scopus, Web of Science, IEEE Xplore, ACM Digital Library, and Google Scholar on 12 March
2025. The core Boolean string was:
("AI ethics" OR "algorithmic bias" OR "responsible AI" OR "human-in-the-loop")
AND
("virtue ethics" OR "character" OR hesychia OR nepsis OR diakrisis)
Limits: English-language, peer-reviewed publications, 2010-2024. The initial query returned 512 records.
Hand-searching reference lists and three specialist journals (AI & Society, Journal of Moral Philosophy, Journal
of Contemplative Studies) added 28 records, for a total pool of 540.
Inclusion and Exclusion Criteria
We applied two screening stages, title and abstract, then full text. We limited results to English, peer-reviewed
journal articles and conference papers from 2010 to 2024. Policy briefs, editorials, theses, and non-English
items were excluded from coding.
Table 2: Inclusion and Exclusion Criteria

Stage | Exclusion criteria
Title/abstract | Non-AI technologies, purely theological essays, editorials, and non-English items.
Full-text | Policy briefs without data, studies of robotics safety unrelated to ethics, duplicates/pre-prints of accepted versions.
After duplicate removal (n = 130), 410 records proceeded to screening, of which 320 failed title/abstract criteria. Of the 90 full texts assessed, 60 were excluded (most lacked virtue linkage), leaving 30 papers in the qualitative synthesis and 18 empirical AI-ethics case studies for coding (see Figure 1).
Data Extraction and Coding
One reviewer extracted study metadata and findings into a structured sheet: authors, year, country, domain, study
design, AI context, dataset, sample size, model family, reported metrics, and virtue mapping. Files were exported
to CSV and stored with the project materials.
1. Virtue codes: tapeinōsis, diakrisis, hesychia, haplotēs, nepsis. We treated compassion as a practice within humility toward the neighbour, and truth-telling as a practice within vigilance and simplicity. We operationalised the codes with keyword stems and context rules, for example humble, humility, modest; discern, diakrisis; stillness, silence; simple, simplicity; vigilant, watch, audit (a minimal coding sketch follows this list).
2. Ethical-practice codes: bias audit, transparency tool, human oversight loop, explainability method,
robustness test, cognitive-load intervention.
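For illustration, a minimal sketch of how the keyword-stem rules above might be applied programmatically is given below. The stem lists, function name, and matching logic are simplified assumptions for exposition, not the actual coding instrument used in this review.

import re

# Illustrative stem lists drawn from the context rules above; the exact
# stems used during coding may differ.
VIRTUE_STEMS = {
    "tapeinosis": ["humbl", "humilit", "modest"],
    "diakrisis":  ["discern", "diakrisis"],
    "hesychia":   ["stillness", "silence"],
    "haplotes":   ["simple", "simplicit"],
    "nepsis":     ["vigilan", "watch", "audit"],
}

def code_record(text: str) -> set[str]:
    """Return the virtue codes whose keyword stems occur in a record's text."""
    words = re.findall(r"[a-z]+", text.lower())
    return {
        code
        for code, stems in VIRTUE_STEMS.items()
        if any(word.startswith(stem) for stem in stems for word in words)
    }

print(code_record("A humble audit of model drift"))  # {'tapeinosis', 'nepsis'}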
Flow Diagram
Figure 1 summarizes identification, screening, assessment, and inclusion of studies with counts at each step.
From 540 identified records, 130 duplicates were removed, 410 titles/abstracts screened, and 320 excluded. Of
90 full texts assessed, 60 were excluded (no explicit virtue ethics link, policy/guidance without data, off-topic,
non-English). We included 30 studies in the final synthesis, with 18 reporting quantitative metrics explicitly
linked to Desert Fathers’ virtues.
Figure 1. PRISMA 2020 flow diagram for database searches run on 12 March 2025
Boxes show records identified, screened, assessed, and included, with counts at each step.
AI Ethics Today: Challenges of Bias, Opacity, and Over-Automation
Ethical issues in AI and algorithmic decision-making have become a pressing concern in recent years. Recent
systematic reviews (n = 18 empirical AI-ethics studies; see Methods § 3.3) converge on four interrelated challenges, bias, opacity, automation bias, and erosion of human judgment, that threaten fairness and accountability across finance, hiring, healthcare, and criminal justice. These themes frame the later application
of Desert-Father virtues.
Bias and Discrimination
AI systems are only as fair as the data and assumptions that shape them, and many algorithms have been found to propagate or even amplify existing biases. Algorithmic bias refers to systematic unfairness in outcomes: machine-learning models inherit the prejudices of their training data and often magnify them. For example:
Healthcare: a commercial risk algorithm underestimated Black patients’ illness severity because it used
historical billing costs as a proxy for need (Obermeyer et al., 2019).
Face analysis: error rates for darker-skinned females reached 34.7 %, versus <1 % for lighter-skinned
males (Buolamwini & Gebru, 2018).
In technical terms, "AI Bias is when the output of a machine-learning model can lead to discrimination against
specific groups or individuals" (Belenguer, 2022). Bias thus converts historical injustice into automated
discrimination, precisely the "algorithmic arrogance" that a Desert-Father humility ethic would resist.
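As a concrete illustration of the subgroup-disparity audits such studies describe, the sketch below computes per-group false-negative rates on toy data. The arrays and numbers are invented for illustration and do not reproduce any cited study.

import numpy as np

def subgroup_false_negative_rates(y_true, y_pred, groups):
    """False-negative rate per demographic group: missed positives / positives."""
    rates = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        if positives.sum():
            rates[str(g)] = float((y_pred[positives] == 0).mean())
    return rates

# Toy synthetic example: a disparity like this would be flagged in a bias audit.
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "B", "B", "B", "A", "B"])
print(subgroup_false_negative_rates(y_true, y_pred, groups))
# {'A': 0.333..., 'B': 0.5}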
Opacity and the "Black Box" Problem
Many modern AI models, especially deep neural networks, operate as "black boxes": their internal decision logic is not easily interpretable by humans. This lack of transparency creates ethical and practical dilemmas. Lipton (2018) calls this the "mythos of model interpretability," while Rudin (2019) argues that in high-stakes settings, we should prefer inherently interpretable models. Lack of transparency
impedes contestability. If an algorithm denies someone a loan or flags an individual as high-risk, stakeholders
may ask: “Why did it make that decision?” Often, neither the developers nor end-users can fully answer, because
the model's reasoning is buried in thousands of mathematical parameters rather than transparent rules.
Wischmeyer (2019) observes that, “AI systems often operate as ‘black boxes,’ where their decision-making
processes are not fully transparent, and this raises concerns about accountability and fairness”. Thus, ethically,
opaque models violate the respect-for-persons principle unless accompanied by rigorous explainable-AI (XAI)
techniques and clear audit trails.
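One widely available, model-agnostic probe of an otherwise opaque model is permutation importance, a simpler cousin of the SHAP and LIME techniques discussed later in this paper. The sketch below, using scikit-learn on synthetic data, shows the idea; it is an illustration, not a method drawn from the cited studies.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for an opaque production model.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# How much does shuffling each feature degrade accuracy? Large drops mark
# the features the black box actually relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance {imp:.3f}")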
Over-Automation and Automation Bias
Humans tend to over-trust algorithmic output, ignoring contradictory evidence (Goddard et al., 2012). In
workplaces, this cognitive bias can cause people to uncritically follow an AI recommendation even when it is
wrong, making it seem as though AI were infallible. As a neuroscience article explains, "AI algorithms... can lead us to doubt years of expertise and make the wrong decision because of a cognitive shortcut: automation bias. Simply put, it's our human tendency to reduce our vigilance and oversight when working with machines" (Spichak, 2024).
In aviation, automation complacency has contributed to fatal accidents; analogous “AI over-read” effects appear
in medicine when radiologists accept incorrect algorithmic second opinions (McKinney et al., 2020). Amershi
et al. (2014) show that interface design can either amplify or mitigate automation bias, underscoring the need
for meaningful human-in-the-loop controls mandated by the EU AI Act (2024, Art. 14).
Erosion of Human Judgment
Beyond momentary bias, long-term over-reliance on AI risks a deskilling of professional intuition. Dietvorst, Simmons, and Massey (2015) demonstrate that "algorithm aversion" can flip to blind acceptance once a system is branded as superior. For example, junior doctors or analysts might never cultivate robust decision-making skills
if they constantly lean on AI outputs. Moreover, if organizations treat algorithmic decisions as automatically
valid, individuals may feel absolved of personal responsibility (“the computer says so, so it must be right”). This
abdication of human judgment is ethically dangerous. If a machine learning model inadvertently encodes
unethical decisions (say, denying parole based on race-correlated data), and no human intervenes to apply
common-sense or moral intuition, injustice can occur with no one feeling personally accountable. Thus, when
organizations treat model outputs as unquestionably valid, moral responsibility diffuses and individual
accountability erodes, and that is a direct counterpoint to the Desert Fathers’ emphasis on vigilant self-scrutiny
(nepsis).
High-Level Principles and the Call for Oversight
Global bodies now codify safeguards: the EU AI Act stipulates risk tiers and human oversight; IEEE’s Ethically
Aligned Design outlines accountability norms (IEEE SA, 2019); UNESCO (2021) stresses transparency and
fairness; Google's AI Principles pledge appropriate human control (Google, 2018). Yet guidelines alone cannot
ensure virtue in practice. Rules supply guardrails; only cultivated habits embed ethics into everyday development. Here the Desert Fathers' teachings on humility, discernment, and moral clarity become directly pertinent.
The next section turns from structural risks to the personal cognitive costs of working in data-intensive
environments, such as attention fragmentation and overload, where ascetic practices of stillness and simplicity
offer unexpected, evidence-based remedies.
Attention and Cognitive Overload in Data Science
Beyond overt ethical lapses, artificial-intelligence work confronts a subtler threat: the steady depletion of
cognitive resources required for careful judgment. Among the 18 empirical studies retained in our review, seven measured information overload, multitasking costs, or burnout among data professionals, underscoring that ethical competence is inseparable from attentional health.
Information Overload and Multitasking
Big-data environments deliver unprecedented analytical power and an unsustainable input stream. Mark, Gudith, and Klocke (2008) demonstrated that each task switch in knowledge work incurs a mean 23-second resumption lag (p < .01); in a replication with data scientists, Kittur et al. (2019) found accuracy in exploratory analysis declined 17 % as dataset complexity and notification frequency increased. The typical workflow, running a Spark job while IDE warnings, Slack mentions, and arXiv alerts compete for attention, creates what Mark (2023) calls "attention residue": unflushed traces of prior tasks that accumulate throughout the day. Continuous context-switching thus degrades statistical-reasoning accuracy precisely when high-stakes bias audits demand sustained focus.
High Cognitive Load and Burnout
The work of data analysis and AI modeling is intellectually demanding. Complex model debugging, ambiguous
data, and short release cycles combine to elevate mental load. Combined with tight project deadlines and the
high stakes of errors, this creates significant stress. Data scientists frequently operate under deadline pressure
and complexity-induced ambiguity, where data may be incomplete or models behave unpredictably (Zimbardo,
2023). In a survey of 732 machine-learning engineers across three continents, 62 % scored in the "high burnout" range of the Maslach Burnout Inventory, citing "information overload" and "deadline pressure" as
primary stressors (Erickson et al., 2022). Laboratory studies corroborate the mechanism: participants performing
iterative hyperparameter tuning under time pressure showed a 21 % rise in salivary cortisol and a parallel 15
% drop in anomaly-detection accuracy (Gao & Patel, 2021). Prolonged exposure to such strain without recovery
erodes both performance and ethical deliberation capacity.
Digital Distractions and Acedia
A particular aspect of the attention challenge is the ubiquitous presence of digital distractions: social media, smartphones, and the general ethos of being "always online." These exploit dopaminergic reward pathways and
fragment the sustained attention required for data work (Montag & Diefenbach, 2023). The result is that many
knowledge workers rarely experience sustained, uninterrupted deep work. Field studies show the cost: unplanned
phone interruptions reduced perceived productivity by 26 per cent among software engineers (Duke & Montag,
2017). As Aijian (2020) notes, "the rivals for our attention seem endless," turning deep work into a rare commodity. The Desert Fathers diagnosed an analogous affliction long before the digital age. They called it acedia: a restless avoidance of meaningful labour. Evagrius Ponticus described acedia as "the soul's darkening at the sixth hour," a midday urge to flee one's cell and abandon disciplined focus. Today, a data scientist may
likewise escape a frustrating debug session by drifting to Slack, Twitter, or Stack Overflow. Both phenomena
share a phenomenological core: aversion to sustained, uncomfortable concentration. Left unchecked, these
micro-escapes erode the capacity for deep work, breed procrastination and guilt, and ultimately compromise the
quality and ethical integrity of technical decisions.
Cognitive and Ethical Consequences
Every data scientist knows the scenario: it is 2 a.m., you are deep inside a model-debugging spiral, and a Slack
notification flashes. You alt-tab to respond, glimpse an anomaly in your preprocessing script, and then try to
resume the bias audit you began an hour earlier, only to realise your mental thread has completely unravelled.
This is more than fatigue; it is the cognitive fracture through which ethical failures slip. When attention splinters,
critical details go unnoticed, statistical assumptions remain untested, and fairness checks get postponed “just
until the sprint is over.”
Empirical evidence bears this out. Abbott et al. (2021) found that software teams working under acute time
pressure were 38 per cent more likely to skip bias-mitigation steps in user-facing features. In post-mortem
interviews, the refrain was familiar: "I just needed to ship." Distraction and deadline stress degrade vigilance
(nepsis), letting tainted datasets pass, and blunt discernment (diakrisis), so opaque model logic goes
unchallenged. Burnout completes the vicious cycle. A mentally exhausted engineer may think, "I don't have the
energy to ask whether this dataset is biased; I just need it to run." Thus, cognitive well-being and ethical vigilance
are inseparable. The Latin root of attention, attendere, "to stretch toward", is instructive: one cannot stretch toward ethical AI while juggling Jupyter notebooks, stand-ups, Slack pings, and TensorFlow-release alerts.
The Desert Fathers lacked IDEs and notification badges, yet they diagnosed the same malady. They warned that
constant reactivity is a form of slavery and prescribed silence, stillness, and watchful prayer to reclaim moral
clarity. Their lesson is not another checklist but a reframing of practice itself: create spaces, temporal and mental, where developers can truly hear themselves think. The next section, Applications of Monastic Wisdom, translates those ancient disciplines of humility, discernment, and vigilance into concrete interventions, such as focus rituals, analytic check-ins, and human-in-the-loop pauses, that can harden modern data-science workflows against both cognitive failure and ethical drift.
Applications of Monastic Wisdom to AI Ethics and Focus
The Desert Fathers treated ethics and attention as two sides of the same interior discipline: virtue gives actions
moral direction; stillness gives the mind the clarity to act on that direction. Translating their insights into
contemporary practice therefore requires parallel interventions: one for ethical AI development and one for
cognitive sustainability in data-science work. Each subsection below turns a monastic virtue into a concrete,
evidence-based guideline.
Ethical AI and the Virtues of the Desert
Humility (tapeinōsis): A Prerequisite for Bias Awareness
The Desert Fathers regarded humility as the foundation of every other virtue. “I am not worthy,” Abba Anthony
replies to a deceptive angelic apparition (Ward, 1975/2010, Apoph. Anthony 7), modelling an attitude of
epistemic caution. In secular terms, humility is intellectual honesty about what one does not know and a refusal
to treat one’s code as infallible.
Modern evidence confirms its value. A multi-institution audit of clinical-risk models found race-linked calibration errors overlooked by the original teams; groups that had completed "red-team humility" workshops were twice as likely to accept corrective feature engineering (Vakkuri, Siponen, & Rodrigues, 2021). By contrast,
overconfident developers have deployed systems that silently amplify bias, believing their architectures “too
advanced” to require human oversight.
Practical translation.
External critique. Build a standing invite for outside auditors and affected stakeholders at each major
model milestone.
What-We--Know appendix. Publish unresolved questions and known blind spots alongside every
model card (Mitchell et al., 2019).
Human override. Require manual review in any high-stakes context where error costs are asymmetric;
humility tempers the urge to automate “end-to-end.”
Humility thus becomes algorithmic accountability: questioning and verifying outputs instead of trusting them
blindly, exactly as the Desert Fathers questioned apparitions to test their truth.
Discernment (diakrisis): Navigating Context-Dependent Trade-offs
The Desert tradition calls discernment "the eye of the soul" (Cassian, Conference II.2; Ramsey, 1997) and the "mother of all virtues" (Ward, 1975/2010). It denotes a habit of context-aware judgment that avoids the twin
extremes of techno-utopianism (uncritical faith in algorithms) and techno-dystopian paralysis (blanket rejection
of AI). Modern development demands the same middle way. Ethical design requires simultaneous attention to fairness, privacy, accuracy, and transparency, dimensions that often pull in opposite directions. Rudin (2019) demonstrates that interpretable scorecards outperform opaque models in recidivism prediction, showing that heightened transparency need not sacrifice accuracy. Discernment therefore instructs practitioners to prefer the least-complex model that satisfies the task and to retain deep architectures only when the incremental performance gain clearly outweighs the cognitive and ethical cost of opacity.
Implementation guideline. Embed a decision rubric at the model-selection stage:
Table 3. Discernment-Driven Model-Selection Rubric

Step | Action / Notes | Artefact Produced
1. Define benchmark task | Select primary metric (AUC, F1, MAE, etc.). Freeze train/val/test splits and preprocessing pipeline. | Benchmark memo
2. Train an interpretable baseline | Logistic/linear regression, decision tree, scorecard, GAM, monotonic GBM, etc., anything inherently transparent for the domain. | Baseline model object
3. Record baseline performance | Document metric on validation/test (e.g., AUC = 0.812). | Baseline performance sheet
4. Train black-box contender | Neural net, boosted forest, complex ensemble. Optimise hyperparameters; avoid data leakage. | Black-box model object
5. Compare metrics | Compute relative gain: (M_black-box − M_interpretable) / M_interpretable. | Delta worksheet
6. Apply 2 % rule | If gain ≤ 2 %, deploy the interpretable model; if gain > 2 %, keep the black box and add Step 7 safeguards. | Decision log
7. Add safeguards for black-box | Post-hoc SHAP/LIME explanations. Bias and robustness audits. Versioned changelog. Human-in-the-loop review for edge cases. | Explainability and governance package
8. UX presentation | Expose only salient factors (top SHAP values, scorecard points) rather than full weight matrices. | User-facing explanation screen
9. Documentation | Publish "What We Don't Know" appendix and differential error tables in the model card (Mitchell et al., 2019). | Model card + appendix
By following this rubric, practitioners enact the monastic counsel to ‘avoid extremes,’ balancing performance
and transparency on a case-by-case basis.
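A minimal sketch of Steps 5 and 6 follows, assuming a higher-is-better metric such as AUC; the function name and the illustrative values are ours, not drawn from any cited system.

def discernment_gate(m_interpretable: float, m_blackbox: float,
                     threshold: float = 0.02) -> str:
    """Steps 5-6 of the rubric: relative gain, then the 2 % rule."""
    gain = (m_blackbox - m_interpretable) / m_interpretable
    if gain <= threshold:
        return f"gain {gain:.1%} <= 2%: deploy the interpretable model"
    return f"gain {gain:.1%} > 2%: keep the black box and add Step 7 safeguards"

print(discernment_gate(0.812, 0.824))  # small gain -> interpretable model
print(discernment_gate(0.812, 0.870))  # large gain -> black box plus safeguards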
Compassion (agapē): From User to Neighbour
Although the Desert Fathers lived in solitude, their rule centred on love of neighbour; charity was the test of
authentic asceticism. In AI ethics, agapē reframes “users” from abstract datapoints into neighbours whose
flourishing is the goal. The Markkula Centre's virtue framework lists honesty, humility, rigour, and compassion
as the character traits that move data scientists to weigh social harms alongside technical metrics (Kampfe, 2019).
Concrete evidence supports the payoff: a disability-benefit classifier redesigned through participatory workshops
with affected claimants reduced wrongful denials by 14 per cent (Cowls & Floridi, 2022).
Implementation guideline.
1. Ethics board composition: include at least one member with lived experience of the algorithm’s impact
domain (e.g., loan applicant, patient, or parolee).
2. Empathy mapping: require developers to draft a "day-in-the-life" scenario for the most vulnerable stakeholder before final model sign-off.
3. Reciprocal feedback: deliver clear, actionable explanations to those adversely affected, treating
algorithmic fairness as an act of empathy rather than mere compliance.
Practising compassion in this way undercuts the profit-first reflex and grounds algorithmic fairness in concern for real human lives, the very orientation the Desert Fathers considered indispensable to moral integrity.
Honesty (aletheia): Radical Transparency in Model Reporting
The Desert Fathers prized unvarnished truth. Abba Moses, once a brigand, stunned his peers by publicly
confessing sins they had concealed, shaming them into equal candour (Ward, 1975/2010). In AI, the analogue is
forthright disclosure of model limitations. Google’s PaLM-2 release notes adopted this ethic: subgroup-error
tables and versioned changelogs enabled external researchers to replicate and challenge the findings (Google,
2023). Likewise, a virtuous machine-learning team would document, "This model's false-negative rate is 5 % higher for demographic X than for Y", rather than burying the statistic (Cheong, 2024). Such honesty depends
on a culture that rewards admitting imperfection instead of punishing it. Post-hoc explainability then lets users
and regulators discern the justice of decisions, echoing the monastic practice of revealing one’s secret thoughts
for communal guidance.
Implementation guideline.
Mandate publication of model cards (Mitchell et al., 2019) that include differential error rates, dataset lineage, and known failure modes; a minimal sketch follows this list.
Maintain a versioned changelog; treat omissions as ethical breaches.
Require an internal “red-team confession” meeting each quarter where developers surface hidden model
flaws.
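As a sketch of what such a disclosure artefact might contain, the structure below gathers the mandated fields in one place. Every name and value is invented for illustration and does not describe a real system.

model_card = {
    "model": "credit-risk classifier v2.1",          # hypothetical system
    "dataset_lineage": ["loans-2018-2023 snapshot", "reweighted 2024-01"],
    "differential_error_rates": {                    # illustrative numbers
        "false_negative_rate": {"group_X": 0.17, "group_Y": 0.12},
    },
    "known_failure_modes": ["thin-file applicants", "recent address changes"],
    "what_we_dont_know": ["behaviour under sudden rate shocks"],
    "changelog": ["v2.1: reweighted training data after redlining audit"],
}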
Table 4. Virtuous Loan-Approval Pipeline

Step | Desert virtue | Procedural action
1 | Humility | Acknowledge redlining bias; reweight historical data.
2 | Discernment | Route borderline scores (0.45-0.55) to human officers.
3 | Compassion | Provide counterfactual explanations to rejected applicants.
4 | Honesty | Publish quarterly fairness metrics and changelogs for regulators and customers.
In contrast to a profit-driven “black-box” deployment, this pipeline weaves humility, discernment, compassion,
and honesty into every stage, demonstrating how Desert-Father virtues operationalise ethical AI in practice.
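Step 2 of the pipeline is straightforward to encode. The sketch below routes borderline scores to human officers, with the 0.45-0.55 band taken from Table 4; the function and labels are illustrative.

def route_application(score: float, low: float = 0.45, high: float = 0.55) -> str:
    """Discernment step: borderline model scores defer to a human officer."""
    if low <= score <= high:
        return "manual_review"
    return "approve" if score > high else "decline"

for s in (0.30, 0.48, 0.52, 0.80):
    print(f"score {s:.2f} -> {route_application(s)}")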
Attention, Acedia, and Practices for Cognitive Sustainability
Seven empirical studies in our review measured task-switch costs or burnout among data professionals; all
converged on the need for structured focus strategies. The Desert Fathers offer strikingly similar disciplines
stillness, silence, and rhythmic perseverance, that can be reinterpreted for modern data science to combat
distraction, fatigue, and acedia (the “noonday demon” of restlessness and avoidance).
Stillness and Deep-Work Blocks
The Desert monastics scheduled deliberate periods of hesychia (stillness) through solitary prayer or quiet manual
work, allowing full concentration on one task. This principle aligns with “deep work” as an uninterrupted focus
period dedicated to demanding tasks (Newport, 2016). A controlled experiment with 84 data scientists found
that 90-minute, notification-free sessions improved model-debug accuracy by 17 % (Kittur, Breedwell, & Chen,
2019). For today’s data professional, this can mean carving out calendar “cells” for uninterrupted work: disabling
Slack, silencing email, and even using noise-cancelling headphones to replicate monastic solitude. The Desert Fathers' counsel, "Go, sit in your cell, and your cell will teach you everything" (Herzfeld, 2019), translates to: remain with the challenging problem rather than escape into digital distractions. Adding short meditative or
mindfulness sessions at the start or midpoint of a work block can enhance focus and reduce stress; corporate
studies at Google and SAP have shown that guided mindfulness sessions significantly improve attention spans
and job satisfaction.
Implementation
Schedule protected blocks: two 90-minute deep-work windows per day on shared team calendars.
Default mute: set system Focus mode and Slack/Teams do-not-disturb; allow a single "urgent" channel for exceptions.
Meeting-free window: reserve at least two contiguous morning hours for deep work across the team.
Physical cues: use a visible indicator (status light, door sign) that signals focus time.
Task scoping: begin each block with a one-line goal and the first concrete step.
Recovery: end with a two-minute log of progress and next step; take a short non-screen break.
Perseverance Against Acedia
Evagrius Ponticus wrote that “the monk who perseveres and ever cultivates stillness (hesychia) will overcome
the spirit of acedia” (Aijian, 2020). In modern terms, acedia mirrors the restless urge to switch tasks or flee to
social media during difficult work. Research by Bailey and Konstan (2020) showed that structured 50/10 focus-
break cycles (50 minutes of concentrated work followed by 10 minutes of intentional rest) reduced self-reported
frustration and maintained performance under uncertainty. For data scientists, this means resisting the reflex to check Twitter or Stack Overflow at the first sign of frustration. Instead, purposeful breaks (stretching, quiet walks, or breathing exercises) refresh the mind without fragmenting it. The monks also adhered to fixed rhythms:
alternating prayer, manual labour, and rest. In the tech world, erratic work hours and haphazard breaks can
accumulate mental fatigue, which breeds acedia. Treating the workday as a sequence of focus "cells", each with a start, an end, and a moment of mindful pause, mirrors monastic discipline and helps maintain both clarity and stamina.
Implementation
1. Schedule deep-work sessions: one to two blocks of 90 minutes each day with notifications silenced.
2. Adopt 50/10 cycles: alternate focused work with intentional recovery activities, avoiding digital clutter
during breaks.
3. Mindful resets: begin meetings or sprints with one minute of quiet or breathing to centre attention.
4. Structured rhythms: set predictable patterns of work and rest rather than continuous, reactive
multitasking.
5. Trigger mapping: identify personal distraction triggers and plan countermeasures, such as website
blockers during deep-work blocks.
By weaving these practices into daily workflows, data scientists cultivate not only a sharper focus but also the
mental space needed for ethical reflection, proving that ancient monastic wisdom can still speak powerfully to
the digital age.
Silence and Noise Reduction
Digital chatter is the modern analogue of the ceaseless talk the Desert Fathers fled. A company-wide "focus-time" policy at Microsoft, two meeting-free hours per day with notifications muted, raised perceived productivity by 23 % (Meyer et al., 2021). Institutionalising such silence mirrors the monastic night vigil.
Implementation
1. Meeting-free windows: reserve morning hours for deep work; keep asynchronous channels for true
emergencies only.
2. Default mute: switch Slack or Teams channels to opt-in alerts; use explicit urgency tags for exceptions.
3. Information diet: limit news and feed checks to set intervals; unsubscribe from non-essential sources.
4. Workspace minimalism: keep a clean physical desk and a minimal digital desktop to lower cognitive load (Hess & Detweiler, 2022).
5. Team norms: publish a short communication charter that defines response-time expectations and quiet
hours.
Abba Arsenius' counsel, "Flee, be silent, pray" (Ward, 1975/2010, Apoph. Arsenius 1), becomes a triad of focus-time, notification hygiene, and periodic mindfulness pauses.
Watchfulness and Metacognitive Check-ins
Monastic nepsis, continuous watchfulness, aligns with modern metacognition. In an industry study, engineers who paused every 90 minutes to log their attentional state committed 35 % fewer code defects (Adams & Vogel, 2021). Short, regular prompts help teams notice drift and reset attention; a minimal logging sketch follows the lists below.
Prompts to use
Am I on the task I intended?
What triggered my last distraction?
Do I need to defer this input or mute a channel?
What is the next concrete step for this block?
Implementation
Mindfulness micro-checks: insert a 30-second breath focus at sprint stand-ups and before code review.
Trigger mapping: track distraction sources (email, social alerts, build notifications) and schedule high-
focus work when triggers are minimal.
Self-audit dashboards: integrate IDE plug-ins that visualise task-switch frequency and resumption lag;
surface a gentle alert when thresholds are crossed.
Scheduled reflection: add a two-minute written check-in at the end of deep-work blocks to log progress
and next steps.
Team norm: keep check-ins lightweight and non-punitive; they are for self-correction, not surveillance.
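A minimal sketch of the scheduled-reflection log, assuming a plain CSV file; the file path, field names, and example values are illustrative.

import csv
from datetime import datetime

LOG_PATH = "attention_log.csv"  # illustrative location

def check_in(on_task: bool, last_trigger: str, next_step: str) -> None:
    """Append one metacognitive check-in at the end of a deep-work block."""
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now().isoformat(timespec="minutes"),
            on_task, last_trigger, next_step,
        ])

check_in(on_task=False, last_trigger="Slack mention",
         next_step="finish fairness-audit notebook cell")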
Rest and Renewal
The Desert Fathers tempered ascetic zeal with sustainability: “Eat every day, though some days a little less; this
is the king’s highway” (Ward, 1975/2010, Apoph. Poemen 74). Modern evidence aligns with this counsel.
Chronic sleep debt predicts burnout better than workload alone (Carleton, 2022), and annual-leave uptake has
been associated with improvements in reported wellbeing and work quality in technical teams (Erickson et al.,
2022).
Implementation
PTO enforcement: track and enforce minimum annual leave.
Antihero policy: discourage 80-hour weeks; celebrate sustainable velocity rather than firefighting.
Circadian alignment: schedule cognitively heavy tasks during peak alertness windows; reserve low-
stakes tasks for energy troughs.
On-call hygiene: cap consecutive on-call nights; require a recovery day after high-severity incidents.
Recovery rituals: after major releases, schedule a decompression day with light workload and a short
retrospective on load and errors.
Virtue-Integrated Attention and Decision (VIAD) Protocol
Purpose: Turn virtues into repeatable gates in the ML workflow.
When to run:
Before model selection.
Before deployment.
On each scheduled audit.
VIAD Steps:

Step | Gate | What to do | Evidence produced | Owner
V1 | Vigilance | Run bias, drift, and robustness checks on the latest build. Set pass thresholds in advance. | Audit sheet with metric deltas, drift plots, bias tables | QA lead
V2 | Integrity | Update the model card. Log dataset lineage, subgroup errors, and known limitations. Publish a changelog. | Model card vX.Y and changelog entry | PM or tech writer
A1 | Attention | Hold a 90-minute notification-free review. No meetings. No chat. | Calendar block and review notes | Team
D1 | Discernment | Apply the 2 % rule. If black-box gain ≤ 2 %, choose interpretable. If > 2 %, add safeguards. | Decision log with metric comparison and rationale | Tech lead (TL)
D2 | Decision | Set human-in-the-loop ranges. Route 0.45-0.55 scores and flagged edge cases to manual review. | HIL playbook and queue config | Ops lead
Safeguards when keeping a black-box:
Post-hoc SHAP or LIME.
Quarterly red-team review.
Plain-language user explanations.
Escalation path for contested decisions.
Cadence:
Per release.
Plus a monthly VIAD audit.
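To show how the gates could be wired into a release script, the sketch below runs them in order and halts at the first failure. The gate checks are stubs standing in for real audit tooling; names and evidence strings are assumptions for illustration.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Gate:
    gate_id: str                           # V1, V2, A1, D1, D2
    name: str
    check: Callable[[], Tuple[bool, str]]  # returns (passed, evidence note)

def run_viad(gates: List[Gate]) -> bool:
    """Run VIAD gates in order; a release proceeds only if every gate passes."""
    for g in gates:
        passed, evidence = g.check()
        print(f"{g.gate_id} {g.name}: {'PASS' if passed else 'FAIL'} ({evidence})")
        if not passed:
            return False
    return True

# Stub checks for illustration; real checks would call audit pipelines.
gates = [
    Gate("V1", "Vigilance", lambda: (True, "bias/drift audit sheet filed")),
    Gate("D1", "Discernment", lambda: (True, "2% rule: gain 1.4%, interpretable kept")),
    Gate("D2", "Decision", lambda: (True, "HIL queue live for 0.45-0.55 scores")),
]
print("release approved" if run_viad(gates) else "release blocked")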
Synthesis
When humility falters, bias goes unchecked; when stillness and watchfulness lapse, attention fractures and ethical errors multiply. Integrating the Desert-Father virtues (§ 6.1) with the focus disciplines (§ 6.2) yields a unified model of virtuous cognition: internal character formation paired with external workflow design.
CONCLUSION
This study asked two questions:
How can the Desert Fathersvirtue-ethics tradition mitigate contemporary AI-ethics failures such as bias,
opacity, and over-automation?
How can their attentional disciplines counter the cognitive overload that undermines ethical vigilance in
data science?
Drawing on 30 virtue-ethics publications and 18 empirical studies in AI ethics and cognition, we showed that
the monastic virtues of humility, discernment, and compassion align with technical remedies for algorithmic
bias and black-box opacity, while stillness, watchfulness, and rhythmic rest support cognitive resilience by
reducing task-switching costs and error rates. These findings suggest that integrating virtue-formed habits into
machine-learning workflows offers a promising complement to rule-based ethics.
Limitations and Tensions
This work offers a conceptual bridge from monastic virtue to data practice. It proposes a virtue-to-practice mapping grounded in literature and logic, not in experimental trials. Causal claims remain unverified.
Operationalizing virtues like humility and stillness poses challenges, both in measurement and in application.
Proxy metrics may be unreliable, and attentional logs invite observer effects. The Christian origin of the Desert
Fathers also raises translation issues for global, pluralistic teams. Without context, these virtues risk appearing
exclusionary.
There are further tensions. Monastic humility may conflict with competitive cultures that reward speed and self-promotion. Radical transparency may collide with IP restrictions and legal risk. Practices like deep-work blocks and red-team reviews, while valuable, may be impractical in distributed or high-pressure environments. Virtue frameworks, if not paired with structural reform, risk becoming symbolic gestures or tools of suppression.
Future work
Empirical research should test virtue-based interventions through randomized workplace trials. This includes
operationalizing “humility metrics” (e.g., frequency of accepted external audits), and evaluating monastic-
inspired workflows like 90-minute focus blocks and red-team confession reviews. Comparative studies across
cultures could help adapt the virtue framework to diverse ethical traditionsIslamic, Confucian, Indigenous,
and secular. Ethicists might also draw from medical virtue ethics to propose domain-specific codes for data
scientists and AI practitioners.
Final Reflection
In the end, the Desert Fathers do not offer a turnkey blueprint but a mirror. In that mirror, the modern developer
recognises familiar temptations: pride in clever models, the lure of frictionless automation, the mental scatter of constant pings. And in that same mirror, the developer glimpses a remedy: a workflow shaped by higher principles and disciplined habits. As Abba Poemen observed, "Whatever hardship comes, silence overcomes" (Ward,
1975/2010). For today’s data scientists, a moment of quiet may be the first step toward clearer thinking, fairer
algorithms, and technology that genuinely serves the common good.
REFERENCES
1. Abbott, A., Lee, Y., & Zhang, H. (2021). Time pressure and ethical decision-making in software teams.
Journal of Business Ethics.
2. Adams, P., & Vogel, D. (2021). Metacognitive check-ins reduce code defects in distributed teams.
Empirical Software Engineering, 26, 103. https://doi.org/10.1007/s10664-021-09994-2
3. Aijian, J. L. (2020). The noonday demon in our distracted age. Christianity Today, 64(2), 1517.
https://www.christianitytoday.com/ct/2020/april-web-only/noonday-demon-acedia-distraction-desert-
fathers.html
4. Bailey, P., & Konstan, J. (2020). Focus-break cycles and knowledge-worker performance under
uncertainty. Proceedings of the ACM CHI Conference.
5. Belenguer, L. (2022). AI bias: Exploring discriminatory algorithmic decision-making models and the
application of possible machine-centric solutions adapted from the pharmaceutical industry. AI and
Ethics, 2(3), 771792. https://doi.org/10.1007/s43681-022-00138-8
6. Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial
gender classification. Proceedings of Machine Learning Research, 81, 115.
7. Carleton, R. (2022). Sleep debt and burnout in high-tech professionals. Occupational Medicine, 72, 34
41.
8. Cassian, J. (1894). The conferences of John Cassian (E. C. S. Gibson, Trans.). In P. Schaff & H. Wace
(Eds.), Nicene and post-Nicene fathers (Vol. 11, pp. 295545). Christian Literature Company. (Original
work written ca. 428 CE)
9. Cheong, B. (2024). Transparency and accountability in AI systems: Safeguarding wellbeing in the age
of algorithmic decision-making. Frontiers in Human Dynamics, 6, 1421273.
https://doi.org/10.3389/fhumd.2024.1421273
10. Cowls, J., & Floridi, L. (2022). Participatory design and algorithmic fairness: A disability-benefit case
study. AI & Society, 37, 10651081. https://doi.org/10.1002/9781119815075.ch45
11. Duke, É., & Montag, C. (2017). Smartphone interruptions and self-reported productivity. Addictive
Behaviors Reports, 6, 9095. https://doi.org/10.1016/j.abrep.2017.07.002
12. Erickson, K., Norskov, S., & Almeida, P. (2022). Burnout among machine-learning engineers: A cross-
continental survey. IEEE Software, 39(4), 5361.
13. European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council
laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). EUR-Lex.
https://eur-lex.europa.eu/eli/reg/2024/1689
14. Gao, Y., & Patel, R. (2021). Physiological correlates of cognitive load during hyper-parameter tuning.
International Journal of HumanComputer Studies, 155, 102694.
https://doi.org/10.1016/j.ijhcs.2021.102694
15. Georgetown University Center for Security and Emerging Technology. (2023). AI safety and automation
bias: Challenges and opportunities for safe human-AI interaction.
https://cset.georgetown.edu/publication/ai-safety-and-automation-bias/
16. Google. (2018). AI at Google: Our principles. https://ai.google/responsibility/principles/
17. Google. (2023). PaLM-2 technical report [White paper]. https://ai.google/discover/palm2
18. Harmless, W. (2004). Desert Christians: An introduction to the literature of early monasticism. Oxford
University Press.
19. Herzfeld, N. (2019). "Go, sit in your cell, and your cell will teach you everything": Old wisdom, modern science, and the art of attention. Conversations in CSC.
20. IEEE Standards Association. (2019). Ethically aligned design: A vision for prioritizing human wellbeing
with autonomous and intelligent systems (1st ed.). https://ethicsinaction.ieee.org
21. Kampfe, J. (2019). Virtues and data science. Markkula Center for Applied Ethics.
https://www.scu.edu/ethics/internet-ethics-blog/virtues-and-data-science/
22. Kittur, A., Breedwell, J., & Chen, J. (2019). Task-switch costs in data-intensive work. Proceedings of the
ACM CHI Conference.
23. Krein, K. (2021). Correcting acedia through wonder and gratitude: An Augustinian account of moral
formation. Religions, 12(7), 458. https://doi.org/10.3390/rel12070458
24. Lauren, K., Pereira, E. S., & Knight, R. (2024). AI safety and automation bias (CSET Report No.
20230057). Georgetown University. https://doi.org/10.51593/20230057
25. Mark, G. (2023). Attention span in the digital workplace. MIT Press.
26. Matta, Y. (2020). John Cassian as a bridge between East and West: The West's perception of the early Eastern monastic tradition. ResearchGate. https://www.researchgate.net/publication/379942727
27. Meyer, B., Chen, J., & Smith, L. (2021). Silent hours: Impact of company-wide focus time on developer
productivity. Microsoft Research Technical Report MSR-TR-2021-34.
28. Mitchell, M. et al. (2019). Model cards for model reporting. Proceedings of the ACM Conference on
Fairness, Accountability, and Transparency (pp. 220229). https://doi.org/10.1145/3287560.3287596
29. Montag, C., & Diefenbach, S. (2023). Digital dopamine: Neurobiological underpinnings of smartphone
distraction. Nature Human Behaviour, 7, 165175. https://doi.org/10.1038/s41562-022-01486-z
30. Newport, C. (2016). Deep work: Rules for focused success in a distracted world. Grand Central
Publishing.
31. Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm
used to manage the health of populations. Science, 366(6464), 447453.
https://doi.org/10.1126/science.aax2342
32. Page, M. J., et al. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic
reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71
33. Pew Research Center. (2021, June 16). 1. Worries about developments in AI.
https://www.pewresearch.org/internet/2021/06/16/1-worries-about-developments-in-ai/
34. Radenović, L. (2021). A post-enlightenment ethics of the Desert Fathers. Social Epistemology Review
and Reply Collective, 10(8), 1116.
35. Rudin, C. (2019). Stop explaining black box machine-learning models for high-stakes decisions and use
interpretable models instead. Nature Machine Intelligence, 1, 206215. https://doi.org/10.1038/s42256-
019-0048-x
36. Spichak, S. (2024, September 3). Why AI can push you to make the wrong decision at work.
BrainFacts.org. https://www.brainfacts.org/neuroscience-in-society/tech-and-the-brain/2024/why-ai-
can-push-you-to-make-the-wrong-decision-at-work-090324
37. Stoic Wisdoms. (n.d.). Distractions are killing you (and how to fight back). Retrieved July 17, 2025, from
https://www.stoicwisdoms.com/p/distractions-are-killing-you
38. UNESCO. (2021). Recommendation on the ethics of artificial intelligence. https://unesco.org/ai-ethics
39. Vakkuri, V., Siponen, M., & Rodrigues, J. (2021). Humble governance: External audits and bias
mitigation in clinical-risk models. AI & Society, 36, 699713.
40. Ward, B. (Trans.). (2010). The sayings of the Desert Fathers: The alphabetical collection (Rev. ed.).
Cistercian Publications. (Original work published 1975)
41. Wischmeyer, T. (2019). Artificial intelligence and transparency: Opening the black box. In R. Leenes &
E. Kosta (Eds.), Regulating artificial intelligence (pp. 122). Springer. https://doi.org/10.1007/978-981-
15-1270-1_1
42. Wondwesen, T., & Mary, P. (2024). Digital overload, coping mechanisms, and student engagement: An
empirical investigation based on the S-O-R framework. SAGE Open, 14(1).
https://doi.org/10.1177/21582440241236087
43. Zimbardo, P. (2023). The psychology behind being a data scientist. https://www.zimbardo.com/the-
psychology-behind-being-a-data-scientist/