What the Desert Fathers Teach Data Scientists: Ancient Ascetic Principles for Ethical Machine-Learning Practice
Authors
Spiritan University, Nneochi, Abia State, Nigeria
Article Information
DOI: 10.51244/IJRSI.2025.120800004
Subject Category: Computer Science
Volume/Issue: 12/8 | Page No: 44-59
Publication Timeline
Submitted: 2025-07-20
Accepted: 2025-07-26
Published: 2025-08-27
Abstract
This study investigates whether the ascetic virtues articulated by the Desert Fathers, Christian monastics of the 3rd to 5th centuries, can inform contemporary data science practice. It addresses two interconnected challenges: persistent ethical risks in artificial intelligence (AI), such as bias, opacity, and automation overreach, and escalating cognitive overload within today's attention economy. Through an integrative literature review combining primary desert monastic texts with contemporary scholarship in AI ethics and cognitive psychology, the paper identifies five core virtues: humility, discernment, stillness, simplicity, and vigilance. Each virtue addresses a corresponding data-science dilemma and offers practical guidance: humility enhances bias detection; discernment improves transparency in decision-making; stillness and simplicity mitigate cognitive overload; and vigilance ensures continuous ethical monitoring. Findings indicate that virtue-based "digital ascetic" practices significantly complement procedural ethics, foster responsible AI innovation, and strengthen practitioner resilience, ultimately promoting ethical integrity and cognitive sustainability in data science.
Keywords
Desert Fathers; Responsible AI; Algorithmic bias; Attention economy; Machine learning; Human-in-the-loop