REFERENCES
1. Abbott, A., Lee, Y., & Zhang, H. (2021). Time pressure and ethical decision-making in software teams. Journal of Business Ethics.
2. Adams, P., & Vogel, D. (2021). Metacognitive check-ins reduce code defects in distributed teams. Empirical Software Engineering, 26, 103. https://doi.org/10.1007/s10664-021-09994-2
3. Aijian, J. L. (2020). The noonday demon in our distracted age. Christianity Today, 64(2), 15–17. https://www.christianitytoday.com/ct/2020/april-web-only/noonday-demon-acedia-distraction-desert-fathers.html
4. Bailey, P., & Konstan, J. (2020). Focus-break cycles and knowledge-worker performance under uncertainty. Proceedings of the ACM CHI Conference.
5. Belenguer, L. (2022). AI bias: Exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical industry. AI and Ethics, 2(3), 771–792. https://doi.org/10.1007/s43681-022-00138-8
6. Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15.
7. Carleton, R. (2022). Sleep debt and burnout in high-tech professionals. Occupational Medicine, 72, 34–41.
8. Cassian, J. (1894). The conferences of John Cassian (E. C. S. Gibson, Trans.). In P. Schaff & H. Wace (Eds.), Nicene and post-Nicene fathers (Vol. 11, pp. 295–545). Christian Literature Company. (Original work written ca. 428 CE)
9. Cheong, B. (2024). Transparency and accountability in AI systems: Safeguarding wellbeing in the age of algorithmic decision-making. Frontiers in Human Dynamics, 6, 1421273. https://doi.org/10.3389/fhumd.2024.1421273
10. Cowls, J., & Floridi, L. (2022). Participatory design and algorithmic fairness: A disability-benefit case study. AI & Society, 37, 1065–1081.
11. Duke, É., & Montag, C. (2017). Smartphone interruptions and self-reported productivity. Addictive Behaviors Reports, 6, 90–95. https://doi.org/10.1016/j.abrep.2017.07.002
12. Erickson, K., Norskov, S., & Almeida, P. (2022). Burnout among machine-learning engineers: A cross-continental survey. IEEE Software, 39(4), 53–61.
13. European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). EUR-Lex. https://eur-lex.europa.eu/eli/reg/2024/1689
14. Gao, Y., & Patel, R. (2021). Physiological correlates of cognitive load during hyper-parameter tuning. International Journal of Human–Computer Studies, 155, 102694. https://doi.org/10.1016/j.ijhcs.2021.102694
15. Georgetown University Center for Security and Emerging Technology. (2023). AI safety and automation bias: Challenges and opportunities for safe human-AI interaction. https://cset.georgetown.edu/publication/ai-safety-and-automation-bias/
16. Google. (2018). AI at Google: Our principles. https://ai.google/responsibility/principles/
17. Google. (2023). PaLM 2 technical report [White paper]. https://ai.google/discover/palm2
18. Harmless, W. (2004). Desert Christians: An introduction to the literature of early monasticism. Oxford University Press.
19. Herzfeld, N. (2019). “Go, sit in your cell, and your cell will teach you everything”: Old wisdom, modern science, and the art of attention. Conversations in CSC.
20. IEEE Standards Association. (2019). Ethically aligned design: A vision for prioritizing human wellbeing with autonomous and intelligent systems (1st ed.). https://ethicsinaction.ieee.org
21. Kampfe, J. (2019). Virtues and data science. Markkula Center for Applied Ethics. https://www.scu.edu/ethics/internet-ethics-blog/virtues-and-data-science/
22. Kittur, A., Breedwell, J., & Chen, J. (2019). Task-switch costs in data-intensive work. Proceedings of the ACM CHI Conference.
23. Krein, K. (2021). Correcting acedia through wonder and gratitude: An Augustinian account of moral formation. Religions, 12(7), 458. https://doi.org/10.3390/rel12070458