Human-in-the-Loop AI: Rethinking Automation Ethics in Decision-Sensitive Domains. A Case Study of the Education, IT, and Non-Profit Sectors.
Authors
Mohammed Nasiru Yakubu
Department of Information Systems, School of IT and Computing, American University of Nigeria, Yola, Nigeria
Arden University, Middlemarch Business Park, Coventry CV3 4FJ, United Kingdom
Article Information
DOI: 10.51584/IJRIAS.2025.1010000027
Subject Category: Education
Volume/Issue: 10/10 | Page No: 361-372
Publication Timeline
Submitted: 2025-09-23
Accepted: 2025-09-30
Published: 2025-10-30
Abstract
This study develops and applies the Human-in-the-Loop (HITL) Ethical Assessment Framework (EHAF) to examine the ethical sufficiency of HITL artificial intelligence (AI) across the education, information technology (IT), and non-profit sectors. The research objective was to evaluate how effectively HITL practices safeguard human values in decision-sensitive contexts and to identify sector-specific challenges that may compromise ethical adequacy. Adopting a qualitative thematic approach, we analyzed survey responses from professionals in the three sectors. Responses were coded against the four diagnostic dimensions of EHAF (Impact Severity, Contextual Ambiguity, Human Agency, and Transparency and Auditing), while also allowing for the identification of emergent themes. Retroductive reasoning was used to move beyond surface patterns and uncover the generative mechanisms shaping HITL practices. Findings demonstrate sectoral variation in how HITL systems are operationalized and valued. In education, ethical sufficiency is closely tied to human oversight, given the high stakes of student outcomes and the importance of cultural contextualization. In the non-profit sector, transparency and auditing dominate owing to donor accountability pressures and reporting requirements. IT organizations, by contrast, privilege efficiency and scalability but often provide weaker safeguards for human agency and oversight. Across all sectors, emergent themes such as training, trust, infrastructure readiness, and donor influence were found to condition HITL adequacy. The generative mechanisms identified include institutional role ambiguity, donor pressure, cultural misalignment, and capacity constraints. The study concludes by proposing an extension to EHAF that incorporates a fifth dimension, Capacity and Governance Context, to better capture systemic and institutional influences.
Conceptually, the paper refines the assessment of HITL ethics; practically, it offers sector-specific recommendations to strengthen oversight, accountability, and trust in AI-enabled decision-making.
Keywords
Human-in-the-Loop, AI Ethics, Automation, Decision-Sensitive Domains, Responsible AI