When AI Agents Act: Governance, Accountability, and Strategic Risk in Autonomous Organizations

Authors

Arunraju Chinnaraju

Doctorate in Business Administration, Westcliff University (USA)

Article Information

DOI: 10.51244/IJRSI.2025.12120050

Subject Category: Education

Volume/Issue: 12/12 | Page No: 547-612

Publication Timeline

Submitted: 2025-12-19

Accepted: 2025-12-23

Published: 2026-01-04

Abstract

Autonomous AI agents are increasingly deployed in organizations, shifting enterprise information technology from decision support systems to decision authority systems that act independently over time, adapt their objectives, and execute decisions without real-time human input. In contrast to traditional algorithmic systems, contemporary AI agents (e.g., LLM-orchestrated systems, reinforcement learning-based decision-making systems, and multi-agent task-execution systems) hold delegated decision authority within the organization, persist over time, and occupy organizational roles. However, most current theories of organization, governance, and accountability remain human-centric: they assume intentionality, episodic decision-making, and clearly identifiable moral agents. This article therefore develops a theoretical basis for a governance model that treats autonomous AI agents as organizational actors rather than mere technological tools, integrating agency theory, corporate governance, decision rights theory, and algorithmic control to explain why existing IT governance, compliance, and human-in-the-loop models fail under autonomous agency, control delays, and changing objectives. The article introduces the concept of artificial agency, defined as the delegation of decision-making authority without legal personhood, and examines its implications for allocating accountability for agent behavior, setting escalation thresholds, and assessing organizational risk exposure.
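
To make the idea of delegated decision authority with escalation thresholds concrete, the minimal Python sketch below is offered as an illustration only; it is not part of the article, and the class names, thresholds, and the route_action helper are hypothetical. It shows one way an organization might encode an agent's decision rights and route actions that exceed them to a human decision owner.

```python
from dataclasses import dataclass
from enum import Enum


class Disposition(Enum):
    """Possible routing outcomes for an agent-proposed action."""
    AUTO_EXECUTE = "auto_execute"   # within the agent's delegated authority
    ESCALATE = "escalate"           # route to a human decision owner
    BLOCK = "block"                 # outside the agent's mandate entirely


@dataclass
class DecisionRights:
    """Delegated authority for one agent role (all values are illustrative)."""
    max_financial_exposure: float   # amount the agent may commit on its own
    escalation_exposure: float      # above this, even a human approval is refused
    reversible_only: bool           # True: agent may only take reversible actions


def route_action(rights: DecisionRights, exposure: float, reversible: bool) -> Disposition:
    """Apply escalation thresholds to a single proposed action."""
    if rights.reversible_only and not reversible:
        return Disposition.ESCALATE
    if exposure <= rights.max_financial_exposure:
        return Disposition.AUTO_EXECUTE
    if exposure <= rights.escalation_exposure:
        return Disposition.ESCALATE
    return Disposition.BLOCK


if __name__ == "__main__":
    procurement_agent = DecisionRights(
        max_financial_exposure=5_000.0,
        escalation_exposure=50_000.0,
        reversible_only=True,
    )
    print(route_action(procurement_agent, exposure=3_200.0, reversible=True))   # AUTO_EXECUTE
    print(route_action(procurement_agent, exposure=12_000.0, reversible=True))  # ESCALATE
    print(route_action(procurement_agent, exposure=80_000.0, reversible=True))  # BLOCK
```

In this reading, the thresholds themselves are governance artifacts: the mapping from exposure to disposition, not the agent's internal model, is what the board or risk function would own and audit.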
Building on these foundations, the article describes a multi-layered strategic governance framework comprising dynamic human-in-the-loop and human-on-the-loop oversight, escalation-based override controls, auditability and traceability mechanisms, and continuous monitoring of behavioral, data, and objective drift. The framework distinguishes decision ownership from outcome ownership, maps the liability pathways generated by agent actions, and frames agent drift as a long-horizon governance and strategic risk rather than a purely technical failure. The article extends current understanding of organizational agency and governance to non-human decision-makers, provides a reusable governance model for organizations deploying autonomous AI agents at scale, and establishes a basis for future empirical and longitudinal research on autonomous agents as persistent organizational actors, with implications for board oversight, regulatory design, and enterprise architecture.
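
As a hedged illustration of the continuous-oversight layer, the short Python sketch below is an assumption of this rewrite rather than the article's implementation; the DriftMonitor class, its parameters, and the thresholds are hypothetical. It shows one simple way a scalar behavioral metric could be compared against a baseline window so that sustained drift triggers escalation to human review instead of being handled as an ordinary technical fault.

```python
from collections import deque
from statistics import mean


class DriftMonitor:
    """Compare a recent window of an agent metric against a fixed baseline.

    A governance process could feed this any scalar behavioral signal
    (approval rates, tool-call error rates, reward estimates) and treat a
    sustained shift as a trigger for human review.
    """

    def __init__(self, baseline: list, window: int = 50, tolerance: float = 0.1):
        self.baseline_mean = mean(baseline)
        self.recent = deque(maxlen=window)
        self.tolerance = tolerance  # allowed absolute shift before flagging

    def observe(self, value: float) -> bool:
        """Record one observation; return True if drift should be escalated."""
        self.recent.append(value)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough evidence yet
        return abs(mean(self.recent) - self.baseline_mean) > self.tolerance


if __name__ == "__main__":
    monitor = DriftMonitor(baseline=[0.02] * 200, window=50, tolerance=0.05)
    # Simulated stream: the agent's error rate creeps upward as behavior drifts.
    for step in range(300):
        error_rate = 0.02 + 0.0005 * step
        if monitor.observe(error_rate):
            print(f"Drift flagged at step {step}; escalate to human oversight.")
            break
```

In practice an organization would likely pair such a check with established drift detectors and an audit trail; the sketch only shows the escalation trigger that connects monitoring to the override and accountability layers described above.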

Keywords

Autonomous AI agents, Decision authority systems, AI governance frameworks

