A Multisectoral and Democratized AI Governance Policy for St. Paul University Manila: Countering Global Techno-Authoritarianism and Abuse
Authors
St. Paul University Manila (Philippines)
Article Information
DOI: 10.51244/IJRSI.2026.13010093
Subject Category: Education
Volume/Issue: 13/1 | Page No: 1040-1078
Publication Timeline
Submitted: 2026-01-22
Accepted: 2026-01-27
Published: 2026-02-03
Abstract
This study investigates the influence of generative artificial intelligence (AI) on higher education governance, with a focus on creating a multisectoral and democratized policy framework for St. Paul University Manila (SPUM). It examines the ethical and practical implications of AI adoption, highlighting the increasing control of AI by global corporations and the attendant risks of techno-authoritarianism. The paper argues that universities, particularly Catholic institutions, must assert leadership in AI governance by creating policies aligned with their moral and civic responsibilities. Drawing on an analysis of both open-source and for-profit AI systems, the study emphasizes the need for universities to manage AI's integration through participatory, transparent governance that involves faculty, students, administrators, IT professionals, and community stakeholders. Key mechanisms of user influence over AI, such as prompt engineering and system-level personalization, are also explored, alongside the economic pressures shaping AI's architecture and functionality. The proposed framework balances centralized control for risk management with decentralized user involvement in decision-making. The paper concludes by recommending a localized AI governance policy for SPUM, grounded in Catholic social teaching, and emphasizes the necessity of ongoing review and adaptation to ensure AI's responsible use in teaching, research, and administration.
Keywords
AI governance, Generative AI, Ethical frameworks, Higher education policy, Catholic institutions