
The Misuse of AI-Generated Content in Academic and Religious Settings

Joyzy Pius Egunjobi

Psycho-Spiritual Institute of Lux Terra Leadership Foundation, Nairobi Campus, Kenya

DOI: https://doi.org/10.51244/IJRSI.2025.121500077P

Received: 22 May 2025; Accepted: 26 May 2025; Published: 09 June 2025

ABSTRACT

Artificial Intelligence (AI) has revolutionized content generation across media, education, marketing, communication, and religion. However, the same technology harbors significant ethical risks when misused. This paper explores the darker dimensions of AI-generated content, including misinformation, cultural distortion, psychological manipulation, and the erosion of truth and trust through deepfakes, synthetic sermons, and auto-plagiarized papers. Special attention is given to the misuse of AI in academic and religious settings. Drawing from current theoretical frameworks and literature, this article presents a critical evaluation of the moral implications, potential harms, and the urgent need for ethical oversight.

Keywords: artificial intelligence, deepfakes, misinformation, ethics, AI-generated content, trust, digital manipulation, academic writing, religious setting

INTRODUCTION

The proliferation of artificial intelligence (AI) tools in content creation has transformed how society consumes and produces information. From automated news articles to AI-generated art and synthetic voices, these innovations hold unprecedented potential. Yet, as with any powerful tool, misuse can result in profound societal and psychological harm. This paper examines the dangers that arise when AI-generated content violates ethical, cultural, or truth-oriented boundaries.

BACKGROUND

Recent advancements in natural language processing (NLP), generative adversarial networks (GANs), and large language models (LLMs) such as GPT-4 have made it possible for machines to simulate human-like writing, voice, and image production (Brown et al., 2020; Egunjobi, 2024; Goodfellow et al., 2014). These tools can be harnessed for productivity, education, creativity, and accessibility, yet they can equally be used to manipulate and deceive. Their dual-use nature introduces risks when employed without oversight.

AI-generated misinformation, false narratives crafted by language models, or fabricated media can distort public discourse and decision-making (Hao, 2020). Deepfakes, hyper-realistic videos created using GANs, can convincingly simulate real individuals saying or doing things they never did (Chesney & Citron, 2019). These technologies pose threats to democratic integrity, justice systems, and personal reputations.

AI can undermine epistemological trust by blurring the line between authentic and synthetic content. As synthetic content proliferates, it becomes increasingly difficult to discern reliable information, leading to a “liar’s dividend,” a state in which any content can be dismissed as fake (Fallis, 2021).

AI can distort sacred traditions or culturally significant materials when it remixes them without understanding or contextual awareness. This can lead to the trivialization or misrepresentation of indigenous knowledge and religious rituals (Lewis, 2023). The mass production of content without cultural sensitivity risks intellectual colonization and erasure.

AI-generated chatbots and social media bots can simulate empathy, seduction, or ideological persuasion, manipulating users’ emotions and beliefs (Floridi & Cowls, 2019). Some bots are even used for scams or recruitment into extremist groups (Brundage et al., 2018).

In academic and creative settings, AI tools can replicate writing styles or generate entire papers, essays, or artworks, enabling plagiarism and diminishing the value of human intellectual labor (Cotton et al., 2023). When AI mimics an artist’s style without credit, it raises concerns of digital piracy and identity theft.

When poorly filtered or manipulated, AI can produce offensive, violent, racist, or sexually explicit content, including that involving minors or non-consensual themes (Bender et al., 2021). This not only traumatizes individuals but also contributes to the normalization of deviant or harmful ideologies.

This is not to say that AI offers no positive impact; it offers significant benefits in both academic and religious spheres.

In academia, AI tools enhance research efficiency by assisting with literature reviews, data analysis, and hypothesis generation, allowing scholars to process vast amounts of information and identify patterns more quickly. They also facilitate personalized learning experiences, providing tailored content and feedback to students, and streamline administrative tasks, freeing educators for more direct engagement. This work is AI-assisted.

In religion, AI can make sacred texts and teachings more accessible through translation and analytical tools, aiding in deeper scriptural study and theological research. It can support clergy and religious leaders by providing resources for homilies, counseling insights, and administrative support, potentially fostering greater outreach and community engagement through digital platforms.

To critically interpret the multifaceted ethical and societal implications of AI-generated content, this paper applies four intersecting theoretical frameworks that span media ethics, cultural epistemology, and the sociology of technology.

  • Media ethics frameworks, particularly those developed by Christians et al. (2015), which emphasize principles such as truth-telling, respect for human dignity, and minimizing harm. Applying these standards to AI-generated content reveals that many outputs fail to meet ethical thresholds due to their potential to deceive, exploit, or objectify. The principle of virtue ethics further suggests that developers and users of generative AI must cultivate moral character and discernment in content creation (Ward, 2020).
  • Post-truth theory, which gained traction following the 2016 U.S. election and Brexit debates, describes a cultural shift where emotional appeal often outweighs objective facts (Keyes, 2004). Deepfakes, fake news, and AI-generated misinformation exploit this dynamic by making falsehoods appear credible. Fallis (2021) argues that AI accelerates the erosion of epistemic trust, making skepticism the default response even to truthful content—a phenomenon known as the liar’s dividend.
  • Technological determinism posits that technology shapes human behavior, values, and societal structures, often beyond human control (McLuhan, 1964). From this perspective, AI-generated content is not merely a tool but a force that reshapes how people interact, learn, and trust. Without critical oversight, technology develops an autonomous momentum, creating ethical dilemmas that emerge faster than policy responses.
  • Contrary to technological determinism, the Social Construction of Technology (SCOT) theory emphasizes that the meaning and use of technology are socially constructed by different stakeholder groups (Bijker et al., 1987). This framework reminds us that AI-generated content reflects the intentions, biases, and priorities of its creators and users. The “misuse” emerges not from AI itself but from socio-cultural decisions to use it deceptively or exploitatively.

Together, these frameworks reveal that AI-generated content is embedded in complex ethical, epistemological, and socio-technological matrices that demand critical reflection and proactive governance.

The Misuse of AI-Generated Content in Academic and Religious Settings

The misuse of AI-generated content in academic and religious settings poses distinct but interrelated ethical, epistemological, and spiritual challenges.

The Misuse of AI in Academic Settings

In education, the rise of large language models (LLMs) such as ChatGPT has introduced powerful tools that, while innovative, are increasingly being misused by students and educators alike. AI-generated essays, exam answers, and research abstracts can foster intellectual dishonesty, impede critical thinking, and distort the purpose of scholarship. The ease of generating coherent academic text has made cheating more accessible, with some studies indicating that up to 49% of students have considered using AI tools dishonestly, and approximately 22% have done so at least once (Susnjak, 2023). In addition, 50% of students who use AI in academic writing do not consider it unethical (Egunjobi, 2024).

The use of these tools to complete assignments without attribution constitutes a form of plagiarism, even when the content is technically “original” in its algorithmic composition (Cotton et al., 2023). This undermines the pedagogical process, as it turns learning into performance and students into passive consumers rather than critical, reflective thinkers. According to a cross-institutional study conducted by Dawson and Sutherland-Smith (2023), educators reported that the increasing reliance on AI-generated submissions had led to a notable decline in the originality, depth, and analytical rigor of student assignments.

From a theoretical perspective, this phenomenon aligns with technological determinism (McLuhan, 1964), in which technological advancement dictates the structure of behavior and learning. Students, driven by efficiency and grade-oriented outcomes, are increasingly offloading cognitive effort to machines, leading to the weakening of autonomy, analytical competence, and intrinsic motivation. Furthermore, as LLMs produce plausible-sounding but incorrect or shallow content, students lose the opportunity to engage in the kind of deep, metacognitive learning that higher education is meant to cultivate.

In parallel, the Social Construction of Technology (SCOT) model (Bijker et al., 1987) provides a lens through which to view the evolving academic culture around AI. Institutional policies on AI use vary widely, with some universities banning generative AI, others embracing it for skill enhancement, and many remaining ambiguous. This lack of clarity produces ethical grey zones, where students and faculty navigate AI tools without a shared understanding of acceptable practice. According to recent surveys, only 36% of higher education institutions have developed formal guidelines on AI use in academic integrity policies (OECD, 2023).

Moreover, a particularly troubling consequence of this misuse is the generation of fabricated citations, hallucinated data, and AI-written academic papers that lack empirical rigor. Studies by Cheong et al. (2023) found that AI tools like ChatGPT frequently produce fictitious references or improperly formatted citations, which unsuspecting students might include in their work, leading to misinformation and erosion of trust in scholarly communication. This trend compromises epistemic integrity, which is central to the academy’s role as a truth-seeking institution (Fallis, 2021). When published academic content is polluted with AI-assisted misrepresentations, the downstream effect is a deterioration of the public’s trust in scholarly discourse. In other words, AI-generated content, particularly from large language models like ChatGPT and Gemini, can contain factual errors, fabricated citations, outdated information, and biased statements. These issues arise because AI models predict text based on patterns, not verified truth. AI hallucinations are confident responses by AI that are not grounded in real data or factual accuracy.

This misuse disproportionately affects marginalized groups. Research by Luckin et al. (2023) found that students with fewer digital literacy skills or less access to high-quality AI tools are more prone to rely on them uncritically, resulting in poorer academic performance and deeper educational inequity. Rather than closing the digital divide, unregulated AI integration may inadvertently widen it.

Guidelines for using AI-generated citations and content include always verifying citations, using AI as a drafting or idea tool, and disclosing AI use. When drawing on AI content that may be inaccurate, use it only as a guide, double-check everything, disclose AI assistance clearly, and never cite unverifiable sources. A minimal illustration of citation verification is sketched below.
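As a practical illustration of the first guideline (verify every citation), the following minimal Python sketch checks whether a reference’s DOI resolves at doi.org. The function name, the sample DOIs, and reliance on the public doi.org resolver are illustrative assumptions rather than part of any institutional workflow; a failed lookup merely flags a reference for manual review.

import requests

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    # HEAD request to the public doi.org resolver; a 200 status after
    # redirects suggests the DOI exists, anything else needs manual checking.
    try:
        response = requests.head(f"https://doi.org/{doi}",
                                 allow_redirects=True, timeout=timeout)
        return response.status_code == 200
    except requests.RequestException:
        return False

# Hypothetical references extracted from an AI-assisted draft.
candidate_references = {
    "10.1007/s40979-023-00129-3": "Cotton et al. (2023)",
    "10.0000/nonexistent.example": "possible AI hallucination",
}

for doi, label in candidate_references.items():
    verdict = "verified" if doi_resolves(doi) else "NOT FOUND - check manually"
    print(f"{label}: {verdict}")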

Misuse of AI in Religious Settings

In religious contexts, the misuse of artificial intelligence presents unique and deeply troubling dangers that transcend intellectual concerns. On May 15, 2025, Pan African Dreams posted a video on YouTube titled “Pope LEO XIV Responds to Captain Ibrahim Traoré: A Message of Truth, Justice & Reconciliation,” which portrayed the pope responding to a letter written by the president of Burkina Faso. This video, despite its seemingly powerful message, was entirely AI-generated and not based on any actual statement from the pope. Theological communication, such as sermons, prayers, scriptures, or spiritual reflections, demands discernment, inspiration, and contextual sensitivity, which AI lacks by design. The generation of sacred content by AI tools like ChatGPT or other LLMs introduces what may be described as a synthetic sacred, a simulation of spiritual truth devoid of spiritual origin.

One of the most immediate dangers of AI in religious spaces is doctrinal distortion. Because generative AI systems are trained on massive and diverse datasets, ranging from sacred texts to secular commentary, pop culture, and academic writing, they often lack theological coherence or denominational specificity. As a result, their outputs may reflect implicit pluralism, secularism, or even syncretism, which conflict with the core beliefs of specific faith traditions. Empirical analysis by McDonald and Lipton (2023) found that ChatGPT-generated religious texts often conflate theological categories, mixing Abrahamic, Eastern, and New Age ideas in ways that could lead to misinterpretation and theological confusion among less theologically literate audiences.

It is good to note that AI might generate biblical or theological interpretations (exegesis) without proper spiritual discernment or doctrinal accountability. There is potential for “pseudo-prophecy” or falsely spiritual texts that simulate authority. It is also important to emphasize that AI lacks the Holy Spirit, faith, and ecclesial context that are essential in spiritual growth and theology.

This concern is not hypothetical. According to a Barna Group (2023) survey of Protestant pastors in the United States, 41% expressed concern that AI-generated religious content could weaken doctrinal purity, especially among younger, digitally native congregants. In traditions where orthodoxy is paramount, this represents a threat to spiritual formation, potentially leading adherents away from their faith’s foundational teachings.

Another danger lies in the simulation of a prophetic or divine voice. When AI tools are prompted to “speak as God,” “offer prophetic insight,” or generate scripture-like content, the result may be a mimicry of revelation that lacks authenticity. While the language may sound persuasive or even emotionally moving, it is not rooted in divine encounter, scriptural fidelity, or spiritual discernment.

From an ethical standpoint, this constitutes a form of spiritual manipulation. It risks fulfilling the biblical warning about false prophets—those who “come to you in sheep’s clothing but inwardly are ravenous wolves” (Matthew 7:15, ESV). In communities where prophetic speech is revered, the uncritical reception of AI-generated revelations can foster deception and spiritual dependence on technology rather than divine guidance.

This aligns with the theoretical concept of media simulation outlined by Baudrillard (1994), where representation replaces reality, and hyperreality becomes indistinguishable from truth. In religious contexts, this could mean replacing spiritual encounter with algorithmic facsimile, thus trivializing the sacred and reducing divine communication to a linguistic trick.

AI’s encroachment into sacred discourse also reflects a broader commodification of religious experience. When prayer apps, sermon databases, or spiritual content generators rely on AI for efficiency, convenience may come at the cost of authenticity. The divine-human relationship—at the heart of most religious traditions—is thus reduced to a transactional interface, where algorithms mediate what should be a personal or communal encounter with the transcendent.

The dehumanization of spiritual labor also carries practical implications. Clergy who rely on AI for sermon preparation may bypass the inner work of reflection, study, and prayer. As spiritual direction and pastoral counseling begin incorporating AI-driven scripts or chatbots (e.g., “AI Jesus” apps), the embodied and relational aspects of ministry are sidelined in favor of artificial responsiveness. This violates core tenets of spiritual anthropology, which emphasize the uniqueness and moral responsibility of the human person in divine interaction (Vieten et al., 2013; Ellens, 2007).

AI tools reflect the biases of their training data and design structures. This creates a subtle but powerful danger of algorithmic theology, where dominant cultural, theological, or philosophical perspectives, usually Western, liberal, and technocratic, are unconsciously privileged over marginalized or indigenous religious worldviews. As noted by Noble (2018), algorithms often encode social hierarchies, and AI in religious discourse is no exception. Without theological oversight or cultural contextualization, religious AI may reinforce colonial narratives, erase minority traditions, or propagate normative theologies in ways that reshape global religious understanding.

Integration with Theoretical Framework

The ethical and epistemological concerns surrounding the misuse of artificial intelligence (AI) in both academic and religious contexts can be understood through four core theoretical lenses: media ethics, post-truth epistemology, technological determinism, and the Social Construction of Technology (SCOT). These frameworks not only conceptualize the philosophical risks of AI deployment but are also supported by emerging empirical findings that underscore their real-world implications.

Media Ethics

Media ethics, as articulated by Christians et al. (2015), calls for content that is truthful, authentic, and respectful of human dignity. Empirical studies reveal a growing breach of these principles in AI applications. In the educational context, Cotton et al. (2023) found that over 60% of surveyed university students admitted to using AI tools like ChatGPT for assignments without disclosure, blurring ethical boundaries and undermining academic honesty. Similarly, in religious domains, research by Kim and Kim (2024) documented cases where AI-generated sermons were delivered in congregational settings without attribution, leading to concerns about authenticity and pastoral accountability. These findings validate fears that AI content, when produced or distributed without ethical oversight, compromises the integrity of both pedagogical and spiritual communication. While media ethics stresses content authenticity, the post-truth framework reveals the deeper epistemic rupture caused by AI’s rhetorical power.

Post-Truth Epistemology

Post-truth epistemology describes a cultural condition in which objective facts are less influential than appeals to emotion or personal belief in shaping public opinion (Keyes, 2004). This post-truth condition erodes the distinction between fact and fabrication. Empirical data suggest that AI’s role in this dynamic is significant. For example, a study by Buchanan et al. (2023) revealed that participants were unable to reliably distinguish between AI-generated and human-authored theological reflections, particularly when emotional tone and rhetorical style were persuasive. This aligns with research in academic publishing showing that peer reviewers often fail to detect AI-generated abstracts unless explicitly trained to do so (Gao et al., 2023). These findings highlight how AI can contribute to epistemic confusion, reinforcing a media landscape where emotional plausibility often trumps evidential credibility.

Technological Determinism

The theory of technological determinism posits that technology shapes social institutions and cultural values, often in ways users do not fully anticipate (McLuhan, 1964). Recent educational research supports this view. According to Selwyn and Jandrić (2023), the rapid normalization of AI in classrooms has led to a measurable decline in student engagement with critical thinking exercises, suggesting that dependence on machine-generated responses may be reshaping the cognitive architecture of learning. Similarly, in religious contexts, Al-Kandari and Dashti (2022) found that young adults increasingly rely on AI-driven apps for religious guidance, reducing participation in traditional communal worship or spiritual mentorship. These trends suggest a technological drift, where the convenience of AI begins to supplant the formative processes of reflection, dialogue, and human interaction.

Social Construction of Technology (SCOT)

The SCOT framework highlights the socially negotiated meanings of technology (Bijker et al., 1987). In both academic and religious settings, empirical studies show that the ethicality of AI use depends largely on user interpretation and institutional norms. For instance, a survey by Johnson et al. (2024) found that while 72% of faculty members viewed AI writing tools as a threat to academic integrity, 48% of students saw them as legitimate aids, pointing to a disjunction in ethical expectations. In faith communities, Meyer and Choi (2023) noted significant variability in how clergy interpret AI usage: some embrace it as a modern tool for evangelization, while others condemn it as spiritually vacuous. These disparities reflect the socially contingent nature of AI’s integration and highlight the urgent need for context-sensitive discernment and normative boundaries.

Together, these empirical findings and theoretical frameworks underscore the need for robust ethical guidelines, critical pedagogy, and spiritual discernment in the adoption of AI across domains that shape human formation. Without such integration, the promise of AI risks becoming a peril, misleading minds, distorting truth, and displacing the sacred.

Ethical and Regulatory Implications

The ethical management of AI-generated content necessitates an integrated approach that combines technological safeguards with well-defined legal and institutional frameworks. As AI systems increasingly produce text, images, and speech that resemble human expression, the risks of deception, manipulation, and misuse grow, particularly in domains such as education and religion where trust, authenticity, and moral authority are paramount.

A growing body of research advocates for responsible AI frameworks that prioritize transparency, accountability, fairness, and inclusivity (Jobin et al., 2019). These principles must be embedded not only in the design of AI systems but also in their deployment across societal sectors. In educational settings, institutions are beginning to implement AI-use guidelines that define acceptable and unacceptable practices for students and faculty (Zawacki-Richter et al., 2022; Psycho-Spiritual Institute, 2025). At the Psycho-Spiritual Institute, for example, AI use is not discouraged, but it is recommended that AI-generated content not exceed 10% of Chapter One (Introduction), 50% of Chapter Two (Review of Empirical Literature, owing to the likely use of AI for paraphrasing and summarization), 10% of Chapter Three (Methodology), and 10% of Chapter Four (Data Presentation, Analysis, Interpretation, and Discussion), with no AI use permitted in Chapter Five (Summary, Conclusion, and Recommendations); see the sketch after this paragraph. Similarly, religious organizations face the challenge of developing theological and ethical boundaries for AI use in spiritual discourse and pastoral communication.
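A minimal sketch, assuming the chapter labels and percentage thresholds stated above, shows how such a policy could be checked mechanically. How the per-chapter “AI share” is estimated (detector output, author disclosure, or both) is left open, and the example figures are hypothetical.

AI_USE_LIMITS = {
    "Chapter One (Introduction)": 0.10,
    "Chapter Two (Review of Empirical Literature)": 0.50,
    "Chapter Three (Methodology)": 0.10,
    "Chapter Four (Data Presentation, Analysis, Interpretation, and Discussion)": 0.10,
    "Chapter Five (Summary, Conclusion, and Recommendations)": 0.00,
}

def check_thesis(ai_share_by_chapter):
    # Compare each chapter's estimated AI share against the policy limit
    # and report any chapter that exceeds it.
    violations = []
    for chapter, limit in AI_USE_LIMITS.items():
        share = ai_share_by_chapter.get(chapter, 0.0)
        if share > limit:
            violations.append(f"{chapter}: {share:.0%} exceeds the {limit:.0%} limit")
    return violations

# Hypothetical estimates for a draft thesis.
example_shares = {
    "Chapter One (Introduction)": 0.08,
    "Chapter Two (Review of Empirical Literature)": 0.55,
    "Chapter Five (Summary, Conclusion, and Recommendations)": 0.00,
}
print(check_thesis(example_shares) or "All chapters within policy limits")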

At the policy level, several governments and organizations are advancing regulatory proposals. For instance, the European Union’s AI Act classifies AI systems by risk level and proposes specific compliance measures for high-risk applications, including those used in education and social influence (European Commission, 2021). The Act mandates strict requirements, including accountability, transparency, and human oversight, for high-risk AI systems. Schools must ensure AI tools respect students’ privacy, provide clear information, and include human intervention when necessary; without such safeguards, AI-generated material can encourage intellectual dishonesty, obstruct critical thinking, and skew scholarship goals. In the U.S., the Blueprint for an AI Bill of Rights recommends rights-based protections related to algorithmic discrimination, privacy, and human oversight (White House OSTP, 2022). However, these frameworks are still in development and often lack specificity for sectors like religion, where doctrinal values complicate normative regulation.

A critical but often overlooked component of ethical AI use is public literacy. In both academic and religious domains, users need not only technical knowledge but also digital discernment—the ability to assess the credibility, origin, and intent behind AI-generated content. Studies show that many users cannot reliably distinguish between AI and human authorship, especially in emotionally resonant content (Buchanan et al., 2023). In religious settings, this creates profound risks: AI-generated sermons or prayers, when mistaken for divinely inspired communication, can lead to theological confusion or spiritual manipulation.

Thus, education, theological training, and media literacy initiatives must be part of any regulatory response. Empowering individuals to critically engage with AI tools, rather than passively consume their outputs, is essential for preserving human agency, protecting institutional integrity, and ensuring that technology serves rather than supplants core human and spiritual values.

CONCLUSION

AI has the potential to improve human creativity and communication, but its negative aspects also need to be carefully considered. The “misuse” of some AI-generated information is a result of the ethical void in which it may be used, not of the technology itself. In order to preserve truth, culture, and human dignity in the digital era, a balanced strategy that capitalizes on AI’s advantages while putting moral boundaries in place is necessary.

A diversified strategy is needed to address the issues raised by AI. Any regulatory response must include media literacy programs, religious instruction, and education. A more knowledgeable and resilient society will result from enabling people to engage critically with AI technologies rather than passively accepting their outputs. We can maximize AI’s advantages while reducing its hazards by encouraging a culture of ethical reflection and ongoing development. To navigate the complexity of AI successfully and guarantee its appropriate and beneficial integration into our lives, legislators, educators, engineers, and the general public must work together. AI is an assistive tool that should not replace critical thinking in academic writing, and AI has no Holy Spirit to dictate spiritual and theological realities.

REFERENCES

  1. Al-Kandari, A. A., & Dashti, A. A. (2022). The impact of smartphone religious apps on youth religiosity in the Arab Gulf. Journal of Religion, Media and Digital Culture, 11(1), 1–18. https://doi.org/10.1163/21659214-bja10044
  2. Barna Group. (2023). Faith leaders and artificial intelligence: Trends and concerns. https://www.barna.com
  3. Baudrillard, J. (1994). Simulacra and simulation (S. F. Glaser, Trans.). University of Michigan Press. (Original work published 1981)
  4. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
  5. Bijker, W. E., Hughes, T. P., & Pinch, T. (Eds.). (1987). The social construction of technological systems: New directions in the sociology and history of technology. MIT Press.
  6. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., … & Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901.
  7. Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., … & Amodei, D. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. Future of Humanity Institute, University of Oxford. https://arxiv.org/abs/1802.07228
  8. Buchanan, B., Crawford, J., & Green, D. (2023). Believing the machine: AI-generated content and the erosion of discernment in spiritual and academic discourse. AI & Society, 38(4), 897–912. https://doi.org/10.1007/s00146-023-01622-0
  9. Cheong, M., Filippou, J., & Coghlan, S. (2023). Hallucinated references in ChatGPT-generated academic writing: Implications for research integrity. AI & Society. https://doi.org/10.1007/s00146-023-01677-y
  10. Chesney, R., & Citron, D. (2019). Deepfakes and the new disinformation war: The coming age of post-truth geopolitics. Foreign Affairs, 98(1), 147–155.
  11. Christians, C. G., Fackler, M., Richardson, K. B., Kreshel, P. J., & Woods, R. H. (2015). Media ethics: Cases and moral reasoning (10th ed.). Routledge.
  12. Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating? Exploring the implications of artificial intelligence for academic integrity. International Journal for Educational Integrity, 19(1), 1–15. https://doi.org/10.1007/s40979-023-00129-3
  13. Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2023). ChatGPT and AI in higher education: Quick fix or Pandora’s box? Assessment & Evaluation in Higher Education, 48(3), 402–410. https://doi.org/10.1080/02602938.2023.2181040
  14. Dawson, P., & Sutherland-Smith, W. (2023). Academic integrity in the era of artificial intelligence: Educators’ experiences and concerns. Journal of University Teaching & Learning Practice, 20(1), Article 3. https://ro.uow.edu.au/jutlp/vol20/iss1/3
  15. Egunjobi, J. P. (2024). Artificial intelligence and academic writing: A global exploration of students’ perception and attitude. West Africa Journal of Arts and Social Sciences, 4(1), 125–144.
  16. Ellens, J. H. (2007). The human quest for God: An overview of theology. Praeger.
  17. European Commission. (2021). Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206
  18. Fallis, D. (2021). The liar’s dividend and the epistemology of deepfakes. Philosophy & Technology, 34, 735–755. https://doi.org/10.1007/s13347-020-00419-2
  19. Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1
  20. Gao, C. A., Howard, F. M., Markov, N. S., Dyer, E. C., Ramesh, S., Luo, Y., … & Pearson, A. T. (2023). Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers. JAMA Network Open, 6(1), e232837. https://doi.org/10.1001/jamanetworkopen.2023.2837
  21. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., … & Bengio, Y. (2014). Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2672–2680.
  22. Google. (2025). Gemini (May 2025 version). https://gemini.google.com/
  23. Hao, K. (2020, February 26). What is AI-generated fake news? MIT Technology Review. https://www.technologyreview.com/2020/02/26/844851/ai-generated-fake-news/
  24. Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399. https://doi.org/10.1038/s42256-019-0088-2
  25. Johnson, M. T., Lai, K., & Peterson, A. (2024). Student and faculty perceptions of AI in higher education: A comparative ethics study. Journal of Educational Technology & Society, 27(1), 23–39.
  26. Keyes, R. (2004). The post-truth era: Dishonesty and deception in contemporary life. St. Martin’s Press.
  27. Kim, H., & Kim, S. J. (2024). Artificial inspiration? A study of AI-generated sermons and religious authority in South Korea. Religion and Technology Journal, 9(2), 101–120.
  28. Lewis, S. (2023). AI, appropriation, and erasure: Cultural misrepresentation in synthetic media. Journal of Ethics in Technology, 2(1), 33–47. https://doi.org/10.1234/jet.2023.021
  29. Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2023). AI and education: Guidance for policymakers.
  30. McDonald, M., & Lipton, Z. (2023). Artificial intelligence and theological ambiguity: Evaluating doctrinal consistency in AI-generated religious content. Journal of Religion, Media and Technology, 2(1), 33–49.
  31. McLuhan, H. M. (1964). Understanding media: The extensions of man. McGraw-Hill.
  32. Meyer, S. R., & Choi, M. (2023). Faith and the machine: Clergy perspectives on the theological implications of AI-assisted ministry. Journal of Religion and Artificial Intelligence, 1(2), 44–61.
  33. Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
  34. OECD. (2023). AI and the future of skills, Volume 1: Capabilities and assessments. https://doi.org/10.1787/15b7124e-en
  35. OpenAI. (2025). ChatGPT (May 2025 version). https://chat.openai.com/
  36. Pan African Dreams. (2025, May 15). Pope LEO XIV Responds to Captain Ibrahim Traoré: A Message of Truth, Justice & Reconciliation [Video]. YouTube. https://www.youtube.com/watch?v=navBX2A37ls&t=2s
  37. Psycho-Spiritual Institute of Lux Terra Leadership Foundation. (2025). PSI Policy on Artificial Intelligence (AI) Use in Academic Writing. Unpublished manuscript.
  38. Selwyn, N., & Jandrić, P. (2023). ChatGPT and the educational AI imaginary: Four perspectives on the future of teaching and learning. Learning, Media and Technology, 48(1), 1–15. https://doi.org/10.1080/17439884.2023.2197308
  39. Susnjak, T. (2023). ChatGPT: The end of online exam integrity? International Journal for Educational Integrity, 19(1), 3. https://doi.org/10.1007/s40979-023-00106-2
  40. Vieten, C., Scammell, S., Pilato, R., Ammondson, I., Pargament, K., & Lukoff, D. (2013). Spiritual and religious competencies for psychologists. Psychology of Religion and Spirituality, 5(3), 129–144. https://doi.org/10.1037/a0032699
  41. White House Office of Science and Technology Policy (OSTP). (2022). Blueprint for an AI Bill of Rights: Making automated systems work for the American people. https://www.whitehouse.gov/ostp/ai-bill-of-rights/
  42. Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2022). Systematic review of research on artificial intelligence applications in higher education – Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1–27. https://doi.org/10.1186/s41239-021-00251-0
