The Misuse of AI-Generated Content in Academic and Religious Settings
Joyzy Pius Egunjobi
Psycho-Spiritual Institute of Lux Terra Leadership Foundation, Nairobi Campus, Kenya
DOI: https://doi.org/10.51244/IJRSI.2025.121500077P
Received: 22 May 2025; Accepted: 26 May 2025; Published: 09 June 2025
Artificial Intelligence (AI) has revolutionized content generation across media, education, marketing, communication, and religion. However, the same technology harbors significant ethical risks when misused. This paper explores the darker dimensions of AI-generated content, including misinformation, cultural distortion, psychological manipulation, and the erosion of truth and trust through deepfakes, synthetic sermons, and auto-plagiarized papers. Special attention is given to the misuse of AI in academic and religious settings. Drawing from current theoretical frameworks and literature, this article presents a critical evaluation of the moral implications, potential harms, and the urgent need for ethical oversight.
Keywords: artificial intelligence, deepfakes, misinformation, ethics, AI-generated content, trust, digital manipulation, academic writing, religious setting
The proliferation of artificial intelligence (AI) tools in content creation has transformed how society consumes and produces information. From automated news articles to AI-generated art and synthetic voices, these innovations hold unprecedented potential. Yet, as with any powerful tool, misuse can result in profound societal and psychological harm. This paper examines the dangers that arise when AI-generated content violates ethical, cultural, or truth-oriented boundaries.
Recent advancements in natural language processing (NLP), generative adversarial networks (GANs), and large language models (LLMs) such as GPT-4 have made it possible for machines to simulate human-like writing, voice, and image production (Brown et al., 2020; Egunjobi, 2024; Goodfellow et al., 2014). These tools can be harnessed for productivity, education, creativity, and accessibility, and at the same time can be used to manipulate and deceive. Their dual-use nature introduces risks when employed without oversight.
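To illustrate how low the barrier to machine-generated text has become, the brief sketch below uses the open-source Hugging Face transformers library with the small GPT-2 model to produce fluent prose from a one-line prompt. The prompt and model choice are illustrative assumptions for this sketch, not the specific systems examined in the studies cited here; the point is only how little effort synthetic text production now requires.

```python
# Illustrative sketch only: generating fluent text with an off-the-shelf
# open-source model (GPT-2 via the Hugging Face `transformers` library).
# The prompt is hypothetical; the purpose is to show how easily
# plausible-sounding prose can be produced without human authorship.
from transformers import pipeline, set_seed

set_seed(42)  # make the illustration reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "The ethical implications of artificial intelligence in education are"
outputs = generator(prompt, max_new_tokens=60, num_return_sequences=2)

for i, out in enumerate(outputs, start=1):
    print(f"--- Draft {i} ---")
    print(out["generated_text"])
```

Even this minimal setup, run on a laptop, yields drafts that read as coherent opinion prose, which is precisely the dual-use risk discussed above.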
AI-generated misinformation, false narratives crafted by language models, or fabricated media can distort public discourse and decision-making (Hao, 2020). Deepfakes, hyper-realistic videos created using GANs, can convincingly simulate real individuals saying or doing things they never did (Chesney & Citron, 2019). These technologies pose threats to democratic integrity, justice systems, and personal reputations.
AI can undermine epistemological trust by blurring the line between authentic and synthetic content. As synthetic content proliferates, it becomes increasingly difficult to discern reliable information, leading to a “liar’s dividend,” a state in which any content can be dismissed as fake (Fallis, 2021).
AI can distort sacred traditions or culturally significant materials when it remixes them without understanding or contextual awareness. This can lead to the trivialization or misrepresentation of indigenous knowledge and religious rituals (Lewis, 2023). The mass production of content without cultural sensitivity risks intellectual colonization and erasure.
AI-generated chatbots and social media bots can simulate empathy, seduction, or ideological persuasion, manipulating users’ emotions and beliefs (Floridi & Cowls, 2019). Some bots are even used for scams or recruitment into extremist groups (Brundage et al., 2018).
In academic and creative settings, AI tools can replicate writing styles or generate entire papers, essays, or artworks, enabling plagiarism and diminishing the value of human intellectual labor (Cotton et al., 2023). When AI mimics an artist’s style without credit, it raises concerns of digital piracy and identity theft.
When poorly filtered or manipulated, AI can produce offensive, violent, racist, or sexually explicit content, including that involving minors or non-consensual themes (Bender et al., 2021). This not only traumatizes individuals but also contributes to the normalization of deviant or harmful ideologies.
This is not to say that AI offers no positive impact; it offers significant benefits in both academic and religious spheres.
In academia, AI tools enhance research efficiency by assisting with literature reviews, data analysis, and hypothesis generation, allowing scholars to process vast amounts of information and identify patterns more quickly. They also facilitate personalized learning experiences, providing tailored content and feedback to students, and streamline administrative tasks, freeing up educators for more direct engagement. This work is AI-assisted.
In religion, AI can make sacred texts and teachings more accessible through translation and analytical tools, aiding in deeper scriptural study and theological research. It can support clergy and religious leaders by providing resources for homilies, counseling insights, and administrative support, potentially fostering greater outreach and community engagement through digital platforms.
To critically interpret the multifaceted ethical and societal implications of AI-generated content, this paper applies four intersecting theoretical frameworks that span media ethics, cultural epistemology, and the sociology of technology.
Together, these frameworks reveal that AI-generated content is embedded in complex ethical, epistemological, and socio-technological matrices that demand critical reflection and proactive governance.
The Misuse of AI-Generated Content in Academic and Religious Settings
The misuse of AI-generated content in academic and religious settings poses distinct but interrelated ethical, epistemological, and spiritual challenges.
The Misuse of AI in Academic Settings
In education, the rise of large language models (LLMs) such as ChatGPT has introduced powerful tools that, while innovative, are increasingly being misused by students and educators alike. AI-generated essays, exam answers, and research abstracts can foster intellectual dishonesty, impede critical thinking, and distort the purpose of scholarship. The ease of generating coherent academic text has made cheating more accessible, with some studies indicating that up to 49% of students have considered using AI tools dishonestly, and approximately 22% have done so at least once (Susnjak, 2023). Also, 50% of students who use AI in academic writing do not consider it unethical (Egunjobi, 2024).
The use of these tools to complete assignments without attribution constitutes a form of plagiarism, even when the content is technically “original” in its algorithmic composition (Cotton et al., 2023). This undermines the pedagogical process, as it turns learning into performance and students into passive consumers rather than critical, reflective thinkers. According to a cross-institutional study conducted by Dawson and Sutherland-Smith (2023), educators reported that the increasing reliance on AI-generated submissions had led to a notable decline in the originality, depth, and analytical rigor of student assignments.
From a theoretical perspective, this phenomenon aligns with technological determinism (McLuhan, 1964), in which technological advancement dictates the structure of behavior and learning. Students, driven by efficiency and grade-oriented outcomes, are increasingly offloading cognitive effort to machines, leading to the weakening of autonomy, analytical competence, and intrinsic motivation. Furthermore, as LLMs produce plausible-sounding but incorrect or shallow content, students lose the opportunity to engage in the kind of deep, metacognitive learning that higher education is meant to cultivate.
In parallel, the Social Construction of Technology (SCOT) model (Bijker et al., 1987) provides a lens through which to view the evolving academic culture around AI. Institutional policies on AI use vary widely, with some universities banning generative AI, others embracing it for skill enhancement, and many remaining ambiguous. This lack of clarity produces ethical grey zones, where students and faculty navigate AI tools without a shared understanding of acceptable practice. According to recent surveys, only 36% of higher education institutions have developed formal guidelines on AI use in academic integrity policies (OECD, 2023).
Moreover, a particularly troubling consequence of this misuse is the generation of fabricated citations, hallucinated data, and AI-written academic papers that lack empirical rigor. Studies by Cheong et al. (2023) found that AI tools like ChatGPT frequently produce fictitious references or improperly formatted citations, which unsuspecting students might include in their work, leading to misinformation and erosion of trust in scholarly communication. This trend compromises epistemic integrity, which is central to the academy’s role as a truth-seeking institution (Fallis, 2021). When published academic content is polluted with AI-assisted misrepresentations, the downstream effect is a deterioration of the public’s trust in scholarly discourse. In other words, AI-generated content, particularly from large language models like ChatGPT and Gemini, can contain factual errors, fabricated citations, outdated information, and biased statements. These issues arise because AI models predict text based on patterns, not verified truth. AI hallucinations are confident responses by AI that are not grounded in real data or factual accuracy.
This misuse disproportionately affects marginalized groups. Research by Luckin et al. (2023) found that students with fewer digital literacy skills or less access to high-quality AI tools are more prone to rely on them uncritically, resulting in poorer academic performance and deeper educational inequity. Rather than closing the digital divide, unregulated AI integration may inadvertently widen it.
Guidelines for the responsible use of AI-generated citations and content therefore include verifying every citation against the original source, using AI only as a drafting or ideation tool, disclosing AI assistance clearly, and never citing sources that cannot be verified, as illustrated in the sketch below.
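As a concrete illustration of the "always verify citations" guideline, the following minimal sketch queries the public Crossref REST API to check whether a DOI supplied by an AI assistant resolves to a real bibliographic record. The sample DOIs are placeholders, and a record that is found must still be read and checked against the claim it is cited for.

```python
# Minimal sketch of citation verification: check whether a DOI from an
# AI-drafted reference list resolves to a real record in the public Crossref
# index. A missing record strongly suggests a hallucinated citation; a found
# record still requires manual checking against the source itself.
import requests

def verify_doi(doi: str) -> None:
    url = f"https://api.crossref.org/works/{doi}"
    resp = requests.get(url, timeout=10)
    if resp.status_code == 200:
        meta = resp.json()["message"]
        titles = meta.get("title") or ["<no title>"]
        print(f"FOUND    : {doi} -> {titles[0]}")
    else:
        print(f"NOT FOUND: {doi} (possible hallucinated citation)")

# Placeholder DOIs standing in for entries of an AI-drafted reference list.
for doi in ["10.1000/example.doi.0001", "10.9999/fake.2023.00001"]:
    verify_doi(doi)
```

A check of this kind only confirms that a cited work exists; it cannot confirm that the work actually supports the statement attributed to it, which remains the author's responsibility.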
Misuse of AI in Religious Settings
In religious contexts, the misuse of artificial intelligence presents unique and deeply troubling dangers that transcend intellectual concerns. On May 15, 2025, Pan African Dreams posted a video on YouTube titled “Pope LEO XIV Responds to Captain Ibrahim Traoré: A Message of Truth, Justice & Reconciliation,” which portrayed the pope responding to a letter written by the president of Burkina Faso. This video, despite its seemingly powerful message, was entirely AI-generated and not based on any actual statement from the pope. Theological communication, such as sermons, prayers, scriptures, or spiritual reflections, demands discernment, inspiration, and contextual sensitivity, which AI lacks by design. The generation of sacred content by AI tools like ChatGPT or other LLMs introduces what may be described as a synthetic sacred, a simulation of spiritual truth devoid of spiritual origin.
One of the most immediate dangers of AI in religious spaces is doctrinal distortion. Because generative AI systems are trained on massive and diverse datasets, ranging from sacred texts to secular commentary, pop culture, and academic writing, they often lack theological coherence or denominational specificity. As a result, their outputs may reflect implicit pluralism, secularism, or even syncretism, which conflict with the core beliefs of specific faith traditions. Empirical analysis by McDonald and Lipton (2023) found that ChatGPT-generated religious texts often conflate theological categories, mixing Abrahamic, Eastern, and New Age ideas in ways that could lead to misinterpretation and theological confusion among less theologically literate audiences.
It is worth noting that AI might generate biblical or theological interpretations (exegesis) without proper spiritual discernment or doctrinal accountability. There is potential for "pseudo-prophecy" or falsely spiritual texts that simulate authority. It is also important to emphasize that AI lacks the Holy Spirit, faith, and ecclesial context that are essential in spiritual growth and theology.
This concern is not hypothetical. According to a Barna Group (2023) survey of Protestant pastors in the United States, 41% expressed concern that AI-generated religious content could weaken doctrinal purity, especially among younger, digitally native congregants. In traditions where orthodoxy is paramount, this represents a threat to spiritual formation, potentially leading adherents away from their faith’s foundational teachings.
Another danger lies in the simulation of a prophetic or divine voice. When AI tools are prompted to “speak as God,” “offer prophetic insight,” or generate scripture-like content, the result may be a mimicry of revelation that lacks authenticity. While the language may sound persuasive or even emotionally moving, it is not rooted in divine encounter, scriptural fidelity, or spiritual discernment.
From an ethical standpoint, this constitutes a form of spiritual manipulation. It risks fulfilling the biblical warning about false prophets—those who “come to you in sheep’s clothing but inwardly are ravenous wolves” (Matthew 7:15, ESV). In communities where prophetic speech is revered, the uncritical reception of AI-generated revelations can foster deception and spiritual dependence on technology rather than divine guidance.
This aligns with the theoretical concept of media simulation outlined by Baudrillard (1994), where representation replaces reality, and hyperreality becomes indistinguishable from truth. In religious contexts, this could mean replacing spiritual encounter with algorithmic facsimile, thus trivializing the sacred and reducing divine communication to a linguistic trick.
AI’s encroachment into sacred discourse also reflects a broader commodification of religious experience. When prayer apps, sermon databases, or spiritual content generators rely on AI for efficiency, convenience may come at the cost of authenticity. The divine-human relationship—at the heart of most religious traditions—is thus reduced to a transactional interface, where algorithms mediate what should be a personal or communal encounter with the transcendent.
The dehumanization of spiritual labor also carries practical implications. Clergy who rely on AI for sermon preparation may bypass the inner work of reflection, study, and prayer. As spiritual direction and pastoral counseling begin incorporating AI-driven scripts or chatbots (e.g., “AI Jesus” apps), the embodied and relational aspects of ministry are sidelined in favor of artificial responsiveness. This violates core tenets of spiritual anthropology, which emphasize the uniqueness and moral responsibility of the human person in divine interaction (Vieten et al., 2013; Ellens, 2007).
AI tools reflect the biases of their training data and design structures. This creates a subtle but powerful danger of algorithmic theology, where dominant cultural, theological, or philosophical perspectives, usually Western, liberal, and technocratic, are unconsciously privileged over marginalized or indigenous religious worldviews. As noted by Noble (2018), algorithms often encode social hierarchies, and AI in religious discourse is no exception. Without theological oversight or cultural contextualization, religious AI may reinforce colonial narratives, erase minority traditions, or propagate normative theologies in ways that reshape global religious understanding.
Integration with Theoretical Framework
The ethical and epistemological concerns surrounding the misuse of artificial intelligence (AI) in both academic and religious contexts can be understood through four core theoretical lenses: media ethics, post-truth epistemology, technological determinism, and the Social Construction of Technology (SCOT). These frameworks not only conceptualize the philosophical risks of AI deployment but are also supported by emerging empirical findings that underscore their real-world implications.
Media Ethics
Media ethics, as articulated by Christians et al. (2015), calls for content that is truthful, authentic, and respectful of human dignity. Empirical studies reveal a growing breach of these principles in AI applications. In the educational context, Cotton et al. (2023) found that over 60% of surveyed university students admitted to using AI tools like ChatGPT for assignments without disclosure, blurring ethical boundaries and undermining academic honesty. Similarly, in religious domains, research by Kim and Kim (2024) documented cases where AI-generated sermons were delivered in congregational settings without attribution, leading to concerns about authenticity and pastoral accountability. These findings validate fears that AI content, when produced or distributed without ethical oversight, compromises the integrity of both pedagogical and spiritual communication. While media ethics stresses content authenticity, the post-truth framework reveals the deeper epistemic rupture caused by AI’s rhetorical power.
Post-Truth Epistemology
Post-truth epistemology describes a cultural condition in which objective facts are less influential than appeals to emotion or personal belief in shaping public opinion (Keyes, 2004). This post-truth condition erodes the distinction between fact and fabrication. Empirical data suggest that AI’s role in this dynamic is significant. For example, a study by Buchanan et al. (2023) revealed that participants were unable to reliably distinguish between AI-generated and human-authored theological reflections, particularly when emotional tone and rhetorical style were persuasive. This aligns with research in academic publishing showing that peer reviewers often fail to detect AI-generated abstracts unless explicitly trained to do so (Gao et al., 2023). These findings highlight how AI can contribute to epistemic confusion, reinforcing a media landscape where emotional plausibility often trumps evidential credibility.
Technological Determinism
The theory of technological determinism posits that technology shapes social institutions and cultural values, often in ways users do not fully anticipate (McLuhan, 1964). Recent educational research supports this view. According to Selwyn and Jandrić (2023), the rapid normalization of AI in classrooms has led to a measurable decline in student engagement with critical thinking exercises, suggesting that dependence on machine-generated responses may be reshaping the cognitive architecture of learning. Similarly, in religious contexts, Al-Kandari and Dashti (2022) found that young adults increasingly rely on AI-driven apps for religious guidance, reducing participation in traditional communal worship or spiritual mentorship. These trends suggest a technological drift, where the convenience of AI begins to supplant the formative processes of reflection, dialogue, and human interaction.
Social Construction of Technology (SCOT)
The SCOT framework highlights the socially negotiated meanings of technology (Bijker et al., 1987). In both academic and religious settings, empirical studies show that the ethicality of AI use depends largely on user interpretation and institutional norms. For instance, a survey by Johnson et al. (2024) found that while 72% of faculty members viewed AI writing tools as a threat to academic integrity, 48% of students saw them as legitimate aids, pointing to a disjunction in ethical expectations. In faith communities, Meyer and Choi (2023) noted significant variability in how clergy interpret AI usage: some embrace it as a modern tool for evangelization, while others condemn it as spiritually vacuous. These disparities reflect the socially contingent nature of AI’s integration and highlight the urgent need for context-sensitive discernment and normative boundaries.
Together, these empirical findings and theoretical frameworks underscore the need for robust ethical guidelines, critical pedagogy, and spiritual discernment in the adoption of AI across domains that shape human formation. Without such integration, the promise of AI risks becoming a peril, misleading minds, distorting truth, and displacing the sacred.
Ethical and Regulatory Implications
The ethical management of AI-generated content necessitates an integrated approach that combines technological safeguards with well-defined legal and institutional frameworks. As AI systems increasingly produce text, images, and speech that resemble human expression, the risks of deception, manipulation, and misuse grow, particularly in domains such as education and religion where trust, authenticity, and moral authority are paramount.
A growing body of research advocates for responsible AI frameworks that prioritize transparency, accountability, fairness, and inclusivity (Jobin et al., 2019). These principles must be embedded not only in the design of AI systems but also in their deployment across societal sectors. In educational settings, institutions are beginning to implement AI-use guidelines that define acceptable and unacceptable practices for students and faculty (Zawacki-Richter et al., 2022; Psycho-Spiritual Institute, 2025). At the Psycho-Spiritual Institute, while AI usage is not discouraged, it is recommended that AI use in thesis writing not exceed 10% of Chapter One (Introduction), 50% of Chapter Two (Review of Empirical Literature, owing to the likely use of AI for paraphrasing and summarization), 10% of Chapter Three (Methodology), and 10% of Chapter Four (Data Presentation, Analysis, Interpretation, and Discussion), with no AI use in Chapter Five (Summary, Conclusion, and Recommendations). Similarly, religious organizations face the challenge of developing theological and ethical boundaries for AI use in spiritual discourse and pastoral communication.
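By way of illustration, the sketch below shows how such per-chapter caps might be checked mechanically against reported AI-usage figures. The figures, the helper function, and the data layout are hypothetical assumptions for this sketch; how the percentages are obtained (self-disclosure or a detection report) is outside its scope.

```python
# Minimal sketch: comparing reported per-chapter AI-usage percentages with the
# caps recommended by the Psycho-Spiritual Institute (as summarized above).
# The reported figures and the helper name are hypothetical.

AI_USE_CAPS = {               # recommended maximum AI usage per chapter (%)
    "Chapter One":   10,
    "Chapter Two":   50,
    "Chapter Three": 10,
    "Chapter Four":  10,
    "Chapter Five":   0,
}

def check_thesis(reported: dict[str, float]) -> None:
    """Print each chapter's reported AI usage against the recommended cap."""
    for chapter, cap in AI_USE_CAPS.items():
        used = reported.get(chapter, 0.0)
        status = "OK" if used <= cap else "EXCEEDS CAP"
        print(f"{chapter:<14} reported {used:>5.1f}%  (cap {cap}%)  {status}")

# Hypothetical disclosure figures for one thesis.
check_thesis({
    "Chapter One": 8.0,
    "Chapter Two": 55.0,
    "Chapter Three": 0.0,
    "Chapter Four": 12.5,
    "Chapter Five": 0.0,
})
```

Such a check is only an administrative aid; the substantive judgment about whether AI assistance was appropriate still rests with supervisors and examiners.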
At the policy level, several governments and organizations are advancing regulatory proposals. For instance, the European Union’s AI Act classifies AI systems by risk level and proposes specific compliance measures for high-risk applications, including those used in education and social influence (European Commission, 2021). The Act mandates strict compliance measures, including responsibility, transparency, and human monitoring, for high-risk AI systems. Schools must ensure AI tools respect students’ privacy, provide clear information, and include human intervention when necessary. AI-generated material can encourage intellectual dishonesty, obstruct critical thinking, and skew scholarship goals. In the U.S., the Blueprint for an AI Bill of Rights recommends rights-based protections related to algorithmic discrimination, privacy, and human oversight (White House OSTP, 2022). However, these frameworks are still in development and often lack specificity for sectors like religion, where doctrinal values complicate normative regulation.
A critical but often overlooked component of ethical AI use is public literacy. In both academic and religious domains, users need not only technical knowledge but also digital discernment—the ability to assess the credibility, origin, and intent behind AI-generated content. Studies show that many users cannot reliably distinguish between AI and human authorship, especially in emotionally resonant content (Buchanan et al., 2023). In religious settings, this creates profound risks: AI-generated sermons or prayers, when mistaken for divinely inspired communication, can lead to theological confusion or spiritual manipulation.
Thus, education, theological training, and media literacy initiatives must be part of any regulatory response. Empowering individuals to critically engage with AI tools, rather than passively consume their outputs, is essential for preserving human agency, protecting institutional integrity, and ensuring that technology serves rather than supplants core human and spiritual values.
AI has the potential to improve human creativity and communication, but its negative aspects also need to be carefully considered. The “misuse” of some AI-generated information is a result of the ethical void in which it may be used, not of the technology itself. In order to preserve truth, culture, and human dignity in the digital era, a balanced strategy that capitalizes on AI’s advantages while putting moral boundaries in place is necessary.
A diversified strategy is needed to address the issues raised by AI. Any regulatory response must include media literacy programs, religious instruction, and education. A more knowledgeable and resilient society will result from enabling people to engage critically with AI technologies rather than passively accepting their outputs. We can maximize AI's advantages while reducing its hazards by encouraging a culture of ethical reflection and ongoing development. To navigate the complexities of AI and ensure its appropriate and beneficial integration into our lives, legislators, educators, engineers, and the general public must work together. AI is an assistive tool that should not replace critical thinking in academic writing, and it has no Holy Spirit to dictate spiritual and theological realities.