• theological risks in unsupervised AI content, and governance strategies for
Shariah compliance.
Expanded Analytical Framework: Addressing Technical Risks
To align the methodology with the critical discussion of technical risks in the results, the analysis extended beyond
conceptual theological concerns to include the technical mechanisms of AI failure. The literature review
specifically sought content that described or hypothesized about the technical processes underlying
misinterpretation and bias, namely:
• Algorithmic Bias and Inference: Examining studies that discuss how dataset contamination (using
unverified or contradictory training data) leads to outputs inconsistent with orthodox Salafi methodology.
• Case Study and Application Analysis: While lacking direct empirical data, the methodology involved the
conceptual analysis of documented AI applications (e.g., general Islamic Q&A chatbots) to infer the
potential risks and opportunities if such systems were applied specifically within Salafi institutional
contexts, forming the basis for the empirical examples discussed.
This expanded framework ensures that the theoretical concerns regarding the distortion of the Salafi creed are
directly linked to verifiable technical flaws in AI development and data management, supporting the detailed
discussion in the results section.
RESULTS AND DISCUSSION
Opportunities in AI-based Da’wah
AI enhances accessibility to Islamic knowledge via chatbots and interactive applications, allowing personalized
learning experiences and immediate response systems for faith-related queries (Hassan et al., 2023).
To ground the discussion, empirical examples illustrate AI’s influence. Several Islamic (though not Salafi-specific) Q&A chatbots built on large language models are already used for immediate faith-related queries.
While beneficial for accessibility, a potential case study for a Salafi-aligned institution would be an automated
Fatwa generation system. Such a system, trained exclusively on verified classical Salafi texts (e.g., works by Ibn
Taymiyyah and Ibn Baz) and contemporary scholarly rulings, could significantly scale the reach of meticulously
screened teachings globally. Another example involves AI-powered content filtering to identify and flag content
that promotes speculative theology (Kalam) or innovations (Bid’ah), ensuring that only materials consistent with
the methodology of Ahlus Sunnah wal Jamaah are disseminated.
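The filtering mechanism described above can be illustrated with a minimal sketch. This is a hypothetical rule-based pre-screening step, not a description of any deployed system; the term list and the `flag_content` helper are illustrative assumptions, and a real pipeline would route flagged items to scholars for review rather than decide automatically.

```python
# Hypothetical sketch: rule-based pre-screening of da'wah content.
# The term list and helper name are illustrative assumptions.

FLAGGED_TERMS = {
    "kalam": "speculative theology (Kalam)",
    "bid'ah": "innovation (Bid'ah)",
}

def flag_content(text: str) -> list[str]:
    """Return human-readable reasons why a text needs scholarly review."""
    lowered = text.lower()
    return [reason for term, reason in FLAGGED_TERMS.items() if term in lowered]

# Any non-empty result routes the item to a scholar instead of auto-publishing.
```

In practice such keyword rules would only be a first pass; the point of the sketch is that the flagging criteria are human-defined, keeping scholars, not the model, as the arbiters of what is disseminated.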
Risks of Misinterpretation
Without scholarly supervision, AI systems risk propagating content inconsistent with Salafi methodology.
Training data drawn from unverified sources can embed philosophical biases contrary to orthodox creed (AlJarhi,
2020).
The theological risks of misinterpretation and bias stem from specific technical mechanisms within AI systems,
especially Large Language Models (LLMs). Dataset Contamination: Bias is introduced when the training data
(Shariah-compliant datasets) is “contaminated”. This occurs if the dataset, though vast, includes texts from
schools of thought or philosophical traditions (e.g., Mu’tazilah or even extreme interpretations) that contradict
the orthodox Salafi creed. The AI system, via algorithmic inference, cannot distinguish between verified and
unverified content without human-defined tags, leading it to output responses that embed philosophical biases.
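The role of human-defined tags can be made concrete with a brief sketch. The corpus entries, field names, and the `build_training_set` helper are illustrative assumptions: the idea is simply that scholars assign a provenance tag to each document before training, and only explicitly verified material enters the corpus.

```python
# Hypothetical sketch of the human-defined tagging described above.
# Field names and tag values are illustrative assumptions.

corpus = [
    {"text": "Excerpt A", "source_tag": "verified_salafi"},
    {"text": "Excerpt B", "source_tag": "unverified"},
    {"text": "Excerpt C", "source_tag": "verified_salafi"},
]

def build_training_set(docs):
    """Keep only documents scholars have explicitly tagged as verified."""
    return [d["text"] for d in docs if d["source_tag"] == "verified_salafi"]
```

Without such tags, the model ingests Excerpt B alongside the verified material and has no mechanism to distinguish it at inference time, which is precisely the contamination pathway the text identifies.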
• Algorithmic Inference and Oversimplification: Misinterpretation often occurs through the process of
algorithmic inference. AI models are designed to find patterns and provide the most probable answer,
which can lead to oversimplification of complex theological issues (‘Aqidah). For instance, an AI might