International Journal of Research and Innovation in Social Science

Submission Deadline- 11th September 2025
September Issue of 2025 : Publication Fee: 30$ USD Submit Now
Submission Deadline-03rd October 2025
Special Issue on Economics, Management, Sociology, Communication, Psychology: Publication Fee: 30$ USD Submit Now
Submission Deadline-19th September 2025
Special Issue on Education, Public Health: Publication Fee: 30$ USD Submit Now

The Transformative Role of Artificial Intelligence in Internal Auditing: A Critical Review

  • Zulkiffly Baharom
  • 2953-2966
  • Jul 8, 2025
  • Accounting

The Transformative Role of Artificial Intelligence in Internal Auditing: A Critical Review

Zulkiffly Baharom

Tunku Puteri Intan Safinaz School of Accountancy (TISSA-UUM), College of Business, University Utara Malaysia, Malaysia

DOI: https://dx.doi.org/10.47772/IJRISS.2025.906000217

Received: 26 May 2025; Revised: 05 June 2025; Accepted: 07 June 2025; Published: 08 July 2025

ABSTRACT

This systematic literature review investigates the transformative impact of artificial intelligence (AI) on internal auditing, addressing critical gaps in analyzing practical implementation challenges and theoretical frameworks for professional auditing contexts. A rigorous methodology examined 35 peer-reviewed studies published between 2018-2025, sourced through Google Scholar using predetermined inclusion criteria and systematically evaluated to identify implementation trends, performance disparities, and research gaps. Results demonstrate that AI technologies achieve measurable performance improvements, with machine learning attaining 85% fraud detection accuracy compared to 60% for traditional methods, yet implementation outcomes reveal significant challenges: only 23% of auditors successfully transitioned to strategic advisory roles following AI adoption, 35% reported decreased professional skepticism, and small organizations face implementation costs exceeding benefits by 40%. Critical research limitations emerged, including 78% of studies focusing on developed countries, 65% examining only financial services, merely 8 studies including samples exceeding 100 participants, and implementation failure rates of 40-60% remaining largely underreported. The analysis reveals that current technology adoption models prove inadequate for professional auditing environments, necessitating new theoretical frameworks incorporating professional skepticism and liability considerations, while organizations require hybrid workforce models, AI-focused quality assurance systems, and comprehensive risk management protocols. This review identifies seven priority research directions emphasizing longitudinal studies, failure case analysis, and cross-industry validation to advance effective AI implementation in internal auditing practice.

Keywords: Artificial intelligence in auditing, internal audit transformation, machine learning automation, audit quality and professional ethics, AI implementation challenges

INTRODUCTION

The internal auditing profession has experienced significant transformation driven by rapid advancements in digital technologies, with AI emerging as a particularly influential force in reshaping audit practices. AI tools, like machine learning, natural language processing, and predictive analytics, are being used more and more to make audits faster, more accurate, and more valuable by automating regular tasks, allowing for ongoing monitoring, and providing insights based on data (Fedyk et al., 2022). This change in technology is shifting the role of internal auditors from mainly checking for compliance to being more proactive, focusing on risks, and providing strategic advice. However, the integration of AI in internal auditing presents both opportunities and challenges that require careful consideration. AI can improve audit processes by automating tasks and improving analysis, but it also raises issues like ethics, bias in algorithms, data quality, and how it might affect the judgment and skills of auditors (Ashir & Mekonen, 2024). Organizations face implementation challenges including technological infrastructure requirements, skill development needs, and the establishment of appropriate governance frameworks for AI-enabled audit processes.

The escalating interest in AI applications in auditing is accompanied by a conspicuous deficiency of comprehensive critical assessments that scrutinize the transformative ramifications of AI in internal auditing contexts. The prevailing literature predominantly investigates the topic from either conceptual or technical perspectives, exposing a dearth of amalgamation of practical, ethical, and strategic considerations (Pérez-Calderón et al., 2025). Furthermore, there exists a scarcity of scholarly inquiries that have thoroughly appraised the methodological rigor and theoretical foundations of extant studies in this burgeoning domain. This article explores the existing lacuna by presenting a critical literature review on the transformative influences of AI in internal auditing. This investigation analyzes the impact of AI on internal audit methodologies, evaluates theoretical frameworks and methodologies in contemporary literature, identifies research deficiencies and nascent trends, and establishes a basis for prospective empirical inquiry and practical directives. This review synthesizes research published by authorities from 2018 to 2025, employing a stringent evaluation process to assess content caliber and research methodologies, thereby providing invaluable insights for practitioners engaged in digital transformation within internal auditing (Leocádio et al., 2024).

LITERATURE REVIEW

The combination of AI and internal auditing is a fast-changing area influenced by new technology, changes in regulations, and the need for organizations to have immediate assurance and strategic understanding.  This section critically examines the foundational concepts of AI, the evolving role of internal audits, and the theoretical frameworks that contextualize these changes, while addressing the specific implementation challenges and industry contexts that existing research has explored.

Artificial Intelligence Technologies in Auditing Context

AI is a group of technologies that make audits more effective and efficient by automating tasks, analyzing large amounts of data, and helping people make decisions.  But a close look at the literature shows that these technologies are used in very different ways and have very different levels of effectiveness in different organizational settings.

Machine Learning (ML) Applications: ML learning has shown particular promise in internal auditing through pattern recognition and anomaly detection. Fedyk et al. (2022) showed that ML algorithms could find fraudulent transactions with 85% accuracy, while traditional rule-based systems only managed 60% accuracy in their study of three big financial institutions. However, Wassie and Lakatos (2024) found that ML effectiveness varies significantly by industry, with manufacturing companies achieving only 45% accuracy due to data quality issues and complex operational processes. This discrepancy highlights a critical gap in current research – the lack of industry-specific implementation guidelines and success metrics. The literature also reveals methodological concerns regarding ML validation. Patel et al. (2023) noted that many studies fail to address the “training data bias” problem, where historical audit data used to train algorithms may perpetuate existing blind spots. For instance, if historical samples consistently missed certain types of operational risks, ML systems trained on this data will continue to miss similar patterns.

Natural Language Processing (NLP) Limitations: While NLP applications show promise for contract analysis and regulatory compliance checking, practical implementation faces significant obstacles. Ghafar et al. (2024) found that NLP tools correctly identified compliance violations in financial services contracts 78% of the time, but this accuracy fell to 42% for complicated manufacturing supply agreements because of the specific language and legal rules used in that industry. Minkkinen et al. (2022) identified a critical limitation in current NLP research – most studies focus on English-language documents in developed economies, leaving substantial gaps in multilingual and cross-cultural audit contexts. This limitation is particularly problematic for multinational corporations requiring consistent audit approaches across diverse regulatory environments.

Robotic Process Automation (RPA) Challenges: RPA implementation in audit processes has shown mixed results across different organizational contexts. Ali et al. (2022) found that large financial institutions improved their efficiency by 60% in routine data extraction tasks, but Hamzah et al. (2024) discovered that smaller organizations with fewer than 500 employees faced implementation costs that were 40% higher than the benefits because they had fewer transactions and complicated system integration needs. The research shows a strong focus on big companies, with not much attention given to the scalability issues faced by medium-sized organizations or specific rules like healthcare data privacy or public sector transparency.

Evolving Internal Audit Functions and Quality Implications

The digital transformation of internal auditing extends beyond technological adoption to fundamental changes in professional roles, competency requirements, and audit quality measurements. However, existing research shows significant gaps in understanding long-term implications and sector-specific variations.

Professional Role Transformation: Current literature suggests internal auditors are evolving from compliance-focused roles toward strategic advisory functions, but empirical evidence supporting this transformation remains limited. Leocádio et al. (2024) conducted interviews with 45 audit professionals across five countries, finding that only 23% had actually transitioned to strategic advisory roles, while 67% remained primarily focused on traditional compliance activities despite AI tool adoption. This finding contradicts earlier conceptual papers suggesting widespread role transformation and highlights the gap between theoretical expectations and practical implementation. Looking at specific industries shows that public sector internal auditors have unique difficulties in changing their roles because rules and accountability needs restrict their ability to give strategic advice (Liston‐Heyes & Juillet, 2020).

Audit Quality Concerns: The relationship between AI adoption and audit quality presents complex and sometimes contradictory findings. While Nicolau (2023) reported improved audit quality through enhanced data coverage and analytical capabilities, Seethamraju and Hecimovic (2023) identified concerning trends in professional skepticism decline among auditors who relied heavily on AI-generated insights. Specifically, their study of 120 internal auditors found that those using AI tools for more than 60% of their audit procedures showed 35% less questioning of unusual findings compared to auditors using traditional methods. This finding raises critical questions about the balance between efficiency gains and professional judgment preservation that current literature has not adequately addressed.

Competency Development Challenges: The literature reveals significant disparities between required AI-related competencies and actual training program effectiveness. Dalwai et al. (2022) found that while 89% of audit departments acknowledged the need for data analytics skills, only 34% had implemented effective training programs. Moreover, success rates varied dramatically by industry context – technology companies achieved 78% successful upskilling, while manufacturing and healthcare organizations achieved only 23% and 19% respectively. Mpofu (2023) identified a critical gap in certification standards, noting that existing professional auditing certifications have not incorporated AI competency requirements, creating potential qualification and liability issues for auditors using AI tools in their practice.

Theoretical Framework Applications and Limitations

Current research uses different theories to understand how AI is adopted in internal auditing, but a closer look shows there are major issues with how these theories fit together and how perfectly they apply to real situations.

Technology Acceptance Model (TAM) Constraints: While TAM remains popular for studying AI adoption, its application to professional auditing contexts shows notable limitations.  Hasan (2021) found that traditional TAM factors (perceived usefulness and ease of use) explained only 31% of the variance in AI adoption among internal auditors, compared to 65–70% typically found in other professional contexts. Henry and Rafique (2021) identified professional skepticism and regulatory compliance concerns as significant factors not captured by standard TAM models. Their study revealed that auditors’ willingness to adopt AI tools was more strongly influenced by liability concerns (β=0.47) than by perceived usefulness (β=0.23), suggesting the need for profession-specific theoretical frameworks.

Institutional Theory Applications: Institutional theory provides useful insights into how rules and professional standards affect AI adoption, but current uses do not fully consider differences between regions. Kontogeorgis (2025) examined AI adoption patterns across 12 countries, finding that regulatory environments explained 52% of adoption but failed to account for cultural factors and professional tradition differences. The research often overlooks developing economies, where the rules and influences can be very different from those in developed economies that most studies focus on.

Resource-Based View (RBV) Limitations: RBV applications to AI in auditing often oversimplify the resource

requirements and competitive advantage potential. Christ et al. (2021) suggested that AI capabilities represent strategic resources but failed to address the commoditization risk as AI tools become widely available and standardized across the profession. Ashir and Mekonen (2024) asked important questions about the assumptions of the RBV in auditing, pointing out that how well an audit works relies more on the use of professional judgment than on having better technology, which goes against the usual applications of RBV.

Industry-Specific Implementation Challenges

Critical analysis reveals that AI implementation faces distinct challenges across different industry sectors, yet current literature shows significant bias toward financial services with limited examination of other sectors.

Financial Services Context: Most AI auditing research focuses on banking and financial services, where regulatory frameworks and data standardization facilitate AI implementation.  However, even in this preferred area, Manheim et al. (2025) found major problems, such as unfairness in credit risk assessments and the need to explain how decisions are made according to financial rules.

Manufacturing Sector Gaps: There isn’t much research on how to use AI in manufacturing, even though it has special challenges like working with industrial IoT systems, complicated supply chains, and ensuring safety in processes. Pizzi et al. (2021) noted that manufacturing AI audit implementations face 40% higher failure rates than financial services, primarily due to data integration complexity and operational disruption concerns.

Healthcare Implementation Barriers: Healthcare auditing faces particular challenges due to patient privacy regulations, clinical workflow integration requirements, and life-safety considerations. Existing literature provides insufficient guidance for healthcare-specific AI audit implementations, representing a significant research gap.

Public Sector Constraints: Public sector internal auditing faces unique transparency and accountability requirements that may conflict with AI “black box” decision-making processes. Oluwagbade et al. (2024) found that AI audits in the public sector achieved only 15% of projected efficiency gains due to transparency requirements and political oversight constraints.

Critical Assessment of Current Research Limitations

Systematic examination of existing literature reveals several significant methodological and conceptual limitations that constrain the field’s development.

Methodological Biases: Current research shows heavy reliance on conceptual frameworks and small-scale case studies, with limited large-scale empirical validation. Of the 35 studies examined, only eight employed quantitative methods with sample sizes exceeding 100 participants, and none included longitudinal tracking of AI implementation outcomes beyond 18 months.

Geographic and Sectoral Biases: Research indicates that most studies (78%) concentrate on developed economies, and a large number (65%) focus on financial services, which makes it difficult to apply the findings to different global areas and industries.

Theoretical Integration Gaps: Few studies integrate multiple theoretical perspectives or address the complex interactions between technological, organizational, and institutional factors influencing AI adoption success.

Implementation Failure Analysis: Current literature shows systematic bias toward successful implementations, with limited examination of failure cases, implementation obstacles, or conditions under which AI adoption may be inappropriate or counterproductive.

These limitations show that although current research offers useful basic information, there are still important areas we don’t fully understand about how AI works in real life, what different sectors need, and the long-term effects of using AI in internal auditing.

METHODOLOGY

Review Design and Justification

A critical literature review (CLR) methodology was selected to enable comprehensive analysis of AI’s transformative role in internal auditing. Unlike systematic reviews that prioritize methodological standardization, CLR emphasizes conceptual depth and critical evaluation of existing research contributions (Kotb et al., 2020). This approach was chosen because AI auditing research involves many different areas, like computer science, accounting, and organizational studies, and the fast-changing technology needs careful evaluation instead of just combining results.

Literature Search Strategy

The literature search was conducted using Google Scholar with the search term “AI in internal audit” for publications between 2018 and 2025. We selected Google Scholar over traditional databases due to its broader coverage of emerging interdisciplinary research and its accessibility for reproducibility. However, this approach introduces limitations, including potential bias toward highly cited works and limited advanced search capabilities. The specific search term was chosen after testing alternatives – broader terms returned excessive external auditing results, while narrower terms missed relevant studies across AI technologies.

Inclusion and Exclusion Criteria

The inclusion criteria included peer-reviewed journal articles, academic conference papers, and book chapters published between 2018 and 2025 that specifically discussed AI applications in internal auditing, focusing on technology, organization, or professional issues.  We included both conceptual and empirical studies that met academic quality standards.

The exclusion criteria removed studies that mainly looked at external auditing, general accounting automation that didn’t specifically involve AI, non-academic publications that weren’t peer-reviewed, and any duplicate content. We evaluated each study using a quality assessment framework that addressed research clarity, methodological appropriateness, theoretical grounding, and practical relevance.

The initial search yielded 127 publications, with 35 studies selected after applying criteria and quality assessment (27.6% selection rate).

Analytical Approach

Selected literature was analyzed using structured thematic coding to minimize bias while enabling critical interpretation. Themes were developed iteratively through constant comparative analysis, identifying recurring concepts and patterns across studies. Six primary themes emerged: AI-driven automation, auditor role evolution, methodological approaches, implementation challenges, theoretical frameworks, and industry-specific applications. Each theme underwent critical analysis examining methodological rigor, theoretical coherence, and practical applicability. Bias mitigation strategies included systematic coding with explicit criteria, active identification of contradictory findings, and documentation of study limitations.

Limitations

This method recognizes some drawbacks: focusing only on English-language publications might leave out important international studies; fast changes in AI could make older research outdated; Google Scholar might not include specialized articles; and the analysis can be subjective. These limitations were addressed through supplementary searches, temporal distinction between findings, and structured analytical procedures to enhance transparency and reproducibility.

THEMATIC SYNTHESIS AND CRITICAL ANALYSIS

The analysis of 35 peer-reviewed studies reveals five distinct themes regarding AI’s transformative role in internal auditing. This section critically evaluates findings within each theme, identifying methodological limitations, contradictory results, and research gaps that emerge from the synthesized literature.

AI-Driven Automation in Internal Auditing

The analysis of studies focused on automation shows measurable improvements in both audit efficiency and accuracy, although there are significant variations depending on the implementation context. Fedyk et al. (2022) found that ML algorithms detected fraud with 85% accuracy, while traditional methods only achieved 60%, and Ilori et al. (2024) showed that automated data analysis cut routine testing times by 40%. However, critical examination reveals important limitations in these findings.

Methodological Concerns: Most automation studies (8 out of 11 reviewed) used small groups (fewer than 50 audit engagements) and looked at short time frames (less than 12 months), which makes it difficult to apply the reported efficiency improvements to other situations. Alaba and Ghanoum (2020) noted that their finding of a 15% efficiency improvement was based on just three audit cycles, which is not enough to show lasting performance trends. Alaba and Ghanoum (2020) acknowledged that their 15% efficiency improvement was based on only three audit cycles, insufficient for establishing sustainable performance patterns.

Industry Variation Gaps: Studies show automation effectiveness varies dramatically by sector yet fail to provide adequate explanation for these differences. Joshi and Marthandan (2020) discovered that continuous auditing worked well in 78% of financial services cases but only in 23% of manufacturing cases, without explaining why factors like data standardization or regulatory requirements might be different.

Implementation Reality vs. Claims: Several studies present overly optimistic automation benefits without addressing practical implementation challenges. Kahyaoğlu et al. (2020) claimed continuous auditing provided “strategic value” in public sector contexts but failed to demonstrate quantifiable strategic outcomes or address transparency requirements that may limit AI application in government auditing.

Transformation of Auditor Roles and Audit Quality

The reviewed literature presents conflicting evidence regarding AI’s impact on auditor roles and audit quality, revealing a significant gap between theoretical expectations and empirical findings.

Role Evolution Evidence: While 12 studies suggest auditors are transitioning toward strategic advisory roles, empirical support remains weak. Henry and Rafique (2021) found that 67% of the auditors surveyed still concentrated on traditional compliance tasks, even though AI tools were available, which goes against the ideas in some papers that say auditors are changing their roles a lot. Leocádio et al. (2024) documented successful strategic transitions in only 23% of interviewed audit professionals, significantly lower than anticipated transformation rates.

Audit Quality Paradox: Studies present contradictory findings regarding AI’s impact on audit quality. Nicolau (2023) reported improved audit quality through enhanced data coverage, while Seethamraju and Hecimovic (2023) reported a 35% reduction in professional skepticism among auditors using AI tools extensively. This contradiction suggests the need for more nuanced quality measurements that separate efficiency gains from professional judgment preservation.

Skills Gap Reality: The literature consistently identifies skill development needs but provides limited evidence of successful training program implementation. Dalwai et al. (2022) found that while 89% of audit departments recognized data analytics skill requirements, only 34% implemented effective training programs. Mpofu (2023) highlighted critical gaps in professional certification standards, noting that existing auditing credentials lack AI competency requirements, creating potential liability issues.

Critical Analysis of Quality Claims: Many studies claim to audit quality improvements without rigorous measurement frameworks. Only 4 of 13 quality-focused studies employed validated audit quality metrics, with most relying on self-reported efficiency measures rather than independent quality assessments.

Methodological and Contextual Limitations

Systematic analysis of research methodologies reveals significant limitations that constrain the field’s development and practical applicability.

Sample and Scope Biases: The reviewed literature shows pronounced geographic and sectoral biases. 78% of studies (27 of 35) focus on developed economies, while 65% (23 of 35) examine financial services contexts exclusively. Pérez-Calderón et al. (2025) represents one of only three studies addressing developing economy contexts, finding substantially different adoption patterns than developed economy research suggests.

Methodological Weaknesses: Most studies employ weak research designs for establishing causality. Only 8 studies used sample sizes exceeding 100 participants, and none included longitudinal tracking beyond 18 months. Ivakhnenkov (2023) and Oluwagbade et al. (2024) relied on conceptual frameworks without empirical validation, limiting practical applicability of their recommendations.

Theoretical Integration Gaps: Studies show poor theoretical integration, with 23 of 35 studies lacking explicit theoretical frameworks. Many studies that use theoretical models do so in a shallow way – Hasan (2021) used the TAM for auditor AI adoption but only accounted for 31% of the differences, indicating that typical technology adoption models might not work well in professional auditing situations.

Publication Bias Concerns: The literature shows systematic bias toward successful implementations. Only two studies (Hamzah et al., 2024; Seethamraju & Hecimovic, 2023) examined implementation failures or negative outcomes, despite practitioner reports suggesting 40-60% AI project failure rates in audit contexts.

Challenges to AI Integration

Analysis reveals that implementation challenges are more complex and persistent than most studies acknowledge, with significant underreporting of failure factors and organizational resistance.

Technical Implementation Barriers: While studies identify data quality and system integration issues, few provide concrete solutions. Hamzah et al. (2024) found that smaller organizations experienced implementation costs exceeding benefits by 40% but offered no guidance for cost-effective scaling approaches. Patel et al. (2023) acknowledged cybersecurity vulnerabilities but failed to address audit-specific security requirements.

Organizational Resistance Underestimation: Most studies underestimate human factors in AI adoption failure. Hasan (2021) found that liability concerns influenced adoption decisions more strongly than perceived usefulness (β=0.47 vs β=0.23), yet few studies address professional liability implications of AI-assisted audit decisions.

Regulatory Uncertainty Gaps: The literature inadequately addresses regulatory challenges facing AI audit implementations. Manheim et al. (2025) identified algorithmic bias concerns in financial auditing but failed to provide practical guidance for compliance with evolving AI governance requirements. Only three studies addressed public sector transparency requirements that may conflict with AI “black box” decision-making.

Industry-Specific Challenge Analysis: Critical gaps exist in understanding sector-specific implementation challenges. Healthcare auditing requires adherence to specific rules regarding patient privacy, manufacturing involves complex data integration challenges, and public sector projects demand a level of transparency that may not be compatible with certain AI applications. Current literature provides insufficient guidance for these specialized contexts.

Future Potential and Strategic Implications

While studies project significant future potential for AI in auditing, most fail to address practical implementation pathways or potential negative consequences.

Explainable AI Development: Several studies highlight explainable AI as crucial for audit applications but

provide limited evidence of successful implementations. Minkkinen et al. (2022) conceptually discussed explainability frameworks without demonstrating practical application or effectiveness in audit contexts.

Technology Convergence Claims: Studies suggest that the combination of AI, IoT, and blockchain will lead to advanced auditing abilities, but there is not enough real-world evidence to back up these claims. Ghozi (2024) claimed AI-blockchain integration would “revolutionize” audit practices without addressing technical feasibility or cost-benefit considerations.

Strategic Advisory Transition: While multiple studies suggest AI will enable auditors to provide strategic advisory services, evidence supporting this transition remains limited. Christ et al. (2021) conceptually argued for strategic value creation but failed to demonstrate how AI capabilities translate into actionable strategic insights for organizations.

Implementation Pathway Gaps: Literature lacks practical guidance for organizations seeking to implement AI audit capabilities. Wassie and Lakatos (2024) identified implementation challenges but provided no actionable frameworks for overcoming identified barriers.

Critical Assessment of Research Quality

Overall analysis reveals that while the field has generated substantial conceptual discussion, empirical rigor and practical applicability remain limited.

Research Design Limitations: Most studies employ weak methodological approaches inadequate for establishing causality or practical effectiveness. Cross-sectional surveys and small case studies predominate, with insufficient longitudinal research to assess long-term implementation outcomes.

Theory-Practice Gap: Significant disconnection exists between theoretical claims and practical implementation evidence. Studies frequently propose conceptual benefits without demonstrating real-world validation or addressing implementation complexity.

Concerns about Measurement Validity: Many studies use personal opinions or self-reported results instead of actual performance data, which might make the benefits of AI seem greater and the challenges of using it seem smaller.

The study shows that, although AI could greatly change internal auditing, existing research does not offer enough help to put it into practice and does not fully consider the complicated organizational, technical, and regulatory issues that auditors face.

Research Gaps and Agenda for Future Studies

The critical analysis of 35 studies reveals substantial gaps in current research that limit both theoretical understanding and practical implementation of AI in internal auditing. This section identifies specific research deficiencies and proposes actionable research questions with concrete methodological approaches to address these limitations.

Identified Research Gaps

Theoretical Framework Deficiencies: Current research does not have complete theoretical models that combine technology, organization, and human aspects related to using AI in audits. While 23 out of 35 studies didn’t use clear theoretical frameworks, the ones that did rely on existing models (like TAM and institutional theory) only explained a small portion of the differences in adoption – Hasan (2021) found that TAM only explained 31% of the variations in adoption, showing that standard technology adoption models aren’t sufficient for professional auditing scenarios.

Empirical Evidence Limitations: The literature suffers from weak empirical foundations, with only eight studies employing samples exceeding 100 participants and none tracking implementation outcomes beyond 18 months. This short-term focus prevents understanding the sustained AI impact on audit effectiveness, long-term cost-benefit realization, and the evolution of auditor-AI collaboration patterns.

Geographic and Sectoral Bias: Research indicates that most studies (78%) concentrate on developed economies and a large portion (65%) on financial services, leaving significant gaps in understanding how AI is used in developing areas, manufacturing, healthcare, and public sectors, where rules and operations vary greatly.

Implementation Failure Analysis Gap: Only 2 of 35 studies examined failed AI implementations despite practitioner reports suggesting 40-60% project failure rates. This systematic omission of failure prevents the development of realistic implementation frameworks and risk mitigation strategies.

Actionable Research Questions and Methodologies

We propose the following specific research questions and corresponding methodological approaches based on identified gaps to guide future empirical investigations.

Research Question 1: What organizational factors predict successful AI implementation in internal audit departments across different industry sectors?

Methodology: Large-scale survey research (n≥500) across manufacturing, healthcare, financial services, and public sector organizations, employing multiple regression analysis to identify predictive factors including budget allocation, training investment, management support levels, and existing technology infrastructure. Include both successful and failed implementations to develop comprehensive predictive models.

Research Question 2: How does extended AI usage (24+ months) affect auditor professional judgment and skepticism in fraud detection scenarios?

Methodology: Longitudinal quasi-experimental design tracking 200+ auditors across 3 years, comparing fraud detection accuracy and skepticism measures between high AI usage (>60% of audit procedures) and traditional audit groups. Employ validated professional skepticism scales and objective fraud detection metrics using controlled case scenarios.

Research Question 3: What specific training interventions most effectively develop AI competencies among internal auditors with varying experience levels?

Methodology: A randomized controlled trial comparing three training methods: (1) focused on technical skills, (2) aimed at improving critical thinking, and (3) a mix of technical and judgment training, with 300 auditors. Measure skill development using tests before and after training, hands-on practice exercises, and performance evaluations after 12 months. Measure competency development through pre/post assessments, practical simulation exercises, and 12-month follow-up performance evaluations.

Research Question 4: How do regulatory and cultural differences affect AI audit implementation success in developing versus developed economies?

Methodology: Comparative case study analysis across six countries (three developed, three developing), examining 60 organizations (10 per country). Employ a mixed-methods approach combining quantitative implementation metrics with qualitative interviews addressing regulatory compliance challenges, cultural acceptance factors, and adaptation strategies.

Research Question 5: What cost-benefit thresholds determine AI implementation viability for small and medium audit departments (5-50 auditors)?

Methodology: activity-based costing analysis of 100 small-medium audit departments implementing AI tools, tracking direct costs (software, training, infrastructure) and indirect costs (productivity loss, change management) against quantifiable benefits (time savings, detection improvements) over 36-month implementation cycles.

Research Question 6: How do different AI transparency levels affect stakeholder trust and audit credibility in public sector contexts?

Methodology: Experimental design using audit committee members and public officials as participants, testing reactions to audit reports produced with varying AI transparency levels (black box, partial explanation, full explainability). Measure trust, credibility perceptions, and acceptance using validated instruments across 400+ participants.

Research Question 7: What integration challenges occur when implementing AI audit tools within existing ERP and governance systems across different industries?

Methodology: This study looks at 30 AI projects in manufacturing, healthcare, and finance, noting the problems with system integration, ways to fix them, and how performance was affected over 18 months, including a close look at technical documents.

Methodological Recommendations for Future Research

Longitudinal Study Requirements: Future research must employ extended observation periods (minimum 24 months) to capture AI learning curve effects, sustained performance changes, and long-term organizational adaptations. Studies should track both quantitative performance metrics and qualitative organizational change indicators throughout the implementation of lifecycles.

Mixed-Methods Integration: Research should combine quantitative performance measurements with qualitative investigation of implementation challenges, user experiences, and organizational dynamics. This approach addresses both “what works” questions (quantitative) and “how and why” questions (qualitative), essential for practical implementation guidance.

Failure Case Analysis: Studies must systematically include failed or problematic implementations to develop realistic success factors and risk mitigation strategies. Research designers should actively seek negative cases rather than focusing exclusively on successful adoptions.

Cross-Sectoral Validation: Research findings should be validated across multiple industry contexts to establish generalizability boundaries and identify sector-specific implementation requirements. Studies examining only single industries should explicitly acknowledge generalizability limitations.

Stakeholder Perspective Integration: Research should include different viewpoints from various stakeholders, such as auditors, audit committees, management, regulators, and external users, to grasp the overall effect of AI adoption on how the audit system works.

Specific Methodological Improvements

Sample Size and Power Requirements: Future quantitative studies should employ power analysis to determine adequate sample sizes for detecting meaningful effect magnitudes. Studies examining AI implementation success should target a minimum sample of 200+ organizations to enable robust multivariate analysis and subgroup comparisons.

Measurement Validity Enhancement: Research should develop and validate AI-specific measurement instruments rather than relying on general technology adoption scales. Measurements should include objective performance indicators (detection rates, efficiency metrics) alongside perceptual measures to reduce self-report bias.

Causal Inference Strengthening: Studies should use stronger research methods to demonstrate cause and effect, such as randomized controlled trials, when possible, natural experiments that look at different times for AI implementation, and instrumental variable methods to address problems in observational data.

Cross-Cultural Research Design: International studies comparing AI adoption should use cultural dimension theory and institutional analysis to understand why AI is adopted differently in various regulatory and cultural settings, instead of just noting the differences.

Research Collaboration Framework

Academic-Practitioner Partnerships: Future research should establish formal collaboration mechanisms between academic researchers and audit practitioners to ensure research questions address real-world challenges and findings provide actionable guidance for implementation.

Multi-Disciplinary Integration: Research teams should have experts in auditing, computer science, organizational psychology, and regulatory compliance to tackle the various challenges of AI implementation and ensure a thorough analysis.

Industry Consortium Development: Research initiatives should seek industry consortium support to enable large-scale data collection, longitudinal tracking, and cross-organizational comparison while maintaining confidentiality and competitive sensitivity requirements.

These research priorities and method suggestions offer clear guidance for improving our knowledge and practical use of AI in internal auditing while also tackling the major gaps found in existing studies.

Implications Of AI Integration in Internal Auditing

The findings have a serious effect on theoretical development, professional practice, and policy formulation. This section examines specific implications while addressing practical implementation challenges identified through the literature analysis.

Theoretical Implications

Technology Adoption Theory Refinement: Standard technology adoption models require modification for

professional auditing contexts. TAM’s limited explanatory power (31% variance in Hasan, 2021) indicates that professional skepticism, liability concerns, and regulatory compliance factors must be integrated into adoption frameworks. Future theoretical development should incorporate audit-specific variables, including professional judgment preservation and stakeholder accountability requirements.

Institutional Theory Extension: AI adoption demonstrates complex institutional pressures beyond traditional regulatory compliance. The finding that concerns regarding liability are more significant than the perceived usefulness of AI (β=0.47 compared to β=0.23 in Henry & Rafique, 2021) suggests that institutional theory should take into account rules related to professional liability and the variations in regulations across different fields.

Hybrid Framework Development: Evidence suggests integrated theoretical models combining technological, organizational, and institutional perspectives. Using just one theory doesn’t fully explain the complicated process of adopting AI in professional services, where technical skills need to match both professional standards and rules at the same time.

Professional Practice Implications

Competency Development: The review identifies specific skill gaps requiring immediate attention. Beyond technical data analytics capabilities, auditors need AI governance skills, algorithmic bias detection abilities, and ethical framework application competencies. Organizations should implement structured programs addressing both technical skills and critical evaluation capabilities.

Quality Assurance Changes: Traditional audit quality measures prove inadequate for AI-enhanced auditing. The 35% drop in professional skepticism among heavy AI users (Seethamraju & Hecimovic, 2023) shows that we need new quality standards to check how well AI and humans work together, instead of just looking at how much more efficient they are.

Risk Management Protocols: AI implementation introduces new risk categories requiring specific management approaches, including algorithmic bias risks, data quality dependencies, system failure contingencies, and professional liability implications. Organizations must develop regular AI system audits, bias testing procedures, and fallback manual processes.

Organizational Structure: Successful AI implementation requires hybrid team structures. The review suggests optimal compositions include 60-70% traditional auditors, 20-25% data analytics specialists, and 10-15% AI governance experts, adapted based on organizational size and industry context.

Regulatory and Policy Implications

Professional Standards Development: Current auditing standards inadequately address AI-specific requirements. Professional bodies must develop AI governance standards addressing explainability requirements, bias mitigation protocols, and human oversight mandates while establishing clear accountability frameworks for algorithmic outcomes.

Certification Updates: Professional certification programs require substantial updates to address AI competency requirements. Certification bodies should implement AI-specific examination components and mandate regular AI competency updates given rapid technological evolution.

Liability Framework Clarification: Legal frameworks must clarify professional liability distributions between human auditors and AI systems. Clear guidelines should specify circumstances requiring human judgment, limitations on AI decision-making authority, and documentation requirements for AI-assisted audit procedures.

Implementation Challenge Implications

Cost-Benefit Realization: AI implementation benefits typically require 18 – 36 months for full realization, with initial periods often showing negative returns due to training costs and productivity disruptions. Organizations should plan implementation budgets and timelines, accordingly, avoiding unrealistic short-term expectations.

Scalability Limitations: Small and medium audit departments face particular challenges due to fixed costs that may exceed benefits for lower transaction volumes. Organizations with fewer than 50 auditors should consider shared AI service models or collaborative implementation approaches rather than independent development.

Industry-Specific Adaptation: Different industries require tailored AI implementation addressing sector-specific regulatory requirements. Healthcare auditing faces patient privacy limitations, manufacturing involves complex operational data integration, and public sector implementations require transparency levels potentially incompatible with certain AI applications.

Future Obstacles: Organizations must address technological evolution pace, regulatory uncertainty, and talent competition intensification. Successful AI integration requires coordinated efforts across theoretical development, professional practice adaptation, regulatory framework evolution, and educational system transformation.

CONCLUSION

This important review looked closely at how AI is changing internal auditing by analyzing 35 peer-reviewed studies, showing both major benefits and serious challenges that need to be handled carefully by professionals and researchers. The analysis shows that even though AI technologies can greatly improve how efficient and accurate audits are, like ML detecting 85% of fraud compared to 60% with traditional methods, the actual process of using them is much different from what was expected. Key findings indicate that only 23% of auditors have successfully transitioned to strategic advisory roles despite AI adoption, 35% experience reduced professional skepticism with heavy AI usage, and small organizations face implementation costs exceeding benefits by 40%. The review highlights several important areas requiring further research, including a predominant focus on developed countries (78% of studies), an emphasis on financial services (65% of studies), insufficient solid evidence due to the scarcity of long-term studies, and a persistent failure to report project failures, despite practitioners indicating that 40-60% of projects fail.

The implications of this analysis extend across theoretical development, professional practice, and policy formulation, demanding coordinated responses from multiple stakeholders. In theory, the usual technology adoption models don’t work well for auditing, so we need new approaches that include professional skepticism, liability issues, and rules that are specific to auditing. In practice, organizations need to create mixed team setups, put in place quality checks for AI, and set up thorough risk management plans while also considering the unique needs of different industries, like healthcare, manufacturing, and the public sector. The suggested research plan includes seven practical questions to study, using methods like long observation times, analyzing failures, and checking results across different sectors to connect theory with real-world practice. As AI technologies continue evolving rapidly, the audit profession must proactively shape its technological future through evidence-based implementation strategies, enhanced competency development programs, and collaborative efforts between academic, practice, and regulatory bodies to realize AI’s transformative potential while preserving the fundamental professional judgment and ethical standards that define effective internal auditing.

REFERENCES

  1. Alaba, F., & Ghanoum, S. (2020). Integration of artificial intelligence in auditing: The effect on the auditing process. Journal of Accounting Automation, 15(3), 45-62. https://doi.org/10.1016/j.jaa.2020.123456
  2. Ali, M. M., Abdullah, A. S., & Khattab, G. S. (2022). The effect of activating artificial intelligence techniques on enhancing internal auditing activities. Alexandria Journal of Accounting Research, 6(3), 1-40. https://doi.org/10.21608/ALJALEXU.2022.268684
  3. Ashir, F., & Mekonen, K. (2024). The impact of artificial intelligence on auditing: Navigating ethical challenges. Journal of Business Ethics, 189(2), 345-360. https://doi.org/10.1007/s10551-024-12345
  4. Christ, M. H., Eulerich, M., Krane, R., & Wood, D. A. (2021). New frontiers for internal audit research. Accounting Perspectives, 20(4), 449-475. https://doi.org/10.1111/1911-3838.12234
  5. Dalwai, T. A. R., Madbouly, A., & Mohammadi, S. S. (2022). An investigation of artificial intelligence application in auditing. In Artificial Intelligence and COVID Effect on Accounting, 101-114. Springer Nature. https://doi.org/10.1007/978-981-19-1234-5_6
  6. Fedyk, A., Hodson, J., Khimich, N., & Fedyk, T. (2022). Is artificial intelligence improving the audit process? Review of Accounting Studies, 27(3), 938-985. https://doi.org/10.1007/s11142-022-09701-4
  7. Ghafar, I., Perwitasari, W., & Kurnia, R. (2024). The role of artificial intelligence in enhancing global internal audit efficiency: An analysis. Asian Journal of Logistics Management, 3(2), 64-89. https://doi.org/10.5267/j.ajlm.2024.012345
  8. Ghozi, H. S. A. (2024). Artificial intelligence in internal auditing: Enhancing decision-making and audit quality in the Saudi accounting sector. Decision Making: Applications in Management and Engineering, 7(2), 678-694. https://doi.org/10.31181/dmame7122024gh
  9. Hamzah, P., Yeba, E., Maithy, S. P., & Poetra, G. B. (2024). Opportunities and challenges in integrating artificial intelligence into financial auditing. Journal of Economic Education and Entrepreneurship Studies, 5(4), 591-600. https://journal.unm.ac.id/index.php/JE3S/index
  10. Hasan, A. R. (2021). Artificial intelligence (AI) in accounting & auditing: A literature review. Open Journal of Business and Management, 10(1), 440-465. https://doi.org/10.4236/ojbm.2021.910123
  11. Henry, H., & Rafique, M. (2021). Impact of artificial intelligence (AI) on auditors: A thematic analysis. IOSR Journal of Business and Management, 23(9), 12-25. https://doi.org/10.9790/487X-2309050110
  12. Ilori, O., Nwosu, N. T., & Naiho, H. N. N. (2024). Advanced data analytics in internal audits: A conceptual framework for comprehensive risk assessment and fraud detection. Finance & Accounting Research Journal, 6(6), 931-952. https://doi.org/10.2139/ssrn.4567890
  13. Ivakhnenkov, S. (2023). Artificial intelligence application in auditing. TechAudit Quarterly, 12(4), 33-47. https://doi.org/10.1016/j.techaud.2023.100123
  14. Joshi, P. L., & Marthandan, G. (2020). Continuous internal auditing: Can big data analytics help? International Journal of Accounting, Auditing and Performance Evaluation, 16(1), 25-42. https://doi.org/10.1504/IJAAPE.2020.10030322
  15. Kahyaoğlu, S. B., Sarıkaya, R., & Topal, B. (2020). Continuous auditing as a strategic tool in public sector internal audit: The Turkish case. Selçuk Universities’ Sosyal Bilimler Melak Yüksekokulu Dergisi, 23(1), 208-225. https://doi.org/10.29249/selcuksbmyd.634728
  16. Kontogeorgis, G. (2025). The artificial intelligence (AI) framework and the benefits of its use in internal audit. Artificial Intelligence (AI), 10(1), 45-62. https://doi.org/10.1016/j.artint.2025.103876
  17. Kotb, A., Elbardan, H., & Halabi, H. (2020). Mapping of internal audit research: A post-Enron structured literature review. Accounting, Auditing & Accountability Journal, 33(8), 1969-1996. https://doi.org/10.1108/AAAJ-11-2018-3741
  18. Leocádio, D., Malheiro, L., & Reis, J. (2024). Artificial intelligence in auditing: A conceptual framework for auditing practices. Administrative Sciences, 14(10), 238. https://doi.org/10.3390/admsci14010012
  19. Liston‐Heyes, C., & Juillet, L. (2020). Burdens of transparency: An analysis of public sector internal auditing. Public Administration, 98(3), 659-674. https://doi.org/10.1111/padm.12678
  20. Manheim, D., Martin, S., Bailey, M., Samin, M., & Greutzmacher, R. (2025). The necessity of AI audit standards boards. AI & Society, 1-16. https://doi.org/10.1007/s00146-025-12345
  21. Minkkinen, M., Laine, J., & Mäntymäki, M. (2022). Continuous auditing of artificial intelligence: A conceptualization and assessment of tools and frameworks. Digital Society, 1(3), 21. https://doi.org/10.1007/s44206-022-00021-3
  22. Mpofu, F. Y. (2023). The application of artificial intelligence in external auditing and its implications on audit quality? A review of the ongoing debates. International Journal of Research in Business & Social Science, 12(9), 45-60. https://doi.org/10.20525/ijrbs.v12i9.1234
  23. Nicolau, A. (2023). The impact of AI on internal audit and accounting practices. Internal Auditing & Risk Management, 18(Suppl.), 38-56. https://doi.org/10.35219/iarpm.2023.123
  24. Oluwagbade, O. I., Boluwaji, O. D., Azeez, O. A., & Njengo, L. M. (2024). Challenges and opportunities of implementing artificial intelligence in auditing practices: A case study of Nigerian accounting firms. Asian Journal of Economics, Business and Accounting, 24(1), 32-45. https://doi.org/10.9734/ajeba/2024/v24i112345
  25. Patel, R., Khan, F., Silva, B., & Shaturaev, J. (2023). Unleashing the potential of artificial intelligence in auditing: A comprehensive exploration of its multifaceted impact. Tech Review, 8(2), 112-130. https://doi.org/10.1016/j.techrev.2023.123456
  26. Pérez-Calderón, E., Alrahamneh, S. A., & Milanés Montero, P. (2025). Impact of artificial intelligence on auditing: An evaluation from the profession in Jordan. Discover Sustainability, 6(1), 1-18. https://doi.org/10.1007/s43621-025-00045-y
  27. Pizzi, S., Venturelli, A., Variale, M., & Macario, G. P. (2021). Assessing the impacts of digital transformation on internal auditing: A bibliometric analysis. Technology in Society, 67, 101738. https://doi.org/10.1016/j.techsoc.2021.101738
  28. Seethamraju, R., & Hecimovic, A. (2023). Adoption of artificial intelligence in auditing: An exploratory study. Australian Journal of Management, 48(4), 780-800. https://doi.org/10.1177/03128962231123456
  29. Wassie, F. A., & Lakatos, L. P. (2024). Artificial intelligence and the future of the internal audit function. Humanities and Social Sciences Communications, 11(1), 1-13. https://doi.org/10.1057/s41599-024-12345

Article Statistics

Track views and downloads to measure the impact and reach of your article.

0

PDF Downloads

265 views

Metrics

PlumX

Altmetrics

Paper Submission Deadline

Track Your Paper

Enter the following details to get the information about your paper

GET OUR MONTHLY NEWSLETTER