Artificial Intelligence and the Future of Job Security: A Narrative Review of Risks, Resilience, and Policy Responses
Dinesh Deckker1*, Subhashini Sumanasekara2
1Department of Science and Technology, Wrexham University, United Kingdom
2Department of Computing and Social Sciences, University of Gloucestershire, United Kingdom
*Corresponding author
DOI: https://dx.doi.org/10.47772/IJRISS.2025.906000333
Received: 12 June 2025; Accepted: 16 June 2025; Published: 16 July 2025
ABSTRACT
This narrative review explores the transformative impact of Artificial Intelligence (AI) on job security, addressing the associated risks, workforce resilience, and policy responses. The purpose of the study is to synthesise current empirical and theoretical research on AI’s influence on employment, moving beyond deterministic projections of mass job loss to provide a nuanced understanding of sectoral, demographic, and ethical implications. The method involved a narrative review of 38 selected studies from academic and policy sources published between 2015 and 2025. The results reveal that AI is both displacing and augmenting jobs, with clerical, routine, and middle-management roles facing the highest risks of automation. Vulnerable groups include women, older workers, and those in low-wage sectors. Conversely, AI is driving job creation in fields such as data science, AI ethics, and cybersecurity. Case studies from companies like UPS, Klarna, Duolingo, and CrowdStrike illustrate diverse pathways of AI-driven job displacement and business restructuring. The review identifies significant skills gaps and highlights the urgent need for reskilling, inclusive lifelong learning, and human-AI collaboration models. Policy responses remain fragmented and reactive, underscoring the necessity for transparent, ethical AI governance and inclusive workforce strategies. The conclusion emphasises that AI’s impact on job security is not inevitable but contingent on proactive organisational and policy choices. A human-centred AI paradigm that prioritises transparency, fairness, and social equity is essential to harness AI’s potential while safeguarding workers’ dignity and agency.
Keywords: Artificial Intelligence, job security, automation, workforce resilience, reskilling, human-AI collaboration, policy response, ethical AI, employment inequality, future of work.
INTRODUCTION
The transformative potential of Artificial Intelligence (AI) across global labour markets is a defining concern of the 21st century. AI-driven systems are permeating a wide range of sectors—from finance and logistics to healthcare, education, and creative industries—reshaping how work is performed and what skills are valued (Acemoglu et al., 2022; Miah, 2024). While AI enhances productivity and fosters innovation, it also raises significant concerns about job displacement, social inequality, and the sustainability of employment structures (Jadhav & Banubakode, 2024; Gmyrek et al., 2023).
This issue is no longer hypothetical. The COVID-19 pandemic served as a global catalyst for AI adoption, as organisations sought automated solutions to maintain operations amid lockdowns and the transition to remote work (Pavashe et al., 2023). In this shifting landscape, understanding AI’s nuanced impacts on job security has become an urgent scholarly and policy imperative.
The debate on automation and employment has deep roots in economic theory. Keynes (1930) predicted that “technological unemployment” would be a temporary phase of economic development. However, the pace and scope of AI-driven automation present more immediate and complex challenges than those posed by earlier waves of mechanisation (Acemoglu & Restrepo, 2019).
Recent studies reveal both displacement and augmentation effects. AI is automating routine cognitive and manual tasks, particularly in manufacturing, logistics, and clerical work (Jadhav & Banubakode, 2024; Miah, 2024), while complementing human labour in areas such as data science, AI ethics, and cybersecurity (Gmyrek et al., 2023). Significantly, AI’s impacts vary across sectors: industries reliant on creativity, human interaction, or tacit knowledge remain comparatively resilient to full automation (Miah, 2024).
These impacts are also gendered and geographically uneven. Clerical roles, which are highly exposed to generative AI, constitute a significant source of female employment, particularly in high- and upper-middle-income countries, so their automation carries distinctly gendered risks (Gmyrek et al., 2023). Older workers also face challenges in adapting to digital environments, with gaps in digital literacy hindering their ability to transition into AI-integrated workplaces (Marquis et al., 2024).
AI’s effects extend beyond task automation, driving broader transformations in organisational structures, work processes, and skill requirements (Olaniyi et al., 2024; Tenakwah & Watson, 2024). This underscores the importance of reskilling and lifelong learning as critical elements of workforce resilience (Li, 2022).
Research Gap
Despite a growing body of research, important gaps remain. Many studies adopt a technological determinist perspective, focusing on job loss projections without adequately accounting for job creation, role transformation, and human-AI collaboration (Acemoglu et al., 2022; Huertas et al., 2025). Furthermore, quantitative forecasts often lack the sectoral and regional granularity required to inform effective policy (Handel, 2022).
Ethical, psychological, and sociocultural dimensions of AI-driven labour transformations also remain underexplored. While public sentiment reflects both optimism about efficiency gains and fears of obsolescence (Loyola et al., 2024), few studies systematically examine how trust, transparency, and organisational culture mediate these outcomes (Olateju et al., 2024; Abhulimen & Ejike, 2024).
Finally, although there is broad consensus on the need for proactive policy responses, comparative analyses of existing regulatory frameworks—and their effectiveness in balancing innovation with worker protection—remain limited (Raj, 2025; Panchenko, 2025). A deeper interdisciplinary understanding is necessary to inform the design of equitable and sustainable policies (Gambhir & Gill, 2024).
Research Aim
This narrative review aims to provide a comprehensive synthesis of current research on the impact of AI on job security, highlighting both the risks and opportunities. Specifically, it seeks to:
- Examine empirical evidence on AI-driven job displacement, augmentation, and creation across sectors and demographics;
- Analyse the challenges posed by skills gaps and adaptation requirements in AI-integrated workplaces;
- Explore organisational strategies for fostering human-AI collaboration and workforce resilience;
- Assess policy responses and regulatory approaches to managing AI-induced labour market transitions;
- Identify ethical considerations and social implications surrounding AI-driven transformations of work.
By integrating findings from empirical studies, theoretical analyses, and policy discourse, this review offers a holistic perspective on the evolving relationship between AI and employment.
Research Questions
The following key questions guide this review:
- What are the main sectoral and demographic patterns of AI-induced job displacement, augmentation, and creation?
- How are organisations and workers adapting to AI-driven changes in required skills and work processes?
- What strategies are effective for building workforce resilience through upskilling, reskilling, and organisational innovation?
- How are policymakers responding to the labour market challenges and opportunities posed by AI integration?
- What ethical concerns and social implications arise from AI’s growing role in shaping employment dynamics?
This review makes a significant contribution to the literature in several ways. First, it moves beyond simplistic automation narratives to present a balanced and empirically grounded view of AI’s impacts on employment. Second, it foregrounds ethical and sociocultural dimensions—trust, transparency, and equity—that are often marginalised in economic analyses. Third, it highlights the critical role of organisational and policy interventions in shaping inclusive and sustainable AI transitions.
By synthesising diverse strands of research, this review aims to inform scholars, policymakers, and practitioners seeking to navigate the profound changes that AI is bringing to the world of work.
METHODOLOGY
This study adopts a narrative review methodology to synthesise current knowledge on the relationship between Artificial Intelligence (AI) and job security. Given the rapidly evolving and interdisciplinary nature of this topic, spanning labour economics, organisational psychology, AI ethics, public policy, and industrial relations, a narrative review provides the flexibility required to integrate insights from diverse research traditions. This approach complements recent calls for more holistic and integrative syntheses in the AI and labour market literature.
Literature Search Strategy
The review drew upon both peer-reviewed academic publications and grey literature to capture a comprehensive range of perspectives. Primary databases searched included Scopus, Web of Science, IEEE Xplore, SpringerLink, and Google Scholar. In addition, reports from key policy organisations—such as the International Labour Organisation (ILO), the World Economic Forum, and national government publications and news reports—were consulted to incorporate policy-relevant evidence.
The following keywords and Boolean combinations guided the search:
- “Artificial Intelligence” OR “AI” OR “automation” OR “robotics”
- AND “job security” OR “employment” OR “labour market” OR “workforce resilience”
- AND “skills gap” OR “reskilling” OR “upskilling”
- AND “policy response” OR “AI regulation” OR “AI ethics”
The initial search focused on literature published between 2015 and 2025 to capture the most current wave of AI development, with particular attention to post-pandemic trends. Key foundational works on automation and labour markets were also included to provide theoretical grounding.
Inclusion and Exclusion Criteria
Studies were included if they met the following criteria:
- Focused explicitly on the impacts of AI or related automation technologies on employment, job security, or labour market dynamics.
- Presented empirical evidence, systematic analysis, or theoretical frameworks.
- Published in English and subjected to scholarly peer review or recognised institutional review.
- Addressed at least one of the following dimensions: job displacement, job augmentation, skills transformation, workforce resilience, or policy/regulatory responses.
Studies were excluded if they:
- Addressed general technological change without specific focus on AI.
- Focused solely on AI technical development without labour market relevance.
- Were purely speculative and unsupported by empirical evidence or systematic theory.
Data Extraction and Synthesis
An initial pool of 85 relevant studies was identified. Following the application of inclusion and exclusion criteria and a full-text review, a final sample of 38 studies was selected for in-depth analysis. This sample included articles from leading journals in economics, management, information systems, public policy, and interdisciplinary AI ethics, as well as key policy reports.
Data extraction focused on identifying:
- Sectoral and demographic patterns of AI-driven job impacts
- Mechanisms of displacement and augmentation
- Evidence of skills gaps and adaptation challenges
- Organisational strategies for fostering human-AI collaboration
- Policy and regulatory responses
- Ethical and social dimensions of AI-driven labour transformations
The synthesis process followed principles of reflexive thematic analysis, enabling the identification of key cross-cutting themes and areas of emerging scholarly consensus or debate.
Methodological Limitations
As a narrative review, this study does not claim the exhaustive coverage or replicability associated with systematic reviews or meta-analyses. Although care was taken to ensure breadth and balance across disciplinary perspectives, the rapid pace of AI research means that some recent developments may not be fully captured. Moreover, the heterogeneity of study designs and measures in the reviewed literature limits the potential for formal meta-analytic synthesis.
Nonetheless, the narrative review approach remains well suited to the aim of providing a comprehensive, integrative, and policy-relevant overview of the evolving relationship between AI and job security.
LITERATURE REVIEW
AI’s Impact on Employment: Current Landscape
The impact of Artificial Intelligence (AI) on employment is a deeply contested and rapidly evolving field of inquiry. While early discourse was dominated by fears of mass unemployment through automation, more recent research presents a more nuanced view, highlighting sectoral variation, shifting skill demands, and complex patterns of job augmentation and displacement (Acemoglu & Restrepo, 2019; Miah, 2024).
Sectoral and Demographic Patterns
AI’s diffusion across sectors has been markedly uneven. High-exposure industries such as manufacturing, logistics, and clerical work have seen significant automation, whereas sectors reliant on human interaction, creativity, and tacit knowledge, such as education, healthcare, and creative industries, remain more resilient (Miah, 2024; Jadhav & Banubakode, 2024). For example, AI systems are replacing routine tasks in manufacturing and customer service through robotics and chatbots (Jadhav & Banubakode, 2024). In contrast, creative professions tend to benefit more from AI augmentation than substitution (Miah, 2024).
Research by Gmyrek et al. (2023) further challenges deterministic narratives of widespread job elimination. Their global analysis of the impact of Generative AI, including GPT-4, suggests that AI will predominantly augment, rather than fully automate, many occupations. However, this augmentation is uneven. Clerical roles—especially prominent in high- and upper-middle-income countries and a significant source of female employment—are among those most exposed to automation (Gmyrek et al., 2023), highlighting important gendered dimensions of AI’s labour market impact.
Geographically, AI-driven job impacts also vary substantially. High-income countries exhibit higher adoption rates of advanced AI tools, due to greater investment capacity and concentrations of digitally intensive industries (Gmyrek et al., 2023; Miah, 2024). Conversely, many lower-income economies remain shielded from large-scale displacement due to labour markets dominated by informal employment and low-tech services (Pavashe et al., 2023).
Evidence on Displacement and Augmentation
Foundational labour economics suggests that technological change induces both displacement and new job creation through ‘task reallocation’ (Acemoglu & Restrepo, 2019). Recent evidence supports this dual dynamic in the case of AI. While task-level automation is observable, entire occupations are rarely eliminated (Acemoglu et al., 2022). Rather, human tasks are being redefined, with AI assuming routine components and workers focusing on higher-order cognitive, interpersonal, and creative functions (Handel, 2022).
Marquis et al. (2024) add further nuance, demonstrating that AI integration enhances professional efficiency and decision-making rather than eliminating entire job categories. However, these changes also alter skill demands and job content, underscoring the need for ongoing adaptation (Marquis et al., 2024).
Public sentiment reflects this complexity. Loyola et al. (2024) found a predominance of positive perceptions regarding AI’s potential to improve efficiency and create new opportunities, though fears of displacement and obsolescence remain salient. These perceptions significantly influence how workers adapt to changes driven by AI.
Changing Nature of Work and Skills
AI is not simply replacing jobs; it is reshaping them (Ghosh & Sadeghian, 2024). This transformation is driving shifts in required competencies. According to Li (2022), by 2025, half of all employees will require reskilling due to the adoption of new technology, with over two-thirds of currently critical skills likely to be supplanted.
Generational divides are also emerging. Younger workers demonstrate higher readiness for AI adoption, while older workers face greater challenges in adapting to AI-driven workplaces, raising concerns about exclusion from the labour market without targeted reskilling initiatives (Marquis et al., 2024; Olaniyi et al., 2024).
At the organisational level, strategic workforce planning and human-AI collaboration models are increasingly recognised as vital. Tenakwah and Watson (2024) argue that successful adaptation requires AI-literate organisational cultures, innovative job models, and proactive alignment of AI strategies with business objectives. Findings from Ghosh and Sadeghian (2024) further suggest that workers generally view AI as a complement rather than a substitute, and that well-managed collaboration can enhance job quality and satisfaction.
Risks and Challenges to Job Security
While AI-driven innovation offers the potential to enhance productivity and create new employment opportunities, it also presents significant risks to job security across various sectors and demographic groups. The literature identifies multiple interconnected challenges, including job displacement, widening skills gaps, unequal access to adaptation resources, and the erosion of traditional employment protections. Addressing these risks requires a nuanced understanding of the mechanisms driving labour market transformations.
Job Displacement and Automation Risks
One of the most visible risks associated with AI implementation is job displacement, particularly in industries where routine, repetitive tasks comprise a substantial share of employment (Jadhav & Banubakode, 2024). The automation of such tasks through AI-powered robotics, algorithms, and chatbots is transforming sectors such as manufacturing, logistics, and customer service (Badhurunnisa & Dass, 2023; Jadhav & Banubakode, 2024). For instance, AI-driven systems now manage assembly lines, warehouse operations, and customer interactions—roles traditionally performed by human workers.
Acemoglu and Restrepo (2019) note that automation displaces certain types of labour while simultaneously creating new tasks. However, the transition is uneven and fraught with risks for vulnerable groups. Handel (2022), analysing US labour market data, found that while large-scale occupation-level collapse has not materialised, significant task-level displacement within occupations is occurring, often with adverse effects on job quality.
The displacement effect is also gendered and regionally uneven. Clerical occupations, which are highly exposed to generative AI, are a significant source of female employment in high- and upper-middle-income countries (Gmyrek et al., 2023). Without targeted interventions, the automation of these roles risks exacerbating existing gender inequalities in labour markets.
Public discourse reflects widespread anxiety about these trends. Sentiment analysis by Loyola et al. (2024) revealed that while many express optimism about AI’s potential to enhance efficiency, substantial concerns persist regarding job loss, skill obsolescence, and reduced employment opportunities.
Skills Gap and Adaptation Challenges
AI adoption is driving a rapid reconfiguration of skill requirements, creating gaps between existing workforce capabilities and those needed in an AI-augmented economy. Olaniyi et al. (2024) report that 69% of respondents in their study identified a substantial skills gap necessitating urgent educational and training interventions. Li (2022) similarly projects that by 2025, half of all employees will require reskilling due to the adoption of new technologies.
Required skills are shifting towards higher-order cognitive abilities, creativity, emotional intelligence, and technological fluency—competencies less susceptible to automation (Jadhav & Banubakode, 2024). However, many education and training systems remain ill-equipped to deliver such skills at the necessary scale and pace (Olaniyi et al., 2024).
Compounding this challenge is a generational digital divide. Marquis et al. (2024) found that younger professionals demonstrate greater readiness to engage with AI tools, while older workers face significant barriers to adaptation. Without targeted support, this divide risks deepening age-based inequalities in employment outcomes.
Inadequate Policy Frameworks and Social Protections
Current employment policies often lag behind the pace of technological change. Around half of the respondents in Olaniyi et al.’s (2024) study viewed existing employment policies as inadequate for addressing AI-related labour market transitions. This policy gap leaves many workers without clear pathways for reskilling, income support, or career mobility in the face of automation-driven disruptions.
Panchenko (2025) similarly warns that proactive measures are urgently needed to mitigate displacement risks and support workforce adaptation. Without such interventions, automation could exacerbate socio-economic inequalities, leaving marginalised groups disproportionately exposed to job insecurity.
Moreover, traditional employment protections, such as tenure-based rights and collective bargaining mechanisms, are often ill-suited to the dynamic nature of AI-augmented work (Gambhir & Gill, 2024). This institutional misalignment further compounds worker vulnerability.
Ethical and Equity Concerns
Beyond economic impacts, AI-driven labour market transformations raise profound ethical questions regarding equity, access, and fairness. The gendered impacts identified by Gmyrek et al. (2023) highlight the need for gender-sensitive policy responses. Access to reskilling opportunities is similarly unequal; Li (2022) emphasises that learning opportunities must be accessible, affordable, and inclusive to prevent the deepening of existing divides.
Issues of algorithmic bias, transparency, and trust in AI systems also directly affect employment outcomes. Olateju et al. (2024) demonstrate that the use of explainable AI (XAI) models enhances trust and supports ethical data handling, yet such models are not universally adopted. Without robust transparency frameworks, the deployment of AI in hiring, promotion, and workplace management could inadvertently reinforce existing inequities (Abhulimen & Ejike, 2024).
Opportunities and Potential Benefits
While much scholarly and public attention regarding Artificial Intelligence (AI) and employment focuses on risks and challenges, an equally important body of research highlights the opportunities and benefits that AI offers for the future of work. As Acemoglu and Restrepo (2019) argue, technological change not only displaces certain forms of labour but also generates new tasks, roles, and even entire sectors of economic activity. This dynamic is visible in the current wave of AI diffusion, where human labour augmentation, new job creation, and improvements in work quality and efficiency are emerging alongside the risks of displacement.
Job Creation and Emerging Roles
AI-driven innovation is stimulating the creation of new employment opportunities, particularly in sectors such as AI development, data analytics, cybersecurity, and digital services (Jadhav & Banubakode, 2024; Gmyrek et al., 2023). The development, deployment, and ongoing maintenance of AI systems require a broad array of highly skilled professionals, including software engineers, data scientists, AI ethicists, and human-AI interaction designers, thus fostering new labour market niches.
Empirical evidence supports this perspective. Acemoglu et al. (2022) show that firms adopting AI tools tend to generate increased demand for complementary human skills, including those involved in managing, interpreting, and governing AI outputs. While this demand does not fully offset displacement risks in all contexts, it highlights AI’s role as a catalyst for task innovation rather than pure substitution.
Similarly, Olaniyi et al. (2024) report that most respondents in their study recognised AI and automation as enablers of job creation, especially in sectors with high digital integration. Emerging roles demand not only technical competencies but also new combinations of soft skills, creativity, and ethical awareness, pointing towards a broader transformation of professional profiles (Li, 2022).
Enhancement of Work Quality and Efficiency
Beyond creating new jobs, AI enhances the quality and efficiency of existing work. By automating routine and repetitive tasks, AI enables human workers to focus on higher-order cognitive, interpersonal, and creative activities (Badhurunnisa & Dass, 2023). This redistribution of task content can improve both job satisfaction and organisational productivity.
Marquis et al. (2024), analysing data from 1,623 professionals across diverse sectors, found that AI tools significantly improved efficiency in areas such as data analysis, decision-making, and customer service. The use of AI to support, rather than replace, human judgment was associated with increased productivity and enhanced value creation.
Sentiment analysis by Loyola et al. (2024) further reveals that public discourse increasingly reflects a nuanced understanding of AI’s role. Many users express optimism about AI’s potential to augment human capabilities, reduce mundane workloads, and foster innovation, though concerns about equitable access to these benefits remain.
Human-AI Collaboration and Job Enrichment
An emerging strand of research focuses on the potential for well-designed human-AI collaboration models to enrich work experiences. Ghosh and Sadeghian (2024) found that workers often perceive AI not as a threat but as a partner that can complement human strengths. Their study suggests that when AI is thoughtfully integrated, it can enhance job meaningfulness and quality.
Organisational strategies play a critical role in realising these benefits. Tenakwah and Watson (2024) argue that fostering an AI-literate and adaptable workforce, along with creating new AI-centric roles and innovative job models, is essential for aligning AI adoption with positive work outcomes. Proactive leadership in managing the cultural and ethical dimensions of human-AI collaboration further enhances job quality and overall employee satisfaction.
Fostering Economic Growth and Innovation
At the macroeconomic level, AI-driven productivity gains can stimulate broader economic growth and innovation (Acemoglu & Restrepo, 2019). By enabling firms to scale operations more efficiently, personalise services, and innovate rapidly, AI contributes to enhanced competitiveness and the potential expansion of markets.
Panchenko (2025) highlights that automation and robotics, when managed proactively, can serve as engines of economic growth rather than mere threats to employment. Realising these gains requires supportive policies and continued investment in workforce development to ensure inclusive participation in the resulting prosperity.
Workforce Adaptation and Resilience Strategies
As AI technologies continue to reshape the nature of work, fostering workforce adaptation and resilience has become a critical priority for policymakers, educators, and employers. While AI presents opportunities for job creation and enrichment, these benefits are not automatically distributed. They depend on proactive efforts to equip workers with new skills, redesign organisational models, and ensure inclusive access to emerging opportunities (Li, 2022; Olaniyi et al., 2024).
Upskilling and Reskilling Initiatives
A consistent theme in the literature is the urgent need for comprehensive upskilling and reskilling programmes to help workers transition into AI-augmented labour markets. The World Economic Forum predicts that by 2025, 50% of employees will require reskilling (Li, 2022). Similarly, Olaniyi et al. (2024) report that 69% of respondents in their study identified a significant skills gap necessitating immediate educational and training interventions.
Competencies required extend beyond technical AI-related skills to include soft skills such as critical thinking, creativity, emotional intelligence, and problem-solving—areas where humans complement rather than compete with AI (Jadhav & Banubakode, 2024). Programmes integrating both technical and transversal skill development are essential for fostering long-term employability.
Successful reskilling efforts require collaboration across multiple stakeholders. Jadhav and Banubakode (2024) highlight initiatives involving governments, educational institutions, and private companies that provide promising models for scalable workforce development. These efforts must prioritise inclusive access, ensuring that marginalised groups, older workers, and those in lower-wage sectors are not left behind (Li, 2022).
Organisational Preparation and Human-AI Collaboration
Employers play a pivotal role in facilitating workforce adaptation. Tenakwah and Watson (2024) argue that strategic workforce planning is crucial for aligning AI and automation strategies with business objectives and for cultivating an adaptable, AI-literate workforce. This involves not only technical training but also redesigning job roles and creating new career pathways that integrate human-AI collaboration (Ghosh & Sadeghian, 2024).
Effective human-AI collaboration models can enhance job quality and satisfaction. Ghosh and Sadeghian (2024) found that workers tend to view AI as a complementary partner rather than a replacement, particularly when organisational cultures promote transparency, ethical governance, and meaningful engagement with AI systems.
HR leaders must also act as translators between human and machine systems, addressing cultural and ethical considerations, managing change processes, and fostering trust (Tenakwah & Watson, 2024). This requires a shift from traditional HR practices towards more dynamic, cross-functional approaches that emphasise continuous learning and agile workforce development.
Promoting Lifelong Learning
Lifelong learning is central to building workforce resilience in an era of rapid technological change (Li, 2022). Static, one-time training models are insufficient; instead, workers must be empowered to engage in continuous learning throughout their careers.
Marquis et al. (2024) emphasise that younger workers show greater readiness to adopt AI tools, while older cohorts face challenges in adapting to them. Lifelong learning strategies must therefore be tailored to meet the diverse needs of learners, with flexible and accessible formats that support different learning styles and life circumstances.
Policymakers can support lifelong learning by incentivising participation, expanding access to affordable training, and fostering partnerships between public and private sector actors (Panchenko, 2025). Embedding lifelong learning into organisational cultures is equally important; as Li (2022) notes, career development and skill renewal should become strategic priorities for both individuals and employers.
Equity and Inclusion in Workforce Adaptation
Ensuring that workforce adaptation strategies are inclusive and equitable is a critical concern. Without deliberate interventions, AI-driven transitions risk exacerbating existing inequalities along lines of gender, age, geography, and socio-economic status (Gmyrek et al., 2023; Olaniyi et al., 2024).
Access to reskilling opportunities must be broad, affordable, and inclusive (Li, 2022). This includes targeted support for women in clerical and administrative roles, which are disproportionately exposed to automation (Gmyrek et al., 2023), as well as for older workers facing digital divides (Marquis et al., 2024).
Moreover, as Olateju et al. (2024) emphasise, building trust in AI systems through explainable AI and transparent governance is crucial for fostering positive worker engagement with new technologies. Ethical frameworks and inclusive organisational cultures can help ensure that AI adoption enhances rather than undermines social equity (Abhulimen & Ejike, 2024).
Case Studies: AI-Driven Job Displacement and Business Restructuring
Recent real-world cases demonstrate how AI is reshaping employment across industries, not only through direct automation but also by disrupting business models, devaluing professional roles, and reducing the need for middle management.
CrowdStrike (Cybersecurity)
In May 2025, CrowdStrike laid off approximately 500 employees (around 5% of its workforce), citing increased efficiency from AI-enhanced cybersecurity operations as a key driver (Zawrzel, 2025; Reuters, 2025). CEO George Kurtz described AI as a “force multiplier”, enabling faster innovation while reducing hiring needs, although the company continued to expand in strategic areas (Zawrzel, 2025; TechRadar, 2025).
Klarna (Fintech)
Swedish fintech company Klarna reduced its workforce from 5,500 to 3,000, replacing approximately 700 customer service roles with an OpenAI-powered assistant. This transition resulted in immediate annual cost savings of approximately $40 million (Siemiatkowski, 2025; Reuters, 2024). However, deteriorating service quality led Klarna to reverse course, rehiring human staff within two years and acknowledging that full automation had “gone too far” (Siemiatkowski, 2025; Economic Times, 2025; Reuters, 2025).
Duolingo (EdTech)
In early 2024, Duolingo adopted an “AI-first” strategy, reducing its contract content workforce by approximately 10% in favour of generative AI tools (Korn, 2024; Reuters, 2025). CEO Luis von Ahn emphasised that full-time staff were unaffected and cited productivity gains in engineering and language expansion. However, public backlash over declining content quality prompted the company to engage in reputation management efforts (Altchek, 2025; FT, 2025).
UPS (Logistics and Management)
In January 2024, UPS announced the elimination of 12,000 jobs, primarily targeting middle-management and contractor roles, as part of its “Network of the Future” initiative. AI-driven routing, logistics optimisation, and automated super hubs were central to the restructuring strategy (Chapman, 2024). This case illustrates how AI is expanding its influence beyond manual labour to include decision-making and managerial functions.
Chegg (EdTech Market Disruption)
AI-driven disruption can also result from external market forces. In May 2023, Chegg’s stock value halved after students increasingly turned to ChatGPT for homework support. In response, Chegg cut 441 jobs (around 23% of its workforce) in April 2024 (Reuters, 2024). This case exemplifies indirect displacement, where AI undermines existing business models and renders roles obsolete without directly automating their tasks.
G/O Media (Digital Journalism)
Beginning in 2023, G/O Media used AI to generate content for Gizmodo and The A.V. Club. The result was a decline in content quality and the resignation of key editorial staff (Davis, 2023). By early 2024, layoffs were implemented at The A.V. Club, highlighting how AI can devalue professional labour and prioritise quantity over quality (Washington Post, 2024).
Table 1. Impact of AI in Companies, Case Studies
Company | Impact Pathway | AI Role |
CrowdStrike | Efficiency in cybersecurity | AI automates threat analysis |
Klarna | Customer-service automation | Chatbot replaced human agents; later reversed |
Duolingo | Content automation dynamics | AI replaced contractors, triggering backlash |
UPS | Management layer reduction | AI-enabled logistics restructuring |
Chegg | Market obsolescence | ChatGPT substitutes core offering |
G/O Media | Professional devaluation | AI-generated low-quality content |
These cases underscore that:
- White-collar roles across functions (management, content creation, customer support), and even entire business models, are subject to AI-driven displacement (Gmyrek et al., 2023; Jadhav & Banubakode, 2024; Panchenko, 2025).
- Layoffs are often justified as “AI efficiency gains”, but real-world outcomes frequently involve reversals or reputational risks (Siemiatkowski, 2025; Altchek, 2025; Economic Times, 2025).
- The paths to job loss are varied—direct automation, model obsolescence, and professional devaluation—highlighting the need for tailored policy, organisational, and ethical responses (Raj, 2025; Abhulimen & Ejike, 2024; Olateju et al., 2024).
These case studies concretise the risks and adaptation frameworks discussed earlier (Sections 3.2–3.5), emphasising the need for robust reskilling programs, transparency in AI deployment, and hybrid human-AI workforce models to sustain both economic efficiency and social integrity.
Policy Responses and Regulatory Frameworks
As Artificial Intelligence (AI) reshapes labour markets, effective policy responses and regulatory frameworks are essential to ensure that the benefits of AI adoption are equitably distributed while mitigating risks such as job displacement, inequality, and erosion of worker protections (Raj, 2025; Panchenko, 2025). The literature highlights that while AI innovation is primarily driven by private actors, governments and international organisations play a critical role in shaping its social impacts through regulation, labour market interventions, and collaborative governance (Acemoglu & Restrepo, 2019; Gambhir & Gill, 2024).
Government and Institutional Policies
The accelerating pace of AI adoption has exposed significant gaps in existing labour and technology policies (Olaniyi et al., 2024). Approximately half of the respondents in Olaniyi et al.’s (2024) study viewed current employment policies as inadequate to address the challenges posed by AI-driven transitions. Policymakers must develop frameworks that balance innovation, employment preservation, and social equity (Raj, 2025).
Raj (2025) proposes a progressive framework integrating sector-specific regulations, skills development initiatives, and adaptive governance mechanisms. Such approaches aim to foster AI innovation while protecting worker interests through measures such as:
- Targeted reskilling programmes for at-risk occupations.
- Strengthened social safety nets to support displaced workers.
- Legal protections to ensure transparency and fairness in algorithmic decision-making (Olateju et al., 2024).
Globally, AI regulation remains fragmented, with considerable variation across jurisdictions (Raj, 2025). Comparative studies suggest that adaptive, multi-stakeholder governance models, which combine legal frameworks with industry self-regulation and public oversight, are most effective in responding to the dynamic nature of AI technologies (Gambhir & Gill, 2024).
Industry-Specific Policy Approaches
Given the uneven impact of AI across sectors, industry-specific interventions are often necessary (Panchenko, 2025). Sectors with high exposure to automation, such as manufacturing, logistics, and clerical work, require tailored strategies to manage displacement risks and facilitate transitions to emerging roles (Gmyrek et al., 2023).
Panchenko (2025) highlights the importance of:
- Conducting sector-specific risk assessments to identify vulnerable occupations.
- Designing lifelong learning ecosystems aligned with sectoral skill demands.
- Encouraging public-private partnerships to co-invest in workforce development.
Policies must also address the ethical dimensions of AI adoption. Abhulimen and Ejike (2024) emphasise the need for standardised ethical guidelines, stakeholder engagement, and transparent governance to ensure that AI systems deployed across industries uphold principles of fairness, accountability, and inclusivity.
Equity and Inclusion in Policy Design
Embedding equity and inclusion into AI policy frameworks is a recurring theme in the literature. The gendered impacts of AI-driven automation, particularly in clerical and administrative roles, highlight the need for gender-sensitive policy responses (Gmyrek et al., 2023). Similarly, older workers and those in low-wage sectors require targeted support to ensure their inclusion in AI-augmented labour markets (Marquis et al., 2024).
Access to reskilling opportunities must be broad and affordable (Li, 2022). Policymakers should prioritise inclusive access to education and training, supported by public funding and incentivised private sector investment (Olaniyi et al., 2024). Without such measures, AI adoption risks deepening existing social and economic inequalities (Gambhir & Gill, 2024).
Transparency, Trust, and Public Engagement
Building public trust in AI systems is a cornerstone of effective governance (Olateju et al., 2024). Explainable AI (XAI) models have been shown to significantly enhance trust and support ethical data handling (Olateju et al., 2024). Regulatory frameworks should mandate transparency, particularly in algorithmic hiring, promotion, and workplace surveillance systems, to safeguard worker rights (Abhulimen & Ejike, 2024).
Public engagement is equally important. Tenakwah and Watson (2024) emphasise the importance of clear communication regarding AI-driven changes, proactive management of worker expectations, and collaborative dialogue with affected stakeholders. Inclusive policymaking processes can help align AI adoption with broader societal values and labour rights (Gambhir & Gill, 2024).
While no single policy model will fit all contexts, the emerging consensus points towards adaptive, inclusive, and collaborative governance as the foundation for shaping the impacts of AI on work. The final section of this review examines the ethical considerations and social implications that must inform these policy efforts.
Ethical Considerations and Social Implications
The growing integration of Artificial Intelligence (AI) into work and organisational life raises a host of ethical considerations and social implications that must inform both policy responses and practical deployment strategies. Without deliberate attention to issues such as fairness, transparency, and inclusivity, AI adoption risks deepening existing inequalities and eroding public trust in both technology and labour market institutions (Abhulimen & Ejike, 2024; Olateju et al., 2024).
The literature highlights four interrelated areas of ethical concern: equity and access, algorithmic bias and transparency, demographic impacts, and public perception and trust.
Equity and Access
Ensuring equitable access to AI-related opportunities, such as reskilling, new forms of work, and participation in AI governance, is a fundamental ethical challenge (Li, 2022; Olaniyi et al., 2024). Without targeted interventions, the benefits of AI are likely to accrue primarily to highly educated and digitally literate populations, while marginalised groups face heightened risks of displacement and exclusion (Gambhir & Gill, 2024).
Olaniyi et al. (2024) report that current employment policies and training systems are insufficiently equipped to manage the scale and speed of AI-induced labour market transformations. Li (2022) emphasises that reskilling programmes must be accessible, affordable, and inclusive, with deliberate outreach to women, older workers, and those in low-income or rural contexts.
Gmyrek et al. (2023) further highlight the gendered nature of AI-driven job impacts. Clerical occupations—disproportionately staffed by women—are among the most exposed to automation. Gender-sensitive policies, including targeted reskilling initiatives and protections against algorithmic discrimination, are therefore essential components of an ethical AI transition.
Algorithmic Bias and Transparency
The potential for algorithmic bias in AI systems used for hiring, promotion, and workplace management presents a significant ethical risk (Abhulimen & Ejike, 2024). Without transparency and accountability, AI can reproduce or amplify existing biases related to gender, race, age, or other protected characteristics.
Olateju et al. (2024) demonstrate that explainable AI (XAI) models can enhance trust and support ethical data governance in employment contexts. However, XAI is not yet universally adopted, and many organisational AI systems remain opaque to both workers and regulators.
Abhulimen and Ejike (2024) call for standardised ethical guidelines and robust stakeholder engagement to ensure that AI deployments respect principles of fairness, transparency, and accountability. This is particularly critical in high-stakes contexts such as recruitment, performance evaluation, and workplace surveillance.
Demographic Impacts
AI’s labour market impacts are not evenly distributed across demographic groups. Beyond gender, age-based digital divides are a significant concern (Marquis et al., 2024). Younger workers demonstrate greater readiness to adopt AI tools, while older cohorts face steeper learning curves and risk marginalisation without targeted support (Marquis et al., 2024; Li, 2022).
Gmyrek et al. (2023) underscore that global patterns of AI adoption and exposure to automation are also shaped by income and geography. High-income countries exhibit higher AI integration, but also have a greater capacity to invest in adaptation. In lower-income contexts, AI risks entrenching global inequalities unless supported by international cooperation and development assistance.
Ethical frameworks must therefore move beyond one-size-fits-all principles to engage with the intersectional nature of AI’s social impacts, recognising how gender, age, class, geography, and other factors shape workers’ experiences of AI-driven change (Gambhir & Gill, 2024).
Public Perception and Trust
Public trust in AI systems is a critical enabler—or barrier—to their ethical and practical integration into the workplace (Olateju et al., 2024). Sentiment analysis by Loyola et al. (2024) reveals a complex landscape: while many users express optimism about AI’s potential to enhance efficiency and create new opportunities, significant concerns persist regarding job displacement, loss of agency, and erosion of privacy.
Tenakwah and Watson (2024) argue that transparent communication and inclusive dialogue with workers are essential for fostering trust in AI deployments. This includes clear explanations of how AI systems function, what data they use, and how algorithmic decisions affect employment outcomes (Olateju et al., 2024).
Building trust also requires ensuring that AI systems are aligned with shared social values and that workers have meaningful opportunities to participate in shaping AI governance at both organisational and policy levels (Abhulimen & Ejike, 2024).
Theoretical Implications
The evolving body of research on Artificial Intelligence (AI) and the future of job security offers important theoretical implications for understanding the complex interactions between technology, labour markets, and social structures. This section synthesises key insights from the literature and identifies emerging theoretical directions that can inform future research and policy design.
Moving Beyond Deterministic Models
Early discourse on automation and employment was often framed in deterministic terms, predicting large-scale and inevitable job loss driven by technological substitution (Frey & Osborne, 2017). However, the evidence reviewed here aligns more closely with task-based models of technological change (Acemoglu & Restrepo, 2019), which emphasise that AI reshapes the content of jobs rather than simply eliminating them.
Acemoglu et al. (2022) demonstrate that AI adoption leads to task reallocation within occupations, with human workers increasingly focusing on complementary cognitive and interpersonal activities. This supports sociotechnical systems theory, which views technology, work organisation, and human agency as mutually shaping, as a more accurate framework for analysing AI’s labour market impacts (Baxter & Sommerville, 2011).
Integrating Equity and Ethical Perspectives
The literature also highlights the need to integrate equity and ethical considerations more centrally into theoretical models of technological change and employment. Traditional economic frameworks often treat labour markets as neutral arenas where skills and incentives drive outcomes (Autor, 2015). However, AI adoption interacts with existing social inequalities, including gender, age, class, and geography (Gmyrek et al., 2023; Marquis et al., 2024).
Theoretical approaches must therefore engage with intersectionality (Crenshaw, 1991) and critical technology studies (Eubanks, 2018), recognising that AI is not a neutral force but is shaped by and contributes to social power dynamics. As Abhulimen and Ejike (2024) argue, ethical frameworks and participatory governance mechanisms are crucial to ensuring that AI deployment aligns with the principles of fairness, transparency, and human dignity.
Expanding Concepts of Workforce Resilience
Workforce resilience emerges as a key theme across the literature (Li, 2022; Olaniyi et al., 2024). Theoretical models of resilience must move beyond individual adaptability to include organisational, institutional, and systemic dimensions.
Gambhir and Gill (2024) advocate for interdisciplinary research that integrates insights from organisational psychology, public policy, and AI ethics to develop holistic frameworks for resilience in AI-augmented labour markets. Such frameworks should address not only skills development but also cultural change, trust-building, and the redesign of social safety nets (Panchenko, 2025; Tenakwah & Watson, 2024).
Towards a Human-Centred AI Paradigm
The literature also points towards the theoretical importance of a human-centred AI paradigm (Shneiderman, 2020), which prioritises human well-being, agency, and flourishing in the design and deployment of AI systems. Ghosh and Sadeghian (2024) show that workers value AI as a collaborative partner rather than a replacement, underscoring the need for theoretical models that emphasise human-AI symbiosis.
This perspective aligns with emerging research in human-computer interaction (HCI) and design justice (Costanza-Chock, 2020), which advocates for participatory design processes and inclusive governance of AI technologies. The theoretical implication is that achieving socially beneficial AI integration requires moving beyond narrow efficiency metrics to embrace broader concepts of meaningful work, worker agency, and collective empowerment.
DISCUSSION
The findings of this narrative review reveal that Artificial Intelligence (AI) is rapidly transforming the world of work, presenting both opportunities and risks for job security. This section interprets these findings in the context of existing literature, highlights key patterns, and offers critical reflections on their implications for policy, practice, and future research.
AI as a Double-Edged Force in Labour Markets
AI’s impact on employment is best understood as a double-edged force, simultaneously driving productivity and innovation while exposing workers to new vulnerabilities. The evidence supports task-based theories of technological change (Acemoglu & Restrepo, 2019), which suggest that AI reshapes, rather than eliminates, jobs.
Case studies show that job loss occurs through multiple pathways:
- Direct automation of tasks, particularly in logistics (UPS), customer service (Klarna), and content creation (Duolingo; G/O Media).
- Market disruption driven by AI-enabled alternatives (Chegg).
- Organisational restructuring of management layers, facilitated by AI-augmented decision-making (CrowdStrike; UPS).
These findings align with literature that highlights AI’s differentiated impacts across sectors and roles (Acemoglu et al., 2022; Gmyrek et al., 2023). While some professions experience task augmentation, others face displacement or devaluation, often without sufficient transitional support (Jadhav & Banubakode, 2024).
Disproportionate Risks for Vulnerable Groups
AI-driven labour market transformations are not neutral; they are embedded within existing social and economic inequalities. The evidence highlights that:
- Clerical and administrative workers, particularly women, are disproportionately exposed to automation (Gmyrek et al., 2023).
- Older workers face significant barriers to adaptation, widening generational divides (Marquis et al., 2024).
- Workers in lower-wage and lower-skill sectors often lack access to effective reskilling pathways (Li, 2022; Olaniyi et al., 2024).
These patterns highlight the importance of applying intersectional perspectives (Crenshaw, 1991) in both research and policy design. Without deliberate interventions, AI adoption risks exacerbating existing labour market inequalities (Gambhir & Gill, 2024).
Gaps in Organisational and Policy Responses
While many organisations are pursuing AI-driven transformation, few are fully prepared to manage its human consequences:
- Reskilling initiatives remain fragmented and uneven across sectors and countries (Li, 2022; Panchenko, 2025).
- Human-AI collaboration models are underdeveloped in many firms, limiting opportunities for augmentation rather than replacement (Ghosh & Sadeghian, 2024).
- Transparency and ethical governance are inconsistently implemented, contributing to distrust and public concern (Olateju et al., 2024; Loyola et al., 2024).
Policy responses, though evolving, often lag behind the pace of technological change (Olaniyi et al., 2024; Raj, 2025). More adaptive frameworks are needed to ensure that AI adoption enhances, rather than undermines, social equity and human dignity in the workplace.
Towards Human-Centred AI Integration
A central implication of this review is the urgent need to advance a human-centred AI paradigm (Shneiderman, 2020). Such a paradigm would prioritise:
- Meaningful human participation in AI deployment decisions.
- Transparent and explainable AI systems that foster trust (Olateju et al., 2024).
- Inclusive reskilling and lifelong learning strategies (Li, 2022; Tenakwah & Watson, 2024).
- Ethical and participatory governance of AI at organisational and societal levels (Abhulimen & Ejike, 2024).
Achieving this vision requires not only policy innovation but also cultural change within organisations. Leadership commitment to human-centred values and proactive workforce planning will be critical in shaping AI futures that support both economic and social goals.
Contributions and Limitations
This review contributes to the academic literature by:
- Offering an integrated synthesis of AI’s labour market impacts across economic, organisational, and ethical dimensions.
- Providing empirical grounding through recent case studies of AI-driven job displacement and business restructuring.
- Highlighting the importance of equity and intersectionality in understanding and responding to AI’s impacts on work.
Limitations of the study include its narrative nature, which may not capture the full breadth of sector-specific developments, and the rapid pace of change in AI technologies, which necessitates ongoing empirical monitoring. While the case studies provide rich insights, they are illustrative rather than exhaustive.
AI is transforming work in profound ways—augmenting some roles, eliminating others, and creating entirely new professions. However, without proactive and inclusive governance, these changes risk exacerbating existing inequalities and eroding trust in both technology and the institutions that govern it. This review underscores that the future of job security in the AI era is not predetermined; it is a political, organisational, and ethical choice. By centring human well-being, transparency, and social equity in AI strategy, policymakers and employers can shape a labour market that leverages AI’s benefits while safeguarding workers’ dignity and agency.
Table 2. Summary of Answers to Research Questions
Research Question | Answer Summary |
1. What are the main sectoral and demographic patterns of AI-induced job displacement, augmentation, and creation? | AI is displacing routine and clerical tasks (e.g., customer service, content creation, middle management). Sectors highly affected include logistics (UPS), fintech (Klarna), EdTech (Duolingo, Chegg), cybersecurity (CrowdStrike), and digital journalism (G/O Media). Women, clerical workers, and older employees face disproportionate risks of displacement (Gmyrek et al., 2023; Marquis et al., 2024). Meanwhile, AI is augmenting roles in cybersecurity, AI governance, engineering, and advanced data analytics. New jobs are emerging in AI ethics, explainability, and human-AI collaboration (Li, 2022; Acemoglu et al., 2022). |
2. How are organizations and workers adapting to AI-driven changes in required skills and work processes? | Organisations vary: some adopt AI-first strategies (e.g., Duolingo), while others combine AI tools with human-centred redesign (Tenakwah & Watson, 2024). Workers face significant skills gaps; reskilling programs remain fragmented (Olaniyi et al., 2024). Younger professionals demonstrate greater readiness, while older workers face challenges in adapting (Marquis et al., 2024). Human-AI collaboration models are emerging in cybersecurity, engineering, and logistics (CrowdStrike; UPS). Ethical design and transparency remain uneven across firms. |
3. What strategies are effective for building workforce resilience through upskilling, reskilling, and organizational innovation? | Successful strategies include sector-targeted reskilling (Raj, 2025; Panchenko, 2025), fostering lifelong learning cultures (Li, 2022), adopting explainable AI to build trust (Olateju et al., 2024), and implementing hybrid human-AI collaboration models (Ghosh & Sadeghian, 2024). However, current efforts are insufficient to fully close the skills gap or address equity challenges, particularly in gender-sensitive sectors such as clerical work (Gmyrek et al., 2023). |
4. How are policymakers responding to the labour market challenges and opportunities posed by AI integration? | Policy responses remain fragmented and reactive (Olaniyi et al., 2024). Stronger interventions are emerging in India (Raj, 2025), the EU (XAI guidelines), and some US sectors. Inclusive frameworks emphasize explainability, fairness, and worker participation (Abhulimen & Ejike, 2024). However, gaps persist in gender-sensitive policymaking, sector-specific support, and long-term safety net reforms (Panchenko, 2025; Gambhir & Gill, 2024). |
5. What ethical concerns and social implications arise from AI’s growing role in shaping employment dynamics? | Key concerns include algorithmic bias in hiring and performance management (Olateju et al., 2024), displacement without adequate reskilling (Duolingo; UPS), erosion of professional value (G/O Media), and unequal access to new opportunities (Li, 2022; Gmyrek et al., 2023). Public trust is fragile—AI adoption must prioritize transparency, fairness, and meaningful human participation (Loyola et al., 2024; Ghosh & Sadeghian, 2024). Ethical governance remains an urgent research and policy frontier. |
Methodological Limitations and Future Research Directions
The growing body of research on Artificial Intelligence (AI) and job security provides valuable insights into the complex dynamics of AI-driven labour market transformations. However, the literature also exhibits several methodological limitations that constrain our understanding of these processes and highlight important avenues for future research.
Methodological Limitations
Overreliance on Predictive Models
Much of the early literature on AI and employment relied heavily on predictive models based on occupational-level automation risk assessments (Frey & Osborne, 2017). While these models were influential in shaping public discourse, they often failed to capture the dynamic and context-dependent nature of AI adoption and its interaction with organisational and institutional factors (Acemoglu & Restrepo, 2019).
Handel (2022) provides a corrective by demonstrating that actual employment trends in occupations deemed highly automatable have not followed predictions of mass displacement. This highlights the need for more empirically grounded, longitudinal approaches that take into account task-level reconfigurations and the co-evolution of technology and work practices (Acemoglu et al., 2022).
Insufficient Sectoral and Contextual Granularity
Another limitation is the lack of sector-specific and context-sensitive analysis. Many studies adopt broad, cross-sectoral perspectives that risk obscuring important differences in how AI impacts are distributed across industries, occupations, and national labour markets (Panchenko, 2025; Gmyrek et al., 2023).
As Gambhir and Gill (2024) argue, more granular, mixed-method research is needed to capture the heterogeneity of AI adoption and its labour market effects. This includes comparative studies across sectors and regions, as well as qualitative research that explores workers’ lived experiences of AI-driven change (Huertas et al., 2025).
Limited Consideration of Psychological and Social Dimensions
While economic analyses dominate the literature, there is relatively limited attention to the psychological and social dimensions of AI adoption in the workplace. Studies such as Loyola et al. (2024) and Ghosh and Sadeghian (2024) begin to address this gap by examining public sentiment and perceptions of job meaningfulness in AI-augmented environments.
More research is needed to understand how factors such as trust, transparency, perceived fairness, and organisational culture mediate workers’ responses to AI integration (Olateju et al., 2024). Without such insights, policy and organisational interventions risk overlooking key drivers of worker acceptance and well-being.
Insufficient Integration of Ethical and Equity Perspectives
There is also a need for greater integration of ethical and equity considerations into empirical research on AI and employment (Abhulimen & Ejike, 2024; Gambhir & Gill, 2024). While many studies acknowledge the potential for algorithmic bias and unequal access to opportunities (Gmyrek et al., 2023; Li, 2022), few provide detailed empirical analysis of how these dynamics play out in practice or how they might be mitigated through governance and design choices.
Future research must adopt intersectional frameworks (Crenshaw, 1991) and engage with critical perspectives on technology and society to ensure that analyses of AI’s labour market impacts are both rigorous and socially attuned (Eubanks, 2018).
Future Research Directions
Building on these limitations, several promising directions for future research emerge.
Advancing Longitudinal and Mixed-Method Research
There is a strong need for longitudinal studies that track the evolving impacts of AI adoption on employment, skill requirements, and job quality over time (Acemoglu et al., 2022; Handel, 2022). Combining quantitative labour market data with qualitative insights from workers, managers, and policymakers can provide a more comprehensive understanding of dynamic adaptation processes (Huertas et al., 2025).
Deepening Sector-Specific Analysis
Future research should prioritise sector-specific analyses that examine how AI is transforming work in different industries and occupations (Panchenko, 2025). This includes exploring the organisational and institutional factors that shape patterns of displacement, augmentation, and job creation within specific contexts.
Such research can inform more targeted and effective policy and workforce development strategies, aligned with the distinctive needs and opportunities of different sectors (Raj, 2025; Tenakwah & Watson, 2024).
Exploring Psychological and Social Dynamics
Understanding the psychological and social dynamics of AI integration is critical for fostering positive outcomes for workers (Ghosh & Sadeghian, 2024; Loyola et al., 2024). Future research should examine how factors such as trust, perceived autonomy, and organisational justice influence worker engagement with AI systems.
This includes investigating how organisational culture, leadership practices, and HR policies can support human-AI collaboration in ways that enhance job meaningfulness and worker well-being (Tenakwah & Watson, 2024; Olateju et al., 2024).
Centring Equity and Ethical Considerations
Future research must also place greater emphasis on equity and ethical considerations, both conceptually and methodologically (Abhulimen & Ejike, 2024). This involves applying intersectional analyses to examine how AI-driven changes affect different groups of workers and designing studies that foreground issues of algorithmic fairness, transparency, and worker agency (Gmyrek et al., 2023; Li, 2022).
Engaging with participatory research methods and collaborating with workers, unions, and civil society organisations can help ensure that research on AI and employment reflects diverse perspectives and experiences, contributing to more just and inclusive AI futures (Gambhir & Gill, 2024).
CONCLUSION
This review has demonstrated that the relationship between Artificial Intelligence (AI) and job security is complex, multifaceted, and evolving rapidly. AI is neither a purely destructive nor purely creative force in labour markets. Instead, it acts as a transformational catalyst—automating routine tasks, reshaping job content, and driving demand for new skills and roles (Acemoglu & Restrepo, 2019; Gmyrek et al., 2023). Importantly, AI’s impact is not uniform across sectors, occupations, or demographic groups; it is mediated by organisational choices, policy environments, and broader socio-economic structures (Olaniyi et al., 2024; Panchenko, 2025).
The case studies presented (UPS, CrowdStrike, Klarna, Duolingo, Chegg, G/O Media) underscore that AI-driven job loss and role transformation are already materialising across sectors—from logistics and management to education, content creation, and fintech. These examples illustrate how AI reshapes employment both through internal automation and by disrupting business models.
At the same time, AI offers opportunities for job enrichment, creation of new professional roles, and economic innovation—but these benefits rely on proactive and inclusive workforce adaptation strategies (Jadhav & Banubakode, 2024; Tenakwah & Watson, 2024).
The primary aim of this narrative review was to synthesise current empirical and theoretical knowledge on how AI is transforming job security, with attention to the risks, adaptation strategies, policy responses, and ethical considerations that shape this process. The study aimed to move beyond simplistic narratives of automation-driven job loss, providing a more nuanced understanding of AI’s varied impacts across sectors and demographic groups, while highlighting strategies to foster resilience and inclusion.
Future Research and Policy Recommendations
In light of the findings and limitations identified, several avenues for future research and policy development are recommended:
- Incorporate Systematic Review Elements: Future studies could adopt elements of systematic review or meta-analytical approaches to improve methodological rigour and enable quantitative synthesis, particularly in assessing sectoral vulnerabilities and policy effectiveness across diverse contexts.
- Deepen Analysis of Underexplored Areas: There is a need for more focused examination of underexplored dimensions such as the psychological impacts of AI, trust in workplace automation, and shifts in organisational culture and identity.
- Update and Expand the Literature Base: Given the rapid pace of AI innovation and its evolving labour market implications, future reviews should be regularly updated to capture emerging trends, technologies, and policy interventions.
- Include Primary Research: Supplementing secondary analyses with primary data—such as interviews with workers, employers, or policymakers—would enhance contextual specificity and empirical validity.
- Conduct Comparative Policy Analysis: A systematic cross-national comparison of AI governance strategies, workforce reskilling initiatives, and ethical frameworks could offer valuable insights into best practices for managing AI-driven labour transitions.
- Develop Actionable Policy Recommendations: Ultimately, translating research findings into practical, evidence-based recommendations for governments, educational institutions, and industry stakeholders is crucial to fostering equitable, inclusive, and future-ready labour markets.
These directions aim to bridge the gap between theory and practice, ensuring that research not only interprets technological disruption but also actively informs the design of socially just and inclusive AI futures.
Recap of Key Findings and Contributions
Key findings of this review include:
- AI is augmenting as much as it is displacing: While specific routine and clerical tasks are at high risk of automation, many professions are experiencing task-level augmentation rather than wholesale elimination (Acemoglu et al., 2022; Handel, 2022).
- Risks are unevenly distributed: Clerical workers, women, older workers, and employees in high-exposure sectors face disproportionate risks of displacement (Gmyrek et al., 2023; Marquis et al., 2024).
- Skills gaps and adaptation challenges are profound: Reskilling needs are urgent and large-scale, yet current training ecosystems are inadequate to meet demand (Li, 2022; Olaniyi et al., 2024).
- Organisational and policy responses are pivotal: Companies that strategically align AI with human-centred collaboration models (Ghosh & Sadeghian, 2024; Tenakwah & Watson, 2024) and governments that adopt inclusive, adaptive policy frameworks (Raj, 2025; Panchenko, 2025) can mitigate risks and promote equitable outcomes.
- Ethical considerations must be embedded: Transparency, fairness, explainability, and inclusive access to AI-related opportunities are essential to ensure that AI adoption does not deepen existing inequalities (Abhulimen & Ejike, 2024; Olateju et al., 2024).
Practical Implications
For policymakers, the findings underscore the urgency of:
- Implementing inclusive and accessible reskilling programmes.
- Developing transparent, participatory AI governance frameworks.
- Designing sector-specific strategies to manage transition risks and create new employment pathways.
For employers, the review suggests:
- Designing AI adoption to augment human capabilities, not replace them.
- Fostering AI literacy and building trust through transparency and fairness.
- Embedding lifelong learning into organisational culture.
For workers and civil society, the results highlight:
- The importance of engaging with AI developments proactively.
- Advocating for ethical AI use and worker protections.
- Participating in shaping inclusive AI futures through social dialogue.
This narrative review reveals that AI’s impact on job security is a contested and negotiated process, not an inevitable trajectory. Whether AI contributes to greater prosperity and well-being—or to widening inequalities and job precarity—depends fundamentally on human choices and institutional responses.
Policymakers, employers, educators, and civil society actors all have a role to play in ensuring that AI adoption is ethical, inclusive, and human-centred. The coming years will be pivotal: decisions made today will shape the contours of work and social justice in the age of AI.
Study Contribution to Academia
This study contributes to academic discourse in several ways:
- It offers one of the most integrative reviews to date, combining economic, organisational, and ethical analyses of AI’s labour market impacts.
- It advances theoretical frameworks by emphasising sociotechnical systems and human-centred AI paradigms (Baxter & Sommerville, 2011; Shneiderman, 2020), moving beyond technological determinism.
- It foregrounds intersectional and equity considerations, enriching economic analysis with insights from critical social theory (Crenshaw, 1991; Eubanks, 2018).
- It provides a roadmap for future interdisciplinary research that addresses current methodological gaps and deepens understanding of AI’s evolving role in the world of work.
Ultimately, this study aims to help academia move the debate forward: from describing AI’s disruptive potential to shaping strategies for equitable and sustainable integration of AI into the future of employment.
REFERENCES
- Abhulimen, A. O., & Ejike, O. G. (2024). Ethical considerations in AI use for SMEs and supply chains: Current challenges and future directions. International Journal of Applied Research in Social Sciences, 6(8). https://doi.org/10.51594/ijarss.v6i8.1391
- Acemoglu, D., Autor, D., Hazell, J., & Restrepo, P. (2022). Artificial intelligence and jobs: Evidence from online vacancies. Journal of Labor Economics, 40(S1), S293–S340. https://doi.org/10.1086/718327
- Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3–30. https://doi.org/10.1257/jep.29.3.3
- Badhurunnisa, M., & Dass, V. S. (2023). Challenges and opportunities involved in implementing AI in workplace. International Journal for Multidisciplinary Research, 5(6). https://doi.org/10.36948/ijfmr.2023.v05i06.10001
- Baxter, G., & Sommerville, I. (2011). Socio-technical systems: From design methods to systems engineering. Interacting with Computers, 23(1), 4–17. https://doi.org/10.1016/j.intcom.2010.07.003
- Chapman, M. (2025, April 29). UPS to cut 20,000 jobs, close some facilities as it reduces amount of Amazon shipments it handles. AP News. https://apnews.com/article/ups-amazon-ece105621fe23b2d0de76a2247df6b8b
- Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. MIT Press.
- Crenshaw, K. (1991). Mapping the margins: Intersectionality, identity politics, and violence against women of colour. Stanford Law Review, 43(6), 1241–1299.
- The Economic Times. (2025, June 9). Company that sacked 700 workers with AI now regrets it — scrambles to rehire as automation goes horribly wrong. The Economic Times. https://economictimes.indiatimes.com/news/international/us/company-that-sacked-700-workers-with-ai-now-regrets-it-scrambles-to-rehire-as-automation-goes-horribly-wrong/articleshow/121732999.cms?from=mdr
- Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
- Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. https://doi.org/10.1016/j.techfore.2016.08.019
- Ghosh, K., & Sadeghian, S. (2024). The impact of AI on perceived job decency and meaningfulness: A case study. arXiv. https://doi.org/10.48550/arXiv.2406.14273
- Gmyrek, P., Berg, J., & Bescond, D. (2023). Generative AI and jobs: A global analysis of potential effects on job quantity and quality. International Labour Organization. https://doi.org/10.54394/fhem8239
- Handel, M. (2022). Growth trends for selected occupations considered at risk from automation. Monthly Labor Review. https://doi.org/10.21916/mlr.2022.21
- Huertas, J. L., Vallejos, A. R., Zarate, L. F., & Malvaceda-Espinoza, E. (2025). Percepción sobre la aplicación de la inteligencia artificial en personal administrativo de una empresa de Lima Metropolitana. ECONDATA. https://doi.org/10.56205/econdata.1-1.3
- Jadhav, R. D., & Banubakode, A. (2024). The implications of artificial intelligence on the employment sector. International Journal For Multidisciplinary Research, 6(3). https://doi.org/10.36948/ijfmr.2024.v06i03.22716
- Keynes, J. M. (1930). Economic possibilities for our grandchildren. In Essays in persuasion. Macmillan.
- Korn, J. (2024, January 9). Duolingo lays off staff as language learning app shifts toward AI. CNN. https://edition.cnn.com/2024/01/09/tech/duolingo-layoffs-due-to-ai
- Li, L. (2024). Reskilling and upskilling the future-ready workforce for Industry 4.0 and beyond. Information Systems Frontiers, 26, 1697–1712. https://doi.org/10.1007/s10796-022-10308-y
- Loyola, R. D., Escototo, E., Columida, N., & Jr., E. D. P. (2024). Sentiment analysis of AI’s impact on labor market: Opportunities and threats. Philippine Journal of Science, Engineering, and Technology, 1(1). https://doi.org/10.63179/pjset.v1i1.50
- Marquis, Y. A., Oladoyinbo, T. O., Olabanji, S. O., Olaniyi, O. O., & Ajayi, S. A. (2024). Proliferation of AI tools: A multifaceted evaluation of user perceptions and emerging trend. Asian Journal of Advanced Research and Reports, 18(1), 30–55. https://doi.org/10.9734/ajarr/2024/v18i1596
- Miah, M. (2024). Unveiling the evolutionary impact of artificial intelligence on the workforce. Informatică Economică, 28(1), 45–56. https://doi.org/10.24818/issn14531305/28.1.2024.04
- Olaniyi, O. O., Ezeugwa, F. A., Okatta, C. G., Arigbabu, A. S., & Joeaneke, P. C. (2024). Dynamics of the digital workforce: Assessing the interplay and impact of AI, automation, and employment policies. Archives of Current Research International, 24(5), 1–16. https://doi.org/10.9734/acri/2024/v24i5690
- Olateju, O., Okon, S. U., Olaniyi, O. O., Samuel-Okon, A. D., & Asonze, C. U. (2024). Exploring the concept of explainable AI and developing information governance standards for enhancing trust and transparency in handling customer data. Journal of Engineering Research and Reports, 26(7), 124–135. https://doi.org/10.9734/jerr/2024/v26i71206
- Panchenko, O. (2025). The impact of robotization on the labour market: Trends and challenges. Economic Scope, 200, 79–83. https://doi.org/10.30838/ep.200.79-83
- Pavashe, A. S., Kadam, P. D., Zirange, V. B., & Katkar, R. D. (2023). The impact of artificial intelligence on employment and workforce trends in the post-pandemic era. International Journal for Research in Applied Science and Engineering Technology, 11(6), 1238–1247. https://doi.org/10.22214/ijraset.2023.56418
- Raj, R. (2025). Regulating artificial intelligence and its impact on employment in India: Global trends and strategic legal pathways. International Journal of Research Publication and Reviews, 6(5), 123–134. https://doi.org/10.55248/gengpi.6.0525.1636
- Reuters. (2024, August 27). Sweden’s Klarna says AI chatbots help shrink headcount. https://www.reuters.com/technology/artificial-intelligence/swedens-klarna-says-ai-chatbots-help-shrink-headcount-2024-08-27/
- Reuters. (2025, May 12). Chegg to lay off 22% of workforce as it leans into AI. https://www.reuters.com/world/americas/chegg-lay-off-22-workforce-ai-tools-shake-up-edtech-industry-2025-05-12/
- Reuters. (2025, May 7). CrowdStrike to lay off 5% of staff, reaffirms forecasts. https://www.reuters.com/sustainability/crowdstrike-lay-off-5-staff-reaffirms-forecasts-2025-05-07/
- Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504. https://doi.org/10.1080/10447318.2020.1741118
- Siemiatkowski, S. (2025, June 8). Klarna CEO warns AI may cause a recession as the technology comes for white-collar jobs. Business Insider; The Times. https://www.businessinsider.com/klarna-ceo-ai-may-cause-recession-white-collar-jobs-threat-2025-6
- TechRadar. (2025, May 10). The Gr-AI-m Reaper: Hundreds of jobs at IBM and CrowdStrike vanish as artificial intelligence makes humans more dispensable. https://www.techradar.com/pro/security/the-gr-ai-m-reaper-hundreds-of-jobs-at-ibm-and-crowdstrike-vanish-as-artificial-intelligence-makes-humans-more-dispensable
- Tenakwah, E. S., & Watson, C. (2025). Embracing the AI/automation age: Preparing your workforce for humans and machines working together. Strategy & Leadership, 53(1), 32–48. https://doi.org/10.1108/SL-05-2024-0040
- Davis, W. (2023, July 8). Gizmodo’s staff isn’t happy about G/O Media’s AI‑generated content. The Verge. https://www.theverge.com/2023/7/8/23788162/gizmodo-g-o-media-ai-generated-articles-star-wars
- V, V., Gambhir, V., & Gill, A. (2024). Understanding the societal impacts of artificial intelligence and machine learning on employment and workforce dynamics. In 2024 International Conference on Advances in Computing Research on Science Engineering and Technology (ACROSET). https://doi.org/10.1109/ACROSET62108.2024.10743405
- Altchek, A. (2025, June 9). Workers need a “mind shift” amid the AI revolution, says Duolingo CEO. Business Insider. https://www.businessinsider.com/duolingo-ceo-ai-workers-mind-shift-ai-revolution-jobs-2025-6