Reclaiming Autonomy in the Age of AI: A Conceptual Framework Linking Algorithmic Management and Job Happiness
Abd Rahman Ahmad1, Hairul Rizad Md Sapry2, Alaa S Jameel3
1Johor Business School, University Tun Hussein Onn Malaysia, 86400 Batu Pahat, Johor, MALAYSIA
2University Kuala Lumpur (UNIKL) Kampus Cawangan Malaysian Institute of Industrial Technology, Johor, MALAYSIA
3Department of Business Administration, Al-Idrisi University College, Ramadi, Al-Anbar, IRAQ
DOI: https://dx.doi.org/10.47772/IJRISS.2025.908000597
Received: 20 August 2025; Accepted: 28 August 2025; Published: 23 September 2025
ABSTRACT
The recruitment, monitoring, and evaluation of employees are being transformed by the integration of algorithmic systems into human resource management (HRM). Although artificial intelligence (AI) offers unparalleled scalability and efficiency, its growing influence raises significant concerns regarding psychological well-being, trust, and employee autonomy. The emergence of algorithmic management has reshaped the workplace and produced new power dynamics: decisions are becoming more automated, opaque, and difficult to contest. These dynamics directly shape how employees perceive motivation, fairness, and satisfaction at work. This paper introduces a conceptual framework that links algorithmic management to job happiness through the mediating roles of perceived autonomy and trust in AI systems. It further suggests that the effects of algorithmic systems on employee outcomes can be mitigated by human-centered AI practices, including transparency, explainability, and participatory design. By incorporating self-determination theory and fairness theory, the framework underscores the need for organisations to treat AI adoption as a profoundly human endeavour rather than merely a technical process. By emphasising employee experiences and psychological needs, this study contributes to ongoing discussions on ethical AI and responsible digital transformation in HRM. It also offers practical guidance for organisations striving to reconcile technological advancement with human dignity and workplace happiness, as well as a theoretical foundation for future empirical research.
Keywords— algorithmic management, human resource management, artificial intelligence, job happiness, human-centered design
INTRODUCTION
Researchers such as Jarrahi et al. (2021) and Meijerink et al. (2021) have noted that artificial intelligence (AI) has emerged as a pivotal element of contemporary human resource management (HRM), transcending mere automation to shape essential decision-making processes. As organisations progressively incorporate AI into functions such as recruitment, performance assessment, and workforce planning, the ramifications extend beyond operational efficiency to profoundly influence employee work experiences. In this changing milieu, algorithmic management has arisen as a potent yet contentious phenomenon. Characterised as the application of algorithms to monitor, assess, and manage employees (Lee et al., 2022), it has garnered acclaim for mitigating human bias while facing criticism for engendering opaque, unassailable mechanisms of control. For example, corporations such as Uber and Amazon have implemented algorithmic methods to allocate jobs, oversee productivity, and determine promotions or disciplinary actions, frequently without direct human intervention (Calvard & Jeske, 2022). This transition has rekindled essential questions of trust, accountability, and dignity within the workplace.
Research by Ajunwa (2021) and Zhou et al. (2021) critically shows that growing dependence on algorithmic decision-making may undermine employee autonomy and psychological safety, particularly when systems lack transparency or explainability. Employees may encounter rules they do not comprehend, assessments they cannot dispute, and feedback delivered by automated systems instead of supervisors. This violates a fundamental principle of motivational psychology: the human need for autonomy and self-governance. Concurrently, trust in AI remains a double-edged sword. Some employees value the perceived impartiality of AI-driven evaluations, whereas others report feelings of alienation and dehumanisation (Logg et al., 2019). Trust encompasses not only accuracy but also the perception that systems are equitable, courteous, and congruent with human values (Shneiderman, 2020).
Notwithstanding the seriousness of these matters, current research has predominantly concentrated on technical performance, regulatory frameworks, or managerial implementation, frequently overlooking the human experience. Attention has seldom been directed towards the impact of algorithmic systems on workers’ perceived autonomy, trust in technology, and, consequently, their job happiness—a concept that includes both emotional well-being and workplace contentment (Chamorro-Premuzic et al., 2019; Ryan & Deci, 2020).
This research presents a conceptual framework that examines the influence of algorithmic management on job happiness, mediated by perceived autonomy and trust in AI systems. Grounded in self-determination theory, fairness theory, and socio-technical systems thinking, the framework promotes a transition from an exclusively efficiency-oriented view of AI to a human-centric paradigm. We contend that such a transformation is essential not just for organisational success but also for maintaining ethical, inclusive, and psychologically healthy workplaces in the era of intelligent systems. This research makes two key contributions. First, it establishes job happiness, frequently neglected in AI-HRM discussions, as a strategic and ethical necessity. Second, it identifies autonomy and trust as critical factors through which algorithmic management may either facilitate or undermine meaningful work. From this perspective, we present a framework for forthcoming empirical investigations and ethical human resource management practices.
LITERATURE REVIEW
The Rise of Algorithmic Management in Human Resource Management
Balestra et al. (2022) delineate the advancement of human resource technologies as a transition from administrative digitisation to autonomous decision-making, fundamentally transforming HRM principles. What commenced as basic digital instruments for record-keeping and payroll has evolved into sophisticated systems capable of candidate screening, performance tracking, and real-time evaluation generation. In this setting, algorithmic management, the application of AI to oversee, assess, and direct human labour, has become a fundamental aspect of the contemporary digital workplace.
In contrast to previous HR automation, algorithmic systems embed managerial logic into software and execute decisions with minimal human supervision. Duggan et al. (2020) observe that this transformation shifts organisations from human-enhancing to human-substituting models, especially within large and remote workforces. Platforms such as HireVue and Pymetrics increasingly utilise facial recognition and psychometric analysis for interviews, frequently circumventing conventional human evaluation in the initial employment phases (Raisch & Krakowski, 2021). The salient features of algorithmic management extend well beyond efficiency. Core elements include real-time monitoring of employee conduct, predictive analytics for performance outcomes, and automated task assignment, frequently guided by opaque or proprietary algorithms. Giermindl et al. (2022) contend that these technologies do not merely “assist” HR practitioners; they fundamentally reorganise the dynamics of managerial authority and supervision.
Perceived Autonomy in Algorithmic Work Environments
Deci and Ryan (1987) contended that autonomy transcends mere freedom of choice; it encompasses the psychological experience of volition and self-endorsement in one’s actions. As algorithmic management supplants conventional human oversight, this essential experience is progressively eroded. Employees increasingly function under automated, non-negotiable AI-driven rules and performance benchmarks whose workings are frequently invisible to them. The illusion of choice is a prevalent problem on gig economy platforms. Munn (2021) explains that workers may appear autonomous because they can choose shifts or jobs; yet the underlying algorithmic incentives and penalties constrain those choices, restricting their genuine control. This generates a cognitive conflict between apparent and actual autonomy (Cristofaro & Giardino, 2021).
Trust in AI Systems
Trust is the cornerstone of effective algorithmic management. In its absence, even the most advanced AI systems encounter scepticism, resistance, or apathy. In this context, trust encompasses not just faith in the algorithm’s technical precision but also belief in its fairness, transparency, and congruence with human values. Glikson and Woolley (2020) assert that trust in AI is inherently relational; employees assess not only the system’s output but also infer its intent. If an algorithm yields consistent outcomes yet remains inscrutable or unresponsive, employees may question whether it considers their interests. Trust diminishes when systems appear impersonal, remote, or unaccountable.
Job Happiness in the Algorithmic Era
Job happiness is widely regarded as a vital indicator of sustainable organisational effectiveness, especially in knowledge-driven and technologically mediated work settings. In contrast to job satisfaction, which pertains to transient assessments of particular employment elements, job happiness embodies a deeper and more lasting emotional attachment to one’s work (Fisher, 2010). This distinction is crucial in algorithmic contexts, where routinisation and monitoring may diminish the emotional experience of work. Lyubomirsky and Lepper (1999) contend that happiness arises from both inherent dispositions and contextual stimuli. In conventional workplaces, interactions with superiors and peers foster a sense of belonging and significance. Algorithmic systems, however, frequently remove these human interactions, substituting efficiency for empathy.
Algorithmic Management and Employee Experience Integration
Employee experience (EX) has become a strategic necessity for organisations aiming to attract, engage, and retain talent in a progressively digital environment. It encompasses all interactions employees have with their organisation, from recruitment and onboarding to everyday operations, performance evaluation, and exit procedures. As algorithmic management (AM) increasingly characterises contemporary workplaces, the challenge lies in incorporating automation into EX in a manner that enriches, rather than diminishes, the human aspect of work. Morgan (2017) contends that employee experience is shaped by three environments: the physical, the cultural, and the technological. Algorithmic systems function primarily within the technological realm, yet they also exert influence on culture and perception. When algorithms are regarded as equitable, empowering, and beneficial, they enhance EX. Conversely, when perceived as intrusive or arbitrary, they undermine employee trust and emotional engagement.
Theoretical Synthesis and Research Gaps
An extensive synthesis of the algorithmic management literature reveals a complex, interdisciplinary landscape, integrating concepts from information systems, organisational behaviour, human resource management, and psychology. Despite increasing interest, substantial gaps persist in elucidating how algorithmic systems influence the lived experience of work, especially regarding autonomy, trust, and well-being. Puranam, Alexy, and Reitzig (2014) contend that coordination without hierarchy is becoming progressively attainable via digital systems. Algorithmic management illustrates this potential; nonetheless, most of the literature continues to interpret it through the lens of control rather than empowerment. This indicates a need to move beyond conventional supervisory models and embrace theories that more accurately represent platform-mediated and AI-enhanced work environments.
CONCEPTUAL FRAMEWORK
The growing prevalence of algorithmic systems in management has transformed contemporary work environments. These systems standardise procedures and enhance efficiency while also transforming the dynamics of authority, decision-making, and employee experience. The transition from human discretion to computational oversight amplifies the emotional repercussions for employees. In this context, the present conceptual framework positions Algorithmic Management (AM) as the primary influencing factor that indirectly affects Job Happiness through three essential mediators: Perceived Autonomy, Trust in AI Systems, and Employee Experience (EX). The framework does not merely assess technological functionality; rather, it highlights the human-centred outcomes that emerge when employees engage with non-human agents within organised control systems. By integrating concepts from organisational psychology, human-computer interaction, and digital labour research, it aims to enhance our comprehension of how algorithmic settings may facilitate or impede emotional and motivational well-being at work.
Underpinning Theories
The basis of the conceptual framework is Self-Determination Theory (SDT), which underscores the importance of fulfilling essential psychological needs (autonomy, competence, and relatedness) for optimal functioning and well-being (Ryan & Deci, 2017). Algorithmic systems that restrict decision-making autonomy or lack substantial participation may thwart these needs and lead to reduced job happiness. Sociotechnical Systems Theory complements SDT by offering a structural viewpoint on the interplay between social behaviours and technological design (Trist & Bamforth, 1951; Mumford, 2006). In algorithmically governed settings, tasks are often fragmented, impersonal, and controlled by digital feedback systems, conditions that demand a reassessment of the balance between technological efficiency and human values. Organisational Support Theory posits that perceived organisational support is closely linked to emotional involvement and positive work attitudes (Eisenberger et al., 2001). If employees perceive algorithmic decisions as impersonal or biased, the resulting decline in perceived support may adversely affect trust and morale.
Proposed Conceptual Framework
The proposed framework, grounded in empirical evidence and theoretical models, illustrates the indirect pathways by which Algorithmic Management (AM) influences Job Happiness, mediated by three critical constructs: Perceived Autonomy, Trust in AI Systems, and Employee Experience. Studies indicate that AM frequently diminishes autonomy (Meijerink & Bondarouk, 2021), erodes trust when decision-making is non-transparent (Anjomshoae et al., 2019), and degrades the employee experience when feedback lacks human empathy (Möhlmann & Zalmanson, 2017). Conversely, when autonomy is maintained, trust is fostered, and experiences are positive, these factors substantially improve job happiness (Ryan & Deci, 2017; Bakker & Demerouti, 2017).
Fig. 1. Proposed Conceptual Framework
Framework Summary
The suggested conceptual framework asserts that the relationship between Algorithmic Management (AM) and Job Happiness is indirect, mediated by three principal variables: Perceived Autonomy, Trust in AI Systems, and Employee Experience. Each channel represents a unique psychological or emotional process through which algorithmic systems can affect workers’ well-being.
1) Algorithmic Management → Perceived Autonomy → Job Happiness:
This pathway illustrates how algorithmic technologies can either diminish or enhance employees’ perception of control over their work. When AM imposes stringent schedules, automated assessments, or prescriptive task directives, it may undermine the employee’s capacity to exercise discretion, make decisions, or engage in self-directed work. This loss of volition is psychologically harmful, as autonomy is a crucial requirement for motivation and fulfilment. Conversely, when algorithmic tools are designed with adaptability and user input, such as accommodating scheduling preferences or offering responsive feedback, they can enhance autonomy and foster greater involvement. Autonomy has repeatedly been associated with positive affect, intrinsic motivation, and enduring job happiness. This pathway emphasises that, even in data-driven contexts, preserving human agency is essential for well-being.
2) Algorithmic Management → Trust in AI Systems → Job Happiness: This relationship highlights the cognitive and relational aspects of human–AI interaction. Trust in AI is shaped by perceptions of system dependability, fairness, transparency, and congruence with users’ values. Algorithmic systems that function as “black boxes” or deliver results without transparent justification may provoke scepticism or resistance among employees. When algorithms are explainable, consistent, and regarded as equitable, they cultivate trust, diminish uncertainty, and bolster employees’ emotional stability. Trust serves as a stabilising influence in the workplace, allowing workers to perceive automation not as a threat but as a supportive tool. In this setting, job happiness arises not solely from what the system does, but from the emotional experience it provides to employees: feeling valued, safeguarded, and understood.
3) Algorithmic Management → Employee Experience → Job Happiness: This pathway captures the holistic, emotional, and lived aspects of working in an algorithmically governed environment. Employee experience encompasses daily interactions with technology, communication systems, performance assessments, and feedback processes. If these features are excessively impersonal, mechanically imposed, or emotionally disconnected, they can diminish the overall quality of work life. Conversely, when algorithmic management is implemented with empathy, via personalised feedback, human oversight, or contextual awareness, it can make the work experience more positive and meaningful. A supportive environment cultivates a sense of belonging, recognition, and emotional wellness, which are vital elements of job happiness.
CONCLUSION
As organisations progressively implement algorithmic tools to oversee human labour, it is essential to comprehend the wider implications of these technologies beyond mere efficiency gains. This paper examines the indirect yet significant impact of algorithmic management on job happiness, presenting a conceptual framework based on contemporary theories and empirical studies. The model identifies perceived autonomy, trust in AI systems, and employee experience as essential mediating factors, encapsulating the intricate psychosocial processes that influence worker well-being in digitally managed environments. This framework reconceptualises algorithmic management not merely as an operational transition, but as a matter of human experience that reshapes employees’ emotions, cognition, and relationships with their work. The proposed model emphasises that job happiness cannot be achieved through performance measurement and automated feedback alone; it depends on how algorithmic technologies either support or thwart essential human needs. This conceptual framework provides guidance for forthcoming empirical research aimed at validating the postulated linkages. It also offers practical direction for organisational leaders, system designers, and policymakers seeking to deploy algorithmic technology ethically. As digital management becomes standard, the challenge will extend beyond merely optimising algorithms for productivity; it will involve aligning them with the emotional and psychological dimensions of fulfilling work.
ACKNOWLEDGMENT
The authors would like to acknowledge the support provided by their respective institutions in conducting this research.
REFERENCES
- Ajunwa, I. (2021). Algorithmic management, autonomy, and the future of work. Harvard Law & Policy Review, 15(2), 329–361.
- Anjomshoae, S., Najjar, A., Calvaresi, D., & Främling, K. (2019). Explainable agents and robots: Results from a systematic literature review. Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, 1078–1088.
- Bakker, A. B., & Demerouti, E. (2017). Job demands–resources theory: Taking stock and looking forward. Journal of Occupational Health Psychology, 22(3), 273–285.
- Balestra, M., Elia, G., Margherita, A., & Passiante, G. (2022). Human resource management in the age of artificial intelligence: Emerging roles, challenges, and opportunities. Journal of Business Research, 153, 403–412.
- Calvard, T. S., & Jeske, D. (2022). Algorithmic management, HR analytics and sustainable human resource management: Challenges and opportunities. Human Resource Management Review, 32(4), 100867.
- Chamorro-Premuzic, T., Akhtar, R., Winsborough, D., & Sherman, R. (2019). The datafication of talent: How technology is advancing the science of human potential at work. Current Directions in Psychological Science, 28(1), 46–51.
- Cristofaro, M., & Giardino, P. L. (2021). Algorithmic decision-making and the loss of autonomy: An ethical perspective. Technology in Society, 66, 101662.
- Deci, E. L., & Ryan, R. M. (1987). The support of autonomy and the control of behavior. Journal of Personality and Social Psychology, 53(6), 1024–1037.
- Duggan, J., Sherman, U., Carbery, R., & McDonnell, A. (2020). Algorithmic management and app-work in the gig economy: A research agenda for employment relations and HRM. Human Resource Management Journal, 30(1), 114–132.
- Eisenberger, R., Stinglhamber, F., Vandenberghe, C., Sucharski, I. L., & Rhoades, L. (2001). Perceived supervisor support: Contributions to perceived organizational support and employee retention. Journal of Applied Psychology, 87(3), 565–573.
- Fisher, C. D. (2010). Happiness at work. International Journal of Management Reviews, 12(4), 384–412.
- Giermindl, L. M., Strich, F., Christ, O., & Leicht-Deobald, U. (2022). The dark sides of people analytics: Reviewing the perils for organizations and employees. European Journal of Information Systems, 31(3), 257–284.
- Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660.
- Jarrahi, M. H., Chaudhry, B., & Sutherland, W. (2021). Trust and transparency in algorithmic HRM: A socio-technical perspective. AI & Society, 36(3), 743–755.
- Lee, M. K., Kusbit, D., Metsky, E., & Dabbish, L. (2022). Working with machines: The impact of algorithmic and data-driven management on human workers. Communications of the ACM, 65(1), 54–61.
- Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103.
- Lyubomirsky, S., & Lepper, H. S. (1999). A measure of subjective happiness: Preliminary reliability and construct validation. Social Indicators Research, 46(2), 137–155.
- Meijerink, J., Bondarouk, T., & Lepak, D. (2021). When HRM meets artificial intelligence: A systematic review of research and implications for HRM. The International Journal of Human Resource Management, 32(20), 4285–4322.
- Möhlmann, M., & Zalmanson, L. (2017). Hands on the wheel: Navigating algorithmic management and Uber drivers’ autonomy. Proceedings of the International Conference on Information Systems (ICIS 2017), Seoul.
- Morgan, J. (2017). The Employee Experience Advantage: How to Win the War for Talent by Giving Employees the Workspaces they Want, the Tools they Need, and a Culture They Can Celebrate. Wiley.
- Munn, L. (2021). Algorithmic control: Understanding the role of algorithms in gig work. AI & Society, 36(2), 477–484.
- Mumford, E. (2006). The story of socio-technical design: Reflections on its successes, failures and potential. Information Systems Journal, 16(4), 317–342.
- Puranam, P., Alexy, O., & Reitzig, M. (2014). What’s “new” about new forms of organizing? Academy of Management Review, 39(2), 162–180.
- Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1), 192–210.
- Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological needs in motivation, development, and wellness. Guilford Press.
- Ryan, R. M., & Deci, E. L. (2020). Intrinsic and extrinsic motivation from a self-determination theory perspective: Definitions, theory, practices, and future directions. Contemporary Educational Psychology, 61, 101860.
- Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504.
- Trist, E., & Bamforth, K. (1951). Some social and psychological consequences of the longwall method of coal-getting. Human Relations, 4(1), 3–38.
- Zhou, Y., Arshad, S. Z., Luo, X., & Luo, X. (2021). Fairness and emotion in AI systems: A human-centered perspective. Computers in Human Behavior, 121, 106780.