AI Bias and Its Implications for the Hiring Process, Lending, and Consumer Analytics in the USA

Authors

Kwame Amponsah

College of Business, Westcliff University, Los Angeles, CA (United States)

Frank Boakye

University of Memphis, Memphis, Tennessee (United States)

Mark Osei Boateng

University of Memphis, Memphis, Tennessee (United States)

Opoku-Asamoah Fred

University of Memphis, Memphis, Tennessee (United States)

Nana Opoku Justice

University of Memphis, Memphis, Tennessee (United States)

Article Information

DOI: 10.51244/IJRSI.2026.130300009

Subject Category: Artificial Intelligence

Volume/Issue: 13/3 | Page No: 80-94

Publication Timeline

Submitted: 2026-03-05

Accepted: 2026-03-10

Published: 2026-03-24

Abstract

The incorporation of Artificial Intelligence (AI) into hiring, lending, and consumer analytics has transformed these procedures through data-driven decision-making, increased efficiency, and reduced processing time. This article explores the multidimensional nature of bias in AI-based hiring systems, lending systems, and consumer analytics, spotlighting how historical data, feature selection, and model design can unintentionally reinforce existing economic, workplace, and societal inequalities. By examining real-life case studies and analyzing the machine learning models commonly used in these processes, this study identifies sources of bias and their possible implications for underrepresented groups.
To address these biases, the paper draws on existing literature to recommend strategies for developing fair systems, including regular auditing protocols, diverse training datasets, and bias mitigation techniques. Moreover, relying on reputable sources, it emphasizes the importance of ensuring trustworthiness and ethical alignment throughout these procedures. The paper aims to offer practical insights for policymakers, human resource professionals, and developers seeking to build and adopt AI-driven hiring, lending, and consumer analytics solutions that are both efficient and equitable. As AI continues to reshape these domains, guaranteeing fairness throughout the process is crucial to establishing diverse and inclusive models.
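The auditing protocols the abstract recommends can be made concrete with a small illustrative check. The Python sketch below (all data, group labels, and numbers are hypothetical, not drawn from the article) computes the demographic parity difference between two groups' approval rates, one common fairness metric in the bias-auditing literature:

```python
# Hypothetical auditing sketch: demographic parity difference.
# Measures the gap in positive-decision rates between groups.
# All labels and outcomes below are illustrative, not from the article.

def demographic_parity_difference(outcomes, groups):
    """Return |P(positive | group A) - P(positive | group B)|.

    outcomes: list of 0/1 model decisions (1 = hired/approved).
    groups:   list of group labels, one per outcome (exactly two groups).
    """
    rates = {}
    for g in set(groups):
        selected = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    a, b = rates.values()
    return abs(a - b)

# Illustrative audit: group "A" approved 3 of 4, group "B" approved 1 of 4.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(round(demographic_parity_difference(outcomes, groups), 2))  # 0.5
```

A recurring audit would track this gap over time and flag values above an agreed threshold for human review; in practice, established libraries provide hardened versions of such metrics.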

Keywords

AI bias, Algorithmic discrimination, Hiring practices, Lending decisions, Consumer analytics, Ethical AI, United States, Machine Learning


References

1. Adewale, G. T., Umavezi, J. U., & Odumuwagun, O. O. (2025). Innovations in lending-focused FinTech: Leveraging AI to transform credit accessibility and risk assessment. International Journal of Computer Applications Technology and Research. https://doi.org/10.7753/ijcatr1401.1004

2. Ahuchogu, M. C., Musa, G. F., Howard, E., & Mathur, K. (2025). AI and bias in recruitment: Ensuring fairness in algorithmic hiring. Journal of Informatics Education and Research, 5(3). https://doi.org/10.52783/jier.v5i3.3262

3. Akter, S., Dwivedi, Y. K., Biswas, K., Michael, K., Bandara, R. J., & Sajib, S. (2021). Addressing algorithmic bias in AI-driven customer management. Journal of Global Information Management, 29(6), 1-27. https://doi.org/10.4018/jgim.20211101.oa3

4. Bahangulu, J. K., & Owusu-Berko, L. (2025). Algorithmic bias, data ethics, and governance: Ensuring fairness, transparency and compliance in AI-powered business analytics applications. World Journal of Advanced Research and Reviews, 25(2), 1746-1763. https://doi.org/10.30574/wjarr.2025.25.2.0571

5. Balamurugan, M., Shanmugasamy, K., & Balaguru, S. (2025). Humans in the loop, lives on the line: AI in high-risk decision making. European Journal of Computer Science and Information Technology, 13(51), 27-31. https://doi.org/10.37745/ejcsit.2013/vol13n512731

6. Bhutta, N., Hizmo, A., & Ringo, D. (2024). How much does racial bias affect mortgage lending? Evidence from human and algorithmic credit decisions. Working paper (Federal Reserve Bank of Philadelphia). https://doi.org/10.21799/frbp.wp.2024.09

7. Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10(1). https://doi.org/10.1057/s41599-023-02079-x

8. Davtyan, N. (2024). AI in consumer behavior analysis and digital marketing: A strategic approach. The Integration of AI and Technology in Modern Business Practices, 61-70. https://doi.org/10.70301/conf.sbs-jabr.2024.1/1.5

9. Funda, V. (2025). A systematic review of algorithm auditing processes to assess bias and risks in AI systems. Journal of Infrastructure Policy and Development, 9(2), 11489. https://doi.org/10.24294/jipd11489

10. Harris, C. (2023). Mitigating age biases in resume screening AI models. The International FLAIRS Conference Proceedings, 36. https://doi.org/10.32473/flairs.36.133236

11. Hofmann, B. (2025). Biases in AI: Acknowledging and addressing the inevitable ethical issues. Frontiers in Digital Health, 7. https://doi.org/10.3389/fdgth.2025.1614105

12. John, A., Elly, A., & Wood, D. (2025). Addressing bias and fairness in AI-enabled hiring and financial systems. SSRN. https://doi.org/10.2139/ssrn.5226418

13. Krishnan, S. (2024). Leadership in the age of artificial intelligence (AI). The Integration of AI and Technology in Modern Business Practices, 43-51. https://doi.org/10.70301/conf.sbs-jabr.2024.1/1.3

14. Langenbucher, K. (2020). Responsible A.I.-based credit scoring - A legal framework. European Business Law Review, 31(4), 527-572. https://doi.org/10.54648/eulr2020022

15. Mak, M., & Luo, T. (2025). A framework for evaluating cultural bias and historical misconceptions in LLMs outputs. BenchCouncil Transactions on Benchmarks, Standards and Evaluations, 5(3), 100235. https://doi.org/10.1016/j.tbench.2025.100235

16. Mavrogiorgos, K., Kiourtis, A., Mavrogiorgou, A., Menychtas, A., & Kyriazis, D. (2024). Bias in machine learning: A literature review. Applied Sciences, 14(19), 8860. https://doi.org/10.3390/app14198860

17. Milne, S. (2024, October 31). AI tools show biases in ranking job applicants' names according to perceived race and gender. UW News. https://www.washington.edu/news/2024/10/31/ai-bias-resume-screening-race-gender/

18. Murikah, W., Nthenge, J. K., & Musyoka, F. M. (2024). Bias and ethics of AI systems applied in auditing - A systematic review. Scientific African, 25, e02281. https://doi.org/10.1016/j.sciaf.2024.e02281

19. NIST. (2025, May 5). AI risk management framework. https://www.nist.gov/itl/ai-risk-management-framework

20. Oguntibeju, O. O. (2024). Mitigating artificial intelligence bias in financial systems: A comparative analysis of debiasing techniques. Asian Journal of Research in Computer Science, 17(12), 165-178. https://doi.org/10.9734/ajrcos/2024/v17i12536

21. Oladinni, A. (2025). AI-driven credit analytics: Enhancing efficiency and fairness in loan approval processes. International Research Journal of Modernization in Engineering Technology and Science, 7(1). https://doi.org/10.56726/IRJMETS66195

22. Paleti, S. (2025). Transforming money transfers and financial inclusion: The impact of AI-powered risk mitigation and deep learning-based fraud prevention in cross-border transactions. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.5158588

23. Rane, J., Chaudhari, R. A., & Rane, N. L. (2025). Ethical considerations and bias detection in artificial intelligence/machine learning applications. Deep Science Publishing.

24. Rigotti, C., & Fosch-Villaronga, E. (2024). Fairness, AI & recruitment. Computer Law & Security Review, 53, 105966. https://doi.org/10.1016/j.clsr.2024.105966

25. Sele, D., & Chugunova, M. (2024). Putting a human in the loop: Increasing uptake, but decreasing accuracy of automated decision-making. PLOS ONE, 19(2), e0298037. https://doi.org/10.1371/journal.pone.0298037

26. Soleimani, M., Intezari, A., Arrowsmith, J., Pauleen, D. J., & Taskin, N. (2025). Reducing AI bias in recruitment and selection: An integrative grounded approach. The International Journal of Human Resource Management, 36(14), 2480-2515. https://doi.org/10.1080/09585192.2025.2480617

27. Sterling, J. Y. (2025). The great unbundling: How artificial intelligence is redefining the value of a human being. J. Y. Sterling.

28. Tigges, M., Mestwerdt, S., Tschirner, S., & Mauer, R. (2024). Who gets the money? A qualitative analysis of fintech lending and credit scoring through the adoption of AI and alternative data. Technological Forecasting and Social Change, 205, 123491. https://doi.org/10.1016/j.techfore.2024.123491
