Cultural Bias in Machine Learning Systems: A Philosophical and Empirical Study of Algorithmic Knowledge Production

Authors

Nabulongo Ali

Kampala International University, Uganda

Peter Both Goah Wiech

Kampala International University, Uganda

Katwesigye Collins

Kampala International University, Uganda

Specioza Asiimwe

Kampala International University, Uganda

Article Information

DOI: 10.47772/IJRISS.2026.100300368

Subject Category: Social science

Volume/Issue: 10/3 | Page No: 4955-4970

Publication Timeline

Submitted: 2026-03-12

Accepted: 2026-03-17

Published: 2026-04-09

Abstract

Machine learning systems increasingly function as epistemic infrastructures in high-stakes domains such as criminal justice, healthcare, finance, and employment, yet their outputs are frequently treated as objective and neutral forms of knowledge. This study advances a synthesis of empirical and philosophical inquiry into cultural bias in machine learning, arguing that algorithms operate as sociotechnical agents embedded within historically situated structures of power and representation. A quantitative experimental design was applied to the COMPAS recidivism dataset (N = 7,214) to examine predictive disparities across protected attributes, specifically race and sex. Logistic Regression and Random Forest models were implemented within a controlled preprocessing pipeline and evaluated using standard performance metrics (accuracy, precision, recall, and F1-score), alongside subgroup fairness measures including false positive rates (FPR), false negative rates (FNR), and disparate impact ratios. To ensure robustness, subgroup disparities were further assessed with statistical significance tests. While aggregate model performance was moderate, subgroup analysis revealed consistent and structured disparities: African-American defendants exhibited elevated false positive rates, whereas female defendants and members of underrepresented racial groups experienced disproportionately high false negative rates. These patterns persisted across model architectures, indicating that the bias is structurally embedded in the data rather than solely a function of model design. Extreme subgroup values should nevertheless be interpreted with caution, given potential sample-size imbalances within certain demographic categories. The findings challenge the assumption of epistemic neutrality in algorithmic systems, demonstrating that machine learning models participate in the cultural production of knowledge by reproducing historically grounded classifications and power asymmetries. The study argues that algorithmic outputs should be evaluated not only in terms of predictive performance but also through fairness-aware, context-sensitive frameworks that account for their broader ethical and epistemological implications.
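
To make the subgroup audit concrete, the sketch below reconstructs it in Python with scikit-learn. It is an illustrative reconstruction under stated assumptions, not the authors' published pipeline: synthetic data stands in for the real COMPAS release, and the column names (age, priors_count, race, sex, two_year_recid) follow the public ProPublica schema. It trains the two model families named above and reports per-subgroup FPR, FNR, and disparate impact ratios.

    # Illustrative reconstruction of the subgroup fairness audit (assumed
    # schema; not the authors' published code). Replace the synthetic frame
    # with the real COMPAS table to run a comparable analysis.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "age": rng.integers(18, 70, n),
        "priors_count": rng.poisson(2.0, n),
        "race": rng.choice(["African-American", "Caucasian", "Hispanic"], n),
        "sex": rng.choice(["Male", "Female"], n),
    })
    df["two_year_recid"] = rng.integers(0, 2, n)  # binary recidivism label

    X = pd.get_dummies(df.drop(columns="two_year_recid"), drop_first=True)
    y = df["two_year_recid"]
    X_tr, X_te, y_tr, y_te, df_tr, df_te = train_test_split(
        X, y, df, test_size=0.3, random_state=42)

    def error_rates(y_true, y_pred):
        """Return (FPR, FNR): FPR = FP/(FP+TN), FNR = FN/(FN+TP)."""
        fp = int(((y_pred == 1) & (y_true == 0)).sum())
        tn = int(((y_pred == 0) & (y_true == 0)).sum())
        fn = int(((y_pred == 0) & (y_true == 1)).sum())
        tp = int(((y_pred == 1) & (y_true == 1)).sum())
        fpr = fp / (fp + tn) if (fp + tn) else float("nan")
        fnr = fn / (fn + tp) if (fn + tp) else float("nan")
        return fpr, fnr

    for model in (LogisticRegression(max_iter=1000),
                  RandomForestClassifier(random_state=42)):
        model.fit(X_tr, y_tr)
        # Keep predictions index-aligned with the held-out rows.
        pred = pd.Series(model.predict(X_te), index=X_te.index)
        print(type(model).__name__)
        for attr in ("race", "sex"):
            # Disparate impact: each subgroup's positive-prediction rate
            # divided by the highest subgroup rate (1.0 = parity; values
            # below 0.8 trip the common "four-fifths rule").
            sel = pred.groupby(df_te[attr]).mean()
            di = sel / sel.max()
            for group in sel.index:
                idx = df_te.index[df_te[attr] == group]
                fpr, fnr = error_rates(y_te.loc[idx], pred.loc[idx])
                print(f"  {attr}={group}: FPR={fpr:.2f} "
                      f"FNR={fnr:.2f} DI={di[group]:.2f}")

On the real data, the disparities reported above would surface directly in this per-group printout (elevated FPR for African-American defendants, elevated FNR for female defendants); the NaN guards in error_rates correspond to the small-sample caveat noted in the abstract.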

Keywords

Algorithmic Bias; Machine Learning Fairness; Cultural Bias; Disparate Impact
