Emerging Technologies, Hidden Biases: Gender and the New Frontiers of Security
Mahera Imam1, Prof N. Manimekalai2
1Research Scholar, Department of Women’s Studies, Khajamalai Campus, Bharathidasan University, Tiruchirappalli, Tamil Nadu-620023
2Director, Centre for Women’s Development Studies, New Delhi, 110001
DOI: https://dx.doi.org/10.47772/IJRISS.2024.8120342
Received: 17 December 2024; Accepted: 21 December 2024; Published: 23 January 2025
ABSTRACT
The rapid integration of digital technologies into communication, work, healthcare, and security has transformed modern life. However, these advancements often perpetuate or exacerbate existing social inequalities. Emerging technologies, designed within specific social, economic, and cultural frameworks, are imbued with hidden biases that disproportionately affect marginalized groups, particularly women and gender-diverse individuals. This paper first explores algorithmic biases, highlighting how machine learning systems trained on biased datasets reinforce harmful stereotypes and disadvantage underrepresented groups. Second, it addresses surveillance capitalism, drawing on Shoshana Zuboff's framework to analyse how the commodification of personal data disproportionately impacts women, making them vulnerable to privacy violations and exploitation. Third, the paper investigates digital vulnerabilities, focusing on the risks women face in online spaces, such as cyberstalking, harassment, and non-consensual image sharing, illustrated by cases like the Sulli Deals and Bulli Bai apps. Grounded in feminist theory, the paper argues for inclusive and equitable technology design practices. It advocates for diverse datasets, transparent algorithmic decision-making, and robust digital privacy protections to ensure that technology does not reinforce societal inequities but instead serves as a tool for empowerment and equality. By reimagining the priorities of technological development, this analysis calls for a shift towards inclusivity, equity, and safety as foundational principles for the digital age.
Keywords: Algorithmic Bias, Gender Inequality, Surveillance Capitalism, Digital Vulnerabilities, Data Privacy, Technology and Society, Digital Empowerment
INTRODUCTION
Technology and Society—An Unequal Relationship
The impact of digital technology on communication, work, healthcare, and personal security has been transformative, revolutionizing how we connect, work, and access services. Yet this transformation is not without its complications. Scholars argue that technology is rarely developed in isolation from social, economic, and cultural influences, which means it often reflects and sometimes even amplifies the biases and inequalities present in society. This foundational critique holds that digital tools, from search engines to AI algorithms, are designed within a social context, influenced by prevailing values and structures, often resulting in unequal effects for different groups. Safiya Noble's work, Algorithms of Oppression, provides a compelling example of this argument. Noble examines how search engines, particularly Google, function not as neutral intermediaries of information but as tools that reflect and reinforce societal biases. Her research shows how search engine algorithms, by favouring certain types of content over others, can perpetuate harmful stereotypes. In Noble's studies, for example, searches for terms associated with Black women frequently returned results linked to negative stereotypes. Such outcomes reflect the data and keywords that algorithms rely on: data often sourced from a biased society that marginalizes certain groups, particularly women of colour. By privileging specific associations and patterns over others, search engines amplify these social prejudices rather than mitigating them.

This critique extends beyond search engines to other realms of digital technology. In machine learning and AI, for instance, systems are trained on vast datasets that may contain historical biases or incomplete representations of certain groups, particularly women, minorities, and gender-diverse individuals. When algorithms draw from these biased datasets, they are likely to replicate and even intensify existing social hierarchies and stereotypes, affecting everything from hiring decisions to access to healthcare.
To address the deeper implications of these biases, this paper explores three major areas in which digital technology has a gendered impact:
Algorithmic Biases: Algorithms shape our daily interactions with technology, from social media feeds to job applications. However, the data used to train these algorithms often reflects societal biases. This section will delve into how algorithmic design can disadvantage marginalized groups and reinforce harmful stereotypes, with examples from various domains.
Surveillance Capitalism: Shoshana Zuboff’s concept of surveillance capitalism underscores how the commodification of personal data for profit affects individual privacy, with a disproportionately negative impact on women. This section examines how personal data is harvested, used, and often sold, leaving women and marginalized groups vulnerable to exploitation and privacy violations.
Digital Vulnerabilities of Marginalized Groups: The third dimension highlights the specific risks that women and gender-diverse individuals face in digital spaces, such as online harassment, cyberstalking, and non-consensual image sharing. It examines cases like the Sulli Deals and Bulli Bai apps, where women’s images were used without consent, and discusses how current laws and tech policies fall short of adequately addressing such abuses.
By examining the limitations and potential harms embedded in these systems, this paper argues that a feminist framework is essential for developing technology that accounts for the needs and rights of all users. A feminist approach would involve more inclusive design practices, such as diverse datasets, transparency in algorithmic decision-making, and protections for digital privacy. Adopting such an approach can pave the way for a digital future where technology serves as a tool for equality rather than one that reproduces social inequities. This analysis ultimately calls for a shift in how we approach technology, moving from a view that prioritizes efficiency and profit to one that places inclusivity, equity, and safety at its core.
Algorithmic Biases and Gendered Impacts
Algorithms are at the core of AI systems, shaping aspects of our digital experience such as content recommendations, job recruitment, and even law enforcement decisions. However, these algorithms are not neutral; they are trained on data that mirrors societal biases, including those based on race, gender, and class. As a result, algorithms can reproduce and even amplify these biases, creating outcomes that disproportionately harm marginalized groups, especially women and people of colour. Safiya Noble’s research in Algorithms of Oppression provides a striking example of how algorithms can reinforce harmful stereotypes. Noble examined Google’s search algorithms and found that they often associate derogatory stereotypes with terms related to Black women. For example, searches about Black women historically yielded results reinforcing negative stereotypes. Such outcomes not only harm the visibility of marginalized groups but also shape public perceptions, affecting how these groups are understood and treated in society. Noble argues that this reflects a deeper systemic issue: because algorithms are built on biased data, they replicate and spread these biases in digital spaces. Another significant study illustrating algorithmic bias is Joy Buolamwini’s Gender Shades project, which analysed racial and gender biases in facial recognition software. Buolamwini’s research demonstrated that these systems, widely used in public and private sectors, perform poorly in recognizing women and people of colour, particularly darker-skinned women. Error rates for recognizing darker-skinned women were as high as 34%, compared to near-perfect accuracy for lighter-skinned men. The consequences of such inaccuracies are profound, especially when these systems are used in high-stakes areas like surveillance, policing, and border security. Misidentification can lead to privacy invasions, wrongful arrests, and increased scrutiny for people who are already vulnerable to discrimination.
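To make the scale of such disparities concrete, the short sketch below computes error rates separately for each demographic subgroup of a classifier's test set, in the spirit of a Gender Shades style audit. It is a minimal, hypothetical illustration: the subgroup labels and records are invented and do not reproduce Buolamwini's benchmark or figures.

```python
from collections import defaultdict

# Each record: (demographic subgroup, whether the classifier's prediction was correct).
# Hypothetical records for illustration; a real audit would use a labelled benchmark.
results = [
    ("darker-skinned women", False), ("darker-skinned women", False),
    ("darker-skinned women", True),  ("darker-skinned women", False),
    ("lighter-skinned men", True),   ("lighter-skinned men", True),
    ("lighter-skinned men", True),   ("lighter-skinned men", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# Report an error rate per subgroup rather than a single overall figure,
# since aggregate accuracy can hide large disparities between groups.
for group, n in totals.items():
    print(f"{group}: error rate {errors[group] / n:.0%} ({errors[group]}/{n})")
```

Reporting accuracy per subgroup, rather than in aggregate, is what exposes the gap that a single overall accuracy figure would conceal.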
Virginia Eubanks and Automated Decision-Making
In Automating Inequality, Virginia Eubanks delves into the ways automated systems—those relying on algorithms and data-driven processes—affect public welfare, healthcare, and criminal justice. Rather than offering equal treatment, these systems often reinforce existing inequalities, disadvantaging the very people they are intended to serve. Eubanks’ research shows that automated decision-making in these sectors frequently prioritizes efficiency over empathy, failing to account for the nuanced socio-economic realities of vulnerable populations, particularly low-income women and women of colour. Eubanks highlights that these algorithms, often viewed as impartial tools, inherit and perpetuate biases present in the historical and social data on which they are built. For instance, in public welfare systems, algorithms designed to allocate resources like housing and financial assistance are often skewed against low-income women. Women of colour, in particular, face greater barriers to accessing services as automated eligibility criteria sometimes neglect factors such as social discrimination, caregiving responsibilities, and economic instability. The algorithms thus risk excluding those with the most need, making essential services less accessible to already marginalized groups. The result is a “digital poorhouse,” where those in need of help are met with barriers rather than support, intensifying their struggles instead of alleviating them. Eubanks’ critique serves as a powerful reminder that, while technology can streamline processes, it must be guided by principles of fairness and empathy to prevent harm. Systems intended to assist people in need should consider the broader social contexts of poverty, health disparities, and discrimination. When they don’t, they risk becoming tools of exclusion rather than support. To address these issues, Eubanks advocates for more transparent and accountable systems, emphasizing that human oversight, ethical design, and inclusive data practices are essential to creating technologies that truly serve all people.
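As a purely hypothetical illustration of Eubanks' argument (not a model of any real agency's rules), the sketch below scores a welfare applicant with a rigid, rule-based eligibility function. Because the rule penalizes an employment gap without any field for its cause, an applicant whose gap stems from unpaid caregiving falls below the cutoff despite acute need.

```python
# Hypothetical, rule-based eligibility scorer; not any agency's actual criteria.
def eligibility_score(applicant: dict) -> int:
    score = 0
    if applicant["monthly_income"] < 1200:
        score += 2          # low income raises priority
    if applicant["employment_gap_months"] > 6:
        score -= 3          # long gaps are penalised, with no field for the reason
    if applicant["dependents"] > 0:
        score += 1
    return score

applicant = {
    "monthly_income": 900,
    "employment_gap_months": 18,   # gap caused by unpaid, full-time caregiving
    "dependents": 2,
}

CUTOFF = 1
score = eligibility_score(applicant)
print(score, "eligible" if score >= CUTOFF else "flagged ineligible")
# -> 0 flagged ineligible: the caregiving context is invisible to the rule,
#    so the penalty outweighs the indicators of acute need.
```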
Surveillance Capitalism and Privacy Violations
Surveillance capitalism, a term introduced by Shoshana Zuboff in her book The Age of Surveillance Capitalism, describes the practice by which corporations collect, analyse, and sell personal data to generate profit. Zuboff argues that this economic model is transforming the digital world into a vast marketplace where personal data has become a primary commodity. In this system, users’ actions, preferences, and even intimate aspects of their lives are continuously monitored, with little to no transparency or control over how this information is used. While surveillance capitalism affects everyone, its impacts are often more intrusive and harmful for women and marginalized groups, who face unique vulnerabilities regarding privacy. One area where this disparity is evident is in health-related data collection. Many apps, such as those tracking menstruation or mental health, collect highly sensitive information from users. Women’s health data, which includes details about menstrual cycles, pregnancy, or mental health consultations, is often shared or sold to third parties for targeted advertising or research without users’ explicit consent. This lack of privacy not only exposes personal aspects of users’ lives to commercial exploitation but also raises serious concerns about how such data might be misused. In many patriarchal societies, digital surveillance extends to familial or intimate relationships. Surveillance technologies can serve as tools for monitoring and controlling women’s movements, interactions, and online presence. Certain apps are designed to allow family members or partners to track women’s social media usage, location, and communications. This practice mirrors traditional forms of patriarchal control, but it now benefits from advanced technological capabilities that intensify the scope and ease of monitoring. Such forms of digital oversight restrict women’s autonomy and amplify their exposure to privacy invasions, placing them in situations where they may be continuously watched and judged based on their digital activities. Thus, surveillance capitalism, while ostensibly a profit-driven practice, often deepens existing gender-based inequalities. It enables corporations and individuals to exert unprecedented control over women’s lives, reducing their agency over personal information and heightening their exposure to harm. By allowing surveillance mechanisms to proliferate with minimal regulation, society risks creating a digital environment that disproportionately infringes on women’s rights to privacy and autonomy. Zuboff’s critique of surveillance capitalism calls for urgent change, advocating for regulations that protect individual data rights and impose limits on corporate data practices. She emphasizes that users should have the right to control their own data and be informed about how it is collected, stored, and used. Addressing these issues through policies and regulations, alongside fostering digital literacy, could mitigate the unequal impact of surveillance capitalism and help build a more equitable digital landscape where privacy and autonomy are protected for everyone.
Cyberviolence and Gendered Harassment
Gendered cyberviolence is one of the most disturbing forms of digital harm, disproportionately targeting women and impacting their mental health, social interactions, and willingness to engage online. Cyberviolence includes a range of abuses such as cyberstalking, doxing (the release of personal information without consent), and non-consensual image sharing, each of which can leave lasting emotional and psychological scars. Women are particularly vulnerable to these forms of digital harassment, which often intersect with misogynistic and patriarchal attitudes that manifest strongly in online spaces. A striking example of this is the Sulli Deals and Bulli Bai incidents in India, where Muslim women’s images were shared on platforms for virtual “auctioning” without their consent. These incidents reveal how digital spaces can be weaponized to target marginalized groups, often motivated by religious, ethnic, or gender-based hate. The acts not only exemplify a deep-seated misogyny but also highlight the failings of both technology platforms and legal frameworks in protecting women. In many cases, the perpetrators face few repercussions due to insufficient regulatory mechanisms or weak enforcement of existing laws.
The psychological impact of these attacks on women is profound. Victims of cyberviolence frequently experience anxiety, depression, and a loss of self-confidence. The fear of further harassment can drive women to withdraw from social media and other digital spaces, diminishing their opportunities for networking, professional advancement, and self-expression. This exclusion from digital life compounds the effects of offline gender discrimination, creating a new form of digital divide where women feel forced out of online spaces due to safety concerns. The inadequacy of current legal protections against cyberviolence reflects a broader lack of gender sensitivity in tech regulation. In many countries, laws do not adequately address gender-based online violence, or they fail to define it distinctly enough to enable effective prosecution. Some countries have limited cyber laws, often focused on general data protection or freedom of expression, without provisions specifically targeting the unique challenges women face online. Even where cyber laws do exist, enforcement can be inconsistent or weak, leaving victims without adequate recourse and emboldening offenders. To address these issues, comprehensive legal reform is needed. Laws must explicitly recognize and target gender-based online violence, incorporating robust definitions and enforcement mechanisms that hold perpetrators accountable.
THEORETICAL FRAMEWORK: FEMINIST TECHNOLOGICAL FRAMEWORKS
Donna Haraway’s Cyborg Manifesto is a foundational text in feminist thought, challenging us to view technology not as an inherently oppressive force but as a potential equalizer that could dissolve traditional boundaries and hierarchies. Haraway’s “cyborg” is a hybrid figure, transcending distinctions between human and machine, nature and culture, self and other. By blurring these boundaries, Haraway envisions a future where technology could bridge differences and foster inclusivity. This perspective is profoundly relevant in the digital age, as it implies that the development of technology can be approached with an intentional focus on equality, dismantling existing power structures, and creating inclusive digital spaces.
Haraway’s Vision of Cyborgs and Inclusive Technology
In the Cyborg Manifesto, Haraway argues against treating technology as inherently neutral or merely functional. Instead, she posits technology as a medium for challenging the binaries and hierarchies deeply ingrained in society, such as those based on gender, race, and class. Her cyborg metaphor disrupts the notion that human identity and technology are distinct, suggesting that technological development could be harnessed to empower rather than oppress. Haraway writes, "The boundary between science fiction and social reality is an optical illusion" (Haraway, 1985), underscoring that our interactions with technology shape and redefine social realities. This idea aligns with the notion that technology's design and deployment should address diverse user needs and uphold principles of equity. Haraway's insights emphasize the importance of integrating gender sensitivity into every phase of technological development. This integration means considering who is involved in technology design, whose needs and experiences are prioritized, and what values are embedded in the tools we create. The practical implications are profound: an inclusive, feminist approach to technology could reshape digital environments to support users from all backgrounds and identities, with particular attention to those historically marginalized by power structures.
Feminist Tech Advocates and Privacy-Enhancing Technologies
Building on Haraway's vision, feminist tech advocates today focus on privacy, consent, and user control as critical components of an empowering digital environment. Feminist theorists like Shoshana Zuboff and Safiya Noble contribute to this discourse by showing how current technological systems often exploit rather than protect marginalized groups. Zuboff's concept of Surveillance Capitalism critiques how corporations commodify personal data for profit, often disproportionately infringing on women's privacy and autonomy. Zuboff argues that we need to "insist on the sanctity of individual experience" (Zuboff, 2019) in the face of corporate data exploitation. Her insights reveal that without an intentional focus on privacy, digital environments can become tools for control rather than empowerment. Safiya Noble's work in Algorithms of Oppression similarly underscores the need for ethical frameworks in technology, especially around algorithmic bias. Noble argues that search engines and recommendation systems frequently perpetuate harmful stereotypes, particularly about women of colour, rather than offering balanced or neutral information. By integrating feminist perspectives, tech development can move away from these biases and towards frameworks that value diversity and ethical accountability. Noble contends, "Data discrimination is a real social problem. The internet and digital technology do not inherently create equality but reflect existing social power relations" (Noble, 2018). This statement underscores the urgency for feminist frameworks in tech to counteract the biases and inequalities entrenched within digital systems.
Case Studies and Examples
Facial Recognition and Law Enforcement: The use of facial recognition software in public spaces has disproportionately affected women and people of colour. Instances in the UK and the U.S. show how women are often misidentified by these systems, leading to increased scrutiny and, at times, wrongful detainment. Studies have shown that the margin of error in identifying Black women can be as high as 35%, a significant disparity that has life-altering implications.
Employment and Algorithmic Bias: Several prominent companies have used AI-based hiring tools to streamline recruitment processes, yet these tools often exhibit biases against women and minorities. For instance, Amazon's AI hiring tool was found to favour male applicants by penalizing resumes containing the word "women" in any context, highlighting the implicit biases encoded within these systems; a toy illustration of how such a bias can be learned follows these case studies.
Health Data and Privacy Concerns: The commodification of women's health data, especially concerning reproductive health, raises concerns about privacy and autonomy. For instance, period-tracking apps collect sensitive information about users' menstrual cycles, which could be sold to third parties without user consent. This invasive data collection disproportionately affects women, compromising their autonomy over their personal health information.
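The toy sketch below suggests how such a bias can be learned from skewed historical data; it is an assumption-laden illustration using invented resumes and scikit-learn, not a reconstruction of Amazon's tool. Because the hypothetical training labels mark every resume containing the token "women" as a past rejection, a simple bag-of-words logistic regression assigns that token a negative weight.

```python
# Toy resume screener trained on skewed "historical" outcomes; hypothetical data.
# The point: if past rejections correlate with a token, the model learns to
# penalise that token regardless of the applicant's qualifications.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "chess club captain, software engineer",                  # hired
    "software engineer, hackathon winner",                    # hired
    "software engineer, open source contributor",             # hired
    "women's chess club captain, software engineer",          # rejected
    "women in engineering society lead, software engineer",   # rejected
    "women's coding bootcamp mentor, software engineer",      # rejected
]
hired = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned coefficient for the gendered token.
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])   # negative: the token is penalised
```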
POLICY IMPLICATIONS AND RECOMMENDATIONS
Implementing Gender-Sensitive Tech Design: Tech companies should incorporate gender-sensitive design principles that prioritize inclusivity and mitigate algorithmic biases. This could involve more representative datasets, regular audits, and interdisciplinary teams that include sociologists, psychologists, and gender studies experts; a minimal sketch of one such audit follows these recommendations.
Strengthening Legal Protections: Governments should enact laws that specifically address gender-based cyberviolence and enforce stringent data protection regulations. The European Union’s GDPR serves as a model for protecting personal data, but similar frameworks are urgently needed in other parts of the world, especially in countries where women’s digital rights are under-protected.
Promoting Digital Literacy and Safety Programs: Schools, workplaces, and community organizations should promote digital literacy, with particular attention to educating women and marginalized groups on data privacy and digital security. Frameworks such as the Feminist Principles of the Internet provide valuable guidance on creating safe online environments for women.
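As a minimal sketch of the kind of regular audit recommended above, the snippet below compares a hypothetical system's selection rates across gender groups and reports a disparate impact ratio (the informal "four-fifths" rule of thumb); the decisions and group labels are invented for illustration.

```python
# Hypothetical post-deployment audit: compare selection rates across gender
# groups and compute a disparate impact ratio.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

decisions_by_group = {
    "women": [1, 0, 0, 0, 1, 0, 0, 0],   # 1 = selected by the automated system
    "men":   [1, 1, 0, 1, 1, 0, 1, 0],
}

rates = {group: selection_rate(d) for group, d in decisions_by_group.items()}
ratio = min(rates.values()) / max(rates.values())

for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.0%}")
print(f"disparate impact ratio: {ratio:.2f}")   # a value below 0.80 warrants review
```

Such a check is deliberately simple; in practice an audit would also examine error rates, feature provenance, and the representativeness of the training data, as the first recommendation above suggests.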
CONCLUSION: TOWARD AN INCLUSIVE DIGITAL FUTURE
The rapid integration of emerging technologies into daily life has undoubtedly brought about social and economic transformations, making communication, work, and information-sharing more accessible than ever. However, as with any powerful tool, these technologies carry risks that must be carefully managed. Without thoughtful oversight and regulation, digital technologies can inadvertently or even intentionally reinforce existing social inequalities, especially in relation to gender, race, and class. When these technologies are developed without consideration of diverse user experiences, they tend to reflect and magnify the biases inherent in the societies that created them. As a result, marginalized groups may face disproportionate risks, such as online harassment, surveillance, and data exploitation. Embedding feminist principles and gender-sensitive policies into technology development can counteract these risks, creating a digital environment where safety, inclusivity, and equity are not secondary concerns but core principles. Feminist scholars and advocates have highlighted ways to achieve this by encouraging inclusive and ethical tech development. Feminist principles in technology design emphasize transparency, accountability, and respect for users’ rights, aiming to develop systems that serve and protect all individuals, regardless of gender or other identity markers. Gender-sensitive policies address specific challenges faced by women and marginalized communities online, such as biased algorithms and lack of online safety mechanisms. These policies ensure that technological systems are scrutinized for potential biases and adjusted to mitigate harm.
In practical terms, addressing the hidden biases in digital systems means critically assessing each phase of technology development. For example, data collection practices should consider the representation of diverse groups to avoid reinforcing stereotypes. Algorithms, too, need regular auditing and testing to identify any biases in how they categorize or make decisions about people. Tech companies should also implement robust data privacy standards to protect individuals from exploitative data practices that disproportionately affect marginalized groups. By adopting these practices, we can move towards an inclusive digital future where technology empowers all users rather than perpetuating discrimination and inequality. An equitable digital landscape is one where people feel safe, respected, and free from fear of surveillance, discrimination, or harassment. When inclusivity and safety are embedded into the design of digital spaces, we create a technological ecosystem that is not only innovative but also just and humane, fostering a truly transformative impact on society. This vision requires a concerted effort from tech developers, policymakers, activists, and society as a whole. Together, we can build digital systems that are capable of supporting a fairer world, one that values the rights and dignity of every user. By recognizing and addressing biases in technology, we pave the way for a future where digital advancements uplift all of society, allowing each individual to engage in a secure, supportive, and inclusive digital environment.
ACKNOWLEDGEMENT
I am deeply honoured to have been awarded a Doctoral Fellowship by the Indian Council of Social Science Research (ICSSR). This publication is an outcome of ICSSR-sponsored doctoral research. However, I bear sole responsibility for the information presented, the views expressed, and the findings of this study. I am sincerely grateful to the ICSSR, Ministry of Education, Government of India, New Delhi, for their invaluable financial support, which made this work possible.
REFERENCES
- Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81, 77-91.
- Binns, R. (2018). Fairness in Machine Learning: Lessons from Political Philosophy. In Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency (pp. 149-159).
- Crawford, K. (2016). "Artificial Intelligence's White Guy Problem." New York Times.
- Dastin, J. (2018). "Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women." Reuters.
- Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press
- Friedman, B., & Nissenbaum, H. (1996). “Bias in Computer Systems.” ACM Transactions on Information Systems, 14(3), 330-347. DOI:10.1145/230538.230561.
- Gender & Technology. (2020). “Emerging Technologies and Gender: Exploring Opportunities and Risks.” IEEE Technology and Society Magazine.
- Haraway, D. J. (1991). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In Simians, Cyborgs, and Women: The Reinvention of Nature (pp. 149-181). Routledge.
- Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. In Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS 2017).
- Mann, S., & Siegel, D. (2021). Gender and Security: Feminist Perspectives on Global Politics. Oxford University Press.
- Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
- O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group.
- UN Women. (2021). The Gendered Impact of the COVID-19 Pandemic on Emerging Technologies.
- World Economic Forum. (2020). How Technology Can Help Address Gender Bias in the Workplace.
- Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.