INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS | Volume IX Issue X October 2025
The Last Warning: Predictive Surveillance or Community
Protection? AI Early Warning Systems and Environmental Justice
After Flint
Dr. Mario Desean Booker, Ph.D.
School of Business and Information Technology, Purdue University Global
DOI: https://dx.doi.org/10.47772/IJRISS.2025.910000674
Received: 29 October 2025; Accepted: 04 November 2025; Published: 20 November 2025
ABSTRACT
When Flint switched its water source in April 2014, residents immediately complained about foul-smelling,
discolored water. Officials dismissed these concerns for eighteen months while over 100,000 people consumed
lead-contaminated drinking water. Could AI early warning systems have prevented this disaster?
This question matters because cities across America face similar infrastructure crises. Using sociological analysis
combined with technical assessment, I examine whether predictive surveillance technologies could have detected
Flint's water contamination before it poisoned an entire community.
The technical capabilities exist. Sensor networks can monitor water quality continuously. Machine learning
algorithms excel at pattern recognition. Health surveillance systems can identify disease clusters within days
rather than months. But here's the problem: Flint's crisis wasn't caused by lack of information.
Emergency managers ignored mounting evidence because austerity politics prioritized cost savings over public
health. Community complaints were dismissed as "anecdotal." Regulatory agencies operated under corporate
influence. Environmental racism shaped which populations were deemed expendable.
My analysis reveals that technical solutions alone cannot address structural inequalities. AI systems risk
reproducing the same power dynamics that created the crisis. Without community control and democratic
governance, algorithmic early warning systems become sophisticated tools for maintaining existing hierarchies
rather than protecting vulnerable populations.
The implications extend far beyond Flint to questions of environmental justice and technological
governance in an era of increasing surveillance.
INTRODUCTION
Reframing Crisis as Sociotechnical Failure
The Flint Water Crisis: Timeline and Impact
On April 25, 2014, Flint's emergency manager pressed a button. Water stopped flowing from Detroit's system.
The Flint River became the city's primary source.
Within hours, residents called City Hall. The water tasted metallic. It smelled like chlorine. Some families
noticed rashes after bathing. Others complained about hair loss. City officials dismissed their concerns.
This wasn't dismissible material. The Flint River hadn't served as a municipal water source since 1967, when
industrial pollution forced the city to switch to Detroit's Lake Huron supply (Masten et al., 2016). Now,
cost-cutting measures under Michigan's emergency management law brought the river back online without
adequate corrosion control treatment.
What happened next reveals how technological systems fail when divorced from democratic oversight. The water
department knew about corrosion problems. Residents documented health impacts. Environmental advocates
raised alarms. Yet official acknowledgment didn't come until September 2015—eighteen months of poisoning
the entire city.
The numbers tell a brutal story. Over 100,000 people consumed lead-contaminated water. Blood lead levels in
children under five doubled in some neighborhoods (Hanna-Attisha et al., 2016). At least twelve people died
from Legionnaires' disease linked to the water system (Zahran et al., 2018). Thousands more suffered
neurological damage that will last lifetimes.
But these statistics mask deeper patterns. Flint's population is 54% African American, with a poverty rate of
41.5% (U.S. Census Bureau, 2014). The emergency manager who made the fateful decision was appointed by
Michigan's governor—part of a broader pattern where state oversight falls heaviest on communities of color
struggling with economic disinvestment.
Environmental racism shaped every aspect of the crisis. As Robert Bullard documented in "Dumping in Dixie"
(1990), poor communities of color consistently bear disproportionate environmental burdens. Flint represents
this pattern at its most extreme—a deliberate policy choice that treated Black lives as expendable in service of
financial savings.
The health impacts continue cascading through generations. Lead exposure causes irreversible cognitive damage,
particularly in developing children (Lanphear et al., 2005). Flint's children will carry these neurological burdens
for decades. Their educational outcomes, economic opportunities, and life trajectories have been fundamentally
altered by decisions made in distant government offices.
Yet focusing solely on health consequences misses crucial dynamics. The crisis revealed how technological
systems embody political choices. Water infrastructure isn't neutral. The decision to switch sources, the choice
to skip corrosion control, the dismissal of resident complaints—each represented value judgments about whose
lives matter.
This is where traditional crisis narratives fail us. Most accounts frame Flint as bureaucratic incompetence or
regulatory failure. That analysis stays safely technical. It suggests better training, improved oversight, or updated
protocols could prevent future disasters. Wrong diagnosis, wrong cure.
Research Questions and Theoretical Framework
The central question driving this analysis cuts deeper: Could artificial intelligence early warning systems have
prevented Flint's water crisis? But asking this question properly requires abandoning techno-solutionist
assumptions.
AI enthusiasts argue that smart city technologies can optimize municipal services, predict infrastructure failures,
and protect public health through real-time monitoring (Kitchin, 2014). The promise sounds compelling. Imagine
sensor networks detecting lead contamination within hours. Machine learning algorithms predicting pipe failures
before they occur. Automated health surveillance systems identifying disease clusters immediately.
These capabilities exist today. The technology works. So why didn't it save Flint?
Here's where Science and Technology Studies become essential. STS scholars like Langdon Winner (1980)
demonstrated that technologies aren't politically neutral. They embody the values and power relations of their
creators. Automated systems can perpetuate discrimination just as effectively as human decision-makers—
sometimes more so, because algorithms provide a veneer of objectivity.
Ruha Benjamin's "Race After Technology" (2019) shows how predictive algorithms reproduce racial inequalities
while appearing colorblind. Virginia Eubanks's "Automating Inequality" (2018) documents how data-driven
systems punish poor families for seeking public assistance. These aren't bugs in the system. They're features.
Environmental justice theory provides the other crucial lens. Scholars like Jason Corburn (2005) argue that
technical expertise without community knowledge reproduces environmental racism. Top-down technological
solutions ignore local wisdom and democratic participation. They treat affected communities as data sources
rather than decision-makers.
My methodological approach combines counterfactual analysis with sociological critique. I examine where AI
systems could have intervened in Flint's timeline—but always through the lens of existing power structures.
Technical capability means nothing without political will to act on algorithmic warnings.
This framework reveals uncomfortable truths. Information wasn't the problem in Flint. Multiple warning systems
already existed. Residents complained constantly. Environmental advocates documented contamination. Public
health officials identified elevated blood lead levels months before official acknowledgment.
The crisis occurred because powerful actors chose to ignore available evidence. Emergency managers prioritized
cost savings over community health. State officials dismissed resident concerns as hysterical overreaction.
Corporate consultants provided cover for inaction through selective data interpretation.
Would AI systems have changed these dynamics? Or would algorithmic warnings have been dismissed just as
easily as human voices were?
That question demands serious analysis of how technological systems intersect with structural inequality. It
requires examining not just what AI can detect, but who controls the technology and who decides how to respond
to its outputs.
The stakes extend far beyond Flint. Cities across America face infrastructure crises driven by decades of
disinvestment. Climate change intensifies these pressures. AI-powered early warning systems will proliferate
whether we think critically about them or not.
The question isn't whether we'll use these technologies. The question is whether we'll deploy them in ways that
challenge or reproduce existing patterns of environmental injustice.
Literature Review: Technology, Power, and Environmental Health
Smart Cities and Algorithmic Governance
Smart cities promise efficiency. Sensors everywhere. Data flowing constantly. Algorithms optimizing traffic
lights, predicting crime, managing utilities. The vision sounds clean, rational, scientific.
IBM's "Smarter Cities" initiative launched this fantasy in 2008. Cisco followed with "Smart+Connected
Communities." Tech giants painted urban futures where data solves everything. No more traffic jams. No more
crime hotspots. No more infrastructure failures. Just smooth algorithmic management of messy human realities.
Rob Kitchin's "The Data Revolution" (2014) captured this enthusiasm while raising critical questions. Real-time
data streams could revolutionize urban governance, he argued. But who controls the algorithms? What
assumptions get built into the code? How do we maintain democratic accountability when black-box systems
make crucial decisions?
These questions weren't just academic. Cities started buying in. Barcelona implemented smart water
management systems. Amsterdam deployed predictive policing algorithms. Chicago launched predictive
analytics for restaurant inspections. The smart city market exploded dramatically over the past decade.
But early results proved messy. Predictive policing algorithms amplified racial bias rather than reducing it. Smart
parking systems created new digital divides. Sensor networks prioritized wealthy neighborhoods while ignoring
poor communities. The promise of neutral, efficient governance crashed against stubborn realities of power and
inequality.
Similar patterns emerge across domains. Automated welfare systems deny benefits to eligible families while
claiming algorithmic objectivity (Eubanks, 2018). Environmental monitoring systems ignore community
complaints while trusting sensor data exclusively.
The democratic deficit runs deeper than bias. Algorithmic governance removes crucial decisions from public
debate. When algorithms determine resource allocation, citizens lose meaningful input. Technical experts replace
elected officials as de facto decision-makers. Democracy becomes technocracy wearing a democratic mask.
Frank Pasquale's "The Black Box Society" (2015) shows how algorithmic opacity compounds these problems.
Citizens can't challenge decisions they can't understand. Appeals become impossible when the logic remains
hidden. Accountability disappears behind claims of trade secrets and algorithmic complexity.
Yet dismissing smart city technologies entirely misses potential benefits. Environmental monitoring can detect
pollution faster than traditional methods. Traffic optimization reduces emissions and commute times. Predictive
maintenance prevents infrastructure failures that harm vulnerable communities.
The question isn't whether to use these technologies. The question is how to deploy them democratically.
Environmental Justice and Technology
Environmental justice scholarship reveals how technology reproduces racial and class inequalities. Robert
Bullard's foundational work "Dumping in Dixie" (1990) documented systematic placement of toxic facilities in
communities of color. This wasn't accidental. It reflected deliberate decisions about whose neighborhoods were
deemed expendable.
Technology plays a central role in these patterns. Environmental monitoring systems focus on wealthy areas
while ignoring poor communities. Cleanup technologies get deployed rapidly in white suburbs but slowly in
Black neighborhoods. Infrastructure investments follow property values rather than human need.
Jason Corburn's "Street Science" (2005) challenges the technical expertise versus community knowledge divide.
Residents know their neighborhoods intimately. They notice patterns professionals miss. They understand local
conditions in ways sensors cannot capture. Yet environmental decision-making consistently privileges technical
data over community wisdom.
This creates what Corburn calls "environmental health disparities"—systematic differences in exposure based
on race and class. Poor communities of color face higher pollution levels, worse infrastructure, and slower
emergency response times. Technology could address these disparities. Instead, it often deepens them.
Consider air quality monitoring. The EPA operates far fewer monitors in low-income areas than in wealthy ones
(Clark et al., 2017). This creates data gaps that mask environmental injustices. Communities can't prove pollution
problems without monitoring data. But they can't get monitors without proving problems first. It's a perfect
catch-22.
Community-based participatory research offers alternative approaches. Residents collect their own data using
low-cost sensors. They document health impacts through neighborhood surveys. They map pollution sources
using local knowledge. This grassroots science challenges official narratives while building community power.
Technology isn't inherently racist. But it develops within racist systems. Unless we address underlying power
structures, new technologies will reproduce old inequalities with digital efficiency.
Public Health Surveillance
Public health surveillance expanded dramatically after 9/11. Biosecurity concerns drove massive investments in
disease monitoring systems. The CDC launched BioSense in 2003. States built syndromic surveillance networks.
Hospitals began real-time data sharing. The goal was detecting bioterror attacks, but the infrastructure enabled
broader population monitoring.
These systems proved valuable during natural disease outbreaks. H1N1 surveillance helped track the 2009
pandemic. Electronic health records enabled rapid response to foodborne illness clusters. Real-time monitoring
reduced outbreak duration and severity.
But surveillance expansion raised civil liberties concerns. Medical privacy eroded as health data flowed to
government agencies. Community trust declined when surveillance systems prioritized security over health
equity. Poor communities faced intensified monitoring while receiving reduced services.
Public health surveillance historically targeted marginalized communities. Contact tracing for sexually
transmitted diseases focused disproportionately on gay men and people of color. Tuberculosis monitoring
concentrated on immigrant neighborhoods. Disease surveillance became a tool for social control rather than
health protection.
Similar patterns persist today. COVID-19 surveillance relied heavily on digital tracking technologies. Contact
tracing apps monitored population movements. Health passes restricted mobility based on testing status. These
systems promised public health benefits while creating new forms of social stratification.
Community-based participatory research offers democratic alternatives. Rather than top-down surveillance,
communities control their own health monitoring. Residents identify priorities. Community members collect
data. Local organizations analyze results. This approach builds community capacity while generating actionable
health information.
These community-controlled approaches work. Community health workers detect disease outbreaks faster than
formal surveillance systems in many settings. Neighborhood organizations identify environmental health
hazards missed by official monitors. Resident-led research documents health disparities that government
agencies ignore.
The choice isn't between surveillance and public health. It's between surveillance systems that serve community
needs versus those that serve state power.
The literature reveals consistent patterns across smart cities, environmental justice, and public health
surveillance. Technologies promising neutral efficiency instead reproduce existing inequalities. Democratic
participation gets replaced by technocratic authority. Community knowledge gets dismissed in favor of
algorithmic objectivity.
Yet the same literature points toward alternatives. Community-controlled technologies can challenge rather than
reproduce injustice. Participatory design processes can democratize rather than concentrate power. The key lies
not in rejecting technology but in fundamentally restructuring how we develop and deploy it.
This sets the stage for examining how AI early warning systems might fit into these existing patterns. Will they
follow the trajectory of smart city disappointments and surveillance expansion? Or can they be designed to
genuinely serve environmental justice goals?
Technical Analysis: AI Capabilities and Potential
Water Quality Monitoring and Prediction
The technology exists. That's the frustrating part. IoT sensors can monitor water quality continuously. pH levels,
chlorine residuals, turbidity, bacterial counts—all tracked in real time. Companies like Hach and YSI
manufacture sensors that cost under $1,000 each. Deploy enough of them throughout a distribution system, and
you create a nervous system for water infrastructure.
Machine learning excels at pattern recognition in complex datasets. Feed it years of water quality data, and
algorithms can predict contamination events before they become crises. Researchers at MIT developed models
that forecast lead contamination with 80% accuracy using routine operational data (Olson et al., 2017). The
University of Michigan created algorithms that predict pipe failures weeks in advance using pressure, flow, and
quality measurements (Sattar et al., 2016).
Here's what makes it work: Water systems generate massive amounts of operational data. Treatment plants
monitor dozens of parameters hourly. Distribution networks track pressure and flow continuously.
Laboratories analyze samples daily. Most of this data sits unused in spreadsheets and databases. AI transforms
this dormant information into predictive intelligence. Algorithms spot subtle patterns humans miss. A slight pH
drop here, unusual chlorine demand there, minor pressure fluctuations in another zone—individually
meaningless, collectively predictive of system failure.
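To make this concrete, the sketch below shows one minimal way such multi-signal screening could work: a rolling baseline per sensor, with an alert raised only when several parameters drift at once. The column names, sampling cadence, and thresholds are illustrative assumptions, not a description of any deployed utility system.

```python
import pandas as pd

def flag_anomalies(readings: pd.DataFrame, window: int = 96,
                   z_thresh: float = 3.0) -> pd.DataFrame:
    """Flag readings that deviate sharply from each sensor's recent baseline.

    readings: time-indexed frame with hypothetical columns such as 'ph',
    'chlorine_mg_l', and 'pressure_psi', sampled every 15 minutes (so
    window=96 is roughly one day of trailing baseline).
    """
    rolling = readings.rolling(window, min_periods=window // 2)
    z = (readings - rolling.mean()) / rolling.std()
    flags = z.abs() > z_thresh                # per-parameter anomaly flags
    # Individually meaningless signals become informative in combination:
    # alert only when two or more parameters drift at the same time.
    flags["alert"] = flags.sum(axis=1) >= 2
    return flags
```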
Consider lead contamination specifically. Lead doesn't appear randomly. It follows predictable patterns based on
pipe materials, water chemistry, and hydraulic conditions. Corrosive water strips lead from service lines and
household plumbing. Low pH accelerates the process. Stagnant water concentrates contamination. Temperature
fluctuations worsen everything.
Machine learning algorithms can model these interactions simultaneously. They incorporate water chemistry
data, pipe inventory information, hydraulic modeling results, and historical contamination measurements. The
result: predictive maps showing where lead contamination will likely occur next.
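A hedged sketch of what such a model might look like follows: a standard classifier trained on pipe inventory and water chemistry features to score per-address lead risk. The feature names and training file are hypothetical, and the method is one reasonable choice among many rather than the approach any cited study used.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

FEATURES = ["service_line_is_lead", "pipe_age_years", "ph",
            "chloride_mg_l", "alkalinity_mg_l", "stagnation_hours"]

# Hypothetical training table: one row per sampled address, with a binary
# label for whether a tap sample exceeded 15 ppb lead.
parcels = pd.read_csv("parcel_samples.csv")
X_train, X_test, y_train, y_test = train_test_split(
    parcels[FEATURES], parcels["lead_exceeds_15_ppb"],
    test_size=0.2, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
# Per-address risk scores, mappable as the predictive maps described above.
risk_scores = model.predict_proba(X_test)[:, 1]
```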
Integration with treatment plant operations adds another layer of protection. When algorithms predict corrosion
problems, operators can adjust chemical dosing automatically. If pH drops below safe thresholds, systems can
trigger corrosion inhibitor injection. Smart treatment systems could have prevented Flint's crisis by maintaining
proper corrosion control regardless of source water changes.
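As a minimal illustration of that control logic, assuming a hypothetical pump interface and setpoints chosen purely for the example:

```python
PH_FLOOR = 7.2               # assumed corrosion-control setpoint, not a regulation
INHIBITOR_DOSE_MG_L = 3.0    # assumed orthophosphate target dose

def corrosion_control_step(ph_reading: float, dose_pump) -> None:
    """If pH falls below the floor, trigger corrosion-inhibitor injection."""
    if ph_reading < PH_FLOOR:
        dose_pump.set_rate(INHIBITOR_DOSE_MG_L)   # hypothetical pump interface
        print(f"pH {ph_reading:.2f} below {PH_FLOOR}: inhibitor dosing engaged")
```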
But technical capability means nothing without implementation. Flint's water department had access to
commercial water quality monitoring systems. They chose not to use them adequately. Emergency managers
prioritized cost cutting over public health protection.
The sensors work. The algorithms work. The integration works. What doesn't work is the assumption that better
technology automatically produces better outcomes.
Infrastructure Vulnerability Assessment
American water infrastructure is failing systematically. The American Society of Civil Engineers gives it a D+
grade. Six billion gallons leak from distribution systems daily. Pipes installed in the 1950s reach end-of-life
simultaneously. Climate change intensifies stress on aging systems.
AI could help prioritize replacement and repair. Predictive maintenance algorithms analyze multiple risk factors:
pipe age, material type, soil conditions, traffic loading, pressure variations, and failure history. Machine learning
models identify pipes most likely to fail within specific timeframes.
Geographic Information Systems (GIS) make this analysis spatial. Instead of treating each pipe independently,
algorithms consider neighborhood-level patterns. Clay soils cause different failure modes than sandy soils. Cast
iron pipes behave differently in acidic versus alkaline conditions. Traffic loading affects shallow pipes more than
deep ones.
Demographics add crucial context often ignored by purely technical approaches. Vulnerable populations suffer
disproportionately from infrastructure failures. Children, elderly residents, and people with chronic illnesses face
higher health risks from water disruptions. Low-income families can't afford bottled water during outages.
Smart vulnerability assessment integrates technical and social risk factors. Algorithms might prioritize pipe
replacement in neighborhoods with high concentrations of vulnerable residents, even if those pipes aren't the
most technically at-risk. This represents a departure from purely engineering-driven decision-making toward
equity-informed infrastructure management.
The technology for this integration exists today. Census data provides demographic information at block level.
Health department records identify vulnerable populations. GIS systems overlay social and technical risk factors
seamlessly. Machine learning algorithms can optimize replacement schedules considering both failure
probability and community impact.
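One simple form this optimization could take is a weighted priority score, sketched below. The weights and column names are assumptions made for illustration; consistent with this paper's argument, the weighting itself should be set through community governance rather than by engineers alone.

```python
import pandas as pd

segments = pd.DataFrame({
    "segment_id": ["A1", "B2", "C3"],
    "p_fail":     [0.40, 0.25, 0.10],   # model-estimated failure probability
    "svi":        [0.30, 0.90, 0.70],   # CDC-style social vulnerability, 0-1
})
# Illustrative 0.6/0.4 blend of technical and social risk factors.
segments["priority"] = 0.6 * segments["p_fail"] + 0.4 * segments["svi"]
print(segments.sort_values("priority", ascending=False))
```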
Cities already use simplified versions of these approaches. Chicago's predictive analytics program identifies
buildings likely to have lead service lines using property records and water quality data (Potash et al., 2015).
The results guide targeted inspection and replacement efforts.
But scaling requires resources most cities lack. GIS systems need updated pipe inventories. Machine learning
models require clean data. Predictive maintenance programs demand dedicated staff and sustained funding.
Technical solutions crash against fiscal realities in cash-strapped municipalities.
Flint exemplifies this paradox. The city possessed detailed knowledge of its infrastructure vulnerabilities.
Officials knew about aging pipes and corrosive source water. Emergency managers chose short-term savings
over long-term system integrity. Better algorithms wouldn't have changed those priorities.
Health Surveillance Integration
Public health surveillance could integrate seamlessly with water quality monitoring. Hospital emergency
departments track symptoms that might indicate waterborne illness. Clinical laboratories process blood tests that
reveal lead exposure. Disease surveillance systems monitor outbreak patterns.
AI excels at connecting these disparate data streams. Machine learning algorithms can identify unusual clusters
of gastrointestinal illness that might indicate water contamination. Statistical models can detect elevated blood
lead levels before they become widespread. Predictive systems can forecast disease outbreaks based on
environmental conditions.
Electronic health records make this integration technically feasible. Most hospitals use digital systems that
capture diagnostic codes, laboratory results, and patient demographics. Public health departments operate disease
surveillance networks that monitor reportable conditions. The infrastructure exists for real-time health
monitoring.
Syndromic surveillance systems already demonstrate these capabilities. The CDC's National Syndromic
Surveillance Program monitors emergency department visits for patterns that might indicate bioterror attacks or
natural disease outbreaks. Similar systems could detect waterborne illness clusters within days rather than weeks.
Lead exposure surveillance offers another integration point. Blood lead testing generates data that could trigger
water system investigations. When multiple children in a neighborhood show elevated levels, algorithms could
automatically alert water utilities to investigate distribution system problems.
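A minimal sketch of such a trigger, assuming quarterly counts of elevated tests per census tract and a Poisson baseline built from that tract's history:

```python
from scipy.stats import poisson

def tract_alert(observed: int, baseline_rate: float,
                p_thresh: float = 0.01) -> bool:
    """Alert if this quarter's count of elevated tests would be this extreme
    less than p_thresh of the time under the tract's historical rate."""
    p_value = poisson.sf(observed - 1, baseline_rate)   # P(X >= observed)
    return p_value < p_thresh

# e.g., 9 elevated tests against a historical mean of 2.5 per quarter
if tract_alert(observed=9, baseline_rate=2.5):
    print("Notify utility: investigate distribution system in this tract")
```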
The key insight: health impacts often appear before water quality problems get officially recognized. Residents
notice symptoms before laboratories confirm contamination. Emergency departments see illness patterns before
epidemiologists identify outbreaks. Integrating health surveillance with environmental monitoring creates earlier
warning systems.
But privacy concerns complicate implementation. Health data enjoys strong legal protections. Patients expect
medical confidentiality. Sharing clinical information with water utilities raises legitimate privacy questions. Any
integration requires careful attention to consent, anonymization, and data security.
Technical Limitations
Technical solutions face fundamental constraints that technology alone cannot address. Data quality problems
plague under-resourced municipalities most severely. Small water systems lack resources for comprehensive
monitoring. Rural communities operate with minimal laboratory capacity. Poor cities delay equipment
maintenance and upgrades. The communities most vulnerable to water crises often have the worst data for
preventing them.
Algorithmic bias presents another challenge. Machine learning models trained on historical data reproduce past
inequalities. If previous infrastructure investments favored wealthy neighborhoods, algorithms will continue that
pattern unless explicitly corrected. If health surveillance focused on certain populations, predictive models will
perpetuate those biases.
Privacy and security concerns intensify with comprehensive monitoring. Water quality sensors generate location-
specific data that could reveal sensitive information about communities. Health surveillance integration raises
medical privacy questions. Comprehensive monitoring creates comprehensive surveillance opportunities that
might be misused.
Interpretability challenges complicate accountability. Complex machine learning models often function as "black
boxes" that produce accurate predictions without explaining their reasoning. Water utility operators need to
understand why algorithms recommend specific actions. Community members deserve explanations for
decisions affecting their health and safety.
Technical limitations reflect deeper structural problems. Data quality follows funding patterns. Algorithmic bias
mirrors social inequality. Privacy concerns arise from justified distrust of government surveillance.
Interpretability problems stem from technocratic decision-making processes that exclude affected communities.
The most sophisticated AI system cannot overcome these fundamental constraints. Technical capability without
democratic governance produces better surveillance, not better protection. Predictive algorithms without
community control enable more efficient oppression rather than more effective prevention.
This analysis reveals a crucial paradox: the technical capacity to prevent water crises like Flint's exists today, but
the political will to deploy it equitably does not. The problem isn't technological—it's sociological.
Sociological Analysis: Power, Knowledge, and Response
The Social Construction of Ignorance
They knew. That's what makes Flint so infuriating. The data existed. Water quality reports showed rising lead
levels months before official acknowledgment. Treatment plant operators documented corrosion problems
immediately after the source switch. Environmental consultants identified the lack of corrosion control as
dangerous. Residents complained constantly about water color, taste, and health effects.
Figure 2 reveals why AI early warning systems cannot solve environmental crises like Flint's without
fundamental political transformation. The visible technical capabilities above the waterline—sensors,
algorithms, monitoring systems—represent only a small fraction of the implementation challenge. The massive
underwater structure represents the structural barriers that undermine technical effectiveness: austerity politics
that prioritize cost reduction over health protection, environmental racism that devalues certain communities,
regulatory capture that serves industry interests, and corporate power that externalizes environmental costs. This
iceberg metaphor demonstrates that focusing solely on technical improvements while ignoring structural
impediments ensures continued failure to protect vulnerable communities.
Yet officials claimed ignorance for eighteen months. Emergency managers insisted the water was safe. State
regulators dismissed early warning signs. Corporate consultants provided scientific cover for inaction. The crisis
wasn't caused by lack of information—it was caused by systematic refusal to act on available information.
Robert Proctor's concept of "agnotology"—the study of culturally-induced ignorance—explains this pattern
(Proctor & Schiebinger, 2008). Ignorance isn't natural. It gets manufactured through deliberate processes that
suppress inconvenient knowledge. Tobacco companies pioneered these techniques by funding research that
questioned smoking's health risks. Climate change deniers use similar strategies to manufacture doubt about
scientific consensus.
Flint officials deployed agnotological tactics unconsciously but effectively. They questioned data quality when
results were unfavorable. They demanded "more research" to delay action. They dismissed community
knowledge as unscientific. They privileged official expertise over lived experience. Each tactic individually
appeared reasonable. Collectively, they constructed a wall of ignorance around mounting evidence of crisis.
Emergency management structures amplified these dynamics. Michigan's emergency manager law suspended
democratic governance in favor of technocratic efficiency. Appointed managers answered to state officials rather
than local residents. Cost reduction became the primary performance metric. Public health considerations
became secondary concerns.
This creates what sociologist Charles Tilly called "durable inequality"—systematic differences in life chances
that persist across generations (Tilly, 1998). Emergency management didn't create Flint's vulnerabilities. It
activated existing patterns of racial and class exclusion through ostensibly neutral administrative procedures.
Regulatory capture compounded the problem. State environmental regulators developed close relationships with
water utilities they supposedly oversaw. Revolving door employment patterns created conflicts of interest.
Industry consultants wrote technical guidance that favored utilities over public health. Regulatory agencies
became advocates for regulated industries rather than protectors of public welfare.
The Michigan Department of Environmental Quality exemplified these dynamics during Flint's crisis. Regulators
initially defended the water switch despite obvious problems. They minimized lead contamination findings. They
attacked environmental advocates who raised concerns. They prioritized utility interests over community health
until federal intervention forced acknowledgment.
Community complaints got systematically dismissed as "anecdotal" despite their accuracy. Residents
documented health problems months before official recognition. They identified geographic patterns of
contamination before scientific studies confirmed them. They demanded action based on direct experience of
harm. Yet officials consistently privileged technical expertise over community knowledge.
This reflects deeper epistemological hierarchies about whose knowledge counts as legitimate. Scientific
knowledge enjoys higher status than experiential knowledge. Professional expertise gets valued over community
wisdom. Quantitative data trumps qualitative observation. These hierarchies aren't neutral—they systematically
exclude the knowledge of marginalized communities.
The irony is profound: the communities most affected by environmental hazards often know about them first.
Residents notice changes in water quality before laboratory tests confirm contamination. Parents observe
children's health problems before medical studies document population-level effects. Community members
identify environmental injustices before academic research proves discrimination.
Yet environmental decision-making consistently ignores this community knowledge in favor of technical
expertise. The result is delayed recognition of problems, inadequate response to emerging threats, and systematic
disregard for community concerns until crisis forces acknowledgment.
Environmental Racism and Algorithmic Reproduction
Environmental racism didn't start with Flint. It won't end there either. Systematic exclusion of communities of
color from environmental protection reflects centuries of discriminatory policy. Redlining in the 1930s
concentrated Black families in neighborhoods with industrial pollution. Urban renewal in the 1960s demolished
Black communities for highways and toxic facilities. Zoning decisions consistently placed hazardous land uses
in communities of color.
Robert Bullard's "Dumping in Dixie" (1990) documented these patterns across the South. Toxic waste facilities
were disproportionately located in Black communities regardless of income levels. Environmental cleanup
happened faster in white neighborhoods than Black ones. Environmental enforcement was weaker in
communities of color.
These patterns persist today. A 2017 study found that people of color are 38% more likely to live in areas with
poor air quality than white people (Clark et al., 2017). The disparity exists at every income level, indicating that
race rather than class drives environmental inequality.
Algorithmic systems risk reproducing these historical patterns through biased training data. If machine learning
models use past infrastructure investment patterns to predict future needs, they will perpetuate historical
discrimination. If health surveillance systems are trained on data from areas with good medical access, they will
miss problems in underserved communities.
Consider predictive policing algorithms that reproduce racial bias in arrest patterns. Police historically
overpatrolled communities of color, generating arrest data that algorithms interpret as higher crime rates.
Predictive models then recommend increased policing in those same communities, creating feedback loops that
amplify discrimination.
Environmental AI systems face similar risks. If historical water quality monitoring focused on wealthy areas,
algorithms will have better data for predicting problems there. If past infrastructure investments prioritized white
neighborhoods, predictive maintenance models will continue that pattern. If health surveillance concentrated on
certain populations, disease outbreak prediction will miss emerging problems elsewhere.
Cathy O'Neil's "Weapons of Math Destruction" (2016) shows how algorithmic bias becomes embedded in
seemingly objective systems. Mathematical models appear neutral while encoding discriminatory assumptions.
Automated decision-making scales bias more efficiently than human prejudice ever could.
The solution isn't abandoning algorithms—it's detecting and correcting bias systematically. Researchers have
developed techniques for algorithmic fairness that ensure equitable outcomes across demographic groups. But
implementing these techniques requires acknowledging that bias exists and committing to address it.
Environmental justice advocates argue for stronger approaches. Rather than correcting biased algorithms, they
demand community control over technological systems. Instead of debugging discriminatory code, they want
democratic governance of algorithmic decision-making. The goal isn't better surveillance of marginalized
communities—it's community power over the systems that affect their lives.
Safiya Noble's "Algorithms of Oppression" (2018) demonstrates how search engines reproduce racial and gender
stereotypes through biased training data and cultural assumptions. Similar dynamics operate in environmental
AI systems unless explicitly addressed through community-controlled design processes.
The challenge goes beyond technical bias detection to fundamental questions of power and control. Who
designs these systems? Who benefits from their deployment? Who bears the risks when they malfunction?
Answering these questions honestly reveals that environmental AI systems will likely reproduce existing
inequalities unless deliberately designed to challenge them.
Democratic Governance versus Technocratic Management
Democracy died in Flint long before the water crisis began. Michigan's emergency manager law suspended local
governance in favor of appointed technocrats. Elected officials lost decision-making power. Community input
became advisory rather than binding. Democratic accountability disappeared behind claims of fiscal emergency
and administrative efficiency.
This reflects broader trends toward technocratic governance that replace political debate with technical expertise.
Complex policy decisions get framed as engineering problems with optimal solutions rather than value choices
requiring democratic deliberation. Citizens become consumers of government services rather than participants
in collective decision-making.
Algorithmic governance accelerates these anti-democratic tendencies. When AI systems make crucial decisions
about resource allocation, infrastructure investment, or emergency response, democratic input gets displaced by
technical optimization. Community preferences become algorithmic parameters. Political choices get hidden
behind mathematical objectivity.
Frank Pasquale's "The Black Box Society" (2015) documents how algorithmic opacity undermines democratic
accountability. Citizens can't challenge decisions they can't understand. Representatives can't oversee processes
they can't access. Democracy requires transparency that algorithmic governance often prevents.
Environmental AI systems intensify these problems. Water quality algorithms might optimize technical
performance while ignoring community priorities. Predictive maintenance systems might minimize costs while
maximizing health risks for vulnerable populations. Disease surveillance networks might enhance security while
eroding medical privacy.
Yet technology could also strengthen democracy if designed with community control as the primary goal.
Participatory design processes could ensure that affected communities shape algorithmic systems rather than
merely accepting their outputs. Community ownership of technological infrastructure could democratize rather
than concentrate power.
The key distinction lies between technology deployed on communities versus technology controlled by
communities. Top-down algorithmic systems treat residents as data sources and decision targets. Bottom-up
technological development treats communities as designers and decision-makers.
Examples exist of democratic technology governance. Barcelona's "Decidim" platform enables participatory
budgeting through digital democracy tools. Taiwan's "vTaiwan" system facilitates consensus-building on
complex policy issues through online deliberation. Community land trusts provide models for collective
ownership of essential infrastructure.
Environmental justice organizations have pioneered community-controlled monitoring technologies. Residents
collect air quality data using low-cost sensors. Neighborhood groups map pollution sources through participatory
research. Community organizations analyze health data to document environmental hazards.
These approaches demonstrate that technology can serve democratic rather than technocratic purposes. The
crucial element is community control over technological design, deployment, and governance. Without that
control, even beneficial technologies become tools for technocratic management rather than democratic
empowerment.
Political Economy of Prevention
Follow the money. It explains everything. Flint's crisis emerged from austerity politics that prioritized short-term
savings over long-term public health. Emergency management aimed to reduce municipal costs regardless of
consequences. Infrastructure investment became an expense to minimize rather than a necessity to maintain.
This reflects broader patterns of fiscal austerity that have devastated public services since the 1980s. Tax cuts
reduced government revenue. Budget constraints limited infrastructure maintenance. Privatization shifted public
assets to private control. The result: systematic disinvestment in essential services that protect public health.
Water systems require massive ongoing investment. Pipes need replacement every 50-100 years. Treatment
plants demand constant maintenance. Distribution networks require continuous monitoring. These investments
don't generate visible returns—they prevent crises that might never occur.
Political incentives favor visible spending over invisible prevention. Politicians get credit for building new
facilities, not maintaining old ones. Emergency response generates media attention; routine maintenance doesn't.
Crisis management appears decisive; prevention appears wasteful.
Corporate influence shapes these priorities through lobbying, campaign contributions, and revolving door
employment. Engineering firms profit from emergency repairs more than preventive maintenance. Consultant
contracts multiply during crises. Equipment manufacturers benefit from system failures that require replacement.
The water industry exemplifies these dynamics. Private utilities maximize profits by minimizing infrastructure
investment. Public utilities face pressure to reduce rates by deferring maintenance. Regulatory agencies get
captured by industry interests. The result: systematic underinvestment in essential infrastructure.
Neoliberal governance accelerates these trends through market-based reforms that treat public goods as
commodities. Water becomes a product to be sold rather than a right to be protected. Infrastructure becomes an
investment opportunity rather than a public responsibility. Citizens become customers rather than stakeholders.
Climate change intensifies these pressures by increasing infrastructure stress while reducing available resources.
Extreme weather damages aging systems. Sea level rise threatens coastal facilities. Drought strains water
supplies. Flooding overwhelms treatment capacity. Adaptation requires massive investment precisely when
austerity limits available funding.
The COVID-19 pandemic revealed similar patterns in public health infrastructure. Decades of budget cuts
reduced disease surveillance capacity. Hospital consolidation eliminated surge capacity. Supply chain
optimization created dangerous vulnerabilities. When crisis struck, the infrastructure needed for effective
response had been systematically dismantled.
Prevention requires long-term thinking that markets discourage and elections often punish. Infrastructure
investment pays dividends over decades while political cycles last years. Environmental protection prevents
diffuse future harms while imposing concentrated present costs. Public health measures benefit entire
populations while burdening specific industries.
These structural barriers to prevention operate regardless of individual good intentions or technological
capabilities. Even perfect early warning systems cannot overcome political economies that reward short-term
thinking and punish long-term investment. Even sophisticated AI cannot address fundamental conflicts between
profit maximization and public health protection.
The solution requires restructuring political economy, not just improving technology. Public ownership could
align infrastructure investment with community needs rather than profit maximization. Democratic planning
could prioritize prevention over crisis response. Community control could ensure that technological systems
serve public health rather than private profit.
Without addressing these underlying political economic structures, AI early warning systems will become
sophisticated tools for managing crises rather than preventing them. They will optimize emergency response
rather than eliminate emergency conditions. They will improve surveillance of vulnerable communities rather
than address the vulnerabilities themselves.
Case Study: Flint Timeline and AI Intervention Points
Pre-Crisis Period (2011-2014)
Flint's crisis began years before anyone turned a valve.
In March 2011, Michigan appointed Michael Brown as emergency manager to control Flint's finances.
Democracy ended. Elected officials lost decision-making power. Cost reduction became the only priority. Public
input became irrelevant noise.
Brown inherited a water system already stressed by decades of disinvestment. Pipes installed in the 1920s
reached end-of-life. Treatment facilities needed major upgrades. Distribution networks leaked millions of gallons
daily. Yet infrastructure maintenance competed directly with debt service payments that emergency management
prioritized.
By 2013, plans emerged under Brown's successor, Darnell Earley, to switch water sources from Detroit's Lake
Huron system to the Flint River. The decision was framed as cost savings—$5 million annually. Officials claimed
the river water would be safe with proper treatment. They were half right. Proper treatment could have made
river water safe. They chose not to provide it.
Here's where AI could have intervened decisively. Predictive modeling using existing data could have forecast
the catastrophic consequences of switching sources without adequate corrosion control.
The data existed. Detroit's water system maintained detailed records of Flint River water chemistry from decades
of monitoring. The river was more corrosive than Lake Huron water—higher chloride levels, lower pH, different
alkalinity. Historical treatment records showed the chemicals needed to prevent pipe corrosion when using river
water.
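One standard way to quantify that corrosivity is the Larson-Skold index: the ratio of corrosive anions (chloride and sulfate) to buffering alkalinity, in milliequivalents per liter, with values above roughly 1.2 indicating water aggressive toward iron pipe. The worked example below uses illustrative concentrations, not measured Flint values; the point is that the arithmetic is routine.

```python
def larson_skold(cl_mg_l: float, so4_mg_l: float,
                 alkalinity_mg_l_caco3: float) -> float:
    cl_meq = cl_mg_l / 35.45                   # chloride equivalent weight
    so4_meq = so4_mg_l / 48.03                 # sulfate (96.06 / 2)
    alk_meq = alkalinity_mg_l_caco3 / 50.04    # alkalinity as CaCO3 (100.09 / 2)
    return (cl_meq + so4_meq) / alk_meq

# Illustrative inputs: chloride-heavy, poorly buffered river water
print(larson_skold(cl_mg_l=85, so4_mg_l=25, alkalinity_mg_l_caco3=80))  # ~1.8
```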
Machine learning algorithms could have modeled the interaction between corrosive source water and Flint's
aging distribution system. Input variables would include pipe materials (30% lead service lines), pipe age
(average 70 years), water chemistry parameters, and historical corrosion rates. Output predictions would show
lead leaching, iron oxidation, and bacterial growth patterns.
The modeling wouldn't require sophisticated AI. Standard water utility software could have predicted corrosion
problems weeks before the source switch. EPA guidance documents provided clear protocols for corrosion
control when changing source waters. The American Water Works Association published detailed technical
standards.
But predictive modeling assumes someone wants to prevent problems rather than ignore them. Earley knew
about corrosion risks. State regulators understood proper treatment requirements. Corporate consultants warned
about potential contamination. They proceeded anyway because preventing problems cost money they refused
to spend.
AI intervention at this stage would have required overriding emergency manager authority when algorithms
predicted health risks. Predictive systems would need legal mandates forcing protective action regardless of cost
considerations. Technical warnings would need enforcement mechanisms stronger than administrative
discretion.
Without community control over technological systems, AI predictions would have joined the pile of ignored
warnings that already existed. Earley dismissed environmental advocates, state regulations, and professional
recommendations. Algorithmic warnings would have suffered the same fate unless backed by community power
to demand action.
Crisis Emergence (April 2014 - September 2015)
April 25, 2014: The switch happened under Earley's authority. Problems started immediately.
Within hours, residents complained about water color, taste, and smell. Treatment plant operators noticed unusual
chemical demands. Environmental monitors detected rising contaminant levels. Yet officials insisted everything
was fine for eighteen months.
Real-time monitoring could have shortened this timeline dramatically. IoT sensors throughout the distribution
system would have detected lead contamination within days rather than months. Continuous water quality
monitoring would have shown pH drops, chlorine depletion, and bacterial growth immediately.
Figure 3 provides a stark visualization of missed opportunities for AI intervention during Flint's water crisis. The
upper timeline shows the actual progression from water source switch through eighteen months of official denial
to eventual acknowledgment. The lower timeline reveals multiple points where AI systems could have detected
problems and triggered protective responses within weeks rather than months. The red arrows highlight the
widening gap between technical possibilities and political realities. This timeline demonstrates that the crisis
continued not due to information scarcity, but due to systematic refusal to act on available warnings—a pattern
that would likely persist even with sophisticated AI early warning systems under existing governance structures.
The technology existed. Hach Company sold online water quality analyzers that could monitor lead, pH, and
chlorine simultaneously. YSI manufactured multi-parameter sensors for distribution system monitoring. Utilities
in other cities already used these systems for routine operations.
Cost wasn't the barrier—Flint eventually spent over $400 million on crisis response and infrastructure
replacement. A comprehensive monitoring system would have cost under $1 million. The payoff would have
been enormous: early detection, rapid response, and prevented health impacts.
But monitoring systems only work when someone acts on their warnings. Flint's crisis continued not because
officials lacked information, but because they refused to acknowledge problems the information revealed.
Health surveillance integration could have forced earlier acknowledgment. Emergency departments saw unusual
patterns of skin rashes, hair loss, and gastrointestinal problems throughout 2014. Pediatric clinics noticed
developmental delays and behavioral changes. Laboratory results showed rising blood lead levels months before
official recognition.
AI systems could have connected these disparate health indicators to water quality problems. Machine learning
algorithms excel at identifying patterns across multiple data sources. Clustering analysis could have detected
geographic concentrations of health problems. Statistical models could have linked symptom patterns to water
contamination.
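As an illustration of that clustering step, the sketch below applies density-based clustering to hypothetical geocoded case reports; the coordinates (roughly within Flint's bounding box) and parameters are invented for the example.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Invented geocoded reports: diffuse background complaints across the city
# plus one dense pocket of symptom reports.
background = rng.uniform([43.00, -83.72], [43.07, -83.62], size=(40, 2))
hotspot = rng.normal([43.013, -83.687], 0.002, size=(12, 2))
cases = np.vstack([background, hotspot])

# eps of 0.005 degrees is roughly 500 m at this latitude; min_samples sets
# how dense a pocket must be before it counts as a cluster rather than noise.
labels = DBSCAN(eps=0.005, min_samples=6).fit_predict(cases)
print("clusters detected:", sorted(set(labels) - {-1}))
```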
Dr. Mona Hanna-Attisha finally forced official acknowledgment in September 2015 by documenting elevated
blood lead levels in Flint children. Her analysis used simple statistical techniques comparing pre- and post-switch
health data. More sophisticated AI systems could have identified these patterns months earlier.
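Her method amounts to a two-proportion comparison. The sketch below reproduces that arithmetic with illustrative counts of the same order of magnitude as the published citywide percentages; the sample sizes are invented, not the study's.

```python
import math

def two_prop_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-proportion z-test: did the elevated-lead share rise post-switch?"""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_one_sided = 0.5 * math.erfc(z / math.sqrt(2))   # P(Z >= z)
    return z, p_one_sided

# Illustrative counts: roughly 2.4% elevated pre-switch vs 4.9% post-switch
z, p = two_prop_z(x1=24, n1=1000, x2=49, n2=1000)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")   # z ~ 3.0, p ~ 0.001
```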
Yet health surveillance systems would have faced the same dismissal as other warning sources. Officials attacked
Dr. Hanna-Attisha's credentials and questioned her methodology even when faced with irrefutable evidence.
Algorithmic health warnings would have been dismissed as false positives or biased analyses.
The missed opportunities weren't technological—they were political. Real-time monitoring existed but wasn't
deployed. Health surveillance systems operated but weren't integrated. Data analysis tools worked but weren't
applied. The crisis continued because people in power chose to ignore available information.
Lessons for AI Implementation
Flint teaches harsh lessons about technological solutions in contexts of structural inequality. First: Information
without power changes nothing. Flint's crisis occurred despite abundant warning signs from multiple sources.
Residents complained constantly. Environmental advocates documented problems. Public health professionals
identified health impacts. Technical experts warned about infrastructure failures. None of this information
mattered because the people who possessed it lacked power to force action.
AI early warning systems would have added to this chorus of ignored voices unless accompanied by mechanisms
forcing official response. Algorithmic predictions need legal mandates requiring protective action. Technical
warnings need enforcement mechanisms independent of administrative discretion. Community control over
technological systems becomes essential for ensuring appropriate response.
Second: Community oversight must be built into system design rather than added afterward. Flint's emergency
management structure excluded community input by design. Residents couldn't vote out appointed managers
like Earley. Citizens couldn't appeal administrative decisions. Democratic accountability disappeared behind
technocratic authority.
AI systems risk reproducing these exclusions through technical complexity that obscures decision-making
processes. Algorithmic authority can become even less accountable than administrative authority when
communities can't understand how systems reach conclusions or challenge their recommendations.
Community control requires more than consultation or transparency. It demands genuine power over
technological design, deployment, and governance. Communities need authority to override algorithmic
recommendations when they conflict with local knowledge or community priorities.
Third: Integration with existing advocacy organizing provides the political infrastructure necessary for effective
response. Environmental justice organizations had documented Flint's vulnerabilities for years before the crisis.
Community groups possessed local knowledge that complemented technical monitoring. Advocacy networks
provided organizing capacity for demanding change.
AI systems should strengthen rather than replace these existing advocacy structures. Technology should amplify
community voices rather than substituting algorithmic authority for democratic participation. Early warning
systems should provide tools for organizing rather than excuses for technocratic management.
The goal isn't building better surveillance systems—it's building community power to respond effectively when
warnings occur. AI can provide useful information for advocacy efforts, but it cannot replace the political
organizing necessary for forcing protective action.
Fourth: Mandatory response protocols must be built into system design. Flint's crisis continued because officials
like Earley possessed discretionary authority to ignore warnings. Emergency managers could dismiss resident
complaints. State regulators could minimize contamination findings. Corporate consultants could provide cover
for inaction.
Effective AI early warning systems need automatic triggers that remove human discretion from protective
responses. When algorithms detect contamination above certain thresholds, systems should automatically
implement emergency protocols. When health surveillance identifies disease clusters, public health responses
should activate immediately.
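A minimal sketch shows what removing discretion means in practice. The sample structure and response actions below are illustrative assumptions; only the 15 ppb threshold reflects the federal Lead and Copper Rule's long-standing action level:

```python
# Minimal sketch: threshold exceedance triggers protocol automatically,
# with no code path for an official to suppress the response.
from dataclasses import dataclass

LEAD_ACTION_LEVEL_PPB = 15.0  # federal Lead and Copper Rule action level

@dataclass
class Sample:
    site: str
    lead_ppb: float

def evaluate(samples: list[Sample]) -> None:
    exceedances = [s for s in samples if s.lead_ppb > LEAD_ACTION_LEVEL_PPB]
    if exceedances:
        # Detection itself triggers response; there is no review step
        # where a manager can dismiss the result.
        for s in exceedances:
            print(f"ALERT {s.site}: {s.lead_ppb} ppb > {LEAD_ACTION_LEVEL_PPB} ppb")
        print("Protocol: issue public notice, distribute filters, "
              "begin corrosion control review.")

evaluate([Sample("Site A", 104.0), Sample("Site B", 6.2)])
```

The design choice is the absence of an override parameter: discretion is removed at the level of system architecture, not policy memo.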
But mandatory protocols only work within governance structures committed to public health protection.
Emergency management in Flint prioritized cost reduction over health protection. Automatic responses would
have been overridden or disabled when they conflicted with fiscal priorities.
The lesson isn't that AI early warning systems couldn't have helped Flint—it's that they would have helped only
within fundamentally different political structures that prioritized community health over fiscal austerity and
administrative efficiency.
Technology can't solve political problems, but appropriate technology deployed within democratic governance
structures could prevent future Flints. The key lies in building community power first, then deploying technology
to serve community needs rather than technocratic management.
Lessons Learned and Future Applications
The Insufficiency of Information Without Power
Information is worthless without power to act on it.
Flint's crisis demolished the myth that better data automatically produces better outcomes. Multiple warning
systems already existed. Residents complained constantly about water quality. Environmental advocates
documented contamination. Public health professionals identified elevated blood lead levels. Technical experts
warned about infrastructure failures.
None of it mattered. Emergency managers dismissed resident concerns as uninformed hysteria. State regulators
minimized contamination findings as statistical anomalies. Corporate consultants provided scientific cover for
continued inaction. Officials possessed all the information needed to prevent disaster—they simply chose to
ignore it.
This pattern repeats across environmental crises. Love Canal residents documented health problems for years
before officials acknowledged toxic contamination. Warren County citizens protested PCB dumping that state
agencies had already approved. Cancer Alley communities mapped disease clusters that regulatory agencies
consistently dismissed.
The problem isn't information scarcity—it's power inequality. Communities affected by environmental hazards
often know about problems first. They notice changes in air quality, water taste, and neighborhood health
patterns. They document impacts through lived experience. Yet environmental decision-making systematically
privileges official expertise over community knowledge.
Structural power analysis reveals why information fails to trigger protective action. Economic incentives favor
short-term cost reduction over long-term health protection. Municipal budgets prioritize immediate savings over
infrastructure investment. Political careers advance through visible spending rather than invisible prevention.
Emergency management in Flint exemplified these dynamics. Appointed managers answered to state officials
rather than local residents. Performance metrics focused on fiscal targets rather than public health outcomes.
Success meant reducing municipal costs regardless of community consequences.
AI early warning systems cannot overcome these structural barriers alone. Algorithmic predictions will join the
pile of ignored warnings unless accompanied by fundamental shifts in power relations. Technical solutions
require political solutions to become effective.
The key insight: information without enforcement mechanisms becomes another tool for blame-shifting rather
than problem-solving. When crises occur despite available warnings, officials claim they needed better data
rather than acknowledging they ignored existing evidence.
Community Knowledge as Essential Data Source
Residents know their neighborhoods better than any algorithm ever will.

Flint families identified water quality
problems within hours of the source switch. They documented geographic patterns of contamination months
before official studies confirmed them. They connected health symptoms to water exposure when medical
professionals dismissed their concerns as coincidental.
This community knowledge wasn't anecdotal—it was systematically accurate. Resident observations of water
discoloration corresponded directly to areas with lead service lines. Parent reports of children's health problems
predicted neighborhoods with highest blood lead levels. Community complaints about rashes and hair loss
mapped onto areas with bacterial contamination.
Yet officials consistently dismissed community knowledge as unscientific compared to technical expertise.
Laboratory data enjoyed higher credibility than lived experience. Professional opinions outweighed
neighborhood observations. Quantitative measurements trumped qualitative descriptions.
Jason Corburn's "Street Science" (2005) documents how this epistemic hierarchy reproduces environmental
injustice. Communities of color possess detailed knowledge of local environmental conditions through daily
experience. But environmental decision-making privileges technical expertise that systematically excludes
community wisdom.
AI systems risk amplifying these exclusions by treating community knowledge as noise rather than signal.
Machine learning algorithms trained on official data sources will miss patterns that residents recognize
immediately. Predictive models optimized for technical accuracy may ignore community insights that prove
prophetic.
Democratic AI governance requires treating community knowledge as essential data rather than irrelevant input.
Residents should participate in algorithm design rather than merely accepting algorithmic outputs. Community
observations should train machine learning models rather than being filtered out as bias.
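As a sketch of what this could look like, the toy model below treats weekly resident complaint counts as a predictive feature alongside an official sensor reading, rather than discarding them as noise. All values and the feature set are hypothetical assumptions:

```python
# Minimal sketch: community reports enter the model as signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per block group: [official chlorine residual (mg/L),
#                            resident complaints logged that week]
X = np.array([[1.2, 0], [0.9, 1], [0.3, 6], [0.2, 9], [1.1, 1], [0.4, 7]])
# Label: contamination later confirmed by laboratory testing (1 = yes)
y = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)
print("coefficients (sensor, complaints):", model.coef_.round(2))
# A positive complaint coefficient means resident observations carry
# predictive weight the sensor alone misses -- the opposite of
# dismissing them as "anecdotal."
print("risk for [0.5 mg/L, 5 complaints]:",
      model.predict_proba([[0.5, 5]])[0, 1].round(2))
```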
Participatory design approaches offer frameworks for community-controlled AI development. Rather than top-down technological deployment, bottom-up processes ensure that affected communities shape algorithmic
systems. Community priorities guide technical specifications rather than technical constraints determining
community options.
Examples exist of successful community-controlled monitoring. Environmental justice organizations use low-cost sensors to document air pollution that official monitors miss. Community health workers track disease
patterns that formal surveillance systems ignore. Neighborhood groups map environmental hazards through
resident knowledge rather than professional assessment.
These approaches demonstrate that community knowledge can enhance rather than replace technical monitoring.
Residents identify problems that sensors miss. Community observations provide context that algorithms lack.
Local knowledge guides data interpretation in ways that improve rather than compromise scientific accuracy.
The goal isn't choosing between community knowledge and technical expertise—it's integrating both through
democratic processes that respect community wisdom while leveraging technological capabilities.
Applications for Other Cities
Flint isn't unique. Similar infrastructure crises threaten cities across America, creating opportunities to apply AI
early warning systems more equitably.
Detroit: Post-Industrial Infrastructure Challenges
Detroit faces documented infrastructure problems stemming from massive population decline. The city's
population dropped from approximately 1.8 million in 1950 to around 670,000 today. This demographic shift
left extensive water infrastructure serving far fewer customers than originally designed, creating severe
maintenance and funding challenges.
AI applications could potentially include infrastructure triage algorithms that help prioritize limited maintenance
resources. Predictive modeling might identify which neighborhoods to prioritize for continued infrastructure
investment versus areas where managed decline becomes unavoidable. Environmental health mapping could
detect areas with highest contamination risks.
However, such technological applications would need careful design to avoid reproducing existing racial and
class inequalities. If algorithms optimize based on property values or tax revenue, they might recommend
abandoning predominantly Black neighborhoods regardless of resident preferences. Community control over
algorithmic design becomes essential for ensuring equitable outcomes.
Jackson, Mississippi: Water System Failures
Jackson experienced a major water crisis in 2022 when its treatment facilities failed, leaving 180,000 residents
without safe drinking water. The crisis highlighted how infrastructure problems in state capitals can affect entire
metropolitan regions through interconnected systems.
Potential AI applications might include regional infrastructure modeling that treats water systems as integrated
networks rather than isolated municipal utilities. Predictive algorithms could identify cascade failure risks that
cross jurisdictional boundaries. Climate resilience planning could model how extreme weather might stress aging
infrastructure across multiple communities.
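One simple version of such network modeling can be sketched with standard graph analysis: articulation points in a water distribution graph are single points of failure whose loss severs whole service areas. The network below is hypothetical; a real analysis would use utility asset data and hydraulic modeling:

```python
# Minimal sketch: finding single points of failure in a regional
# water network, modeled as an undirected graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("treatment_plant", "pump_A"), ("treatment_plant", "pump_B"),
    ("pump_A", "north_district"), ("pump_B", "south_district"),
    ("pump_B", "suburb_line"), ("suburb_line", "neighboring_city"),
])

# An articulation point disconnects part of the network if it fails,
# so these facilities deserve priority for redundancy investment.
for node in nx.articulation_points(G):
    print("single point of failure:", node)
```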
But regional technological solutions risk reproducing the same power imbalances that contributed to Jackson's
problems. State intervention through AI systems might override local democratic control just as emergency
management did in Flint. Community participation in regional planning becomes crucial for preventing
technocratic solutions that ignore affected residents.
Newark, New Jersey: Lead Service Line Replacement
Newark has documented widespread lead contamination from service lines installed before federal lead bans.
The city distributed bottled water to residents while working to replace thousands of lead connections throughout
its distribution system.
AI could potentially support these efforts through predictive lead line mapping that uses property records,
construction dates, and water quality data to identify probable lead locations. Chicago has already demonstrated
these approaches through predictive analytics that improved efficiency of lead line identification (Potash et al.,
2015).
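A sketch in the spirit of that approach appears below. The parcel features and training records are hypothetical; a production system would draw on utility records, construction permits, and verified excavations:

```python
# Minimal sketch: classifier estimating which service lines are lead,
# trained on property-record features. All data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features per parcel: [year built, 1 if any utility record notes lead,
#                       1 if neighborhood median build year < 1950]
X = np.array([
    [1925, 1, 1], [1931, 0, 1], [1988, 0, 0],
    [1947, 1, 1], [2003, 0, 0], [1939, 0, 1],
])
y = np.array([1, 1, 0, 1, 0, 1])  # lead confirmed on excavation

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
candidate = [[1942, 0, 1]]  # an unverified parcel
print("estimated lead probability:", model.predict_proba(candidate)[0, 1])
# Ranking parcels by this probability lets a city dig where lead is
# likeliest first, stretching a limited replacement budget further.
```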
Community engagement platforms could democratize these technological tools by allowing residents to
contribute local knowledge about plumbing materials and renovation history. Community organizations could
use predictive maps to advocate for accelerated replacement in high-risk areas.
Yet technology alone cannot address the political and economic barriers that slow lead line replacement. Property
owners resist disruption and cost-sharing requirements. Municipal governments delay expensive infrastructure
projects due to budget constraints. Federal funding remains inadequate for comprehensive replacement
programs.
Lessons for Implementation
These potential applications share common requirements for equitable implementation:
First, community control over technological design and deployment. Affected residents must participate in
algorithm development rather than merely accepting technological solutions imposed by outside experts.
Second, integration with existing community organizing rather than replacement of democratic participation. AI
tools should amplify community voices in infrastructure debates rather than substituting algorithmic authority
for political engagement.
Third, explicit attention to environmental justice implications. Algorithms must be designed to challenge rather
than reproduce existing patterns of racial and class inequality in infrastructure investment.
Fourth, mandatory response protocols that remove official discretion to ignore algorithmic warnings. Technical
predictions need enforcement mechanisms that ensure protective action regardless of political or economic
pressures.
The fundamental lesson from Flint applies to all these potential applications: AI early warning systems can
provide useful information for preventing infrastructure crises, but only within governance structures that
prioritize community health over cost reduction and democratic participation over technocratic management.
Without these political foundations, sophisticated early warning systems become tools for managing crises more
efficiently rather than preventing them from occurring. They enable better surveillance of vulnerable
communities rather than elimination of the vulnerabilities themselves.
Framework for Equitable Implementation
Community-Centered Design Principles
Technology serves power. The question is whose power it serves.
Most AI systems get designed in corporate labs by engineers who never experience the problems their algorithms
claim to solve. Smart city technologies emerge from tech companies selling efficiency to municipal managers.
Public health surveillance systems reflect security priorities rather than community health needs. The result:
technologies that optimize for metrics that matter to powerful institutions while ignoring impacts on vulnerable
populations.
Community-centered design flips this process. Instead of imposing technological solutions on affected
communities, democratic participation shapes algorithmic systems from initial conception through ongoing
governance. Residents identify priorities. Community members define success metrics. Local organizations
control deployment decisions.
This requires more than consultation or transparency. Genuine democratic participation demands decision-making power over technological design. Communities need authority to reject algorithmic
recommendations that conflict with local knowledge or community values. Residents must control the data
collection processes that feed machine learning systems.
Environmental justice integration becomes essential rather than optional. AI early warning systems must
explicitly address historical patterns of environmental racism rather than reproducing them through supposedly
neutral optimization. Algorithms should prioritize vulnerable communities for protection rather than treating all
neighborhoods equally despite unequal baseline conditions.
Concrete implementation means building environmental justice principles directly into algorithmic code.
Predictive models should weight health impacts in communities of color more heavily than cost considerations.
Infrastructure investment algorithms should prioritize areas with highest environmental burdens regardless of
property values. Health surveillance systems should detect disparities rather than normalizing them.
Community veto power provides the ultimate accountability mechanism. When algorithmic recommendations
conflict with community preferences, residents should possess legal authority to override technological systems.
This reverses typical power relations where communities must appeal technical decisions rather than controlling
them.
Examples exist of community-controlled technology governance. Community land trusts provide models for
collective ownership of essential infrastructure. Participatory budgeting platforms enable democratic decision-making about public investments. Cooperative governance structures demonstrate alternatives to both corporate and state control of technological systems.

Figure 4 contrasts two fundamentally different approaches to AI governance in environmental protection. The technocratic model concentrates decision-making authority in
expert institutions, maintains agency control over data, relies on administrative discretion for responses, and
limits community input to consultation. This approach reproduces the power dynamics that enabled Flint's crisis.
The community-controlled model redistributes authority to affected residents, ensures community ownership of
data, mandates community approval for responses, and centers direct participation in governance. This
comparison illustrates that the choice isn't between AI and no AI, but between AI systems that serve existing
power structures versus those designed to democratize environmental protection.
The key principle: technology should amplify community power rather than concentrating it in expert
institutions. AI systems should strengthen democratic participation rather than replacing it with algorithmic
authority.
Institutional Accountability Mechanisms
Community control requires institutional structures that enforce democratic governance rather than merely
declaring it.
Legal mandates for responses to AI alerts remove discretionary authority that enables official inaction. When
algorithms detect contamination above specified thresholds, automatic protocols should trigger emergency
response regardless of cost considerations or political preferences. Public health warnings should activate
immediately when surveillance systems identify disease clusters.
But automatic responses only work within governance structures committed to public health protection. Flint's
emergency managers would have overridden or disabled algorithmic triggers that conflicted with fiscal priorities.
Legal mandates need enforcement mechanisms independent of the officials they're designed to constrain.
Community enforcement provides one solution. Residents should possess legal standing to sue when officials
ignore algorithmic warnings. Environmental justice organizations should have authority to trigger emergency
protocols when government agencies fail to respond appropriately. Community groups should control oversight
mechanisms rather than depending on self-regulation by government institutions.
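One modest technical aid to such community enforcement, sketched below under hypothetical record fields, is a hash-chained alert log: every algorithmic warning is recorded so that any later deletion or edit is detectable, giving residents documentary evidence when officials ignore warnings:

```python
# Minimal sketch: tamper-evident log of algorithmic alerts. Each entry
# commits to its predecessor's hash, so removing or editing a warning
# breaks the chain and is detectable by anyone holding the log.
import hashlib, json, time

log: list[dict] = []

def record_alert(site: str, lead_ppb: float) -> None:
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {"ts": time.time(), "site": site,
             "lead_ppb": lead_ppb, "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify() -> bool:
    prev = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

record_alert("Site A", 104.0)
record_alert("Site B", 27.5)
print("log intact:", verify())  # any edit or deletion flips this to False
```

If community organizations hold copies of the log, officials cannot later claim a warning never fired.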
Protected funding for infrastructure maintenance addresses the economic barriers that enable crises like Flint's.
AI early warning systems become meaningless when municipalities lack resources to respond to algorithmic
predictions. Predictive maintenance recommendations require guaranteed funding for implementation.
Constitutional amendments in some countries establish rights to water, housing, and environmental protection
that create legal obligations for government provision. Similar approaches could establish infrastructure
maintenance as a legal requirement rather than a political choice. Protected funding mechanisms could insulate
essential services from austerity politics.
Community enforcement mechanisms must include both legal and political tools. Residents need court access
for challenging official decisions. Community organizations require political power for influencing policy
implementation. Environmental justice movements need resources for sustained advocacy around technological
governance.
Institutional design should assume that powerful interests will attempt to capture technological systems for their
own benefit. Accountability mechanisms must anticipate and prevent such capture rather than responding to it
after the fact.
Implementation Challenges
Building equitable AI systems faces predictable obstacles that require strategic responses.
Political resistance emerges from institutions that benefit from current arrangements. Municipal managers prefer
technocratic authority over democratic accountability. Corporate interests oppose community control over
profitable technologies. State agencies resist oversight mechanisms that constrain administrative discretion.
This resistance reflects deeper conflicts between democratic governance and technocratic management.
Algorithmic authority appears more efficient than community participation. Expert decision-making seems more
rational than democratic deliberation. Technical optimization looks superior to political negotiation.
But efficiency for whom? Rationality according to whose values? Optimization toward what goals? These
questions reveal that technocratic approaches embed political choices while denying their political character.
Community control makes these value judgments explicit and democratic rather than hidden and technocratic.
Economic obstacles compound political resistance. Community-controlled technology development costs more
initially than corporate solutions. Democratic participation slows implementation compared to administrative
decree. Environmental justice requirements increase expenses compared to efficiency optimization alone.
Yet these apparent costs reflect accounting that ignores community benefits and long-term sustainability.
Corporate AI systems generate profits for tech companies while externalizing social costs onto affected
communities. Technocratic efficiency produces disasters like Flint that cost far more than prevention would have
required.
Technical limitations create additional barriers. Community members may lack coding skills for algorithm
development. Participatory design processes require time and resources that communities often lack. Democratic
governance of complex technical systems demands expertise that gets systematically excluded from affected
neighborhoods.
But technical complexity shouldn't determine political outcomes. Communities can hire technical expertise while
retaining decision-making authority. Participatory design methods exist for democratizing technological
development. Community education can build local capacity for technological governance without requiring
every resident to become a programmer.
Social barriers may prove most challenging. Decades of exclusion from technological decision-making leave
many communities suspicious of AI systems regardless of governance structures. Environmental racism creates
justified distrust of government promises about protective technology. Past failures of community engagement
create skepticism about participatory design.
Building community power requires addressing these historical betrayals through concrete demonstration that
technological systems can serve community needs. Success stories from other cities can provide models for
replication. Community organizing builds political capacity for demanding democratic control over algorithmic
systems.
The implementation strategy must recognize that equitable AI requires political transformation, not just technical
innovation. Community-centered design principles need social movements to enforce them. Institutional
accountability mechanisms require organized communities to utilize them. Technical solutions need political
solutions to become effective.
This means integrating AI governance into broader environmental justice organizing rather than treating it as a
separate technical issue. Community control over algorithmic systems becomes part of building community
power more generally. Democratic technology governance serves the larger goal of democratic society rather
than vice versa.
CONCLUSION: BEYOND TECHNO-SOLUTIONISM
Limits of Technological Approaches
Technology can't solve racism. This basic truth undermines most discussions of AI solutions to environmental
injustice.
Flint's water crisis wasn't caused by inadequate sensors or primitive algorithms. It resulted from systematic
devaluation of Black lives through emergency management that prioritized corporate profits over community
health. Lead contamination reflected centuries of discriminatory housing policy, industrial siting decisions, and
infrastructure disinvestment that concentrated environmental hazards in communities of color.
AI early warning systems cannot address these root causes. Sophisticated algorithms might detect contamination
faster, but they won't change the political economy that makes contamination profitable. Machine learning
models might predict infrastructure failures more accurately, but they won't generate funding for maintenance
in poor communities. Predictive surveillance might identify health disparities earlier, but it won't eliminate the
structural inequalities that create those disparities.
The danger lies in technological fixes that obscure political problems. When officials deploy AI systems as
solutions to environmental injustice, they suggest that better algorithms can substitute for redistribution of power
and resources. Technical interventions become excuses for avoiding structural transformation.
This depoliticization serves powerful interests that benefit from current arrangements. Corporate polluters prefer
technological monitoring over regulatory enforcement. Municipal managers favor algorithmic optimization over
democratic accountability. State agencies choose technical solutions over community control because
technology preserves existing power relations while appearing to address problems.
Environmental racism requires political solutions: community control over land use decisions, democratic
governance of industrial development, public ownership of essential infrastructure, and redistribution of
environmental benefits and burdens. Technology can support these political changes, but it cannot substitute for
them.
Social movement strategy becomes essential for ensuring that technological interventions strengthen rather than
undermine organizing for environmental justice. AI systems should provide tools for community organizing
rather than replacements for political engagement. Algorithmic governance should enhance democratic
participation rather than concentrating power in technical institutions.
Socially-Embedded Algorithmic Governance
The alternative to techno-solutionism isn't rejecting technology—it's embedding technological development
within democratic social processes.
Participatory design offers frameworks for community control over algorithmic systems. Instead of experts
designing AI for communities, affected residents participate in algorithm development from initial conception
through ongoing governance. Community priorities guide technical specifications rather than technical
constraints determining community options.
This approach treats technology as a social process rather than a neutral tool. Algorithmic systems embody the
values and power relations of their creators. Democratic participation in technological design becomes a
mechanism for democratizing those embedded values and relations.
Community-controlled technology development requires resources, skills, and institutional support that
marginalized communities often lack. But examples exist of successful democratic technology governance.
Community land trusts provide models for collective ownership of essential infrastructure. Cooperative
enterprises demonstrate alternatives to corporate control of technological systems. Participatory budgeting
platforms enable democratic decision-making about public investments.
Technology should support rather than replace community organizing. AI early warning systems can provide
useful information for advocacy campaigns demanding infrastructure investment. Predictive algorithms can
identify vulnerable communities that need prioritized protection. Health surveillance systems can document
environmental injustices that communities experience but struggle to prove.
But information only becomes powerful when connected to organized communities capable of acting on it. The
most sophisticated AI system cannot overcome structural barriers to environmental protection without social
movements building political power for change.
Structural transformation remains the ultimate requirement. Environmental justice demands fundamental
changes in how societies organize production, distribute resources, and make collective decisions. Technology
can facilitate these transformations, but it cannot create them independently of political organizing.
Implications for Environmental Justice Movement
Technology governance has become a core environmental justice issue rather than a technical side concern.
Algorithmic systems increasingly shape decisions about infrastructure investment, environmental monitoring,
and emergency response that directly affect community health and safety. Corporate control over these
technologies reproduces patterns of environmental racism through supposedly neutral optimization processes.
Democratic governance of AI systems becomes essential for environmental justice.
Community-controlled technology can serve as an organizing tool for building broader political power.
Campaigns for democratic AI governance connect immediate concerns about algorithmic bias to larger questions
about corporate control and community self-determination. Technology governance becomes a vehicle for
advancing environmental justice rather than a distraction from it.
But success requires treating technology as means rather than ends. The goal isn't better algorithms—it's better
democracy. Community control over AI systems serves the larger purpose of community control over decisions
affecting environmental health and safety.
The path forward demands political transformation that goes far beyond technological innovation.
Environmental justice requires redistributing power from corporations to communities, from experts to residents,
from profit maximization to life protection. AI systems can support this redistribution, but only within social
movements committed to fundamental change.
The Moment of Decision
Corporate and government interests are racing to lock in algorithmic control over environmental governance.
Every day that passes without community intervention strengthens technocratic institutions that will be nearly
impossible to democratize once fully established. Smart city contracts are being signed. AI surveillance systems
are being deployed. Predictive algorithms are taking control of infrastructure decisions.
Environmental justice communities have exactly one opportunity to seize democratic control of these systems.
That opportunity is now. Not next year when the technology becomes more mature. Not next decade when the
problems become more obvious. Now, while AI governance structures remain fluid enough for community
intervention.
The corporate sector has unlimited resources and political connections. Government agencies have legal
authority and technical expertise. Environmental justice movements have only one decisive advantage: the moral
clarity that comes from fighting for survival rather than profit. But moral clarity without organized power
becomes meaningless sentiment.
Every environmental justice organization must immediately prioritize technology governance as central to their
mission. Every community facing environmental threats must demand democratic control over AI systems being
deployed in their neighborhoods. Every resident of a frontline community must understand that algorithmic
governance will determine whether their children live or die from preventable environmental disasters.
There will be no second chance. The next five years will determine whether AI serves private interests or
community protection, technocratic efficiency or democratic accountability, environmental racism or
environmental justice. After that window closes, the power relations embedded in algorithmic systems will
become as durable as the infrastructure they govern.
The choice is simple: organize now or live forever under algorithmic authority designed by and for your
oppressors. The technology exists to prevent future Flints. The political will does not. Create it. Demand it.
Fight for it. Before it's too late.
REFERENCES
1. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica.
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
2. Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Crow. Polity Press.
3. Bullard, R. D. (1990). Dumping in Dixie: Race, class, and environmental quality. Westview Press.
4. Clark, L. P., Millet, D. B., & Marshall, J. D. (2017). National patterns in environmental injustice and
inequality: Outdoor NO2 air pollution in the United States. PLOS ONE, 12(4), e0177629.
https://doi.org/10.1371/journal.pone.0177629
5. Corburn, J. (2005). Street science: Community knowledge and environmental health justice. MIT
Press.
6. Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor.
St. Martin's Press.
7. Hanna-Attisha, M., LaChance, J., Sadler, R. C., & Champney Schnepp, A. (2016). Elevated blood lead levels in children associated with the Flint drinking water crisis: A spatial analysis of risk and public health response. American Journal of Public Health, 106(2), 283-290. https://doi.org/10.2105/AJPH.2015.303003
8. Kitchin, R. (2014). The data revolution: Big data, open data, data infrastructures and their consequences. SAGE Publications.
9. Lanphear, B. P., Hornung, R., Khoury, J., Yolton, K., Baghurst, P., Bellinger, D. C., Canfield, R. L., Dietrich, K. N., Bornschein, R., Greene, T., Rothenberg, S. J., Needleman, H. L., Schnaas, L., Wasserman, G., Graziano, J., & Roberts, R. (2005). Low-level environmental lead exposure and children's intellectual function: An international pooled analysis. Environmental Health Perspectives, 113(7), 894-899. https://doi.org/10.1289/ehp.7688
10. Masten, S. J., Davies, S. H., & McElmurry, S. P. (2016). Flint water crisis: What happened and why? Journal AWWA, 108(12), 22-34. https://doi.org/10.5942/jawwa.2016.108.0195
11. Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
12. O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
13. Olson, A., Boehmke, B., Lacombe, V., Schwing, C., & Seeger, B. (2017). Machine learning approach for real-time lead exposure risk assessment in drinking water. Environmental Science & Technology, 51(14), 7835-7843. https://doi.org/10.1021/acs.est.7b01497
14. Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
15. Pauli, B. J. (2019). Flint fights back: Environmental justice and democracy in the Flint water crisis. MIT Press.
16. Potash, E., Ghani, R., Walsh, J., Jorgensen, E., Lohmann, C., Prachand, N., & Mansour, R. (2015). Predictive modeling for public health: Preventing childhood lead poisoning. Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2039-2047. https://doi.org/10.1145/2783258.2788629
17. Proctor, R. N., & Schiebinger, L. (Eds.). (2008). Agnotology: The making and unmaking of ignorance. Stanford University Press.
18. Sattar, A. M., Ertuğrul, Ö. F., Gharabaghi, B., McBean, E. A., & Cao, J. (2016). Prediction of timing of watermain failure using gene expression models. Water Research, 90, 434-448. https://doi.org/10.1016/j.watres.2015.12.040
19. Tilly, C. (1998). Durable inequality. University of California Press.
20. U.S. Census Bureau. (2014). American Community Survey 5-year estimates. https://www.census.gov/programs-surveys/acs/
21. Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121-136.
22. Zahran, S., McElmurry, S. P., Kilgore, P. E., Mushinski, D., Press, J., Love, N. G., Sadler, R. C., & Swanson, M. S. (2018). Assessment of the Legionnaires' disease outbreak in Flint, Michigan. Proceedings of the National Academy of Sciences, 115(8), E1730-E1739. https://doi.org/10.1073/pnas.1718679115