Why AI Built to Help Gets Shelved
A blog post on the structural barriers preventing AI for Social Impact projects from reaching deployment. Based on our FAccT 2026 paper.
Between 2018 and 2023, the number of AI for Social Impact (AI4SI) projects more than tripled, from 170 to over 600. These are projects applying machine learning to problems like HIV prevention, maternal health, food insecurity, and disaster response — work explicitly oriented toward communities that large tech companies have little incentive to serve. Yet a striking majority of these projects never reach the communities they’re designed for.
For our FAccT 2026 paper, we interviewed 26 active AI4SI researchers covering 38 projects across public health, conservation, social justice, and agriculture. Almost all of them had a project that never made it out of the lab. We set out to understand why — not from the perspective of the nonprofits or communities being served, but from the researchers themselves.
Despite rapid growth in AI for Social Impact, most projects stall before reaching the communities they intend to serve.
What we found was that deployment failure is rarely about the technology. It’s about everything surrounding it.
The Gauntlet
Through thematic analysis of our interviews, we identified four interconnected categories of challenges. Think of them as a gauntlet: a technically excellent project has to survive all four.
Four challenge categories identified through thematic analysis of 26 researcher interviews covering 38 AI4SI projects.
The structural challenges hit hardest and earliest. Academia runs on a “publish or perish” logic that rewards frequent, technically novel output. AI4SI work is almost the inverse: slow, iterative, engineering-heavy, and often producing exactly one paper after two years of fieldwork. One participant put it directly: “As a PhD student, you may be forced into engineering work for months with no clear roadmap to getting a paper. This is a true blocking factor.”
But even researchers willing to absorb that cost still face the partner organization side of the equation. Nonprofits and government agencies typically operate with small teams, limited budgets, and no mandate for multi-year AI research collaborations. And when a senior administrator says “this sounds great, let’s do it,” that enthusiasm doesn’t always reach the frontline. One participant, reflecting on a wildlife conservation project, recalled:
“A high-level secretary said, ‘sounds like an awesome idea, let’s do it!’ But then, when we went to the rangers on the ground, they said, ‘we just need better shoes. We just need better guns. We don’t need AI.’”
This captures something the broader AI discourse consistently misses: the problem of misaligned incentives isn’t just between academia and its partners — it exists within organizations. Leadership enthusiasm can mask real frontline resistance, and no amount of technical sophistication bridges that gap.
The Slow Work of Building Trust
Beneath the structural and organizational barriers lies something harder to quantify: trust. Partner organizations have often been burned before. A university team shows up, runs interviews, publishes a paper, and disappears. As one researcher put it: “They don’t trust you because they feel like you just come and get a paper, and then you’re gone.”
The researchers who had actually managed to deploy something shared a consistent strategy: deliver something useful early, before the heavy technical work begins. A simple data visualization. A regression model answering a question the partner actually had. Something tangible that demonstrates commitment — not just with leadership, but with the people who will eventually use the system.
Data access is its own obstacle course. One participant spent six months helping a partner organization figure out what data they even had — only to discover that the data needed for the project had never been collected. Another spent a full year building sufficient trust before a partner would share sensitive records, eventually embedding a student as an intern just to make data access logistically possible.
The Maintenance Problem Nobody Talks About
Even the projects that reach deployment face a final, underappreciated challenge: staying deployed. Academic labs are not built for software maintenance. Students graduate. Grants expire. The one person who understood the system moves on.
One participant put the economics plainly: “If funding is not figured out, it may not fail immediately — but it will fail eventually. Because at the end of the day, somebody has to pay for it.”
The gap between what research grants provide and what sustainable deployment actually costs is rarely acknowledged.
What Actually Helps
Our participants weren’t only documenting failure. Several strategies emerged consistently across the interviews.
The most universally cited was the “quick win”: in the early weeks of a collaboration, prioritize delivering something immediately useful to the partner before any heavy technical work begins. It builds trust, sharpens the problem definition, and signals that you’re not just there to extract data for a paper.
Working through intermediaries — researchers from social work, medicine, or public health who already hold long-standing relationships with partner organizations — dramatically accelerated deployment in several cases. The tradeoff is real: this creates an access problem for early-career researchers who don’t yet have those networks.
At the institutional level, participants pointed to structural reforms: recognizing deployed systems and open-source tools as legitimate research outputs, building university consortia to manage data agreements and partner matching, and developing training programs that give AI researchers genuine exposure to fieldwork and community-engaged methods.
Why This Matters
The communities that AI4SI research aims to serve are not going to be served by large technology companies. Those communities are not profitable markets. AI4SI is, in a real sense, the only part of the AI ecosystem explicitly oriented toward them. That makes getting this right urgent.
By naming the gauntlet clearly, we hope to make it slightly less lethal — and to help ensure that the AI systems built to help people actually get to do so.
Based on: Majumdar, Zhang, Prawal & Yadav. “The Hardness of Achieving Impact in AI for Social Impact Research.” FAccT ’26, Montreal. doi.org/10.1145/3805689.3812387. Supported by NSF Grant #2427737.