When AI Leaves Communities Behind: What We Miss and Why It Matters


With so much talk around AI and the products and services built around it, the community aspect is often overlooked: the people these tools are ultimately meant to serve. In my opinion, AI’s impact on community matters just as much as, if not more than, the technology itself.
Too often, organizations adopt a “build it and they will come” mindset: build the product or service, roll it out, and assume the people will receive it. That’s why, when I talk to people about AI and its intersection with community-level realities, I often bring up a key concept: top-down organizational strategy meeting bottom-up implementation.
Top-down strategy refers to leadership and stakeholders reaching consensus on a direction—what they believe to be the most strategic approach to deliver a product or service. Bottom-up implementation is about the people, the public, the community. They’re not just the end-users. They’re a critical part of the process. But this is where many organizations fall into symbolic inclusion, where community opinions are collected but not truly factored into the design or direction. This disconnect is at the heart of what I’ll share throughout this blog post.
Symbolic inclusion often starts with good intentions.
A survey is sent out, a focus group is convened, a community liaison is hired. On paper, these efforts suggest engagement. But in practice, the insights gathered rarely alter the trajectory of the initiative. The strategy has already been locked in. The decisions have already been made. The community is being invited to react, not to shape.
This is more than a missed opportunity. It’s a signal. It tells communities that their voices matter only as long as they don’t challenge the original blueprint. And over time, that erodes trust. It creates fatigue. People learn to disengage—not because they don’t care, but because they’ve seen how little their input moves the needle.
If the goal is to build AI systems that truly serve people—not just in theory, but in practice—then inclusion has to be more than symbolic. It calls for structures that make participation real. Feedback loops shouldn’t just exist for show. They should shape the work.
One global example of participatory design in action comes from The Global Fund. While not an AI organization, its governance model offers a compelling case for what it means to embed community voice meaningfully, beyond symbolism.
The Global Fund was created to fight AIDS, tuberculosis, and malaria, but what sets it apart is how it structures participation from the communities most affected by these diseases. Its Country Coordinating Mechanisms (CCMs) are multi-stakeholder committees that include representatives from government, civil society, the private sector, and, crucially, people living with the diseases themselves. These CCMs are not merely advisory; they are decision-making bodies.
To receive funding, countries must demonstrate that affected communities were meaningfully involved in the proposal process. And even after funds are granted, continued engagement is monitored. The Global Fund conducts periodic reviews to ensure community voices aren’t just consulted once but remain active throughout implementation. If engagement lapses, funding can be suspended.
It’s a model that offers some thoughtful possibilities. In this approach, community-based organizations are given access to oversight tools, including ways to raise concerns or request independent evaluations. The intention is not just to check the box on inclusion but to build in meaningful accountability. Some parts of this structure, like giving community members a say in how things move forward, can help the work stay connected to what is really happening in people’s lives.
It’s not a perfect comparison, but the lesson applies: when community voice is structured into the process, communities are no longer treated as recipients of innovation but as co-creators of outcomes. This puts more of us on a path to building systems and technology that people actually trust.
Offering a Different Path Forward
There’s no one-size-fits-all approach to community participation in AI projects, but there are some promising questions leaders can start asking:
What would it look like to treat community feedback as a condition of progress, not a courtesy?
What systems would we need to monitor, not just community satisfaction, but continued involvement?
How might we build accountability into the way we work with communities, not just into how we measure results?
By shifting from symbolic inclusion to shared authorship, organizations can move beyond outreach and toward real partnership. That change does take time. It also takes humility. And it takes structure. But if the goal is to build tools and systems that truly serve, then the people they’re meant for must be part of that journey, not as spectators, but as participants and co-authors of what’s possible.
This is where many efforts start to strain. Too often, technical milestones are treated as proof that a system is working. But in practice, that idea breaks down the moment you ask: Whose reality defined “success”? Who set the metrics? Who got to name the problem? And who was told to adopt something they had no part in shaping?
So let’s push a bit further and consider the following scenario.
Imagine a city agency rolling out an AI-powered eligibility system for public housing benefits. From the top-down view, it’s efficient: faster processing, fewer bottlenecks, lower fraud. But here’s what gets buried or overlooked:
What would it look like to treat community feedback as a condition of progress, not a courtesy?
What systems would we need to monitor, not just community satisfaction, but continued involvement?
How might we build accountability into the relationship itself, not just into the metrics?
This kind of disconnect isn’t rare. And it won’t be solved by better code alone. That’s why this closing isn’t just a summary; it’s an invitation … an invitation to pause. To consider the following questions:
Where in your work have people, or the community, been left without a real say?
Who shaped the questions that guided the project?
Who interpreted the data and made meaning from it?
Where has public input been limited to a quick survey or a Q&A after major decisions were already made?
Where might your outreach look inclusive on the surface, but fall short of true participation?
Too often, what’s called community engagement amounts to a brief check-in after decisions are already made. But real engagement invites people in early, builds trust over time, and leaves space for shared influence.
Thinking about implementing AI or building a community? I’d love to help or answer any questions you have. I also offer workshops and strategy support—learn more on my website!