The Hidden First Step in AI Adoption: Alignment, Then Domain Focus


When organizations start exploring AI, the hardest part often isn’t the technology—it’s knowing where to begin. More often than not, the real challenges are about alignment, clarity, and timing.
In a recent talk, "AI Strategy Implementation – Transforming the Enterprise," Eric Lamarre of McKinsey & Company said something that stuck with me: don’t start with the tech. He walked through how Air Canada approached AI adoption not by rushing into models, but by starting with leadership, strategy, and structure. It’s a refreshingly honest look at what early AI adoption actually requires and how much of it has nothing to do with code.
For anyone wondering where to begin, this talk is worth unpacking. It got me thinking, and in this post I’ll share a few takeaways that stood out, not just from a systems and research lens, but from my work helping organizations figure out where AI fits and how to make it work.
Start with Alignment, Not the Tech
One of the most overlooked phases in AI adoption is alignment, especially among leadership. Lamarre emphasized that Air Canada took its entire management team through what he called a “learning journey” before touching any code. That meant taking time to define roles like data engineer vs. data scientist, to understand the tech stack, and to clarify what data management actually involves.
"You should take the two or three months to actually align the top team. What is AI, what is a data engineer, how is that different from a data scientist?"
From what I’ve seen—across institutions, companies, and community-facing organizations—there’s often a wide range of familiarity when it comes to AI. Some people are experimenting on their own time, trying to understand how it fits into their work. Others are waiting on leadership or governing bodies to hand down policies or frameworks. The result is a mixed bag of understanding. So when it finally comes time to make decisions or move something forward, people aren’t always on the same page—and the conversation can quickly feel chaotic or misaligned.
That’s why I believe one of the most important first steps isn’t just talking about AI as a technology, but agreeing on what we even mean by it. Artificial intelligence is an umbrella term. And for many teams, it’s not clear what falls under it. Is it machine learning? Is it a chatbot? Is it automation? Is it agents? Getting clarity on the language is the starting point. Because until that happens, it’s hard to build shared understanding, let alone strategy.
That kind of upfront investment doesn’t always feel urgent, but it becomes a foundation. Without that shared understanding, even the best AI tools won’t benefit an organization. Everyone needs to be on the same page about what’s possible, what matters, and what the organization is truly trying to achieve.
When that shared understanding, that common mental model, is missing, two things tend to fall apart: alignment and strategic coherence.
While this post leans more toward alignment, it’s worth briefly touching on strategic coherence within the context of AI implementation.
I like to think of alignment and strategic coherence as being two sides of the same coin. Alignment is the people work: creating space for listening, reflection, and connection across teams. Strategic coherence is the structure underneath, making sure that what we’re aligning to actually fits together. But what does that look like?
Consider this analogy: Strategic coherence is like building a house where the electrician, plumber, and carpenter are all working at the same time, off the same blueprint. The electrician wires for the smart home system because it’s clearly in the design. The plumber installs pipes that fit the spa bathroom layout. The carpenter chooses materials that complement both the wiring and the plumbing. Everything clicks because each team’s work fits together, not just individually, but as part of the bigger design. That’s strategic coherence.
On the other hand, a lack of strategic coherence means no one is making sure the work stays aligned with the blueprint. Consequently, you don’t end up with a well-built house. You get rooms that don’t connect, wiring that doesn’t match the layout, plumbing in the wrong parts of the house, and finishes that look like they belong in different homes.
Furthermore, strategic coherence isn’t just about initial alignment. It’s about staying in sync as the system evolves. Everyone might agree on building a house, but the moment the plan shifts—say, toward smart home systems—that shared understanding can splinter.
In my own work, I’ve seen this play out in very different ways.
One of the most impactful early experiences I had was working with the founder of a company focused on education and mechanics for transportation devices. I walked alongside him in a mentorship capacity, and he approached his AI initiative with a deep commitment to alignment. For nearly three months, he focused entirely on exploring the strategy, asking questions, and bringing others into that early conversation. At the time, I didn’t realize how rare or important that step was. But when the project launched, it worked smoothly. The clarity paid off.
Not long after that experience, I found myself working inside a much larger institution where there wasn’t time or space for alignment. Everyone was busy, and I thought maybe we could just dive into the tech and figure it out as we went. That didn’t work. Without that initial alignment, the work got stuck before it could begin.
A third moment came while I was serving as the communications lead in a high-level fellowship for Open Library, the world’s largest open-source digital library, now reaching over 10 million readers.
That fellowship experience began with something unexpected: a pause. The project’s lead emphasized the importance of spending time up front just talking and aligning. He made it clear he’d rather spend more time in dialogue than rush into building and risk having to undo misaligned work later. That early alignment was foundational. The project itself—building a communications program from scratch—was a success. (For those interested, you can read the full case study: Co-Designing the Communications Program at Open Library—The Largest Open-Source Digital Nonprofit Library—from the Ground Up)
Those three experiences shaped how I see AI adoption. Alignment isn’t extra. It’s essential. If a team or organization doesn’t have the time to do that first, it might not be the right time to start at all.
Think in Domains, Not Just Use Cases
Another insight from Lamarre’s talk that stood out to me was the idea of organizing AI efforts by domains rather than individual use cases. At Air Canada, the team created a domain map that reflected how the business naturally operates. Each domain contained a portfolio of related use cases, allowing for more focused transformation.
"Start with domains, not use cases... A single use case rarely has transformative power."
Before going further, it helps to clarify what we mean by "domains." In this context, a domain isn’t an entire industry, nor is it always the same thing as a department. Think of domains as ecosystems within the business, each with its own challenges, workflows, and levers for impact.
For example, in an airline, domains might include cargo, customer support, flight operations, or loyalty programs. Each of these domains could contain multiple use cases—like optimizing cargo loading, predicting call center demand, or improving delay communication. What makes a domain powerful is that it gives structure. It lets you group related efforts and move toward transformation with focus and intent.
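To make that structure concrete, here’s a minimal sketch of what a domain map might look like as a simple data structure. This is my own illustration, not something from the talk; the domain names, owners, and use cases below are hypothetical stand-ins for an airline’s portfolio.

```python
# A minimal, hypothetical domain map: domains mirror how the business
# operates, and each one holds a portfolio of related use cases rather
# than a single isolated project. (Requires Python 3.9+.)
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    status: str = "proposed"  # e.g., proposed, piloting, deployed

@dataclass
class Domain:
    name: str
    owner: str  # an accountable business leader, not just an IT contact
    use_cases: list[UseCase] = field(default_factory=list)

# Illustrative airline domains; the names are assumptions for this sketch.
domain_map = [
    Domain("cargo", owner="VP Cargo", use_cases=[
        UseCase("predict cargo no-shows"),
        UseCase("optimize belly-space use on passenger flights"),
    ]),
    Domain("customer support", owner="VP Customer Care", use_cases=[
        UseCase("forecast call-center demand"),
        UseCase("improve delay communication"),
    ]),
]

for domain in domain_map:
    print(f"{domain.name} ({domain.owner}): {len(domain.use_cases)} use cases")
```

The design choice worth noticing is that the unit of planning is the domain, not the use case: ownership, progress, and measurement all attach to the portfolio as a whole.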
If you try to implement use cases randomly across departments, they may never connect. But if you begin within a defined domain, the work becomes more organized and measurable. You can build momentum, learn faster, and scale with purpose.
This idea resonated with me because I’ve seen the opposite happen: organizations chasing isolated wins that never add up. When leaders think in terms of domains—functions, journeys, or operating units—they’re more likely to structure AI adoption in ways that align with how their business actually runs.
One domain Lamarre highlighted was cargo. Rather than launching scattered use cases across the airline, Air Canada chose cargo as a starting domain and focused its efforts there. Within that single area, the team developed around ten coordinated use cases, from predicting cargo no-shows to optimizing how space was used on passenger flights.
Because the work was concentrated, it didn’t just improve individual tasks; it changed how cargo operated as a whole. That’s the power of domain-level focus. It creates enough critical mass to transform operations, not just tweak them.
If there’s one thing this talk drove home, it’s that AI adoption isn’t about chasing technology. It’s about making choices. Intentional ones: Choosing to align before you build. Choosing a domain, not a scatter of experiments. Choosing to start where you can actually create momentum, not just check a box.
Whether your team is just getting started or already deep in exploration, the lesson is the same: real transformation starts with how you think, not what you deploy.
Thinking about implementing AI or multi-agent systems? I’d love to help or answer any questions you have. I also offer workshops and strategy support—learn more on my website!