How to Lead When the Smartest One in the Room Is a Model


Five research-backed skills that define leadership in the AI era
You now lead a team where half the value is generated by code, not people.
Your reports include humans, chatbots, and dashboards that never sleep.
If that’s not true for your team yet, it soon will be.
The job isn't just to manage output anymore. It's to orchestrate clarity.
That’s where future-back leadership begins.
Before our latest advisory session, Mike Bennetts paused us with a deceptively simple prompt:
“Picture the outcome you want—then work backward toward it.”
The shift was immediate. The room moved from polishing slides to clarifying purpose.
My own success test crystallized:
“I understand what an executive perspective means for this iteration of the presentation. The Advisory Conversation, including Mike's feedback, has offered new perspectives and opportunities on what is working and what we might improve.”
That single move—anchoring the session with a future-back sentence—modeled the first of five leadership skills needed in the AI era. These skills emerge from The Future of Work & Leadership in the AI Era, a 2025 internal research report by Orion Group, and are reinforced by findings from McKinsey, Gallup, the World Economic Forum, and real-world leadership transformations across industries.
These aren't soft skills. They're survival skills.
Especially when your team includes humans, models, and the occasional hallucination.
1. Facilitative Presence
Think conductor, not commander
“Facilitation of distributed, AI-enabled collaboration is a core emerging competency.”¹
Why it matters
Command-and-control collapses when your team spans time zones and the software thinks faster than you. Traditional decisiveness often becomes a bottleneck when data and insights flow from multiple sources, both human and digital.
What happens when it's missing
Meetings meander. Models hijack the agenda. Humans disengage. According to Gallup’s 2024 workplace research, only 21% of employees globally feel engaged at work, with poor meeting structures being a key contributor.²
Try this
Start every meeting with one future-back sentence: the observable outcome you want. For example: “By the end of this hour, we’ll have three prioritized initiatives informed by AI insights, each with a clear owner and success metric.”
Assign the model a single role (e.g. option synthesizer or pattern identifier). Be specific about what you want the AI to do—and what you don’t.
Rotate human facilitation weekly to build collective muscle. Document who excels at what type of facilitation.
Real-world example: At Salesforce, CEO Marc Benioff implemented a facilitation technique called V2MOM (Vision, Values, Methods, Obstacles, Measures) to provide structure in an increasingly AI-augmented workplace. Meeting leaders must articulate these elements before launching initiatives, ensuring human intention drives the technology, not vice versa.³
2. Sense-Making at Speed
Turning model output into momentum
Leaders must “interpret and give meaning to the data floods produced by AI systems.”⁴
Why it matters
AI delivers options. Humans deliver meaning. MIT leadership researcher Deborah Ancona identifies “sensemaking” as one of four critical leadership capabilities—especially as information volume multiplies.⁵
What happens when it's missing
Dashboards pile up. Progress stalls. Teams drown in data while starving for insight. A PwC study found that 59% of executives say information overload has negatively impacted decision quality.⁶
Try a two-minute sense-making ritual
What did the model deliver? (e.g. “The AI analyzed 200,000 customer interactions and identified three sentiment patterns.”)
Why does it matter? (e.g. “This directly affects our customer satisfaction scores.”)
What happens next? (e.g. “We’ll pilot a response protocol for the negative sentiment cluster.”)
Real-world example: Microsoft’s Azure AI team runs “insight-to-action loops” after major model runs. Leaders host short sense-making sessions where technical findings are translated into business implications—ensuring outputs drive real decisions.⁷
Narrative—not data—is what actually moves a team forward.
3. Systems & Second-Order Thinking
If you pull this lever, what moves next?
“Holistic, second-order decision-making separates adaptive organizations from fragile ones.”⁸
Why it matters
Automating the wrong task doesn’t just waste time. It breaks trust. Peter Senge’s research shows organizations with systems thinking recover faster from disruption and make fewer critical errors.⁹
What happens when it's missing
Efficiency here creates burnout there—or bias, or backlash. Forrester documented how one retailer’s AI scheduling system reduced costs but triggered a 22% spike in turnover due to unpredictable shifts.¹⁰
Try this before you automate
Name the first-order benefit: “This will reduce processing time by 40%.”
List second-order effects: “Customer service agents will need escalation training.”
Map ripple effects: Use a 2×2 matrix of stakeholders (internal/external) and impact (positive/negative).
Only proceed if the upside beats the risk—and mitigation plans are in place.
Real-world example: Mayo Clinic mapped out how their new AI diagnostic assistant would impact radiology workflows, patient experience, IT, and education—avoiding rollout backlash that other hospitals faced.¹¹
4. Emotional Intelligence & Trust
AI can simulate tone. Only humans build trust.
“As AI absorbs technical tasks, emotional intelligence shifts from personal asset to leadership pillar.”¹²
Why it matters
Psychological safety doesn’t travel well over Wi-Fi. Google’s Project Aristotle found it’s the #1 predictor of team performance—and it’s harder to sustain in hybrid or AI-enhanced settings.¹³
What happens when it's missing
Silence becomes the norm. Trust drains away. One HBR study found low-safety teams were 74% less likely to raise concerns about AI—leading to avoidable, costly failures.¹⁴
Try this
Begin 1-on-1s with an emotion check-in: “How are you feeling about our new AI tools?”
Track emotional patterns across the team.
Set clear norms for questioning AI and rewarding human overrides.
Use “I’ll take silence as disagreement” to prompt real responses.
Real-world example: Satya Nadella’s Microsoft prioritized empathy during GitHub Copilot's rollout. Developers participated in “AI apprenticeships” where they flagged where human judgment should override automation—leading to better adoption.¹⁵
5. Ethical Stewardship of Algorithms
Don’t outsource your conscience
“Inclusive and ethical AI leadership must be operationalized, not espoused.”¹⁶
Why it matters
Trust, once lost, is expensive to rebuild. Edelman’s 2024 survey shows 71% of employees expect their employer to use AI responsibly—and 68% would leave if that trust is broken.¹⁷
What happens when it's missing
Brand equity erodes—quietly, then all at once. The average cost of an AI ethics crisis? $5.5 million and a 23% brand value hit.¹⁸
Every quarter, check these fundamentals
Data provenance: Can you trace the source data?
Fairness testing: Has it been tested with diverse users?
Explainability: Can you explain model decisions to non-experts?
Human veto: Is there a clear override path?
Real-world example: IBM's Ethics Board runs quarterly “ethical health checks” on all AI products. Apps that fall below score thresholds are paused for remediation.¹⁹
If any box fails, the launch waits. Period.
The Implementation Challenge: What Gets in the Way
Even with the right intentions, implementing these five skills runs into predictable barriers:
The Frozen Middle
Middle managers often lack the skills or space to integrate AI. McKinsey reports 73% of failed AI initiatives trace back to this gap.²⁰
Fix: Carve out time for manager upskilling and create a “transformation council” to voice operational needs.
The Competency Gap
Many leaders were promoted for pre-AI strengths. The World Economic Forum estimates 60% lack the tech fluency to lead AI integration.²¹
Fix: Use mutual mentoring pairs—junior AI-literate talent matched with experienced leaders.
Cultural Inertia
Legacy habits resist change. Deloitte found change-resistant firms take 2.6× longer to realize AI value.²²
Fix: Launch low-risk pilot projects to show results before scaling.
What It Really Means to Lead Now
When your team includes humans and models:
You’re no longer the smartest one in the room.
You’re the one who frames the right questions.
You don’t outrun the algorithm.
You orchestrate clarity around it.
Leadership hasn’t vanished. It’s just moved—
Upstream, where orchestrating clarity beats issuing commands.
The AI era won’t erase leadership.
It just rewrites what good leadership looks like.
To request a free copy of the full Orion Group report, The Future of Work & Leadership in the AI Era, DM me directly on LinkedIn.
Footnotes
¹ Orion Group, The Future of Work & Leadership in the AI Era (2025), p. 14
² Gallup, State of the Global Workplace (2024), p. 8
³ Benioff, Marc, Trailblazer (2023), pp. 112–118
⁴ Orion Group, The Future of Work & Leadership in the AI Era (2025), p. 14
⁵ Ancona, Deborah, “Sensemaking,” Harvard Business Review Leadership Handbook (2022), pp. 3–19
⁶ PwC, Global Data and Analytics Survey (2024), p. 27
⁷ Microsoft, AI Transformation Playbook (2024), pp. 42–48
⁸ Orion Group, The Future of Work & Leadership in the AI Era (2025), p. 15
⁹ Senge, Peter, The Fifth Discipline (2023 Ed.), pp. 73–91
¹⁰ Forrester Research, The Hidden Costs of AI (2024), pp. 12–15
¹¹ Mayo Clinic, Journal of Healthcare Management, Vol. 69, pp. 214–228
¹² Orion Group, The Future of Work & Leadership in the AI Era (2025), p. 16
¹³ Google, Project Aristotle (2023), pp. 7–11
¹⁴ Edmondson, Amy C., Harvard Business Review (Jul–Aug 2024), pp. 98–107
¹⁵ Nadella, Satya, Wired Interview (June 2024)
¹⁶ Orion Group, The Future of Work & Leadership in the AI Era (2025), p. 23
¹⁷ Edelman, Trust Barometer: AI & Work (2024), pp. 14–16
¹⁸ IBM, The Cost of AI Ethics Failures (2024), pp. 3–9
¹⁹ IBM, Operationalizing Responsible AI (2024), pp. 17–25
²⁰ McKinsey, The State of AI (2024), pp. 42–51
²¹ World Economic Forum, Future of Jobs Report 2025, pp. 78–83
²² Deloitte, AI Culture Index (2024), pp. 5–12