Beyond Transactions

Chris Rosato
7 min read

The Collaboration Gap

If you’re treating AI like a fancier search engine, you’re wasting it.

Typing "What's the capital of Portugal?" into ChatGPT is like using a Ferrari to deliver pizzas. Technically it works. But you’re missing the whole point.

Most people approach AI with the wrong expectations: they want clean answers, quick tasks, and easy wins. But that's not collaboration. It's vending machine logic—insert prompt, expect snack.

Of course, transactional AI use has its place. Not every interaction needs to be a deep collaboration. Sometimes you just need an answer, a draft, a shortcut—and that's fine.
But if that's all we ever expect, we miss the real transformative potential.

The real opportunity isn’t getting faster answers. It’s getting better thinking. And right now, most people aren’t even scratching the surface.

Beyond the transactional

When new technologies emerge, we usually understand them through the lens of what came before. We categorize, compare, and constrain them based on familiar patterns. It's natural, and limiting.

Spreadsheets were treated as fancy calculators. The early web looked like a stack of digital brochures. Smartphones were just tiny computers at first. Each time, we missed the bigger potential because we were stuck seeing through the wrong frame.

The same mistake is happening with AI. We’re using it like a faster writer, a more convenient assistant, a slightly cleverer search bar. Practical, yes. Transformative? Not even close.

What if we're missing something much more profound?

Breaking fixedness

In the 1930s, psychologist Karl Duncker introduced the idea of functional fixedness—our tendency to see objects only in their usual roles. In his famous Candle Problem, participants struggled to mount a candle to a wall because they couldn't imagine using a thumbtack box as anything but a container. The solution—to empty the box and tack it to the wall—felt invisible because their assumptions blinded them.

Functional fixedness isn't just about objects. It applies to technologies, processes, and relationships too, including our relationship with AI.

Of course, the way technologies are adopted—and the metaphors we use to frame them—vary widely across cultures. But whatever the starting point, the challenge remains: expanding our mental models to meet new possibilities.

Right now, we see AI mostly as an answer machine or a task monkey. It's efficient. It's reliable. It's boring.

And ironically, that same transactional mindset has seeped into how we deal with human relationships too: flattening people into roles, reducing complex interactions to one-off exchanges. It's convenient, but it's also shallow. If we can't break the transactional habit, we won't just misuse AI—we'll misuse each other.

Moving beyond consumption

So how do we move beyond treating AI like a vending machine?

Imagine two different approaches to the same task.

Approach 1
"Write me a marketing strategy for my new product."

Approach 2
"I'm developing a marketing strategy for a product that helps remote workers manage digital overload. My audience analysis suggests three potential customer groups, but I'm unsure which to prioritize. Can we explore the pain points and messaging opportunities for each?"

The first approach expects a deliverable. The second invites a conversation. It’s the difference between asking for fish and learning to fish better. Real collaboration doesn’t happen by accident. It has to be designed.

Designing collaborations

Working with AI should feel like working with a brilliant, slightly alien teammate. You don’t just fire off orders. You establish context. You define roles. You iterate together.

Prompt engineering isn’t about tricking the AI into giving better outputs. It’s about setting the stage for real collaboration—like being a film director and actor at the same time. When you provide background and goals, you’re setting the scene. When you assign roles, you're casting characters. When you break down big challenges into smaller parts, you’re storyboarding the journey.

The best AI collaborations create tension between human intuition and machine exploration—like two jazz musicians improvising together, following structure but staying open to surprise. If you’re only using AI to do what you already know how to do faster, you’re missing the real music.

Before moving forward, it's worth asking: how often do you treat AI—or even human collaborators—as true partners in thinking, versus vending machines for quick results?

Learning through interaction

Design thinking emphasizes rapid prototyping and iteration: fail fast to succeed sooner. The same mindset makes AI interactions vastly more powerful.

Instead of asking for perfect answers, ask AI to show you different ways of seeing the problem. Try rotating perspectives.
"How would a behavioral economist view this pricing challenge?"
"What if we analyzed this product launch like an anthropologist studying rituals?"

Or introduce wild constraints.
"What if we had to solve this with half the budget and no internet access?"
"What if our customers were all 70+ years old?"

Every experiment trains you to think more flexibly, more creatively—not just when using AI, but when navigating real life too.

For example, a team designing healthcare outreach didn't just ask an AI for tips. They iteratively co-created new messaging strategies by challenging the cultural assumptions the AI surfaced—discovering blind spots none of the human team had initially seen.

When AI becomes a thinking partner, not just a task-doer, surprising insights emerge.

The partnership paradox

The great irony is this: by treating AI as a way to outsource thinking, we diminish not just the technology’s potential but our own. Real collaboration demands more from us, not less. It demands engagement, critical thinking, and creative direction.

Using AI well doesn’t mean thinking less. It means thinking differently. When you treat AI as a substitute, you stay in control. When you treat AI as a partner, you give yourself permission to evolve. The difference is like hiring someone to drive you around versus learning to race cars yourself: one gets you places safely. The other changes what you’re capable of.

The shadow side

Beyond missed personal opportunities, there’s a larger risk: inequality. The divide between transactional and transformational AI use is already opening—and it’s following the same old socioeconomic lines.

Quick-answer skills are easy to pick up. Deep collaboration skills (clear thinking, nuanced articulation, comfort with ambiguity) are harder. They’re also unevenly taught, and unevenly valued.

If we’re not careful, AI could widen the gap between those who command machines and those who are commanded alongside them. It’s not just about technology access. It’s about who has the habits and frameworks to think with technology—and who doesn’t.

Human flourishing

The future of AI isn’t just technical. It’s cultural.

Education systems need to teach collaborative intelligence, not just rote memorization. Interfaces need to encourage exploration, not just extraction. Organizations need to reward augmented thinking, not just automated production.

The question isn’t whether AI will take over more tasks. It will. The real question is whether that shift will diminish human potential—or expand it in new directions. And that answer depends not on the machines, but on us.

I've spent more than two decades working as a multidisciplinary knowledge professional—long before "range" became a buzzword. (David Epstein’s Range captures this idea well.) Along the way, there were confused looks and cautious remarks. I still remember being told, "In New Zealand, we prefer specialists," even when it was clear I could do the job. Others called me a "generalist specialist"—a title that sounded more like a shrug than a compliment.

But I don’t think it was hostility. It was a lack of better words.
People defaulted to categories they understood, even when the reality didn’t fit.

And in a way, that’s exactly the kind of thinking constraint this essay is about: when we can't imagine outside the categories we already have, we limit what’s possible—whether we’re talking about people or technology.

In a future shaped by AI, the ability to move across disciplines, reframe problems, and stay mentally flexible won’t just be valuable. It will be essential.

And anyone willing to rethink old assumptions can build it.

Beyond the transaction trap

The most important insight about AI collaboration is simple.

The limiting factor isn’t the capability of our machines. It’s the smallness of the frameworks we bring to them.

AI’s impact won't be determined by what it can do. It will be determined by how we choose to live and work alongside it.

At Orion Group, we've built much of our business from the AI up—integrating collaborative AI agents deeply into our workflows. These agents—supporting functions like financial strategy, copywriting, product development, customer experience, and personalized learning—allow us to automate the routine and free up human energy for what matters most: creativity, leadership, and growth.

Of course, using AI tactically for automation has real value. But treating it only as a bolt-on efficiency tool misses the deeper strategic opportunity: building ecosystems where humans and AI truly learn, adapt, and create together.

That belief is why we built spaces like the Leadership Gym, where creativity, resilience, and collaboration aren’t treated as rare traits but as muscles that can—and must—be trained.

This isn't about chasing technology. It's about strengthening what makes us human—on purpose, and at scale.

If you care about shaping that future too, you're not alone.
We're building a community for people who believe leadership, creativity, and human connection are not optional in an AI world—they're the foundation.

Because in the world we're building, the difference won’t be academic.
It will decide whose creativity—and whose future—thrives.
