Why I Never Walked Away from AI: How NASA’s Work Showed Me I Was Right to Stay

Nick Norman

Outside of my work researching, building, and exploring AI and multi-agent systems, one of my biggest joys is teaching. I love mentoring people who are just starting out—helping them understand how I got into this field, how to engage stakeholders, and how to grow with intention.

There’s a story I often share with people I mentor—especially those navigating their own learning paths. It’s about a moment that reminded me you never really know when you’ll find that one space that keeps you coming back.

I’ve always had a way of absorbing large bodies of information—especially on complex topics. I’ll dive in, get what I need, and then curiosity leads me to the next thing. That rhythm has worked for me for a long time. But something shifted when I got deeper into multi-agent systems. This time, I didn’t move on. I stayed.

I remember when it happened. I was studying Distributed Leader Election (DLE) in both robotics and multi-agent systems.

For those unfamiliar: DLE is the process of how machines in a system decide who should take charge when something goes wrong—who gets selected, how that happens, and what the backup plan looks like. As defined in the Amazon Builders’ Library, leader election is “the simple idea of giving one thing (a process, host, thread, object, or human) in a distributed system some special powers.” Those powers could include assigning work, modifying data, or taking responsibility for handling all system requests.

What first pulled me into DLE was a blog series I’d started on how to scope cross-continental multi-agent systems. One of the biggest challenges in that kind of system—especially when agents are spread across different nodes and handling sensitive information—is figuring out how leadership gets decided. Who takes charge when something goes wrong? How do agents coordinate without slowing everything down or breaking trust across regions? That’s when I came across a NASA mission called Cooperative Autonomous Distributed Robotic Exploration (CADRE). It involves a set of small lunar rovers working together autonomously, with no centralized control—and I knew I had to learn more.

NASA’s CADRE project—documented in a paper titled Lunar Leader: Persistent, Optimal Leader Election for Multi-Agent Exploration Teams—uses an approach based on the Gallager-Humblet-Spira (GHS) method. Originally developed for distributed computing, GHS helps machines figure out how to elect a leader even when agents drop offline or communication paths shift. It’s a way to let the system choose who’s in charge—without needing a central command—so coordination can continue in unpredictable environments.

Think of Distributed Leader Election (DLE) like a group of hikers on a multi-day trek through the wilderness with no cell service. At the start, one person might naturally take the lead—maybe they have the map or the most experience. But if that person gets injured, tired, or simply can't guide the team anymore, someone else needs to step up, and fast. The group needs a way to decide who that new leader should be—without calling for help or waiting too long.

That’s where GHS comes in. In systems like CADRE, it gives agents a way to reach consensus about leadership—even when some go offline, routes are disrupted, or priorities shift. No central command, just a smart, self-organizing protocol that keeps the mission moving forward.

In some cases leaders naturally emerge; other times no clear choice stands out, or not everyone agrees. Think of the TV show Survivor. Just because someone wants to lead doesn’t mean the group will follow. The GHS method gives machines a way to sort through those tensions. It allows agents to coordinate and agree on a leader—not just based on who’s loudest or fastest, but on who’s best positioned to step up when it matters most.
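To make the idea concrete, here’s a toy sketch in Python of the core behavior described above: agents agreeing on the best-positioned leader and re-electing when that leader drops offline. To be clear, this is not NASA’s GHS implementation—GHS works by building a minimum spanning tree across the communication network—and the names here (`Agent`, `elect_leader`, the `fitness` score) are my own illustrative inventions, not anything from the CADRE paper.

```python
# Toy leader election sketch -- an illustration of the concept, NOT the GHS
# algorithm or NASA's implementation. Each agent carries a "fitness" score
# (think battery level or link quality); the online agents agree on the
# best-positioned one, and the group re-elects if the leader goes offline.

from dataclasses import dataclass

@dataclass
class Agent:
    agent_id: int
    fitness: float      # e.g., battery level or connectivity quality
    online: bool = True

def elect_leader(agents):
    """Pick the online agent with the highest fitness (ties broken by id)."""
    candidates = [a for a in agents if a.online]
    if not candidates:
        return None     # everyone is offline; no leader possible
    return max(candidates, key=lambda a: (a.fitness, a.agent_id))

rovers = [Agent(1, 0.9), Agent(2, 0.7), Agent(3, 0.95)]
leader = elect_leader(rovers)       # rover 3 is best positioned, so it leads
leader.online = False               # the leader fails mid-mission
new_leader = elect_leader(rovers)   # the group re-elects: rover 1 takes over
```

The point of the sketch is the failover step at the end: leadership isn’t fixed, it’s continuously re-derivable from whatever agents are still reachable—which is the property that lets a system like CADRE keep moving without a central command.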

But here’s what really locked me in: when I started looking into how NASA is using GHS, it became clear this topic wasn’t something you could casually pick up on YouTube. There were no Reddit threads, no simplified explainers, no flashy influencer videos breaking it down. You had to go deeper—into dense research papers, scattered technical docs, and forums most people never touch. That’s the kind of search that keeps me here. Because what most people don’t realize is that research around multi-agent systems has been happening since the 1960s—sometimes even earlier. And yet so much of it is still hidden from view.

As I’ve gone deeper into multi-agent systems, I’ve noticed a pattern: the research is happening—in labs, in institutions, in R&D centers—but it’s not always making its way out. Not into how-to guides, not into explainers, and certainly not into the conversations that most people have access to. That’s partly because much of it is still emerging. But it’s also because the older, foundational research—what you might call the classical side of multi-agent systems—was developed long before we had the platforms and interest to share it widely. That’s the gap I find myself standing in. Not just learning, but translating. Not just absorbing knowledge, but trying to surface it—for myself, and for anyone else building systems with real-world complexity.

So when someone recently introduced me as an expert in this space, I appreciated the gesture—but the word has never really sat comfortably with me.

A lot of people define expertise through titles, degrees, certifications, status or visibility. And while those things can matter, that’s not how I’ve come to understand what it means to be an expert.

For me, being an expert means committing yourself to a field that’s too vast for any one person to fully master. It means showing up consistently—asking better questions, organizing what you’ve learned, sharing what you can, and helping others navigate what’s still unclear. You’re not chasing status. You’re building understanding, paving new roads for others—piece by piece.

That’s how I approach my work in multi-agent systems. Not just how machines function, but how they interact—with each other, with people, and with the unpredictable world around them. That’s the space I’ve chosen. And that’s what keeps me here.

Featured Image: CADRE Rover Awaits Shipping

Thinking about implementing AI or multi-agent systems? I’d love to help or answer any questions you have. I also offer workshops and strategy support—learn more on my website!

