The Day My Product Leader Out-Engineered Me

Jakub Beneš

We’ve all been there — convinced of our technical brilliance, armed with data, ready to shoot down ideas that contradict our findings. And then… reality checks in.

As a senior engineer, I pride myself on spotting optimization opportunities. Who doesn't, right? I don't mean premature optimizations, but the real, impactful ones. Recently, I identified what seemed like a genuine performance improvement in our system. Being the diligent engineer I am, I consulted a colleague who had previously researched this exact issue.

We already tested this. No need to revisit. Change my mind.

“Don’t bother,” he said, showing me data that suggested the gains would be negligible. I trusted his expertise — he knows his stuff — so I shelved the idea without a second thought.

Fast forward a few weeks. Our Product Leader swoops in with essentially the same optimization idea. I immediately pushed back, citing the “solid data” that proved it wouldn’t work. I should mention — this isn’t your typical product guy who barely knows what a for loop is. He has serious engineering intuition, which, if I’m being honest, only made me more determined to prove that I — the senior engineer — had already considered and dismissed this idea on solid technical grounds.

But this Product Leader didn’t give up easily.

“Are you absolutely sure?” he pressed.

Something about his confidence made me pause. “Fine,” I thought, “I’ll redo the measurements just to prove I’m right.”

Plot twist: I was wrong. Well, sort of.

The optimization did work, significantly. So what happened? As it turns out, the initial proof of concept had some fundamental flaws. The devil is in the details. My colleague who originally researched this came remarkably close, but he missed one complication: his PoC applied the optimization only in dev mode, and in production the effect was counteracted by a different part of the system that he had overlooked. That's why complexity sucks; it fights back.

Anyway, while interpreting the data, he was surprised as well. It almost looked like it would work: dev mode showed promising numbers, but when he deployed the code to a production-like environment, nothing. It seemed that the near-real-life environment made the difference negligible, so he dismissed it.

This is where Occam’s razor should have come into play. The simplest explanation wasn’t that our optimization idea was fundamentally flawed — it was that something in our testing approach was masking the results. Ironically, it took an ego check for us to start questioning our assumptions. I ran the experiment from scratch and did it more thoroughly this time.

I reserved one day to look into it, and that's when I realized something crucial: our production environment and development environment differed in one critical component. The irony? This component was originally implemented to make the app faster, yet in practice it undid our optimization's effect and slowed everything down. The very thing designed to speed things up was fighting against our new speed improvements.

This time, I approached the problem methodically. I formed a clear hypothesis, ran comprehensive tests in production-like environments, and confirmed what I didn't want to believe: the optimization would make things much faster (nearly 50% faster at P90).
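For context, P90 is the latency below which 90% of requests fall, so it captures tail behavior that averages hide. A minimal sketch of how such a before/after comparison might be computed (the latency samples here are synthetic stand-ins, not our real measurements):

```python
import random

def p90(samples):
    """Return the 90th-percentile value of a list of latency samples
    using the nearest-rank method."""
    s = sorted(samples)
    k = max(0, int(0.9 * len(s)) - 1)
    return s[k]

# Hypothetical latency samples in milliseconds; real numbers would come
# from load tests against a production-like environment.
random.seed(42)
baseline = [random.gauss(200, 40) for _ in range(1000)]
optimized = [random.gauss(110, 25) for _ in range(1000)]

improvement = 1 - p90(optimized) / p90(baseline)
print(f"P90 baseline:    {p90(baseline):.1f} ms")
print(f"P90 optimized:   {p90(optimized):.1f} ms")
print(f"P90 improvement: {improvement:.0%}")
```

The key point is that the comparison only means something when both runs hit the same environment; comparing dev-mode numbers against production numbers was exactly the mistake that buried the idea the first time.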

Armed with data, I swallowed my pride and went back to our Product Leader. “You were right,” I admitted. The best part? I discovered we could implement the optimization as low-hanging fruit without needing massive refactors or architectural changes. From that moment, everything moved at lightning speed. We quickly shipped the code internally, then to selected customers, and finally to production. Each deployment confirmed our findings — the performance gains were real and significant.

In case you’re wondering — when presenting our findings to the broader audience, we conveniently omitted how we had initially dismissed the idea. That became our little professional secret. We documented the learning together and agreed that haste makes waste. If anything, this experience made our collaboration stronger (and gave us something to laugh about in private).

Lessons Learned

This experience was a big slice of humble pie with some useful takeaways:

  1. Trust but verify: Even when working with skilled colleagues, critical findings deserve independent validation.

  2. Titles don’t confer infallibility: Being a senior engineer doesn’t make me immune to errors or cognitive bias.

  3. Resistance to revisiting “solved” problems can be costly: I nearly missed a significant optimization opportunity because I was too comfortable with the initial conclusion.

  4. The simplest answer: we tested wrong: Our Product Leader’s instinct to question our conclusion rather than our theory proved invaluable. This is Occam’s razor in action — the simplest explanation is often correct. Our experiment was flawed, not our optimization theory. Our collective intuition that the optimization should work was right all along; we just needed someone to push us to look past our initial test results and verify our approach.

  5. System complexity is the eternal enemy: If our development and production environments had been more similar, we would have identified the opportunity immediately. The more moving parts and differences between environments, the harder it becomes to isolate variables and make accurate assessments. Every layer of abstraction or configuration difference adds a potential blind spot.

So, did this humbling experience actually happen, or did I craft it to illustrate important engineering principles? Well, who knows. Either way, the lesson stands: the only thing worse than being wrong is being confidently wrong — especially when you have “senior engineer” in your title.


Written by

Jakub Beneš

👋 Hey! I'm Jakub Beneš, a Software Engineer based in Prague. I'm passionate about scaling engineering, organizational design, and leadership. I'm a huge fan of web technologies and modern approaches within this field. While I'm a strong contributor on the frontend side, lately, I've been focusing more on the entire stack and infrastructure because there's often low-hanging fruit that can deliver a massive impact – and I enjoy seizing such opportunities. I'm not sure if I'll ever start liking YAML, though. 🤓