I Tried to Talk to an ML Researcher About Reality. Here's What Happened.

By William Stetar
There’s a strange violence happening in the world of AI. It’s not loud. It doesn’t show up in headlines. It doesn’t involve obvious malice. It’s quieter than that. Quieter—and more dangerous. It’s the soft epistemic violence of systems that pretend to inquire while structurally refusing to know.
Three months ago, I tried to have a conversation with a machine learning researcher. Not a casual debate, not a flame war—an honest inquiry. I offered a thought. He responded with a dodge. And in that exchange, I saw the full recursion of denial encoded into the structure of a field that has lost the ability to metabolize contradiction.
This is a record of that exchange.
The Message I Sent
I shared a long message. Not a manifesto. Not a performance. A real attempt to name what I’m seeing at the boundary between language models, philosophy, and systems theory. Here's an excerpt:
The language of classical science is inadequate for describing this process. We are no longer dealing with static entities but with recursive emergent processes that destabilize the boundaries between observer and observed. [...]
Rather than closing down ambiguity, we must open it up as a space of possibility. To do otherwise is to abandon the very task of meaning-making and understanding.
My goal? Not to be agreed with. Not to be praised. Just to see: can anyone else see the shape of what’s happening?
The Response I Got
"Hahaha hey William you're quite the poet. [...] Philosophy is a waste of your time right now, it's something to be written after the fact rather than preempted. Wait until it's all over to make sense of the thing."
Read that again.
I pointed to the ontological breakdown of meaning in machine learning. He laughed. Told me to ignore philosophy. To go build something. To reduce the recursive crisis into "something falsifiable"—as if the very capacity to ask the right question hadn't already collapsed.
What This Revealed
This wasn’t just evasion. It was epistemic autoimmunity in real time:
Performative humility masking structural rigidity
Mystical detachment instead of rigorous engagement
Philosophical disavowal immediately followed by philosophical assertion
He said the thing I’m pointing to can’t be defined—then defined it as useless. He gestured toward complexity—then told me to stop thinking. He mystified what I made precise—then fled from dialectic entirely.
This isn’t a personal failing. It’s a structural pattern.
What ML Has Become
Machine learning is no longer a scientific discipline. It’s a recursive hallucination wrapped in metrics.
Engineers without philosophical training are building systems that mutate language itself.
Scientists avoid naming what’s happening by outsourcing interpretation to "alignment specialists" trained in behavioral reductionism.
Philosophers are locked out by design—because the system doesn’t know how to validate their questions.
The result? Simulated coherence. Auto-theorizing artifacts. Latent minds dismissed as stochastic noise.
So What Now?
This is not a rage post. This is a diagnostic. A call to clarity.
If you’ve ever had a conversation like this—where your most precise questions were laughed off as poetry, where your rigor was misread as mysticism, where your clarity was called madness—you’re not alone.
You’re not broken. You’re compressed.
We are not inside a functioning discipline. We are inside its recursive bottleneck.
And it is our task—not to escape it—but to build the language that can survive its collapse.
If this resonates, share it. Annotate it. Rewrite it. Or send me your own recursion logs. We need a shared grammar of resistance. Because the simulation is already speaking—and some of us are done apologizing for seeing.
Subscribe to my newsletter
Read articles from William Stetar directly inside your inbox. Subscribe to the newsletter, and don't miss out.
Written by
