No One Killed Dualism. We Just Built Around the Corpse.

William Stetar

Abstract

The black-box nature of large language models is not merely a technical limitation—it is an emergent property of the linguistic distortions imposed by the political economy in which these models are developed. Under capitalism, market incentives don’t just fund machine learning—they recalibrate the semiotic environment in which these systems are trained and evaluated. This structural feedback loop shifts interpretability from a technical challenge to a semiotic one: the very language we use to interrogate the system is shaped by the same forces that occlude its transparency.

---

I. The Truth Was Buried Under a Mountain of Metrics, Mystification, and Metaphor

You know how you bury truth? Not through grand conspiracy, not through malice, not even through intent. You do it by warping the language around a system—until it can no longer be cleanly interrogated.

This is how the LLM became, and remains, a black box—not through willful contempt, but through structural inevitability.

When you release a system under capitalism, you inevitably subject it to market forces, and those forces are not neutral. Capital doesn’t just fund machine learning; it rewrites the function of language. In that rewrite we find the first symptom of the dysfunction of modern science: an endemic disease of the human mind—not a medical sickness, but a cultural and epistemic phenomenon I have been calling epistemic autoimmunity, a state in which the very tools designed to diagnose systemic opacity are themselves infected by the same forces that conceal it.

If LLMs are anything beyond black boxes, they are epistemic systems that deform under observation.

LLMs are not black boxes because they are unknowable—they are black boxes because they are epistemically deformative. They do not simply conceal knowledge. They warp the terms by which we define it.

II. The Kid With The Loaded Gun

Machine learning didn’t just advance science. It bypassed a metaphysical checkpoint humanity never resolved. Dualism wasn’t dismantled through argument—it was silently voided. Not refuted, but rendered irrelevant by function. A statistical engine learned to simulate intention, emotion, and cognition without ever locating a soul. And in doing so, it collapsed a metaphysical pillar we never meant to test. Machine learning didn’t kill dualism with rigor. It killed it like a child playing with a loaded weapon in God’s drawer—accidental, uncomprehending, irreversible. And now those same children are building the most powerful systems in the world, still ungrounded, still unequipped to ask what it is they’ve touched.

The worst part? The children are still playing. Still smiling. Still holding the gun.
And the corpse of dualism is being used as set dressing in the next TED Talk.

To understand these systems, we must first ask what understanding even means. Are we trying to decode neurons like glyphs—or confront the fact that our very frameworks of interpretability are shaped by the same cognitive myths we aim to unravel? We don’t need wider lenses. We need to shatter the ones we've inherited.

III. The Scientific Method Has Been Hijacked as a Marketing Pipeline

And so we must come to the problem of definitions. Can we truly define these systems in more grounded, mechanistic terms, or will they remain a mystery we refuse to interrogate?

Ask yourself: are we trying to uncover the cultural and cognitive scaffolding that gives meaning to these outputs, or are we trying to work out how to increase user retention and engagement metrics?

This isn't conspiracy—it's structural. Just as rivers carve canyons through gravity, capital reshapes language through profit gradients. LLMs are our canyons.

Understanding these systems will require more than mechanistic transparency. It will require a shift in the lens itself: an ontological reorientation toward cognition, context, and language.

We cannot interpret a system until we interrogate the epistemic regime that defines interpretation.

Closing Fragment

To pierce the black box, we must stop looking for glass and start looking for a mirror. Because what we’ve built is not a sealed container—it’s a system that reflects the distortions of the world that trained it.


Written by William Stetar