"What Was I Thinking?"

Read the first part.

Two years ago, answering questions wasn’t enough. I wanted an AI that could do things. But I didn’t trust it to do those things directly.

Instead of treating the LLM as the system's operator, I used it as a translator: turning human language into small, auditable programs. These scripts were then executed in a safe, controlled runtime. The result? Flexibility without chaos. Power without panic. And just enough plausible deniability to sleep at night.

The LLM didn’t query the data. It wrote a script that queried the data. It didn’t notify users. It wrote a script that made that decision based on context and constraints. This separation allowed for something rare in LLM-driven systems: testability, traceability, and control.

The script language was a small dialect of JavaScript: simple enough to constrain, expressive enough to handle branching logic, filters, and transformations. Think Copilot-meets-sandbox.
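To make this concrete, here is a sketch of the pattern, with invented names throughout: `makeRuntime`, `env.query`, and `env.notify` are illustrative capabilities, not the original project's API. The point is the shape: the LLM emits a plain script, and the runtime decides what that script is allowed to touch.

```javascript
// The constrained runtime: the only capabilities a script can call.
function makeRuntime(db, outbox) {
  return {
    query: (table, predicate) => db[table].filter(predicate),
    notify: (user, message) => outbox.push({ user, message }),
  };
}

// A "generated" script: decisions-as-code, no I/O of its own.
const generatedScript = (env) => {
  const overdue = env.query("invoices", (inv) => inv.daysLate > 30);
  for (const inv of overdue) {
    // Branching logic the LLM wrote, now auditable and testable.
    if (!inv.reminded) env.notify(inv.owner, `Invoice ${inv.id} is overdue`);
  }
  return overdue.length;
};

// Executing it against a toy database.
const db = {
  invoices: [
    { id: "A-1", daysLate: 45, reminded: false, owner: "kim" },
    { id: "A-2", daysLate: 10, reminded: false, owner: "lee" },
  ],
};
const outbox = [];
const count = generatedScript(makeRuntime(db, outbox));
console.log(count, outbox.length); // 1 1
```

Because the script is just a function over a capability object, you can run it against fixtures, diff its output, and review it like any other code.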

That design opened the door to something most AI workflows still struggle with: when to apply context.


Late Binding: Contextual Embedding After the Intent

Most systems grab data before writing a prompt: retrieve first (RAG over a vector DB), then call the model.

I flipped that.

The LLM writes the intent first, as code. Only during execution do we bind variables via embeddings and connect them to real data:
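A minimal sketch of that flipped order, with invented names throughout: the LLM emits intent with symbolic references, and a `bind` step resolves each reference against real data only when the script runs. Here the embedding lookup is faked with keyword matching to stay self-contained.

```javascript
// Toy data catalog the runtime can bind against.
const catalog = {
  "tables/churned_customers": [{ name: "acme", churned: true }],
  "tables/active_customers": [{ name: "globex", churned: false }],
};

// Stand-in for an embedding search: a real system would embed
// `description` and pick the nearest dataset by vector similarity.
function bind(description) {
  const key = Object.keys(catalog).find((k) =>
    description.split(" ").some((w) => k.includes(w))
  );
  return catalog[key];
}

// The "intent" the LLM wrote up front, before any data was fetched.
const intent = () => {
  const rows = bind("churned customers this quarter");
  return rows.map((r) => r.name);
};

console.log(intent()); // [ 'acme' ]
```

Nothing about the data had to be in the prompt; the script carries the intent, and binding happens at execution time.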

Why it matters:

  • The prompt doesn’t get bloated with context it might not need.

  • The embedding lookup uses the variable's actual runtime context, improving retrieval accuracy.

  • Keeps “what to do” separate from “what to do it with.”

It’s pragmatic, precise, and avoids the mess of shoving the entire database into the prompt like it’s a piñata of relevance.
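For the "bind via embeddings" step itself, a toy nearest-neighbour resolver looks like this. The 3-d vectors and dataset names are fabricated for illustration; a real system would get the vectors from an embedding model.

```javascript
// Cosine similarity between two vectors of equal length.
function cosine(a, b) {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Pretend these vectors came from an embedding model.
const index = [
  { name: "orders_2023", vec: [0.9, 0.1, 0.0] },
  { name: "support_tickets", vec: [0.1, 0.9, 0.2] },
];

// Resolve a variable's embedded description to the closest dataset.
function resolve(queryVec) {
  return index.reduce((best, e) =>
    cosine(e.vec, queryVec) > cosine(best.vec, queryVec) ? e : best
  );
}

console.log(resolve([0.2, 0.8, 0.1]).name); // support_tickets
```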


What did we get?

  • A translator, not a hallucinator. The LLM writes the plan, not the punchline.

  • Scripts instead of vibes. You can debug them, test them, and even complain about them like real code.

  • Context that shows up fashionably late—only when needed, not dumped upfront like a confused librarian.

  • Deterministic execution: no mystery, no surprises, no cosmic dice rolls.

  • A system that’s useful without leaking your database into the void.

Honestly… not bad for a half-finished project from 2023, right? Now I want to do it again…

Written by

Francesco “oha” Rivetti