The Day I Explained Vector Embeddings Over Chai

"Mom, imagine this…"

You know how when we go to the market, you remember things by their features?
Like if I say “the big red mango with a sweet smell”, you can picture it in your head — even without seeing it right now.

That’s kind of what vector embeddings are for computers.
They’re a way to turn information into a list of numbers so that computers can remember what something means, not just what it looks like in text.

How it works

  • First, you take a word, sentence, or even an image.

  • Instead of just keeping it as plain text, you describe it in numbers that represent its meaning.

  • These numbers are placed in a kind of coordinate system — like a map where similar things are close together.
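If you're curious what this looks like in practice, here's a minimal Python sketch. It assumes the open-source sentence-transformers library, and the model name below is just one popular choice, not the only option:

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer

# Load a small, widely used embedding model (one choice among many).
model = SentenceTransformer("all-MiniLM-L6-v2")

# Turn three words into vectors: lists of numbers that capture meaning.
vectors = model.encode(["dog", "cat", "banana"])

# Each word is now 384 numbers: its coordinates on the "meaning map".
print(vectors.shape)  # (3, 384)
```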

Example:
If you had the words:

  • Dog 🐶

  • Cat 🐱

  • Banana 🍌

On this “meaning map,” Dog and Cat would sit close together (because they’re both animals), while Banana would be far away (because it’s a fruit, not an animal).
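To see that “close together” idea in actual numbers, here's a tiny sketch. The three-number vectors are made up by hand purely for illustration (real embeddings have hundreds of learned dimensions), and cosine similarity is one standard way to measure closeness:

```python
import math

# Hand-made toy vectors, purely for illustration.
# Real embeddings have hundreds of dimensions learned from data.
words = {
    "dog":    [0.9, 0.8, 0.1],
    "cat":    [0.8, 0.9, 0.1],
    "banana": [0.1, 0.0, 0.9],
}

def cosine_similarity(a, b):
    """Close to 1.0 = similar meaning; close to 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(words["dog"], words["cat"]))     # ~0.99, very close
print(cosine_similarity(words["dog"], words["banana"]))  # ~0.16, far apart
```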

Why do this?

Because computers don’t understand language like we do — they’re just really good at math.
By turning meaning into numbers (vector embeddings), they can:

  • Find similar things

  • Recommend related items

  • Search smarter (like Google knowing what you meant, not just what you typed)

  • Power AI chatbots (so they can find relevant info fast)
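All four of those uses come down to the same trick: turn everything into vectors, then rank by similarity. Here's a toy “smart search” sketch, with made-up vectors standing in for real embeddings:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# A tiny "database" of items with made-up embedding vectors.
catalog = {
    "puppy food":   [0.9, 0.7, 0.1],
    "cat litter":   [0.7, 0.9, 0.1],
    "fruit basket": [0.1, 0.1, 0.9],
}

# Pretend this is the embedding of a user's search: "stuff for my dog".
query = [0.85, 0.6, 0.15]

# Rank every item by how close its meaning is to the query.
ranked = sorted(catalog, key=lambda item: cosine(query, catalog[item]), reverse=True)
print(ranked)  # ['puppy food', 'cat litter', 'fruit basket']
```

This is essentially what vector databases do at scale, with millions of items and clever indexing instead of a plain sort.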

Quick Analogy

Think of vector embeddings like storing everyone’s “interests” in a notebook using numbers:

  • Mom: [Cooking: 0.9, Gardening: 0.8, Cricket: 0.1]

  • Me: [Coding: 0.95, Cricket: 0.8, Gardening: 0.2]

Now, just by comparing the numbers, you can tell how similar two people are; the names themselves never enter into it.
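Here's that notebook as runnable code. I've lined up the interests so each position means the same thing for both of us, and filled in 0.0 wherever an interest wasn't listed (my assumption):

```python
import math

# Positions: [cooking, gardening, cricket, coding]
# 0.0 fills in interests the notebook didn't list (an assumption).
mom = [0.9, 0.8, 0.1, 0.0]
me  = [0.0, 0.2, 0.8, 0.95]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

print(round(cosine(mom, me), 2))  # ~0.16: our interests barely overlap
```

With more people in the notebook, the same one-line comparison would tell you who's closest to whom.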

In short:
Vector embeddings are like GPS coordinates for ideas, so computers can measure how “close” two meanings are.
