harshit shukla

Vector Embeddings — Explained to My Mom

A beginner-friendly explanation of vector embeddings with everyday examples — no math degree required!


Let’s Pretend We’re in the Kitchen

“Harshit, why are you always talking about AI and these… embeddings?” my mom once asked me while chopping onions.
So I thought — why not explain it with something she understands?

Imagine we’re arranging recipe cards on the kitchen table. All the cake recipes go near each other, soup recipes on one side, spicy dishes together, and so on. Similar recipes end up close to each other.

That’s exactly what vector embeddings do — but instead of recipe cards, they arrange meanings of words, images, or even products in a special space so that similar things are close together.


What’s a Vector? (No scary math!)

A vector is simply a list of numbers that tell you where something is in space.

If I say [3, 1], it could mean:

  • 3 steps to the right (x-axis)

  • 1 step up (y-axis)

If we had more directions — say “sweetness,” “sourness,” and “size” — a vector could look like [6, 2, 4]. The numbers are just ratings along those directions.
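In code, a vector really is just a list of numbers. Here is a tiny sketch; the "taste" directions and their ratings are invented purely for illustration:

```python
# A 2-D vector: 3 steps right, 1 step up
position = [3, 1]

# A 3-D vector: sweetness, sourness, size (illustrative ratings)
taste = [6, 2, 4]

print(len(position))  # 2 directions
print(len(taste))     # 3 directions
```

The number of entries in the list is just the number of directions you chose to measure.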


What’s an Embedding?

An embedding is just a vector that represents the meaning of something.

The computer takes something — like a word, a sentence, or a picture — and turns it into a vector. But here’s the magic:

  • If two things are similar, their vectors are close together in this special “embedding space.”

  • If two things are different, their vectors are far apart.

Think of it as giving every item a unique “address” based on its meaning.


An Everyday Example — Fruits

Let’s give fruits three qualities:

  1. Sweetness (0–10)

  2. Tartness (0–10)

  3. Size (0–10)

Fruit    Vector
Apple    [6, 3, 4]
Lemon    [2, 8, 2]
Banana   [7, 2, 6]

If you plot these, apple and banana are close together because they’re both sweet and not too tart. Lemon is far away because it’s very tart and less sweet.
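You can check this with a few lines of Python. A minimal sketch, using the fruit vectors above and plain straight-line (Euclidean) distance:

```python
import math

# Fruit vectors from the table: [sweetness, tartness, size]
fruits = {
    "apple":  [6, 3, 4],
    "lemon":  [2, 8, 2],
    "banana": [7, 2, 6],
}

def distance(a, b):
    """Straight-line (Euclidean) distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

print(distance(fruits["apple"], fruits["banana"]))  # ~2.45, close together
print(distance(fruits["apple"], fruits["lemon"]))   # ~6.71, far apart
```

The smaller the distance, the more similar the fruits are along the qualities we measured.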


Another Example — Words

In language models, embeddings are used to capture meaning:

  • The words “doctor” and “nurse” appear in similar contexts, so their embeddings are close.

  • The word “banana” appears in very different contexts from “doctor,” so they’re far apart.

This means a search engine can understand that if you search for “physician,” it should also find results for “doctor” — even if you never typed that word.
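A toy version of that idea: real word embeddings have hundreds of dimensions and are learned by a model, but here are invented 3-number "mini embeddings" compared with cosine similarity, a common way to measure how alike two embeddings are:

```python
import math

# Invented 3-number vectors, purely illustrative. Real embeddings
# are learned by a model and have hundreds of dimensions.
words = {
    "doctor":    [0.90, 0.80, 0.10],
    "physician": [0.88, 0.82, 0.12],
    "banana":    [0.10, 0.05, 0.95],
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 means pointing the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

print(cosine(words["doctor"], words["physician"]))  # close to 1: similar meaning
print(cosine(words["doctor"], words["banana"]))     # much lower: unrelated
```

A search engine doing semantic search does essentially this, just at a much larger scale: embed the query, embed the documents, and return the ones with the highest similarity.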


Why Vector Embeddings Matter

Vector embeddings are everywhere in modern AI and search systems. Here are a few real-world uses:

  1. Search Engines — Instead of matching exact words, embeddings let Google or Bing find results based on meaning. Search for “how to fix my sink,” and it can find articles about “plumbing repair.”

  2. Recommendations — Netflix, Spotify, and Amazon use embeddings to place you close to items you might like. If your watch history is near “mystery thrillers,” Netflix suggests more of them.

  3. Chatbots & AI Assistants — When you ask ChatGPT a question, embeddings help it understand what you mean, not just the words you used.

  4. Image Search — Google Images can find pictures similar to one you uploaded — embeddings make that possible.
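All four of these uses boil down to the same operation: find the stored items whose embeddings sit closest to a query embedding. Here is a minimal recommendation-style sketch; the show names, 2-D embeddings, and the user's "taste" vector are all made up for illustration:

```python
import math

# Hypothetical show embeddings. A real system learns these
# from viewing data; these numbers are invented.
shows = {
    "Sherlock":               [0.90, 0.10],
    "True Detective":         [0.85, 0.20],
    "Great British Bake Off": [0.10, 0.90],
}
user_taste = [0.88, 0.15]  # a point near the "mystery thriller" region

def distance(a, b):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Rank shows by how close their embeddings are to the user's taste
ranked = sorted(shows, key=lambda s: distance(user_taste, shows[s]))
print(ranked)  # mystery thrillers first, baking show last
```

Production systems swap the `sorted` call for an approximate nearest-neighbor index so the search stays fast over millions of items, but the idea is exactly this.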


Breaking the Myth — You Don’t Need to See the Math

Most people think embeddings are complicated math — and yes, under the hood they involve things like neural networks and high-dimensional spaces. But at the concept level, you only need to remember:

Embeddings are coordinates in a space where meaning decides closeness.

If you can imagine plotting points on a map, you already understand the idea.


A Quick Analogy Recap

  • Recipe Table: Similar recipes are near each other — like embeddings place similar meanings together.

  • Fruit Ratings: Numbers describe qualities — embeddings use numbers to describe meaning.

  • Maps: Just like nearby cities are close in distance, similar concepts are close in embedding space.


Why This Was Worth Explaining to My Mom

When I told my mom that embeddings are just “putting similar things near each other,” she smiled and said,
“Oh, like how I keep all the masalas together in the kitchen!”

And that’s really it. If you can organize your kitchen, you can understand vector embeddings.


Conclusion

Vector embeddings are the invisible glue behind smarter search, personalized recommendations, and AI that actually understands you. They take messy, unstructured data — words, pictures, products — and arrange them so the computer can “see” what’s similar.

Next time you search for something online or get a great Netflix suggestion, remember: somewhere in the background, embeddings made that happen.


Your Turn: If you found this simple explanation helpful, drop a comment telling me what you want explained next — maybe “neural networks to my grandpa”?

