Understanding GPT

Arun Chauhan

Why Everyone’s Talking About GPT

A few years ago, it sounded like science fiction: a computer you could talk to, ask questions, and get back answers that felt like they were written by a person. Now, tools like ChatGPT are being used to write emails, explain tricky topics, draft stories, even help with coding.

Behind all this magic is something called GPT - short for Generative Pre-trained Transformer.
If that name makes your eyes glaze over, don’t worry. This article will break it down into everyday language so you’ll know:

  • What GPT actually is

  • How it works (without the tech headaches)

  • Why it’s a big deal

  • Where it shines and where it stumbles

  • What it might mean for our future

First, Let’s Decode the Name

Before we understand how GPT works, let’s unpack its fancy title - Generative Pre-trained Transformer.

Generative

  • Means it can create things - not just repeat stored facts.

  • If you ask it to write a bedtime story about a dragon who loves gardening, it will generate something new based on what it has learned.

Pre-trained

  • Before you even talk to it, GPT has already read a huge amount of text — books, articles, websites — so it starts with a general understanding of language.

  • Think of it like a student who has read an entire library before showing up to class.

Transformer

  • This is the name of the special type of neural network (a kind of computer brain) that GPT uses to understand and produce language.

  • “Transformer” tech is like the secret recipe that lets GPT handle complex sentences, remember context, and respond naturally.

How GPT Works

To understand GPT, we need to cover three big ideas — but don’t worry, we’ll use analogies.

(a) The Training Phase

Imagine training a chef:

  • You give them every cookbook in the world.

  • You have them taste thousands of dishes.

  • You let them observe other chefs cooking.

That’s what “pretraining” is for GPT — except instead of food, it’s language. GPT reads massive amounts of text and learns:

  • How words tend to follow each other (“peanut butter and jelly” vs. “peanut butter and batteries”).

  • The meaning of words in different contexts.

  • Patterns of conversation, storytelling, explanation.
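A tiny sketch of that first idea - learning which words tend to follow which - can be written in a few lines. The "corpus" here is a made-up stand-in for the billions of words a real model reads:

```python
from collections import Counter, defaultdict

# A toy version of "pretraining": count which word tends to follow which.
corpus = "peanut butter and jelly . once upon a time . peanut butter and toast"

follows = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

# After "butter", this tiny corpus has only ever seen "and":
print(follows["butter"].most_common(1))  # [('and', 2)]
```

Real GPT models don’t literally count word pairs - they learn far subtler statistics with billions of parameters - but the spirit is the same: absorb patterns from text.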

(b) The Prediction Trick

GPT doesn’t “think” like a human. Instead, it’s like a master of guessing the next word.

  • If you say, “Once upon a…”, GPT predicts “time” is likely next.

  • It repeats this process, word by word, until it forms full sentences and paragraphs.

The amazing part is that, when trained on enough text, these “guesses” start to look like understanding.
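The word-by-word loop above can be sketched in miniature. The probability table here is hand-written for illustration (a real model computes these probabilities from its learned parameters), and we always pick the most likely word, a strategy known as greedy decoding:

```python
# Hand-made "next word" probabilities - purely illustrative.
next_word_probs = {
    "once": {"upon": 0.9, "more": 0.1},
    "upon": {"a": 0.95, "the": 0.05},
    "a":    {"time": 0.8, "hill": 0.2},
    "time": {".": 1.0},
}

sentence = ["once"]
while sentence[-1] in next_word_probs:
    options = next_word_probs[sentence[-1]]
    # Greedy decoding: always take the most likely next word.
    sentence.append(max(options, key=options.get))

print(" ".join(sentence))  # once upon a time .
```

Chat models usually sample from these probabilities instead of always taking the top word, which is why you can get a different answer each time you ask.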

(c) The Transformer Magic

The Transformer is the architecture that gives GPT a clever way of paying attention to all parts of a sentence at once.
For example:
For example:

I made a hot cup of chai this morning, but it went cold before I could finish it.

Humans know “it” refers to the chai, not the cup.
Transformers have a mechanism called attention that helps GPT figure this (context) out.
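Here is a heavily simplified sketch of that attention idea. The word vectors below are made up for illustration - real models learn them - but the mechanics (score each candidate by similarity, then turn the scores into weights) mirror the real thing:

```python
import math

# Made-up 2-number "meaning vectors" for three words.
vectors = {
    "cup":  [0.9, 0.1],
    "chai": [0.2, 1.0],
    "it":   [0.1, 0.9],  # "it" is asking: which earlier word do I refer to?
}

def attention_weights(query, keys):
    # Score each candidate by similarity (dot product)...
    scores = [sum(q * k for q, k in zip(vectors[query], vectors[key]))
              for key in keys]
    # ...then convert scores into weights that sum to 1 (softmax).
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return {key: e / total for key, e in zip(keys, exps)}

weights = attention_weights("it", ["cup", "chai"])
# "it" attends more strongly to "chai" than to "cup".
```

Because the vector for “it” points in a similar direction to the vector for “chai”, the attention weight on “chai” comes out higher - which is the toy version of GPT resolving the pronoun.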

Why GPT Feels So Human

GPT feels human-like because:

  • It tracks context: It keeps the flow of a conversation in view.

  • It’s versatile: It can shift tone — formal, casual, funny, poetic.

  • It has broad knowledge: Thanks to its training data, it can talk about history, science, pop culture, and more.

But here’s the key: GPT doesn’t have feelings, beliefs, or consciousness. It’s mimicking patterns of language — extremely well, but still mimicry.

The Strengths of GPT

  • Speed: Generates answers instantly.

  • Versatility: Can switch between topics effortlessly.

  • 24/7 Availability: No coffee breaks needed.

  • Language Skills: Handles multiple languages and styles.

  • Idea Generation: Great for brainstorming sessions.

The Limitations You Should Know

GPT isn’t perfect — and it’s important to know where it falls short.

Hallucinations (Making Things Up)

Sometimes GPT will confidently state something that’s false — not because it’s lying, but because it’s guessing based on patterns.

Lack of Real Understanding

It doesn’t truly “know” facts — it only predicts words based on training data.

Bias

Since GPT learns from human-written text, it can also pick up human biases.

Knowledge Cutoff

Unless updated or connected to the internet, GPT doesn’t know about events after its training cutoff date.

The Future of GPT

We’re just scratching the surface of what GPT-like models can do.

  • Smarter assistants: Helping doctors summarize medical research.

  • More personalized tools: Tailoring learning materials to individual students.

  • Creative collaboration: Working alongside artists, writers, and designers.

But with great power comes great responsibility — and debates are ongoing about:

  • Ethical use

  • Data privacy

  • Impact on jobs

  • Preventing misuse

How to Use GPT Effectively as a Non-Tech Person

  • Be Specific: Clear prompts get better results.

  • Double-Check Facts: Especially for important info.

  • Experiment: Try different wording to see how it responds.

  • Use It as a Partner, Not a Replacement: It’s a helper, not a human brain.
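The “be specific” tip is easiest to see side by side. This little helper is purely hypothetical - it’s not part of any GPT product - it just shows how spelling out audience, format, and length turns a vague request into a precise one:

```python
# Hypothetical prompt-building helper, for illustration only.
def build_prompt(task, audience=None, fmt=None, length=None):
    parts = [task]
    if audience:
        parts.append(f"Audience: {audience}.")
    if fmt:
        parts.append(f"Format: {fmt}.")
    if length:
        parts.append(f"Length: {length}.")
    return " ".join(parts)

vague = build_prompt("Explain GPT.")
specific = build_prompt(
    "Explain GPT.",
    audience="a curious non-programmer",
    fmt="three short bullet points",
    length="under 100 words",
)
print(specific)
```

The second prompt leaves far less for the model to guess at - and, as the article notes, less guessing generally means better answers.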

Summary

GPT is like a super-talented language mimic. It has read more than any human could in a lifetime and can produce text that feels natural and intelligent. But at its core, it’s still just making educated guesses based on patterns it has learned.

Understanding how it works helps us use it better — and keeps us from falling for the myth that it’s anything more than a very sophisticated tool.
