Simplified Guide to Understanding AI Terminology

Sandip Deshmukh
5 min read

Artificial Intelligence (AI) has introduced a lot of new words that can sound complex or intimidating. If you're curious about AI but don't know much about it yet, this article is for you. Let's break down the most common AI jargon in simple terms, using relatable examples and code snippets in Python.


Note: The code snippets used in this article require some initial setup. A complete Google Colab notebook link is provided at the end of this article.

1. Transformers

Transformers are models that understand language by looking at all the words in a sentence at once. They capture context by paying attention to the relationships between every word. They power tools like ChatGPT.

Example:

from transformers import pipeline

# Load a small instruction-tuned Gemma model for text generation
model_name = "google/gemma-3-1b-it"
generator = pipeline(task="text-generation", model=model_name)

# Generate a short continuation of the prompt (max_length also counts the prompt tokens)
result = generator("Once upon a time", max_length=20)
print(result[0]['generated_text'])

The code snippet above prints the prompt followed by a short, freshly generated continuation.

2. Encoder

Encoders convert input data (like a sentence) into a format that a machine can understand. Think of it as a translator from human language to numbers.

For example, when using a language model, an encoder might take the sentence "how are you?" and convert it into a series of numbers that represent the sentence's meaning.


3. Decoder

Decoders do the opposite of encoders: they take machine-readable data and turn it back into human-readable language. This is used when the model replies to a query.

Let's understand encoders and decoders with a simple code example.

import tiktoken

# Get the tokenizer (encoder) used by the gpt-4o model
encoder = tiktoken.encoding_for_model('gpt-4o')
text = "how are you"

# Encode: text -> token ids (the numbers the model works with)
tokens = encoder.encode(text)
print("Tokens", tokens) # [8923, 553, 481]

# Decode: token ids -> text
my_tokens = [8923, 553, 481]
decoded = encoder.decode(my_tokens)
print("Decoded", decoded) # how are you

4. Vectors

Vectors are just lists of numbers. Each word or sentence is turned into a vector so the model can do math to understand similarities.

For example, the sentence "how are you" might be represented as a vector like [0.1, 0.2, 0.3].
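To see why this is useful, here is a minimal sketch of cosine similarity, the "math" models use to compare vectors. The numbers below are made up for illustration, not real model output: the closer the result is to 1, the more similar the two vectors are.

import numpy as np

# Toy sentence vectors (made-up numbers, just for illustration)
how_are_you = np.array([0.1, 0.2, 0.3])
how_do_you_do = np.array([0.12, 0.21, 0.28])
the_sky_is_blue = np.array([0.9, -0.4, 0.05])

def cosine_similarity(a, b):
    # 1.0 = pointing in the same direction (very similar), near 0 = unrelated
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(how_are_you, how_do_you_do))    # close to 1 -> similar
print(cosine_similarity(how_are_you, the_sky_is_blue))  # much lower -> different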


5. Embeddings

Embeddings are special vectors that capture the meaning of words. Words with similar meanings have similar embeddings.

Example:

#https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2#usage-sentence-transformers
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode("Artificial Intelligence")
print("First 5 values:", embeddings[:5])

This prints the first five values of the embedding vector.


6. Positional Encoding

Transformers don't know the order of words unless you tell them. Positional encoding gives each word a sense of its position in the sentence.

Example: The sentence "I love pizza" vs "Pizza love I" — without position, the model wouldn’t know the difference.

Both sentences have different meanings, so they occupy different positions in vector space.

Refer to the previous code snippet for generating vector embeddings.

I love pizza

First 5 values: [-0.09892618 0.03360059 0.01034007 0.05320545 -0.08048925]

Pizza love I

First 5 values: [-0.12867121 0.08613202 0.05863576 0.04221272 -0.04131214]
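For the curious, here is a minimal sketch of the classic sinusoidal positional encoding from the original Transformer paper. It is not the exact scheme used by the model above (different models handle positions differently), but it shows the core idea: every position gets its own unique pattern of numbers that is added to the word's embedding.

import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    positions = np.arange(seq_len)[:, np.newaxis]  # (seq_len, 1) - one row per position
    dims = np.arange(d_model)[np.newaxis, :]       # (1, d_model) - one column per dimension
    angle_rates = 1 / np.power(10000, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])  # sine on even dimensions
    pe[:, 1::2] = np.cos(angles[:, 1::2])  # cosine on odd dimensions
    return pe

# Encodings for the 3 positions in "I love pizza", with a tiny 8-dimensional embedding
print(sinusoidal_positional_encoding(seq_len=3, d_model=8).round(2))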


7. Semantic Meaning

This is about the actual meaning behind the words. For example, "He kicked the bucket" means "he died". AI models try to understand the true meaning, not just the words.
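As a rough sketch, we can reuse the embedding model from earlier to compare the idiom against two other sentences. The similarity scores show how close the model thinks the meanings are; your exact numbers will vary, and small general-purpose models can still be fooled by surface word overlap.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')

sentences = ["He kicked the bucket", "He passed away", "He kicked the ball"]
embeddings = model.encode(sentences)

# Compare the idiom against the other two sentences
print("vs 'He passed away':", util.cos_sim(embeddings[0], embeddings[1]).item())
print("vs 'He kicked the ball':", util.cos_sim(embeddings[0], embeddings[2]).item())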


8. Self-Attention

This allows a model to figure out which words in a sentence are important and related to each other.

Example: In "The dog chased the cat because it was hungry", self-attention helps the model work out that "it" refers to "the dog".
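Here is a heavily simplified sketch of the idea with made-up numbers. Real self-attention also learns separate query, key, and value projections, but the core step is the same: compare every word against every other word and turn the scores into weights.

import numpy as np

def softmax(x):
    e_x = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e_x / e_x.sum(axis=-1, keepdims=True)

# Toy embeddings for the words "the dog chased it" (made-up numbers)
words = ["the", "dog", "chased", "it"]
x = np.array([
    [0.1, 0.0, 0.2],
    [0.9, 0.8, 0.1],
    [0.2, 0.1, 0.7],
    [0.8, 0.7, 0.2],
])

# Compare every word with every other word, then turn the scores into attention weights
scores = x @ x.T / np.sqrt(x.shape[1])
weights = softmax(scores)  # each row sums to 1

# The row for "it": with these toy numbers it puts the most weight on "dog"
print(dict(zip(words, np.round(weights[3], 2))))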


9. Softmax

Softmax is a mathematical function used to convert raw model predictions (called logits) into probabilities that sum to 1.
It helps us understand how confident the model is about each possible class — the higher the score, the higher the probability after softmax.

Example:

import numpy as np

# Raw model predictions (logits)
logits = np.array([2.0, 1.0, 0.1])  # These are unnormalized scores

# Softmax function to convert logits to probabilities
def softmax(x):
    # Subtract max for numerical stability
    e_x = np.exp(x - np.max(x))  
    return e_x / e_x.sum()

# Convert logits to probabilities
probabilities = softmax(logits)

# Output
print("Raw model predictions (logits):", logits) # [2.  1.  0.1]
print("Probabilities after softmax:", probabilities) # [0.65900114 0.24243297 0.09856589]

10. Multi-Head Attention

Instead of looking at one thing at a time, the model looks at different parts of a sentence simultaneously. Each "head" pays attention to different patterns or words.
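As a minimal sketch (assuming PyTorch is installed, which it already is if the transformers examples above run), PyTorch's built-in multi-head attention layer shows the shape of the idea: several heads process the same sentence in parallel.

import torch
import torch.nn as nn

# 4 attention heads over 16-dimensional embeddings (toy sizes)
attention = nn.MultiheadAttention(embed_dim=16, num_heads=4, batch_first=True)

# A fake "sentence" of 5 tokens, each represented by a random 16-dimensional vector
sentence = torch.randn(1, 5, 16)

# In self-attention the query, key and value are all the same sentence
output, attn_weights = attention(sentence, sentence, sentence)

print(output.shape)        # torch.Size([1, 5, 16])
print(attn_weights.shape)  # torch.Size([1, 5, 5]) - weights averaged over the heads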


11. Temperature

Temperature controls how creative the model is.

  • Low temperature = more focused, predictable answers.

  • High temperature = more creative, diverse responses.

Example:

from transformers import pipeline

model_name = "google/gemma-3-1b-it"
generator = pipeline(task="text-generation", model=model_name)

# The story's creativity changes when we tweak the temperature parameter
result = generator("A story about a dragon", max_length=30, temperature=1.2)
print(result[0]['generated_text'])

12. Knowledge Cutoff

This is the latest date up to which the model was trained. If the cutoff is April 2023, it won't know anything that happened after that. (By using tools like web search, we can give the model access to real-time data.)


13. Tokenization

Tokenization breaks down sentences into smaller pieces (tokens) so the model can understand them.

Example:

from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokens = tokenizer.tokenize("How are you?")
print("Tokens:", tokens) # ['How', 'Ġare', 'Ġyou', '?']

14. Vocab Size

This is the number of unique tokens (words or pieces of words) a model understands. A bigger vocabulary lets the model represent more words and symbols directly, so text is split into fewer, more meaningful tokens.

from transformers import AutoTokenizer

# Specify the model name
model_name = "google/gemma-3-1b-it"

# Load the tokenizer associated with the model
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Get the vocabulary (token-to-id mapping) from the tokenizer
vocab = tokenizer.get_vocab()

# Print the vocabulary size and a few sample tokens
print("Vocabulary size:", len(vocab))
print("Sample tokens:", list(vocab.items())[:10])  # Print first 10 tokens as a sample

#Vocabulary size: 262145
#Sample tokens: [('<unused2683>', 258585), ('みました', 154419), ('👞', 254933), ('おそらく', 229830), ('<unused5188>', 261090), ('جون', 121201), ('▁getWorld', 176836), ('▁没有', 151050), ('<unused255>', 256157), ('▁Summer', 18943)]

15. Inferencing

Inference is when you use the trained model to answer questions or generate content. It’s the part where you actually interact with AI.

Example:

# 'generator' is the text-generation pipeline created in the earlier examples
response = generator("Explain machine learning", max_length=30)
print("Response:", response[0]['generated_text'])

Complete Google Colab notebook: https://colab.research.google.com/drive/1qqN07BOEjzPrgCuMWZP1velT5bAvpDY_?usp=sharing

Conclusion

AI is full of terms that might sound confusing, but once you break them down, they're not so scary. With the help of simple examples and small bits of code, we can start to see how machines understand and generate human language.
