Understanding Number Representation in Computers

Table of contents
- 1. Introduction to Number Representation
- 2. Number Limits
- 3. Integer Representation
- 4. Floating-Point Representation
- 5. Roundoff Errors
- 6. Overflow
- 7. Practical Examples and Code Snippets
  - Integer Representation (C)
  - Simulating Integer Limits (Python)
  - Roundoff Error Demonstration (Python)
- 8. Key Differences and Summary
- 9. Suggestions for Further Exploration
- Conclusion

This article is my own comprehensive guide to number representation in computers. In a follow-up article, I'll discuss how hackers use this knowledge to their advantage, so stay tuned. Although this article might look like others you've seen, the difference is that it covers the topic end to end.
I came across this topic on my journey toward becoming a cybersecurity expert. As usual, I like to learn from the very core, so it took me time to understand the full concept. Here, I present my own experience of getting to grips with number representation.
1. Introduction to Number Representation
Computers store and process numbers using binary (base-2) systems, relying on bits (0s and 1s) to represent data. Unlike human counting in base-10 (decimal), computers use a finite number of bits, which imposes limits on the range and precision of numbers they can handle. This blog explores two primary types of numbers in computing:
- Integers: Whole numbers (e.g., -10, 0, 3, 42) with no fractional or decimal parts.
- Floating-point numbers: Numbers with decimal or fractional parts (e.g., 3.14, -2.75), used for precise or large-scale calculations.
Each type has specific representation methods, storage constraints, and potential errors like overflow or roundoff.
2. Number Limits
Definition
Number limits refer to the maximum and minimum values a computer can store for a given data type, determined by the number of bits allocated. Each bit is a binary digit (0 or 1), and the total number of bits defines the range of possible values.
How It Works
Think of bits as storage boxes. The more bits you have, the more values you can represent. For example:
- 4-bit variable: Can store 2⁴ = 16 values.
  - Unsigned: 0 to 15.
  - Signed: -8 to +7.
- 8-bit variable: Can store 2⁸ = 256 values.
  - Unsigned: 0 to 255.
  - Signed: -128 to +127.
- 16-bit variable: Can store 2¹⁶ = 65,536 values.
  - Unsigned: 0 to 65,535.
  - Signed: -32,768 to +32,767.
- 32-bit variable: Can store 2³² ≈ 4.29 billion values.
  - Unsigned: 0 to 4,294,967,295.
  - Signed: -2,147,483,648 to +2,147,483,647.
- 256-bit variable: Can store 2²⁵⁶ values, used in cryptography (e.g., Bitcoin keys), with an unsigned range up to approximately 10⁷⁷.
Analogy
Imagine a shelf with a fixed number of slots. A 4-bit variable is like a shelf with 4 slots, limiting you to 16 possible arrangements. A 256-bit variable is like a massive warehouse, allowing for an astronomically large number of combinations.
Suggestion: To deepen understanding, explore how different programming languages (e.g., C, Python) enforce number limits or simulate bit constraints in code.
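These ranges don't need to be memorized; they can be computed. Here is a minimal Python sketch (standard library only, my own illustration) that derives the unsigned and signed ranges for any bit width:

def integer_ranges(bits):
    # Unsigned: all 2^bits patterns count upward from 0.
    unsigned = (0, 2**bits - 1)
    # Signed (two's complement): half the patterns represent negatives.
    signed = (-(2**(bits - 1)), 2**(bits - 1) - 1)
    return unsigned, signed

for bits in (4, 8, 16, 32, 256):
    u, s = integer_ranges(bits)
    print(f"{bits}-bit unsigned: {u[0]} to {u[1]}")
    print(f"{bits}-bit signed:   {s[0]} to {s[1]}")

Running it reproduces the ranges listed above, including the 78-digit maximum of a 256-bit unsigned value.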
3. Integer Representation
Definition
Integers are whole numbers without decimal or fractional parts, including positive numbers, negative numbers, and zero (e.g., …, -3, -2, -1, 0, 1, 2, 3, …). In computers, integers are stored in binary using a fixed number of bits.
Types of Integers
Integers are categorized into two types based on how they handle negative numbers:
- Unsigned Integers:
  - Represent only non-negative numbers (0 and positive).
  - All bits are used for the number's magnitude.
  - Example (8-bit unsigned): binary 00000000 = 0, binary 11111111 = 255.
  - Range: 0 to 2⁸ - 1 = 255.
- Signed Integers (Two's Complement):
  - Represent both positive and negative numbers.
  - Use the leftmost bit as a sign bit (0 = positive, 1 = negative).
  - Negative numbers are stored using the two's complement method for efficient arithmetic.
  - Example (8-bit signed): binary 00000101 = +5, binary 11111011 = -5.
  - Range: -2⁷ to 2⁷ - 1 = -128 to +127.
Two’s Complement Method
To represent a negative number:
1. Start with the positive number's binary form.
2. Invert all bits (0 → 1, 1 → 0).
3. Add 1 to the result.
Example: Represent -5 in 8-bit signed format:
1. Positive 5: 00000101
2. Invert bits: 11111010
3. Add 1: 11111011 (this is -5)
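The three steps are easy to verify in Python. This small helper (written for this article, not a built-in) applies the same invert-and-add-one procedure within a fixed bit width:

def twos_complement(value, bits=8):
    positive = abs(value)                    # Step 1: positive magnitude (00000101 for 5)
    mask = (1 << bits) - 1                   # All ones for the given width (11111111)
    result = ((positive ^ mask) + 1) & mask  # Steps 2 and 3: invert, then add 1
    return format(result, f'0{bits}b')

print(twos_complement(-5))  # 11111011, matching the worked example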
Why Two’s Complement?
- Simplifies arithmetic (addition and subtraction work the same for positive and negative numbers).
- Provides a single representation for zero.
- Gives an unambiguous range (e.g., -128 to +127 for 8-bit); note the range is slightly asymmetric, with one more negative value than positive.
Practical Example
In C, you can see signed vs. unsigned behavior:
#include <stdio.h>

int main() {
    signed char a = -5;    // Stored as 11111011
    unsigned char b = 251; // Stored as 11111011
    printf("Signed: %d\n", a);   // Output: -5
    printf("Unsigned: %u\n", b); // Output: 251
    return 0;
}
The same binary (11111011) is interpreted differently based on whether the variable is signed or unsigned.
Suggestion: Experiment with binary conversions using small bit sizes (e.g., 4-bit or 8-bit) to visualize how signed and unsigned integers differ.
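A quick way to run that experiment is to print every 4-bit pattern next to both of its interpretations; here is a sketch:

for pattern in range(16):
    # Two's complement: if the sign bit (bit 3) is set, subtract 2^4.
    signed = pattern - 16 if pattern & 0b1000 else pattern
    print(f"{pattern:04b}  unsigned: {pattern:2}  signed: {signed:3}")

The output shows that patterns 0000-0111 mean the same thing (0 to 7) under both interpretations, while 1000-1111 mean 8 to 15 unsigned but -8 to -1 signed.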
4. Floating-Point Representation
Definition
Floating-point numbers represent real numbers with decimal or fractional parts (e.g., 3.14, -2.75). They are essential for scientific calculations, graphics, and financial computations where precision matters.
IEEE 754 Standard
The IEEE 754 standard is the most common format for floating-point numbers, dividing bits into three parts:
- Sign Bit: 1 bit (0 = positive, 1 = negative).
- Exponent: 8 bits (single precision) or 11 bits (double precision), with a bias to handle negative exponents.
- Mantissa (Fraction): 23 bits (single precision) or 52 bits (double precision), storing the significant digits.
Example: Storing 3.14 (32-bit Single Precision)
1. Convert to binary:
   - 3 = 11
   - 0.14 ≈ 0.001001 (a rough approximation; the exact binary expansion never terminates)
   - So, 3.14 ≈ 11.001001
2. Normalize (like scientific notation):
   - 11.001001 = 1.1001001 × 2¹
3. Encode in IEEE 754:
   - Sign bit: 0 (positive)
   - Exponent: 1 + 127 (bias) = 128 = 10000000
   - Mantissa: drop the leading 1 and keep the fraction bits. The short approximation above gives 1001001…, but carried to the full 23 bits of precision, 3.14 stores 10010001111010111000011.
Final: 0 | 10000000 | 10010001111010111000011
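You can confirm this bit pattern from Python with the standard struct module, which packs a value into its raw 32-bit single-precision form:

import struct

# Pack 3.14 as a big-endian 32-bit float, then reread the bytes as a raw integer.
bits = struct.unpack('>I', struct.pack('>f', 3.14))[0]
b = format(bits, '032b')
print('sign:    ', b[0])     # 0
print('exponent:', b[1:9])   # 10000000
print('mantissa:', b[9:])    # 10010001111010111000011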
Precision Limits
- 32-bit float: Can represent numbers up to ~3.4 × 10³⁸ with ~7 decimal digits of precision.
- 64-bit double: Can represent numbers up to ~1.8 × 10³⁰⁸ with ~15-17 digits of precision.
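Python's own float is a 64-bit double, but you can see the ~7-digit limit of single precision by round-tripping a value through 32 bits with struct (a sketch; the printed values are approximate in their last digits):

import struct

x = 0.123456789
as_float32 = struct.unpack('f', struct.pack('f', x))[0]
print(x)           # 0.123456789 (a double keeps ~15-17 digits)
print(as_float32)  # ~0.12345679 (only about 7 digits survive the trip)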
5. Roundoff Errors
Definition
Roundoff errors occur when a computer’s floating-point representation approximates a number due to limited precision, resulting in a small difference between the actual and stored value.
Why It Happens
- Many decimal numbers (e.g., 0.1, 0.7) cannot be represented exactly in binary.
- Example: 0.7 might be stored as 0.69999999999999996.
This leads to errors in calculations, such as:
print(0.7 - 0.2) # Output: 0.49999999999999994, not 0.5
Where It Matters
- Scientific simulations: Small errors can compound.
- Financial calculations: Precision is critical for accuracy.
- Graphics: Rounding errors can cause visual artifacts.
How to Reduce Roundoff Errors
- Use double precision (64-bit) instead of single precision (32-bit) for better accuracy.
- Avoid subtracting nearly equal numbers (this causes catastrophic cancellation).
- Use numerically stable algorithms (one example is sketched just below).
Suggestion: Write a Python script to demonstrate roundoff errors (e.g., 0.1 + 0.2) and compare results using float vs. decimal.Decimal.
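One classic numerically stable technique (my choice of example; there are many) is Kahan summation, which carries a compensation term that recovers the low-order bits an ordinary running sum throws away:

def kahan_sum(values):
    total = 0.0
    compensation = 0.0  # running estimate of lost low-order bits
    for x in values:
        y = x - compensation
        t = total + y                   # big + small: low bits of y get lost here...
        compensation = (t - total) - y  # ...and are recovered here
        total = t
    return total

values = [0.1] * 10
print(sum(values))        # 0.9999999999999999
print(kahan_sum(values))  # 1.0

The standard library's math.fsum performs this kind of compensation for you and is usually the better choice in real code.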
6. Overflow
Definition
Overflow occurs when a calculation produces a result that exceeds the maximum or minimum limit of a data type.
Types of Overflow
- Integer Overflow:
  - Occurs when an arithmetic operation exceeds the integer's range.
  - Example (8-bit unsigned): 250 + 10 = 260, but the max is 255, so the result wraps to 4 (260 - 256 = 4).
  - Real-world case: the Ariane 5 rocket failure, caused by a 64-bit to 16-bit conversion overflow.
- Buffer Overflow:
  - Occurs when data exceeds a memory buffer's capacity, overwriting adjacent memory.
  - Example in C (a fragment; strcpy comes from <string.h>):

char buffer[8];
strcpy(buffer, "This is too long!"); // Overflows buffer
Why It Matters
- Bugs: Incorrect results or program crashes.
- Security: Buffer overflows are exploited in hacking to execute malicious code.
- Reliability: Overflow errors can disrupt critical systems.
Suggestion: Simulate integer overflow in Python or C to observe wraparound behavior, or explore buffer overflow vulnerabilities using tools like GDB.
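For the integer-overflow half of that suggestion, note that Python's int never overflows on its own, so you emulate a fixed-width register by masking; a minimal sketch:

# Emulate an 8-bit unsigned register: keep only the low 8 bits of every result.
result = (250 + 10) & 0xFF
print(result)  # 4, the wrapped value from the example above (260 - 256 = 4)

In C, the same wraparound happens automatically for unsigned types; for signed types, overflow is undefined behavior, which is part of why it causes such dangerous bugs.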
7. Practical Examples and Code Snippets
Integer Representation (C)
#include <stdio.h>

int main() {
    signed char a = -5;    // 11111011
    unsigned char b = 251; // 11111011
    printf("Signed: %d\n", a);   // -5
    printf("Unsigned: %u\n", b); // 251
    return 0;
}
Simulating Integer Limits (Python)
def to_signed(n, bits):
    if n & (1 << (bits - 1)):  # Check sign bit
        return n - (1 << bits)
    return n

num = -5
unsigned_val = num & 0xFF                # 8-bit unsigned (251)
signed_val = to_signed(unsigned_val, 8)  # -5
print(f"Unsigned: {unsigned_val}, Signed: {signed_val}")
Roundoff Error Demonstration (Python)
print(0.1 + 0.2) # Output: 0.30000000000000004
from decimal import Decimal
print(Decimal('0.1') + Decimal('0.2')) # Output: 0.3
Suggestion: Run these snippets to observe how computers handle numbers and errors in practice.
8. Key Differences and Summary
| Concept | Key Point |
| --- | --- |
| Number Limits | Determined by bit size (e.g., 8-bit, 32-bit); limits the range of storable values. |
| Integer Representation | Stored as signed (two's complement) or unsigned; affects negative numbers. |
| Floating-Point | Uses IEEE 754 (sign, exponent, mantissa) for decimals; limited precision. |
| Roundoff Error | Approximation errors in floating-point due to binary representation limits. |
| Overflow | Exceeding data type limits; integer overflow wraps, buffer overflow risks hacks. |
Visual Analogy
- Integers: Like counting steps on a number line (whole numbers only).
- Floats: Like marking points on a ruler with decimal precision.
- Number Limits: The length of the number line or ruler you're allowed to use.
- Overflow: Stepping beyond the end of the line or ruler.
- Roundoff: Measuring a point but having to round it slightly because the ruler's marks aren't exact.
9. Suggestions for Further Exploration
- Binary Conversion Practice: Convert numbers like 42, -10, or 3.14 to binary manually or using code (a starter sketch follows this list).
- Programming Experiments:
  - Write code to trigger integer overflow (e.g., adding to INT_MAX in C).
  - Compare floating-point precision using Python's float vs. decimal.Decimal.
- Security Focus: Study buffer overflow exploits using tools like GDB or pwndbg to understand hacking risks.
- Visualization Tools:
  - Use online IEEE 754 converters to see floating-point representations.
  - Create binary charts for 4-bit, 8-bit, and 16-bit integers to compare ranges.
- Real-World Applications:
  - Explore how 256-bit numbers are used in cryptography (e.g., Bitcoin, AES-256).
  - Investigate historical bugs caused by overflow or roundoff (e.g., Ariane 5, the Patriot missile).
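As a starting point for the first item on that list, a few lines of Python cover all three conversions:

import struct

print(format(42, '08b'))          # 42 as 8-bit binary: 00101010
print(format(-10 & 0xFF, '08b'))  # -10 as 8-bit two's complement: 11110110
bits = struct.unpack('>I', struct.pack('>f', 3.14))[0]
print(format(bits, '032b'))       # 3.14 as raw IEEE 754 single-precision bits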
Conclusion
Understanding number representation is fundamental to programming, computer science, and cybersecurity. Integers and floating-point numbers each have unique storage methods, constraints, and potential pitfalls like roundoff errors and overflow. By mastering these concepts, you can write more robust code, anticipate errors, and appreciate the intricacies of how computers handle numbers.