What makes the digital clock tick?


PART 1 – INTRODUCTION TO COMPUTER SCIENCE AND THE ART OF PROGRAMMING
Good day to everyone reading this post, QUANTUM WHISPER here. Ever wondered what magic makes our digital world tick? From the apps on your phone to the movies you stream and the games you play, it all comes down to a fascinating blend of computer science and the art of programming. In this post, we're pulling back the curtain to explore the fundamental ideas behind how computers "think," how we communicate with them, and why understanding these concepts is more exciting and empowering than you might imagine. Get ready to dive into the core of the digital universe!
What Is Computer Science?
To put it simply, it’s the study of information: how to represent it and how to process it. But more fundamentally, what we’re being taught is computational thinking, which is the application of computer science ideas to problem solving.
At the end of the day, computer science is really about problem solving. And what we mean by problem solving is very simple:
INPUT → PROCESS → OUTPUT
The input is the problem. The output is the solution. And the most interesting part? The process: turning that input into output. To do this well, we have to agree on a way to represent both input and output, especially if we want to do it in a standardized, scalable way.
Representing Information:
There are different ways to represent information. For example, if we take attendance in a small room by counting “1, 2, 3, 4, 5…”, we’re using something called UNARY NOTATION, known mathematically as BASE 1, because we’re literally counting in ones. Using unary, one human hand can only count as high as 5. But there are clever ways to count higher by recognizing patterns in the fingers rather than just their total. Say we agree that one raised finger means 1, a different raised finger means 2, and so on; with that convention we’ve already increased our range.
Now let us consider the two states a finger can be in: up or down. That gives two possibilities per finger, and this is BINARY, also known as BASE 2. Computers speak in 0s and 1s. A bit is a single binary digit: either a 0 or a 1.
This is in contrast to the number system we typically use, called DECIMAL or BASE 10, which allows us 10 different digits.
Why Binary Matters:
This is crucial in computer science because the basic switching components of a computer, transistors, are either ON or OFF. We represent these states as 1s and 0s. If we want to store a 1, we switch a transistor on. If we want to store a 0, we switch it off.
By manipulating these ON and OFF states, we can count higher and higher. It’s in the patterns of 0s and 1s that all of a computer’s abilities are encoded.
Decimal vs Binary: Same Concept, Different Base:
Our computers do the same thing we do with decimal numbers. For example, take the number 123. Why is it 123?
Grade school tells us:
The 3 is in the 1s place
The 2 is in the 10s place
The 1 is in the 100s place
So doing the math:
(1 × 100) + (2 × 10) + (3 × 1) = 123
This system is called base 10 because for each place value there are only 10 possibilities (0–9).
Computers, on the other hand, have only 2 possible values per place, so their place values increase in powers of 2:
- 1s, 2s, 4s, 8s, 16s…
So computers count just like we do; they simply have fewer digits available per position, but with enough bits they can count as high as we like.
Take 000 in binary: that represents decimal 0. And using the same place-value math we use in decimal, we can convert any binary value; 101, for example, is (1 × 4) + (0 × 2) + (1 × 1) = 5.
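To make that concrete, here’s a quick Python sketch that applies the same place-value math to any binary string (Python’s built-in int(s, 2) does the same conversion):

```python
def binary_to_decimal(bits: str) -> int:
    """Convert a binary string like '101' to its decimal value."""
    total = 0
    for bit in bits:
        total = total * 2 + int(bit)  # shift one place left, then add the new bit
    return total

print(binary_to_decimal("101"))  # (1 × 4) + (0 × 2) + (1 × 1) = 5
print(binary_to_decimal("111"))  # 7, the highest value 3 bits can hold
print(int("101", 2))             # Python's built-in conversion agrees: 5
```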
What’s a Byte?
Remember our 3-bit representation? It can only count as high as 7. If we want to go higher, we need more bits.
So we use a byte, which is just 8 bits. This gives us up to 256 unique combinations (including 0), which is enough to represent:
The alphabet
Numbers
Symbols
Also, 256 is a power of 2, which is very useful electronically.
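A quick sanity check in Python shows how the number of combinations doubles with every extra bit:

```python
for n_bits in (1, 2, 3, 8):
    combos = 2 ** n_bits
    print(f"{n_bits} bit(s): {combos} combinations, values 0 through {combos - 1}")
# 8 bits (one byte): 256 combinations, values 0 through 255
```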
There’s no need for in-between values in this system. Computers like extremes, fully on or fully off, because it’s simpler: the moment we start splitting a signal into many intermediate levels, it becomes much harder to measure them exactly.
REPRESENTING LETTERS:
Let’s say we want to represent the letter A for a computer. How would we do that?
Simple: we assign a number to it. Turns out, a bunch of people in a room once decided that A = 01000001, which is 65 in decimal.
B = 66, C = 67, and so on.
Lowercase letters start from 97 (a), 98 (b), etc.
This mapping system is called ASCII (American Standard Code for Information Interchange). Standard ASCII actually uses 7 bits (128 characters), though the 8-bit extended versions allow 256. That was enough to represent English and a handful of other characters.
Try This: What Does This Represent in ASCII?
01001000 01001001 00100001
(Hint: in decimal, that’s 72 73 33.)
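If you want to check your answer, a few lines of Python can decode it; chr() maps a number to its character:

```python
pattern = "01001000 01001001 00100001"

for chunk in pattern.split():
    number = int(chunk, 2)       # binary -> decimal, e.g. 01001000 -> 72
    print(chr(number), end="")   # decimal -> character via the ASCII table
print()                          # prints the decoded message
```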
From ASCII to Unicode:
ASCII wasn't enough to represent every language. So computers started using more bits per character, sometimes even 32, which in principle gives over 4 billion unique patterns (Unicode itself reserves room for about 1.1 million code points).
This system is known as Unicode. It was built to be backwards compatible with ASCII, and it can represent nearly every symbol, language, or emoji.
Fun fact:
4036991106 (the decimal value of the 4-byte UTF-8 pattern that encodes the emoji) = the most popular emoji in the world 😂 (Face with Tears of Joy).
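You can verify that in Python yourself: ord() gives the code point, and encoding to UTF-8 gives the byte pattern whose decimal value is quoted above:

```python
emoji = "😂"
print(ord(emoji))                   # 128514, the Unicode code point (U+1F602)
utf8 = emoji.encode("utf-8")        # b'\xf0\x9f\x98\x82', a 4-byte pattern
print(int.from_bytes(utf8, "big"))  # 4036991106, the decimal value quoted above
```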
Representing Colors
Computers typically use the RGB system: Red, Green, Blue.
To represent a pixel, computers use 3 bytes (24 bits), one byte for each RGB component. Mixing different amounts of red, green, and blue creates every color on your screen.
So, image file sizes? That’s just the total number of bits used to represent the color of each pixel.
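The arithmetic is simple enough to sketch in Python, assuming a raw, uncompressed 24-bit image (real formats like PNG and JPEG compress this heavily):

```python
width, height = 1920, 1080   # a Full HD image
bytes_per_pixel = 3          # one byte each for red, green, and blue

raw_size = width * height * bytes_per_pixel
print(raw_size)              # 6220800 bytes, roughly 6 MB before compression

pixel = (255, 0, 0)          # one pixel: maximum red, no green, no blue
print(pixel)
```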
Representing Videos
A video is just a bunch of images displayed fast enough for our eyes to perceive motion.
When we say 30 fps, we mean 30 images per second. So it makes sense that video files are large: they contain a lot of images, and each image has its own color data.
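Extending the same back-of-the-envelope math to video shows why compression matters (again assuming raw, uncompressed frames):

```python
width, height = 1920, 1080   # pixels per frame
bytes_per_pixel = 3          # 24-bit color
fps = 30                     # frames per second
seconds = 60                 # one minute of footage

raw_bytes = width * height * bytes_per_pixel * fps * seconds
print(raw_bytes / 10**9)     # about 11.2 GB for one raw minute
```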
Representing Sound
To represent sound, we could assign each note, sharp, flat, pitch, or frequency a pattern of 0s and 1s, just like we did with ASCII.
Think of it like a digital piano: each key press, how hard it’s pressed, and its pitch are just more binary patterns.
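Here’s a minimal sketch of that idea in Python. The encoding below is hypothetical, loosely modeled on MIDI; real audio files more often store thousands of amplitude samples per second:

```python
# Each event: (pitch number, loudness 0-127, duration in milliseconds).
# The field layout here is hypothetical, loosely modeled on MIDI.
melody = [
    (60, 100, 500),    # middle C, fairly loud, half a second
    (64, 90, 500),     # E
    (67, 110, 1000),   # G, pressed harder and held longer
]

for pitch, loudness, duration in melody:
    print(f"pitch={pitch}, loudness={loudness}, duration={duration}ms")
```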
Context Is Everything
So all computers ever deal with is 0s and 1s. But how do they know whether a binary pattern is text, color, sound, or something else?
It depends on the context.
Open a pattern with a calculator → you see numbers.
Open it in a text editor → you see characters.
Open it with an image viewer → you see a picture.
As a programmer, you get to define this context. You tell the computer what a certain pattern means. That’s your power.
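Python makes this easy to see: the same two bytes mean completely different things depending on how we choose to read them:

```python
data = bytes([72, 73])              # the same two bytes of 0s and 1s

print(int.from_bytes(data, "big"))  # read as one number: 18505
print(data.decode("ascii"))         # read as text: HI
print(list(data))                   # read as raw values (say, two color channels): [72, 73]
```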
Algorithms
Remember our Input → Output model? The part that transforms input into output is an algorithm, a step-by-step plan for solving a problem or performing a task that is sufficiently detailed for any information processing agent to follow.
Let’s use a phonebook as an example. Say you’re looking for the name “Daniel Ademoye”:
Option 1: Flip through each page one by one → an algorithm, but not efficient.
Option 2: Flip through the book two pages at a time → an algorithm, and twice as fast, but incorrect. Take a minute to think about why it’s incorrect.
Option 3: Open the book to the middle. If the name is before, search the left half. If after, search the right half → repeat. You’re cutting the problem in half each time.
That’s a classic binary search.
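Here’s what that phonebook strategy looks like as real code, a simple binary search sketched in Python (the names must be sorted, which is the whole trick):

```python
def binary_search(names: list[str], target: str) -> int:
    """Return the index of target in a sorted list, or -1 if it's absent."""
    low, high = 0, len(names) - 1
    while low <= high:
        mid = (low + high) // 2   # open the book to the middle
        if names[mid] == target:
            return mid            # found the person: call them
        elif target < names[mid]:
            high = mid - 1        # name comes earlier: search the left half
        else:
            low = mid + 1         # name comes later: search the right half
    return -1                     # not in the book: quit

phonebook = ["Alice Abel", "Daniel Ademoye", "Mary Musa", "Zed Zara"]
print(binary_search(phonebook, "Daniel Ademoye"))  # 1
```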
The Goal of Programming
As a programmer, your job is to write efficient code. Can you solve the problem faster? Using fewer steps?
That’s the whole point of computer science, developing smarter algorithms for more complex problems.
Pseudocode
Pseudocode is basically English.
There’s no single correct way to write it, just write a clear, finite, and precise set of steps:
1. Pick up phone book
2. Open to middle of book
3. Look at page
4. If person is on page
   - Call person
5. Else if person is earlier
   - Open to middle of left half
   - Go back to step 3
6. Else if person is later
   - Open to middle of right half
   - Go back to step 3
7. Else
   - Quit
Bugs, Functions, Conditionals, and Loops
If a programmer forgets to account for a certain situation, the program might break. Mistakes like these, cases the code fails to handle, are called bugs.
From our pseudocode:
The actions are what we’d call functions in code.
The decision points are conditionals (if/else).
The yes/no checks are boolean expressions (true/false).
The “go back to” lines are basically loops.
These are the four core components of most code you’ll write.
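Here’s a tiny Python program that uses all four at once:

```python
def is_even(n: int) -> bool:    # a function that returns a boolean expression
    return n % 2 == 0

for number in range(1, 6):      # a loop
    if is_even(number):         # a conditional
        print(number, "is even")
    else:
        print(number, "is odd")
```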
A Bit on Artificial Intelligence
Say you want to build a chatbot. You could write conditionals for common questions and hardcode responses.
But what about unexpected questions?
You’d need infinite if-statements. Not practical.
Instead, we train the chatbot on tons of data and let it learn patterns. This is where machine learning comes in, powering things like large language models, which are built on neural networks (loosely inspired by biology).
Neural networks use interconnected nodes (like neurons) with weighted connections to estimate the most probable answer for a given input.
PART 2 – SCRATCH: LEARNING TO CODE WITHOUT WRITING CODE
A deep dive into how Scratch teaches the core ideas of computer science.
🧱 What Is Scratch, Really?
Scratch is a visual programming language developed by MIT that lets you build programs by snapping together colorful blocks like digital LEGO.
No syntax to memorize. No semicolons. No weird compiler errors.
Just pure logic and creativity.
That might sound too simple, but don’t be fooled: Scratch is real programming.
You’re learning:
Functions
Conditionals
Loops
Events
Variables
Abstraction
But you’re learning them in a way that makes sense visually, even if you've never written code before.
🖥️ The Scratch Interface: A Quick Tour
Head to scratch.mit.edu and you’ll see three main areas:
Blocks Palette – All the code blocks grouped by color (Motion, Looks, Control, etc.)
Script Area – Where you snap blocks together to build your program.
Stage Area – Where your characters (called sprites) act out your code.
You also get:
A green flag to start the program
A red stop sign to end it
Tools to add sprites, sounds, and backdrops
🧠 Programming Concepts in Scratch
Let’s break down how real coding concepts show up in Scratch:
✅ Events
when green flag clicked
This is your entry point. It’s basically the Scratch version of main() in other programming languages.
💬 User Input
ask [What’s your name?] and wait
This asks the user a question and stores the answer in a special variable called answer.
🧩 String Operations
say (join [Hello ] (answer))
You're combining "Hello" with the user’s response. That’s string concatenation—a basic operation in most languages.
♻️ Loops
repeat (10)
move (10) steps
end
Repeats a block of code 10 times. This is a classic for loop.
🧪 Conditionals
if <touching color [#000000]> then
say [Ouch!]
end
This is an if statement—only runs the code inside if the condition is true. Scratch also supports else and nested ifs.
🌀 Forever Loops
forever
move (10) steps
end
This is basically while(true)—an infinite loop, used a lot in games.
🧠 Variables
set [score v] to [0]
change [score v] by (1)
You can create your own variables like score, lives, timer, and use them however you want.
💡 Broadcast & Receive
broadcast [game over]
when I receive [game over]
say [Thanks for playing!]
This is how you create events between sprites. It’s Scratch’s version of function calls or pub-sub messaging.
🎮 Why Scratch Matters
It teaches computational thinking in the clearest possible way:
You visually see how logic flows.
You instantly see the result of your actions.
You debug by dragging and rearranging blocks.
It’s fun. And fun means you'll actually learn.
Even professional programmers are amazed at how well Scratch conveys the basics. It teaches the foundations you’ll use in Python, JavaScript, C, or whatever you go on to learn next.
🔄 From Scratch to Real Code
Scratch is like the training wheels of programming, but it’s not childish. It introduces:
Decomposition – Breaking problems into steps
Abstraction – Hiding complexity in custom blocks
Reusability – Creating blocks (functions) you can reuse
Debugging – Catching logical errors and fixing them
And if you want to take things further, you can look at Snap!, Blockly, or MIT App Inventor — all inspired by the same philosophy.
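To make that bridge concrete, here’s roughly how the Scratch blocks from earlier might translate into Python; the mapping is conceptual rather than line-for-line:

```python
def main():                               # "when green flag clicked"
    name = input("What's your name? ")    # "ask [What's your name?] and wait"
    print("Hello " + name)                # "say (join [Hello ] (answer))"

    score = 0                             # "set [score v] to [0]"
    for _ in range(10):                   # "repeat (10)"
        score += 1                        # "change [score v] by (1)"

    if score >= 10:                       # "if <...> then"
        print("Thanks for playing!")      # "say [Thanks for playing!]"

main()
```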
🔚 Final Thought on Scratch
You might think, “Scratch is too simple for me.”
But Scratch teaches something deeper: thinking like a programmer.
No matter the language, that's the skill you’ll use the most.
Ultimately, computer science boils down to elegant problem-solving, and programming is the art of teaching machines how to solve those problems. By understanding the journey from input to process to output, and the underlying language of binary, you've gained insight into the very essence of digital technology. Whether you choose to dive deeper into Python, JavaScript, or continue experimenting with visual tools, remember that the most valuable skill you've cultivated is the ability to think like a programmer. And that's a skill that transcends any single language or tool.