Let’s Decode Google I/O 2025

Some days you dive into code. Other days, you sit back, sip chai (or coffee if you must), and try to make sense of everything Sundar Pichai just casually dropped like it’s no big deal. Today’s one of those days. Google I/O 2025 was less of a keynote and more of a full-blown “AI is eating the world” performance. And honestly? I’m here for it.

Let’s break it down!


🚢 “Shipping at a Relentless Pace” – and they mean it

Gone are the days when tech giants waited for I/O to launch their shiny toys. In this Gemini era, Google just yeets state-of-the-art models on a random Tuesday. The highlight? Gemini 2.5 Pro now sweeps LMArena like it’s cleaning house. They’ve cranked their Elo scores up by over 300 points since the OG Gemini Pro. Casual.
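For a feel of what a 300-point Elo jump actually buys you, here's a quick back-of-the-envelope using the standard Elo expected-score formula (a rough sketch; LMArena's exact rating methodology may differ in the details):

```python
def elo_win_probability(rating_diff: float) -> float:
    """Expected win probability for the higher-rated model,
    under the standard Elo model with a 400-point scale."""
    return 1 / (1 + 10 ** (-rating_diff / 400))

# A ~300-point gap, like Gemini 2.5 Pro over the original Gemini Pro:
print(round(elo_win_probability(300), 3))  # → 0.849
```

In other words, a 300-point gap means the newer model would be expected to win roughly 85% of head-to-head matchups. That's a big jump.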

Also, the new Ironwood TPUs? 42.5 exaflops per pod. That’s not a typo. That’s just obscene compute power casually packed into a machine. Apparently, AI inference is the new GPU arms race – and Google’s lifting heavy.


🌍 480 Trillion Tokens Later…

Let’s talk scale. Last year they were processing 9.7 trillion tokens a month. Now it’s 480 trillion. That’s not growth. That’s exponential puberty. And it’s not just devs geeking out.

  • 400M+ monthly Gemini users

  • 7M+ developers building with Gemini

  • Usage on Vertex AI? Up 40x.

Google has clearly figured out how to get its models everywhere – in your apps, in your systems, and soon, in your cereal.
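Just to put that token jump in plain numbers (a back-of-the-envelope calculation using the figures above):

```python
# Monthly tokens processed, per the keynote figures.
tokens_last_year = 9.7e12    # 9.7 trillion per month
tokens_now = 480e12          # 480 trillion per month

growth = tokens_now / tokens_last_year
print(f"{growth:.1f}x year-over-year")  # → 49.5x year-over-year
```

Nearly 50x in a single year. "Exponential puberty" checks out.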


👋 Project Starline → Google Beam: Holograms, But Real

Remember Starline – that futuristic 3D calling concept? It just got real. Now called Google Beam, it uses six cameras, real-time head tracking, and a new AI-first video model to give you full-on 3D presence.
Think Zoom… but if Zoom went to the gym, studied optics, and came out with an HP partnership.

Also – real-time speech translation in Google Meet is here. Matching your voice, tone, and facial expressions. English ↔ Spanish to start. So yes, your meetings might finally be both global and understandable.


👁️ Project Astra → Gemini Live

Now this is where it gets wild. Gemini Live is giving full-on Black Mirror vibes – in a good way (hopefully). Your phone camera becomes the eyes of the assistant. People are already using it to prep for interviews, plan marathons, and probably ask it if their outfit slaps.

Screen sharing, file uploads, and real-time assistant magic — all baked into your phone. Rolling out on Android now, iOS catching up.


🕹️ Project Mariner → Agent Mode

This one is huge. Google’s building the agent economy.
Agent Mode can now use a computer like a person – click things, search Zillow, filter listings, and even schedule a tour. With "teach and repeat", you only have to show it once.

It’s like having an intern. A very competent, tireless, slightly sentient intern.

Bonus points to Google for backing interoperability. Their Agent2Agent protocol is now playing nice with Anthropic’s Model Context Protocol. In plain English? Agents can now talk to each other like it’s the beginning of an AI Avengers crossover.


✉️ Personal Context – Smart Replies That Actually Sound Like You

Gemini will soon dig through your Docs, Drive, and Gmail (with permission) to craft hyper-personalised smart replies.
Your friend asks for road trip tips. Gemini will find that chaotic itinerary from 2018, capture your tone, and maybe even sneak in your usual “cheers, bro” signoff.

Gmail replies that actually sound like you wrote them? My inbox might finally stand a chance.


🔍 AI Mode in Search – A Full Redesign

We’ve seen AI Overviews. But now, AI Mode is a full-on tab in Google Search.

  • You can ask longer, more complex queries

  • You can follow up naturally

  • You’ll actually want to scroll down

It’s now live in the U.S. and coming soon elsewhere. Google is effectively turning Search into a conversation — but with the world’s most overqualified librarian.


⚡ Gemini 2.5 Pro + Flash + Deep Think

We’re now entering boss-level model mode:

  • Gemini 2.5 Flash: Fast, cheap, and nearly as good as Pro.

  • Gemini 2.5 Pro: Getting a turbo boost called Deep Think — a new reasoning mode using parallel thinking.

It’s like Gemini got a brain upgrade and now thinks in multiple tabs simultaneously.


🎨 Media Models Go Full Hollywood

Enter Veo 3 (AI video with sound) and Imagen 4 (top-tier AI images). These are already in the Gemini app. And there’s Flow – a new filmmaker tool that lets you stitch scenes and extend clips.

If you’re creative, this is your playground. If you’re not, Flow might make you one.


💡 The Big Picture

What really stood out to me wasn’t just the firehose of features. It was how personal this AI wave is getting.

  • From personalization in Gmail

  • To immersive video calls

  • To AI assistants that know your context, tone, and tasks

  • And models that think better, faster, deeper

Google’s clearly aiming for AI that doesn’t just work — it works for you. And maybe that’s the ultimate unlock: AI that understands your world, your files, your tone, your mess — and helps you get through the day like a silent, brilliant partner.


Final Thought

Sundar ended his talk with a sweet anecdote about his dad being wowed by Waymo. It’s easy to forget that the stuff we build — the tech, the models, the hype — eventually lands in the hands of real people. People who are just trying to get home, or reply to an email, or call their family from another city.

And when that tech makes life a little easier, a little more magical — that’s when it hits different.

Google I/O 2025 wasn’t just about product updates. It was a quiet declaration:
The future is here. It just wants to be useful.


If you're still reading, go play with the Gemini app — and maybe tell it to write your next email. You might be surprised by how much it sounds like… well, you.

PS – If you haven’t played the I/O game yet, just go and check it out here.


Written by

Sirsho Chakraborty

Graduated from KIIT, Bhubaneswar in 2023 with a B.Tech in CS, majoring in AI and Computational Mathematics. For me, Covid was a blessing in disguise: I got plenty of time at home, tinkering and building stuff. Tried IoT, app development, backend, cloud. Did a few internships in Flutter in my second year of college, then moved to full stack, focusing mainly on backend. Single-handedly built a WhatsApp-like video-calling solution for a CA-based social media company. Teaching was also a passion, so I started an ed-tech platform with a friend, Sridipto. That's our first venture together: Snipe. We raised some capital from a Bangalore-based VC during my third year of college, came to Bangalore, and scaled Snipe to around a million users. But monetisation was a challenge, and the downfall of ed-tech made it worse. We had to pivot. Gamification was our core, so we switched to a B2B model and got some early success, onboarding a few big names: Burger King, Pedigree, Saffola, among others. Cut to September 2024: we're a team of 20+, and business is doing well. But we realised scaling is a problem. We can't just remain a gamification-services company. So we thought, let's build something big. Let's Build the Future of Computing. The biggest learning: if you have a big problem, break it up into smaller problems. Divide and Conquer. It becomes a lot easier.