
They only let me pick nine: looking back at Google I/O 2025 AI Announcements

(Who wants to create an AI product to help me keep up with all these AI products?)

If you use Google products, especially if you really love Google products, you might know about its yearly I/O conference, which showcases the latest from both the company and the developers working inside the Google ecosystem: Chrome, Android, etc. Lots of vision and imagination and plenty of positive vibes, but we’ll get to those in a bit.

But before we jump in, a reminder that announcing a product at a developers’ conference doesn’t mean it will always end up as an application on your device; these announcements are a mix of commercialized technology and hopeful roadmaps.

No surprise: AI Everywhere

While I/O 2025 wasn’t all about AI, everybody understands we’re in the middle of an AI moment – and honestly, after 20 years of cloud, cloud, cloud, it’s a nice change of pace. It’s a long list of innovations, and ideally there should be something for everybody.

But I’ve also talked to business decision-makers who get wary when vendors brag about AI first and solution second, especially in bigger, more risk-averse orgs. These same people have said the best way to convert an AI skeptic is to let them get their hands on the technology so they can experience and envision for themselves.

That’s the whole point of this list: to pull out cool ideas that will appeal to the AI-curious but also help convert AI skeptics who want to innovate but are concerned about risk. We’re going to blow past all the fancy video and audio generators you’ve probably already seen; those are old AI news by now. We’re also not going to get super nerdy; you don’t even have to know what I/O stands for. (It’s input/output, by the way.)

Expanding on the Gemini AI Model

Now, you probably should understand what Gemini 2.5 is. It’s Google’s foundational AI model, and lots of folks, including maybe your developers, use it to build AI applications and services. A lot of these announcements are Gemini features and controls – stuff that lets developers maximize innovation while reducing risks. So, if you read about 2.5 Pro or 2.5 Flash, those are Gemini versions.

First: Solutions that Engage and Inspire

We’re going to start with five I/O highlights that use AI to transform human connectivity and collaboration. Digital tools can’t replace human-to-human connections, but they can help us do more with them.

1. Google Meet: Bring a Translator to Your Next Chat

It’s a big, big world, and we’re both unified and separated by language. That’s why translation has always been a killer use case for digital tools, and AI is no exception. Google Meet will now support near real-time speech translation across languages, right in the call.

And according to Google (and the demo), it’s not just about translating the language itself. The listener hears speech that maintains the same quality, tone, and expressiveness as the original speaker.

2. Google Beam: Half Cool, Half Freaky Digital Intimacy

You probably spend A LOT of time on video calls, and you’re probably quite used to the fuzzy little avatars that represent you and the people you collaborate with. However good those filters get, it never quite feels like being in the same room. Google Beam sets out to change that, turning 2D conversations into 3D experiences with minimal special hardware.

It’s hard to describe and best left to a demonstration, and it definitely isn’t designed to elevate your weekly staff meeting. But it’s a great example of how AI can make remote work feel warmer and, hopefully, a little more real.

3. Stitch: Build for the Web Almost Instantly

If you’ve experimented with v0 or similar “text-to-web design” models, you understand the power of being able to just ask for a good-looking design and get it. Google’s new Stitch lets you easily build both standout UIs and the code required to bring them to life.

You can have a conversation with your project, using plain language to add features. And a differentiating feature (at least for now) is the ability to feed Stitch sketches and have it work from those instead of plain-language prompts. It’s another one of those tools that will definitely turn on some lightbulbs once people see what it can do.

4. Sparkify: Create Little Worlds Around Answers

If you’re mostly bored with the audio and video generation models, you’re not alone. While the technical capabilities of these tools, even Google’s own Veo, are astounding, the output still isn’t really storytelling – very pretty cut scenes at best. That’s what makes Sparkify so interesting.

Sparkify takes a question and delivers an expert answer in the form of a short, animated film. Check out the matcha example on the Google page to see it in action. Unlike the bigger foundational A/V models, the use cases for Sparkify feel much more straightforward, from internal knowledge sharing to customer education and beyond.

5. NotebookLM: Build Your Own Podcast Machine

Even in a year of crazy AI announcements, the 2023 release of NotebookLM made quite a splash. Google has always called it a research assistant, but it’s more than that. It can analyze up to 100 concurrent sources, performing deep analysis you can interact with in lots of ways.

What got NotebookLM so much attention was the ability to turn all this data into a multi-host podcast format, perfect for consuming complex info on the go. The I/O announcement puts the application into the Play Store, giving a whole new crowd the chance to build their own notebooks around the topics that matter most. It’s one of those tools that can really impress even the toughest AI skeptic.

Next: Solutions for Smarter, Safer AI

The last four highlights are less about creativity and connectivity and more about reducing risk and shortening time to value. These are still some very impressive ideas, even without the fun animation.

6. MCP SDK: Get Open Source Freedom Built Right In

Gemini now includes an SDK for MCP development. What does that mean?

A big barrier to faster AI development has been a lack of interoperability between foundational models and other components. Engineers refer to it as an “M×N integration problem”: the challenge of connecting M AI models to N tools or data sources. Before MCP, these connections had to be built one at a time.

Anthropic’s 2024 launch of the open source Model Context Protocol gave developers a common language around which to build these connections. SDK support in Gemini means developers can focus on vision and building, not the details of connections.
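To make the M×N idea concrete, here’s a minimal, hypothetical sketch (toy classes, not the real MCP client libraries) of how a shared protocol collapses M×N bespoke adapters into M+N implementations:

```python
# Without a shared protocol, every (model, tool) pair needs its own adapter:
# M models x N tools = M*N integrations. With a common protocol like MCP,
# each tool server implements the interface once and each model-side client
# speaks it once: M + N pieces total.

class ToolServer:
    """Hypothetical stand-in for an MCP-style tool server interface."""
    def list_tools(self):
        raise NotImplementedError
    def call_tool(self, name, args):
        raise NotImplementedError

class CalculatorServer(ToolServer):
    def list_tools(self):
        return ["add"]
    def call_tool(self, name, args):
        if name == "add":
            return args["a"] + args["b"]
        raise ValueError(f"unknown tool: {name}")

class EchoServer(ToolServer):
    def list_tools(self):
        return ["echo"]
    def call_tool(self, name, args):
        if name == "echo":
            return args["text"]
        raise ValueError(f"unknown tool: {name}")

def model_use_tool(server: ToolServer, name, args):
    """One model-side client, written once, works with every conforming server."""
    if name not in server.list_tools():
        raise ValueError(f"server does not offer {name}")
    return server.call_tool(name, args)

print(model_use_tool(CalculatorServer(), "add", {"a": 2, "b": 3}))  # -> 5
print(model_use_tool(EchoServer(), "echo", {"text": "hi"}))         # -> hi
```

The real MCP defines this contract over JSON-RPC with capability discovery; the point of the sketch is only the shape of the economics, not the wire format.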

7. Gemini Thinking Budgets: Manage Your Genius

Thinking budgets let businesses limit how long (and hard) a model thinks before responding to a prompt. Longer thinking generally yields better answers, but it also requires more tokens, time, and money. Budgets let teams fine-tune the tradeoff between answer quality and efficiency. (And this is one area where, so far, AIs are more advanced than humans: sticking to a budget.)

I mention this because it targets a common pain point for businesses: runaway costs. It still happens with the cloud, and people are very nervous about teams tinkering with AI only to receive a surprisingly huge invoice. I also love the idea of a thinking budget for humans. It sounds like an excellent excuse to quit work early on a Thursday: “Sorry, all out of thinking budget for the week. Try again Monday.”
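As a back-of-envelope illustration (made-up prices and token counts, not Gemini’s actual rates or API), the cost logic a thinking budget controls looks roughly like this:

```python
# Sketch of why capping "thinking" tokens caps spend: billed tokens are
# output tokens plus reasoning tokens, and the budget truncates the latter.
# All numbers here are illustrative assumptions.

def response_cost(output_tokens, thinking_tokens, price_per_1k=0.01, budget=None):
    """Estimated cost of one response; a budget caps the thinking tokens billed."""
    if budget is not None:
        thinking_tokens = min(thinking_tokens, budget)
    return (output_tokens + thinking_tokens) * price_per_1k / 1000

uncapped = response_cost(500, 8000)             # model thinks as long as it likes
capped = response_cost(500, 8000, budget=1000)  # thinking capped at 1,000 tokens
print(uncapped, capped)  # -> 0.085 0.015
```

In the Gemini API the real control is a per-request setting rather than a function like this; the sketch only models the billing intuition behind it.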

8. SynthID: Google Will Recognize Its Own AI Fakes

Everybody’s worried about the potential impact of deepfakes on life and work. Maybe you’re worried about fake news turning an election, or about that salesperson you know is faking airport parking receipts but just can’t catch. If either gets created with any of the Google AI tools, SynthID will find it.

The tool helps with more than making sure nobody gets reimbursed for parking a car that never left home. It provides an easy way to flag AI content when required by local laws, and it brings automated watermarking and detection capabilities into integrations with other Google products. Sniffing out BS has never been easier, especially since Google has apparently already tagged 10 billion pieces of content.

9. Google AI Edge Portal: Finally Making On-Device ML Really Happen

AI has been a dream for generations, and the availability of new kinds of compute has unlocked much of the innovation of the last few years. The next frontier is on-device machine learning, where the deep calculation and analysis currently done by GPUs in a data center is accomplished by the device itself.

If you’re trying to build locally, without a stack of expensive H100s under the hood, you’ve probably run into skeptics. And if you’ve ever tried to simulate what a website or app looks like across phones and tablets, you know how hard it is and why you need special tools. Now imagine trying to predict how that same variety of devices would deliver ML performance. The AI Edge Portal lets you do that at scale.

But wait, there’s more

We looked at only nine announcements from Google I/O 2025, barely managing to scratch the surface. Visit Google to read the whole list – and best of luck helping your business navigate this AI moment, where more change is probably the only certainty.

About the Author

Sean M. Dineen has spent over 20 years as a technical and marketing communicator with a strong focus on compliance and security. He has spent the last ten years helping leading B2B technology and security companies, from AMD and AT&T to NVIDIA and Palo Alto Networks, bring their solutions to market.
