AI is no longer a co-pilot. It wants to drive.
Remember that popular saying about wanting AI to do chores, but instead it’s doing creative work while we’re still stuck with laundry? Google’s I/O has just happened, and if you’re wondering whether AI has finally put on an apron and started scrubbing, the answer is: not quite. But it’s definitely ready to help you find cheaper baseball tickets while translating your terrible small talk into Hindi in real time.
Out of the hundred-plus Google I/O announcements, Gemini was mentioned so often that you could have turned it into a drinking game (though we wouldn’t recommend it if you wanted to make it to the end of the keynote). The conference itself was a bit awkward, and the crowd was tough: presenters would pause for seven seconds, waiting for applause after each update, only to find themselves standing in uncomfortable silence.
Yet the 2025 edition of Google I/O was packed with some really cool features. You can now talk with AI (Project Astra) while recording your surroundings, and ask it questions you’re too embarrassed to ask a real human. Then there are the Android XR glasses. The idea that Gemini can teleport you around Google Maps in XR, or that the glasses can give you directions, identify objects, and handle communication without your phone ever leaving your pocket, is some serious Black Mirror stuff.
But we’re not here for gadgets and consumer tech (you can read the full list of announcements here or watch the keynote). We’re diving into the Google I/O summary of cloud technologies that professionals can leverage to build, deploy, and manage solutions. Let’s dig in!
Google Gemini 2.5: More brainpower and a longer attention span

The centerpiece of Google’s generative AI tools is Gemini 2.5, the latest generation of their large language model. Gemini 2.5 Pro now boasts a staggering one-million-token context window, meaning it can ingest whole codebases or encyclopedic documents in one go.
Both the high-end Pro and the latency-optimized Gemini 2.5 Flash models bring improved reasoning, better coding abilities, and stronger multitasking skills, performing better on complex tasks and handling long documents with ease.
Thought summaries and “thinking budgets” in Vertex AI
All that brainpower could be hard to wrangle, but Google is giving developers new tools to tame and trust their AI. Vertex AI (Google Cloud’s managed AI platform) now offers thought summaries for Gemini models, essentially letting you peek into the model’s reasoning process. It can provide a behind-the-scenes summary of the AI’s “raw thoughts” (including which tools it used) so you have transparency and auditability about how it’s reaching answers.
Developers also gain granular control over how much the model thinks before it speaks. Google introduced the concept of “thinking budgets”, a setting that adjusts how much computational effort (and time) the model spends reasoning on a task.
For simple queries, you can dial down the budget so the model responds faster and cheaper; for gnarly problems, dial it up to let the AI deeply ponder (at a higher compute cost). The model will even auto-adjust its effort based on query complexity when allowed.
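To make those dials concrete, here’s a minimal sketch of how a request body with a thinking budget might be assembled. The `thinkingConfig`, `thinkingBudget`, and `includeThoughts` field names follow the shape of the public Gemini API; the budget-picking heuristic, helper names, and specific budget values are purely illustrative assumptions, not Google’s implementation.

```python
def choose_thinking_budget(prompt: str) -> int:
    """Illustrative heuristic: spend more reasoning tokens on gnarly
    tasks, none on trivial lookups. By convention in the Gemini API,
    -1 lets the model auto-adjust its effort and 0 disables extended
    thinking (the thresholds and keywords below are made up)."""
    if "refactor" in prompt.lower() or "prove" in prompt.lower():
        return 8192   # gnarly problem: let the model deeply ponder
    if len(prompt) < 80:
        return 0      # short, simple query: respond fast and cheap
    return -1         # otherwise, let the model pick its own budget

def build_request(prompt: str) -> dict:
    """Assemble a generateContent-style request body with a thinking
    budget set and thought summaries enabled."""
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            "thinkingConfig": {
                "thinkingBudget": choose_thinking_budget(prompt),
                "includeThoughts": True,  # ask for thought summaries
            }
        },
    }

req = build_request("Refactor this 2,000-line module and explain each step.")
print(req["generationConfig"]["thinkingConfig"]["thinkingBudget"])  # 8192
```

The point of the sketch is the shape of the trade-off: the budget is just another request parameter, so an application can tune cost and latency per call rather than per model.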
Together, these Vertex AI enhancements give cloud developers both visibility into the AI’s mind and dials to control its output, making enterprise AI deployments more predictable and trustworthy.
Jules: An async coding assistant (beta)
One of the flashier reveals at I/O was Jules, Google’s new autonomous coding agent. Billed as a self-directed dev assistant, it promises to handle the grunt work: writing tests, fixing bugs, editing code across large repositories, and pushing clean, usable commits. It even spins up a virtual machine to do the work, creates branches, and integrates seamlessly with GitHub and Vercel.
One developer described using Jules to add language selection to a cover letter generator app. According to them, the tool correctly mapped out the necessary edits, introduced only surgical changes, created a separate branch, and deployed the result through Vercel. The developer praised Jules’ ability to make small, clear modifications rather than overwriting entire functions or hallucinating unnecessary changes.
But across the broader developer community, feedback has been mixed. Several users reported that Jules tends to generate very generic solutions. It doesn’t adapt well to project-specific libraries or follow conventions already in place. Another developer said that even with detailed instructions, Jules introduced new bugs, ignored environment variables, and broke existing functionality in their Next.js app.
Performance also seems to be a sticking point. Users described the interface as slow and buggy, with GitHub commits stalling out or getting stuck indefinitely. In some cases, Jules would say it had completed a task when nothing had actually happened.
In its current state, Jules is useful for experimentation and light scaffolding, but it’s not ready to be left unattended on production code. Developers still need to guide it closely, check its outputs, and clean up the mess if something goes sideways. If you’re expecting a reliable AI engineer, you’ll be disappointed. If you’re looking for a capable intern who sometimes forgets what they were doing, you might be pleasantly surprised.
Project Mariner: A multi‑tasking AI agent
Project Mariner is Google’s most ambitious effort yet to give Gemini agency in the real world. Instead of simply summarizing pages or generating responses based on static data, Gemini can now actually use the internet. It clicks through websites, scrolls down pages, fills out forms, and does it all in a live browser session.
This is not theoretical. It is Gemini running a headless browser, interpreting screen content like a human would, and following prompts to perform tasks online. That means it can comparison shop, update spreadsheets with data from vendor dashboards, or even cancel your old subscription to that mindfulness app you forgot about.
The big question here is oversight. Because Gemini operates visually, Mariner relies on screen captures to understand and interact with the web. That raises some natural concerns around privacy. Google has said very little about how long these snapshots are kept or where they are stored. This is worth watching, especially for enterprise use.
TPU Ironwood: 10× performance under the hood
All these AI advancements need serious horsepower, and Google delivered at the silicon level with a new TPU dubbed “Ironwood.” This is Google’s seventh-generation Tensor Processing Unit, and it’s an absolute beast.
In plain terms, Ironwood brings a 10× speed boost to AI inference workloads, a leap that will drastically boost throughput for large models. It’s also the first TPU built specifically to power “thinking and inferential” AI models at scale, underscoring that it’s tuned for the new generation of AI that isn’t just regurgitating info but reasoning and taking actions. By scaling up to 9,216 chips in a pod, Ironwood effectively creates a hypercomputer to fuel Google’s AI services and cloud customers’ most demanding apps.
For developers, the practical upshot is more compute on tap: those using Google Cloud’s AI infrastructure can train and deploy larger models or get faster results without spinning up as many instances. Ironwood is the heavy metal powering all these smart agents and models and a reminder that behind every great AI, there’s a lot of matrix math being crunched on a very fast chip.
Still, it is worth noting that unless you happen to work inside a hyperscale data center, you will never see one of these chips. Google is not selling Ironwood. The hardware stays in-house and powers Google Cloud services exclusively.
From a business perspective, it makes perfect sense. Google ensures that developers and enterprises who want to use the latest models at full throttle will do so through its infrastructure.
But this model creates an odd kind of innovation gatekeeping. The hardware that could enable breakthroughs in research, simulation, or high-intensity training remains locked inside Google’s ecosystem. This might be the shape of the future: a few companies owning the fastest machines, renting out slivers of compute when they need it.
Final thoughts
By the end of the I/O keynote, one thing was clear. Google wants you to stop writing code and start orchestrating agents. The message was repeated like a chant: AI will write the functions, AI will test the app, AI will deploy the build. All of this must have been music to the ears of cloud professionals who’ve spent half their careers mastering programming languages, frameworks, and design patterns.
There’s no denying the ambition on display. This was not Google chasing the pack. This was Google reminding everyone why it has the deepest pockets and the biggest brain trust in the business. From Project Mariner’s web-automating agents to Jules’ GitHub commits, the message was clear. AI is no longer a co-pilot. It wants to drive.
Still, some skepticism is healthy. Alphabet’s stock dipped after the announcements, though that was probably less about disappointment and more about investor whiplash. Many of these features are not available yet; others are in early stages. Some will thrive. Some will quietly disappear, like other ambitious moonshots before them.
And yet, among the sizzling demo reels, a few inconvenient truths poked through. Most of what was shown worked beautifully in isolated demos (most of the time). In the real world, adoption will make or break it.
Jules, for example, is powerful but finicky. It handles basic scaffolding, pushes commits, and even launches previews. But it stumbles with anything too custom, too weird, or too real. It is not replacing developers. At best, it is a promising intern who sometimes forgets what you asked it to do.
The same goes for Project Mariner. An AI that browses the web on your behalf sounds like magic until you think about the implications. What does it mean to give a machine the power to interact with websites, fill out forms, and interpret screens like a human? The productivity gains are obvious. So are the privacy headaches.
Even the sheer hardware power of Ironwood, Google’s new TPU capable of powering reasoning-heavy models, has its own flavor of exclusivity. You cannot buy it. You can only rent its capabilities through Google Cloud. For most developers, this is a black box you pay to access, not a tool you can truly wield. It raises the question of how open this next wave of AI will be. Will innovation belong to everyone, or will it be sold in compute-hour increments from behind glass?
As virtual Einstein said at CloudFest 2025: “With any powerful technology, a balance is needed, fostering innovation while establishing clear guidelines to prevent misuse. This requires global collaboration, open dialogue, and a deep understanding of the potential consequences. We must not let the pursuit of progress overshadow the preservation of humanity’s well-being.”
We get it, this stuff can feel a little heavy. Here’s ChatGPT spitting some bars against a real human. Its debut against Gemini is highly anticipated.