Google announced a multitude of artificial intelligence (AI) updates and new features at I/O 2025 on Tuesday. Alongside these, the company shared its long-term vision for AI and how it wants to evolve its current line of AI products. Demis Hassabis, the Co-Founder and CEO of Google DeepMind, highlighted the new developments with Project Astra and Project Mariner, as well as the tech giant’s plans for Gemini Robotics. Google said that it eventually wants to build a universal AI assistant.
Project Astra, Mariner Updates Are Rolling Out to Select Users
In a blog post, Hassabis highlighted the company’s vision of creating a universal AI assistant. This assistant is described as a “more general and more useful kind of AI” that can understand a user’s context, plan tasks proactively, and take action on their behalf across devices. While that remains a long-term goal for Google DeepMind, the company has taken a significant step towards it with new capabilities in Project Astra and Project Mariner.
Project Astra covers the real-time capabilities of Gemini models. The first wave of these features has already rolled out to Gemini Live, which can now access a device’s camera and read on-screen content in real time. The project has also upgraded the voice output to a more natural-sounding voice with native audio generation, and it is adding improved memory and computer control capabilities.
In a demo shown during the Google I/O 2025 keynote, the upgraded Gemini Live could speak to the user expressively, be interrupted and resume the conversation from where it left off, and perform multiple tasks simultaneously in the background. Using computer control, it also made calls to businesses, scrolled through a document, and found information by searching the web.
These features are currently being tested by the company and will eventually be added to Gemini Live, AI Mode in Search, and the Live application programming interface (API). They will also come to new form factors such as smart glasses.
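For developers, the Live API already exists in the public google-genai Python SDK, and it gives a sense of what these real-time capabilities look like in code. The snippet below is a minimal sketch, assuming the SDK’s documented client.aio.live.connect entry point and the gemini-2.0-flash-live-001 model name; it is not Google’s Astra implementation.

```python
# pip install google-genai
# Minimal sketch of a real-time Gemini Live API session. Model name and
# config keys are assumptions drawn from the public google-genai SDK docs.
import asyncio

from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key


async def main() -> None:
    # Open a bidirectional streaming session. TEXT keeps the demo simple;
    # AUDIO would exercise the native audio generation described above.
    config = {"response_modalities": ["TEXT"]}
    async with client.aio.live.connect(
        model="gemini-2.0-flash-live-001", config=config
    ) as session:
        await session.send_client_content(
            turns={"role": "user", "parts": [{"text": "Summarise this page."}]},
            turn_complete=True,
        )
        # Stream the model's reply incrementally as it is generated.
        async for response in session.receive():
            if response.text is not None:
                print(response.text, end="")


asyncio.run(main())
```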
Next is Project Mariner, which develops agentic capabilities in Gemini. Since its launch in December 2024, Google has been exploring various research prototypes of human-agent systems through it. The company has also previewed a browser-focused AI agent that can make restaurant reservations and book appointments.
Google said that Project Mariner now includes a system of agents that can complete up to 10 different tasks simultaneously. The agents can also be used to buy products and conduct research online. These updated capabilities are now being made available to Google AI Ultra subscribers in the US.
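Google has not shared how Mariner schedules these parallel agents, but the 10-task cap maps onto a standard bounded-concurrency pattern. The following Python sketch is purely illustrative; run_agent_task is a hypothetical stand-in for a Mariner job, not a real Google API.

```python
# Hypothetical sketch: cap concurrent agent runs at 10, the limit Google
# cites for Project Mariner. run_agent_task is an illustrative stand-in.
import asyncio

MAX_CONCURRENT_TASKS = 10


async def run_agent_task(task: str) -> str:
    # Stand-in for one browser-agent job (e.g. "book a table for two").
    await asyncio.sleep(1)  # simulate the agent working
    return f"done: {task}"


async def run_all(tasks: list[str]) -> list[str]:
    semaphore = asyncio.Semaphore(MAX_CONCURRENT_TASKS)

    async def bounded(task: str) -> str:
        async with semaphore:  # at most 10 agents in flight at once
            return await run_agent_task(task)

    return await asyncio.gather(*(bounded(t) for t in tasks))


print(asyncio.run(run_all([f"task {i}" for i in range(25)])))
```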
Developers using the Gemini API will also get its computer use capabilities. DeepMind also plans to bring these capabilities to more products later this year.
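Google has not yet published the computer use tooling itself, but the Gemini API’s existing function-calling support hints at how it could be wired up. In the hypothetical sketch below, the click tool is an illustrative invention; only the google-genai function-calling machinery is real.

```python
# Hypothetical sketch of routing "computer control" through the Gemini API's
# existing function-calling support (google-genai SDK). The click tool is an
# illustrative invention; Google has not published Mariner's tool schema.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

# Declare a hypothetical UI-control tool the model may choose to call.
computer_tools = types.Tool(
    function_declarations=[
        types.FunctionDeclaration(
            name="click",
            description="Click an on-screen element by its label.",
            parameters=types.Schema(
                type=types.Type.OBJECT,
                properties={"label": types.Schema(type=types.Type.STRING)},
                required=["label"],
            ),
        ),
    ]
)

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Open the reservations page and click 'Book a table'.",
    config=types.GenerateContentConfig(tools=[computer_tools]),
)

# If the model decided to act, it returns a structured function call that
# an agent harness would execute against a real browser.
for part in response.candidates[0].content.parts:
    if part.function_call:
        print(part.function_call.name, dict(part.function_call.args))
```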
Gemini Robotics and World Models
During the keynote session, Google also spoke about world models. These are powerful foundation AI models with a deep understanding of real-world physics and spatial intelligence, which makes them well suited to training robots via simulation.
Google said it is using Gemini 2.0 models for its Gemini Robotics division, a platform to train and develop humanoid and non-humanoid robots. The platform is currently being tested with trusted testers.