How Google just revamped Gemini Enterprise for the agentic era – here’s what’s new



Iana Kunitsa/Moment via Getty Images



ZDNET’s key takeaways

  • Google updated Gemini Enterprise tools for agentic AI at Cloud Next. 
  • A new Agent Platform streamlines automated work and security. 
  • Google also upgraded Workspace and data infrastructure. 

As companies use more agents in their workflows, managing them securely and efficiently becomes a primary challenge. Google just created a possible solution, wrapped in the same accessible interface that many teams are used to. 

On Wednesday at Google Cloud Next, the company’s annual enterprise conference, Google released its new Gemini Enterprise Agent Platform for developers. Evolved from Vertex AI, Agent Platform “brings together the model selection, model building, and tuning services of Vertex AI that customers love, along with new features for agent integration, security, DevOps, orchestration, and more,” CEO Thomas Kurian said in the announcement. 

Also: This powerful Gemini setting made my AI results way more personal and accurate

The platform revamps the current Gemini Enterprise experience and offers over 200 models, including Gemini 3.1 Pro, Nano Banana 2, Gemma open models, and competing models from Anthropic, such as its just-released Opus 4.7. Because Agent Platform is built on Vertex AI, Google noted, those Vertex services will now flow exclusively through Agent Platform. 

All-in-one agent building 

In the platform, according to Google, developers can design an agent’s life cycle start to finish, from building the agents themselves to scaling and governing them. MCP support and an upgraded Agent Development Kit help developers maximize reasoning capabilities by structuring agents into sub-networks. That tiered approach should set agents up to handle complex tasks, Google said, adding that other features like faster runtime and Memory Bank help agents delegate to each other more efficiently and operate with more context for longer. 
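
To picture that tiered structure, here is a minimal sketch using the Python package for Google's Agent Development Kit (google-adk). The agent names, instructions, and model string are illustrative assumptions, not details from the announcement, and the exact API surface may differ in the upgraded Agent Platform release.

```python
# A minimal sketch of a tiered agent structure with Google's Agent
# Development Kit (the `google-adk` Python package). Names, instructions,
# and the model string are illustrative placeholders.
from google.adk.agents import LlmAgent

# Leaf agents each handle one narrow element of the larger task.
inventory_agent = LlmAgent(
    name="inventory_checker",
    model="gemini-2.0-flash",  # placeholder; substitute your deployed model
    description="Answers questions about current stock levels.",
    instruction="Look up stock levels and report shortages concisely.",
)

marketing_agent = LlmAgent(
    name="campaign_planner",
    model="gemini-2.0-flash",
    description="Drafts promotional copy for products.",
    instruction="Propose short promotional copy for overstocked items.",
)

# The root agent reasons about the overall request and delegates to the
# sub-agents above, mirroring the sub-network structure described above.
coordinator = LlmAgent(
    name="coordinator",
    model="gemini-2.0-flash",
    instruction=(
        "Break the user's request into inventory and marketing subtasks "
        "and delegate each to the matching sub-agent."
    ),
    sub_agents=[inventory_agent, marketing_agent],
)
```

In ADK, the framework routes a request from the coordinator to whichever sub-agent's description best matches the subtask, which is the kind of agent-to-agent delegation the announcement describes speeding up.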

“Gemini Enterprise is now an end-to-end system for the agentic era, built for agents that can execute complex, multi-step work processes,” Google said in the announcement. 

Also: Prolonged AI use can be hazardous to your health and work: 4 ways to stay safe

The company also emphasized that it has baked security into the new platform through tools such as Agent Identity, which assigns each agent a cryptographic ID. If you’d rather not take any risks, however, you can use Google’s new Agent Simulation tool to “stress-test your agents against real-world scenarios before they ship,” the company said. 

Once developers are done building and testing, they can publish agents from the platform to the Gemini Enterprise app, where employees can run those agents or build their own with no-code or low-code options like Google's Agent Studio and Agent Designer. 

A Google employee demonstrated how users can deploy multiple agents in the enterprise app at once to tackle an inventory or marketing challenge, as if they were a team of workers. In the demo, each individual agent handled a specific element of a multi-step project for a furniture company, using the organization’s Workspace contents to pull relevant data and strategy points. 

Security 

Running multiple autonomous agents can pose a host of privacy and security risks for any organization, especially when non-developer employees use them. Google emphasized that its revamped Gemini Enterprise addresses this by enforcing simplified guardrails and permissions before users can access agents. The company said it "provides the same level of oversight and auditability found in essential business applications like payroll or quarterly financial reporting."

Also: I tested ChatGPT Plus vs. Gemini Pro to see which is better – and if it’s worth switching

The Gemini Enterprise app sits atop Agent Platform, which Google said standardizes governance and security. 

“We provide a single control plane for governance in Agent Platform, so every employee can use and share agents with full IT visibility,” the company added. “Both no-code and pro-code agents are managed through a consistent model for identity, security, and auditing.” 

Other announcements 

Google also announced Agentic Data Cloud, a new data architecture intended to help scale AI agents. Several new features let developers instantly query data without moving it out of AWS or Azure, leverage new data science tools across multiple surfaces, and enrich files with metadata to give agents more semantic context, among other capabilities. 
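
The announcement doesn't detail the query interface, but the query-in-place, cross-cloud pattern it describes resembles what BigQuery Omni already does today: the dataset stays resident in an AWS or Azure region while queries run through the standard BigQuery client. A minimal sketch under that assumption, with hypothetical project and table names:

```python
# A hedged sketch of querying AWS-resident data in place, in the style of
# BigQuery Omni. The project, dataset, and table names are hypothetical;
# Agentic Data Cloud's actual interface may differ.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project

sql = """
    SELECT sku, SUM(quantity) AS on_hand
    FROM `my-gcp-project.aws_sales_dataset.inventory`
    GROUP BY sku
    ORDER BY on_hand ASC
    LIMIT 10
"""

# `aws-us-east-1` is a BigQuery Omni location for AWS-resident data;
# the data stays in AWS storage, and only the query results come back.
for row in client.query(sql, location="aws-us-east-1").result():
    print(row.sku, row.on_hand)
```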

At the Workspace level, Google launched Workspace Intelligence, which uses Gemini reasoning to understand “complex semantic relationships within your Workspace apps (such as Docs, Slides, or Gmail) content, your active projects, your collaborators, and your organization’s domain knowledge,” the company wrote. 

Also: Scaling agentic AI demands a strong data foundation – 4 steps to take first

While that may sound like what Gemini already does, Google framed Workspace Intelligence as an additional tool that Gemini will leverage when automating tasks such as slide generation and project prep. Google noted a few upgrades in the new feature, including proprietary infographics in Docs and advanced personalization tailored to a user’s style. 

“Workspace Intelligence retrieves your relevant emails, chats, files, and information from the web to transform ideas into professionally formatted drafts that mimic your exact voice, brand, style, and company templates,” Google said. 







VueBuds: AI-powered cameras in everyday earbuds



Researchers at the University of Washington have developed a new prototype system that could change how people interact with artificial intelligence in daily life. Called VueBuds, the system integrates tiny cameras into standard wireless earbuds, allowing users to ask an AI model questions about the world around them in near real time.

The concept is simple but powerful. A user can look at an object, such as a food package in a foreign language, and ask the AI to translate it. Within about a second, the system responds with an answer through the earbuds, creating a seamless, hands-free interaction.

A Different Approach To AI Wearables

Unlike smart glasses, which have struggled with adoption due to privacy concerns and design limitations, VueBuds takes a more subtle approach. The system uses low-resolution, black-and-white cameras embedded in earbuds to capture still images rather than continuous video.

These images are transmitted via Bluetooth to a connected device, where a small AI model processes them locally. This on-device processing ensures that data does not need to be sent to the cloud, addressing one of the biggest concerns around wearable cameras.

To further enhance privacy, the earbuds include a visible indicator light when recording and allow users to delete captured images instantly.

Engineering Around Power And Performance Limits

One of the biggest challenges the research team faced was power consumption. Cameras require significantly more energy than microphones, making it impractical to use high-resolution sensors like those found in smart glasses.

To solve this, the team used a camera roughly the size of a grain of rice, capturing low-resolution grayscale images. This approach reduces battery usage and allows efficient Bluetooth transmission without compromising responsiveness.

Placement was another key consideration. By angling the cameras slightly outward, the system achieves a field of view between 98 and 108 degrees. While there is a small blind spot for objects held extremely close, researchers found this does not affect typical usage.

The system also combines images from both earbuds into a single frame, improving processing speed. This allows VueBuds to respond in about one second, compared to two seconds when handling images separately.
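
The paper's exact pipeline isn't described here, but the frame-combining step is straightforward to sketch: concatenating the two grayscale captures side by side lets a single model call cover both views. A hypothetical illustration with NumPy and Pillow, using assumed file names:

```python
# Hypothetical sketch of the frame-combining step: stitch the two
# low-resolution grayscale captures side by side so a single model call
# covers both earbuds' views. File names and sizes are assumptions,
# not the paper's actual parameters.
import numpy as np
from PIL import Image

def combine_frames(left: Image.Image, right: Image.Image) -> Image.Image:
    """Pad both grayscale captures to a common height, then concatenate."""
    h = max(left.height, right.height)
    canvas = np.zeros((h, left.width + right.width), dtype=np.uint8)
    canvas[: left.height, : left.width] = np.asarray(left.convert("L"))
    canvas[: right.height, left.width :] = np.asarray(right.convert("L"))
    return Image.fromarray(canvas)

# One combined frame means one inference pass instead of two, which is
# where the roughly one-second vs. two-second difference comes from.
combined = combine_frames(Image.open("left_bud.png"), Image.open("right_bud.png"))
combined.save("combined_frame.png")
```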

Performance Compared To Smart Glasses

In testing, 74 participants compared VueBuds with smart glasses such as Meta’s Ray-Ban models. Despite using lower-resolution images and local processing, VueBuds performed similarly overall.

The report showed participants preferred VueBuds for translation tasks, while smart glasses performed better at counting objects. In separate trials, VueBuds achieved accuracy rates of around 83–84% for translation and object identification, and up to 93% for identifying book titles and authors.

Why This Matters And What Comes Next

The research highlights a potential shift in how AI-powered wearables are designed. By embedding visual intelligence into a device people already use, the system avoids many of the barriers faced by smart glasses.

However, limitations remain. The current system cannot interpret color, and its capabilities are still in early stages. The team plans to explore adding color sensors and developing specialized AI models for tasks like translation and accessibility support.

The researchers will present their findings at the Association for Computing Machinery Conference on Human Factors in Computing Systems in Barcelona, offering a glimpse into a future where everyday devices quietly become intelligent assistants.


