Broadcom releases VMware Fusion security update for root access bug

Pierluigi Paganini
May 14, 2026

Broadcom patched a high-severity VMware Fusion flaw, CVE-2026-41702, that could let local attackers gain root privileges.

Broadcom released a security update for VMware Fusion to address a high-severity vulnerability, tracked as CVE-2026-41702, that could allow local attackers to escalate privileges to root on affected systems.

The flaw is a time-of-check/time-of-use (TOCTOU) vulnerability affecting operations performed by a SETUID binary. It was reported by security researcher Mathieu Farrell.

Broadcom explained that an attacker with local non-administrative user privileges can exploit the bug to escalate privileges to root on the system where Fusion is installed.

“A local privilege escalation vulnerability in VMware Fusion was privately reported to Broadcom,” reads the advisory. “Updates are available to remediate this vulnerability in affected Broadcom products.”

Successful exploitation could allow attackers with limited access to gain full control of vulnerable machines, significantly increasing the risk posed by compromised user accounts or insider threats.

TOCTOU vulnerabilities occur when a system checks the state of a resource and later uses it without ensuring that the state has not changed in the meantime. Attackers can exploit this timing gap to manipulate files, permissions, or other resources and execute unauthorized actions with elevated privileges.
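The pattern is easiest to see in code. Below is a minimal Python sketch of the generic check-then-use race described above; it is purely illustrative and is not VMware's code (the function names and the file-reading scenario are hypothetical, chosen only to show the timing gap and one common mitigation).

```python
import os

def read_if_allowed(path):
    """Vulnerable pattern: check, then use, as two separate steps."""
    # TIME OF CHECK: decide whether access should be allowed.
    if not os.access(path, os.R_OK):
        raise PermissionError(path)
    # ...race window: between the check and the open, an attacker can
    # replace `path` (e.g. with a symlink to a root-only file). In a
    # SETUID program, the open below runs with elevated privileges...
    # TIME OF USE:
    with open(path) as f:
        return f.read()

def read_safely(path):
    """Safer pattern: open first, then inspect the object you opened.

    The check and the use refer to the same open file descriptor, so a
    last-moment swap of the path no longer changes what gets read.
    """
    fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)  # refuse symlinks
    try:
        st = os.fstat(fd)  # metadata of the file actually opened
        # ...authorization decision based on `st` would go here...
        with os.fdopen(fd, "r") as f:
            fd = -1  # ownership transferred; `with` will close it
            return f.read()
    finally:
        if fd != -1:
            os.close(fd)
```

The key design point is that `os.fstat` operates on the already-opened descriptor rather than on the path, eliminating the window in which the path can be re-pointed at a more privileged resource.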

VMware Fusion is widely used by developers, IT professionals, and security researchers to run virtual machines on macOS systems. Because the vulnerability requires local access, it does not expose systems directly to remote compromise. However, privilege escalation flaws remain highly valuable to attackers because they can turn a limited foothold into complete system compromise.

The patch arrives as Broadcom participates in the Pwn2Own hacking competition taking place this week in Berlin. The event, organized by Trend Micro’s Zero Day Initiative, brings together some of the world’s top security researchers to demonstrate zero-day exploits targeting widely used enterprise and consumer technologies.

VMware products have historically attracted strong interest from Pwn2Own participants due to the high value of virtualization exploits. This year, participants are expected to showcase attacks against VMware ESX, with successful demonstrations potentially earning rewards of up to $200,000.

Broadcom has sent members of its security team to the competition and may announce additional VMware-related patches in the coming days, depending on the results of the event.

Interestingly, VMware Workstation, which has frequently appeared as a target in previous Pwn2Own editions and generated significant payouts for researchers, was removed from this year’s list of eligible targets.

Organizations and users running VMware Fusion are advised to apply the latest updates as soon as possible to reduce the risk of privilege escalation attacks.

Follow me on Twitter @securityaffairs, Facebook, and Mastodon

Pierluigi Paganini

(SecurityAffairs – hacking, VMware Fusion)




Researchers at the University of Washington have developed a new prototype system that could change how people interact with artificial intelligence in daily life. Called VueBuds, the system integrates tiny cameras into standard wireless earbuds, allowing users to ask an AI model questions about the world around them in near real time.

The concept is simple but powerful. A user can look at an object, such as a food package in a foreign language, and ask the AI to translate it. Within about a second, the system responds with an answer through the earbuds, creating a seamless, hands-free interaction.

A Different Approach To AI Wearables

Unlike smart glasses, which have struggled with adoption due to privacy concerns and design limitations, VueBuds takes a more subtle approach. The system uses low-resolution, black-and-white cameras embedded in earbuds to capture still images rather than continuous video.

These images are transmitted via Bluetooth to a connected device, where a small AI model processes them locally. This on-device processing ensures that data does not need to be sent to the cloud, addressing one of the biggest concerns around wearable cameras.

To further enhance privacy, the earbuds include a visible indicator light when recording and allow users to delete captured images instantly.

Engineering Around Power And Performance Limits

One of the biggest challenges the research team faced was power consumption. Cameras require significantly more energy than microphones, making it impractical to use high-resolution sensors like those found in smart glasses.

To solve this, the team used a camera roughly the size of a grain of rice, capturing low-resolution grayscale images. This approach reduces battery usage and allows efficient Bluetooth transmission without compromising responsiveness.

Placement was another key consideration. By angling the cameras slightly outward, the system achieves a field of view between 98 and 108 degrees. While there is a small blind spot for objects held extremely close, researchers found this does not affect typical usage.

The system also combines images from both earbuds into a single frame, improving processing speed. This allows VueBuds to respond in about one second, compared to two seconds when handling images separately.

Performance Compared To Smart Glasses

In testing, 74 participants compared VueBuds with smart glasses such as Meta’s Ray-Ban models. Despite using lower-resolution images and local processing, VueBuds performed similarly overall.

The report showed participants preferred VueBuds for translation tasks, while smart glasses performed better at counting objects. In separate trials, VueBuds achieved accuracy rates of around 83–84% for translation and object identification, and up to 93% for identifying book titles and authors.

Why This Matters And What Comes Next

The research highlights a potential shift in how AI-powered wearables are designed. By embedding visual intelligence into a device people already use, the system avoids many of the barriers faced by smart glasses.

However, limitations remain. The current system cannot interpret color, and its capabilities are still in early stages. The team plans to explore adding color sensors and developing specialised AI models for tasks like translation and accessibility support.

The researchers will present their findings at the Association for Computing Machinery Conference on Human Factors in Computing Systems in Barcelona, offering a glimpse into a future where everyday devices quietly become intelligent assistants.


