Researchers uncover YellowKey and GreenPlasma Windows Zero-Days



Pierluigi Paganini
May 15, 2026

Researchers disclosed two new Windows zero-days named YellowKey and GreenPlasma affecting BitLocker and the CTFMON framework.

A security researcher known as Chaotic Eclipse, also called Nightmare-Eclipse, disclosed two new Windows zero-day vulnerabilities named YellowKey and GreenPlasma.

The flaws affect BitLocker and the Windows Collaborative Translation Framework (CTFMON). YellowKey could allow attackers to bypass BitLocker protections, while GreenPlasma enables privilege escalation. The researcher previously disclosed three Microsoft Defender vulnerabilities.

YellowKey is a BitLocker bypass that affects Windows 11 and Windows Server 2022/2025 systems. The flaw allows attackers with physical access to bypass BitLocker protections and gain unrestricted shell access to encrypted volumes through the Windows Recovery Environment (WinRE). The attack involves placing specially crafted files inside the System Volume Information\FsTx directory on a USB drive or directly on the EFI partition. According to the researcher, the functionality that triggers the bypass exists only in the WinRE image; a component with the same name ships in standard Windows installations but lacks that behavior, which the researcher argues points to an intentional backdoor. Windows 10 systems do not appear to be affected.
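
The staging path gives defenders something concrete to hunt for. Below is a minimal Python sketch, not from the disclosure, that scans removable volumes for the System Volume Information\FsTx directory described above; the enumeration approach and the alert message are illustrative assumptions, and reading System Volume Information generally requires administrative rights.

```python
# Hypothetical detection sketch for the YellowKey staging directory.
# Only the path "System Volume Information\FsTx" comes from the
# disclosure; everything else here is illustrative.
import ctypes
from pathlib import Path

DRIVE_REMOVABLE = 2  # GetDriveTypeW return value for removable media

def removable_drives():
    """Yield root paths (e.g. 'E:\\') of removable volumes."""
    bitmask = ctypes.windll.kernel32.GetLogicalDrives()
    for i in range(26):
        if bitmask & (1 << i):
            root = f"{chr(ord('A') + i)}:\\"
            if ctypes.windll.kernel32.GetDriveTypeW(root) == DRIVE_REMOVABLE:
                yield root

for root in removable_drives():
    # Needs admin rights: "System Volume Information" is ACL-protected.
    suspect = Path(root) / "System Volume Information" / "FsTx"
    if suspect.exists():
        print(f"[!] Possible YellowKey staging directory: {suspect}")
```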

“Now why would I say this is a backdoor ? The component that is responsible for this bug is not present anywhere (even in the internet) except inside WinRE image and what makes it raise suspicions is the fact that the exact same component is also present with the exact same name in a normal windows installation but without the functionalities that trigger the bitlocker bypass issue.” wrote Chaotic Eclipse. “Why ? I just can’t come up with an explanation beside the fact that this was intentional. Also for whatever reason, only windows 11 (+Server 2022/2025) are affect, windows 10 is not.”

The second issue, dubbed GreenPlasma, is a Windows privilege escalation vulnerability affecting the CTFMON framework on Windows 11 and Windows Server 2022/2025. The proof-of-concept exploit allows attackers to create arbitrary memory section objects inside directories writable by SYSTEM. By abusing trusted paths used by services and kernel drivers, attackers could potentially escalate privileges to SYSTEM level. The researcher withheld the full exploit code but said skilled attackers could still turn the flaw into full privilege escalation.
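
For context, a “memory section object” is a named, shareable memory region managed by the Windows object manager. The hedged Python sketch below shows what creating one looks like at the Win32 API level; it is purely illustrative of the primitive the PoC abuses, not the exploit itself, and the section name is hypothetical.

```python
# Illustrative only: creates a benign, pagefile-backed named section
# object. This demonstrates the object type GreenPlasma reportedly
# abuses, not the vulnerability; the name below is hypothetical.
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateFileMappingW.restype = wintypes.HANDLE
kernel32.CreateFileMappingW.argtypes = [
    wintypes.HANDLE, ctypes.c_void_p, wintypes.DWORD,
    wintypes.DWORD, wintypes.DWORD, wintypes.LPCWSTR,
]

INVALID_HANDLE_VALUE = wintypes.HANDLE(-1)
PAGE_READWRITE = 0x04

# "Local\" places the object in the caller's session directory of the
# object manager namespace; the PoC reportedly plants objects under
# paths that services and kernel drivers trust.
handle = kernel32.CreateFileMappingW(
    INVALID_HANDLE_VALUE, None, PAGE_READWRITE, 0, 4096,
    "Local\\ExampleSection",
)
if not handle:
    raise ctypes.WinError(ctypes.get_last_error())
print("Named section object created, handle:", handle)
kernel32.CloseHandle(handle)
```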

In April, Chaotic Eclipse disclosed three other flaws in Microsoft Defender called BlueHammer, RedSun, and UnDefend.

Chaotic Eclipse also published proof-of-concept code for the unpatched Defender flaws.

BlueHammer and RedSun let attackers escalate privileges locally in Microsoft Defender. UnDefend instead triggers a denial-of-service, blocking security definition updates and weakening protection.

At this time, Microsoft has fixed only the BlueHammer flaw, tracked as CVE-2026-33825; RedSun and UnDefend remain unpatched.

Huntress researchers reported that attackers are exploiting the three Defender flaws in the wild, though both the victims and the threat actors remain unknown.

According to Huntress, attackers began using BlueHammer on April 10, 2026, then followed with RedSun and UnDefend proof-of-concept exploits on April 16.

Researchers believe attackers were using public exploit code released online by Chaotic Eclipse.


Pierluigi Paganini

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

(SecurityAffairs – hacking, Windows zero-days)













Researchers at the University of Washington have developed a new prototype system that could change how people interact with artificial intelligence in daily life. Called VueBuds, the system integrates tiny cameras into standard wireless earbuds, allowing users to ask an AI model questions about the world around them in near real time.

The concept is simple but powerful. A user can look at an object, such as a food package in a foreign language, and ask the AI to translate it. Within about a second, the system responds with an answer through the earbuds, creating a seamless, hands-free interaction.

A Different Approach To AI Wearables

Unlike smart glasses, which have struggled with adoption due to privacy concerns and design limitations, VueBuds takes a more subtle approach. The system uses low-resolution, black-and-white cameras embedded in earbuds to capture still images rather than continuous video.

These images are transmitted via Bluetooth to a connected device, where a small AI model processes them locally. This on-device processing ensures that data does not need to be sent to the cloud, addressing one of the biggest concerns around wearable cameras.

To further enhance privacy, the earbuds include an indicator light that is visible whenever they are recording, and they allow users to delete captured images instantly.

Engineering Around Power And Performance Limits

One of the biggest challenges the research team faced was power consumption. Cameras require significantly more energy than microphones, making it impractical to use high-resolution sensors like those found in smart glasses.

To solve this, the team used a camera roughly the size of a grain of rice, capturing low-resolution grayscale images. This approach reduces battery usage and allows efficient Bluetooth transmission without compromising responsiveness.

Placement was another key consideration. By angling the cameras slightly outward, the system achieves a field of view between 98 and 108 degrees. While there is a small blind spot for objects held extremely close, researchers found this does not affect typical usage.

The system also combines images from both earbuds into a single frame, improving processing speed. This allows VueBuds to respond in about one second, compared to two seconds when handling images separately.
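
As a rough illustration of that combining step, the sketch below places two same-height grayscale captures side by side with NumPy; the frame size is an assumption, since the paper's exact resolution and pipeline are not given here.

```python
# Minimal sketch of combining both earbud captures into one frame,
# assuming same-height 8-bit grayscale images; the 240x240 size is
# an assumption, not a VueBuds specification.
import numpy as np

def stitch_frames(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Place the two captures side by side so the vision model runs
    once on a single frame instead of twice on separate images."""
    if left.shape[0] != right.shape[0]:
        raise ValueError("frames must share the same height")
    return np.hstack([left, right])

left = np.zeros((240, 240), dtype=np.uint8)       # dummy left-earbud frame
right = np.full((240, 240), 255, dtype=np.uint8)  # dummy right-earbud frame
combined = stitch_frames(left, right)
print(combined.shape)  # (240, 480)
```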

Performance Compared To Smart Glasses

In testing, 74 participants compared VueBuds with smart glasses such as Meta’s Ray-Ban models. Despite using lower-resolution images and local processing, VueBuds performed similarly overall.

The study found that participants preferred VueBuds for translation tasks, while smart glasses performed better at counting objects. In separate trials, VueBuds achieved accuracy of around 83–84% on translation and object identification, and up to 93% on identifying book titles and authors.

Why This Matters And What Comes Next

The research highlights a potential shift in how AI-powered wearables are designed. By embedding visual intelligence into a device people already use, the system avoids many of the barriers faced by smart glasses.

However, limitations remain. The current system cannot interpret color, and its capabilities are still in early stages. The team plans to explore adding color sensors and developing specialised AI models for tasks like translation and accessibility support.

The researchers will present their findings at the Association for Computing Machinery Conference on Human Factors in Computing Systems in Barcelona, offering a glimpse into a future where everyday devices quietly become intelligent assistants.


