Pwn2Own Berlin 2026, Day One: $523,000 paid out, AI products fall



Pierluigi Paganini
May 15, 2026

Pwn2Own Berlin 2026 day one saw 22 entries and 24 zero-days across major software, with researchers earning $523,000 in total rewards.

Day one of Pwn2Own Berlin 2026 featured 22 entries targeting widely used technologies, including browsers, operating systems, AI platforms, and NVIDIA infrastructure. By the end of the day, researchers demonstrated 24 unique zero-day vulnerabilities and earned a total of $523,000 in rewards, highlighting ongoing security risks across major enterprise and consumer software ecosystems.

Orange Tsai of the DEVCORE Research Team made headlines: he chained four separate logic bugs to escape the Microsoft Edge sandbox, a technically demanding achievement that earned him $175,000 and 17.5 Master of Pwn points in a single attempt. It was the kind of result that reminds you why this competition exists: not to embarrass vendors, but to surface flaws in controlled conditions before someone with worse intentions finds them first.

“Orange Tsai (@orange_8361) of DEVCORE Research Team (@d3vc0r3) chained 4 logic bugs to achieve a sandbox escape on Microsoft Edge, earning $175,000 and 17.5 Master of Pwn points,” reads the post published by the Zero Day Initiative.

Windows 11 was successfully exploited three times during the day: by Angelboy and TwinkleStar03 of the DEVCORE Internship Program, by Marcin Wiązowski, and by Kentaro Kawane of GMO Cybersecurity. Each demonstrated a distinct privilege-escalation zero-day on a fully patched system, earning $30,000 apiece. Three different researchers, three different bugs, one operating system. That pattern alone is worth noting.

Valentina Palmiotti (@chompie1337) of IBM X-Force Offensive Research had arguably the most productive individual day, picking up $70,000 across two separate wins: a $50,000 award for a zero-day in the NVIDIA Container Toolkit, and another $20,000 for rooting Red Hat Linux for Workstations.

“chompie of IBM X-Force Offensive Research (XOR) used a single bug to exploit NV Container Toolkit, earning $50,000 and 5 Master of Pwn points,” the post continues.

On the NVIDIA side, Satoki Tsuji of Ikotas Labs exploited an overly permissive allow-list vulnerability in NVIDIA Megatron Bridge for $20,000, while haehae collected another $20,000 for a separate Megatron Bridge zero-day and rounded out the day with a further $20,000 for dropping a zero-day in the Chroma vector database.

AI platforms were a prominent target throughout the day, reflecting this year’s competition theme around enterprise and artificial intelligence technologies. k3vg3n chained three bugs, including a server-side request forgery and a code injection, to bring down LiteLLM, walking away with $40,000. Two separate teams, Compass Security and maitai of Doyensec, each collected $40,000 for independently exploiting OpenAI’s Codex coding agent. STARLabs SG earned another $40,000 for a zero-day in LM Studio.

“k3vg3n chained 3 bugs including SSRF and Code Injection to take down LiteLLM. $40,000 and 4 Master of Pwn points. Full win,” states the post.

Not every attempt succeeded. Le Duc Anh Vu of Viettel Cyber Security could not get their OpenAI Codex exploit working within the time limit, and Park Jae Min’s attempt against the Oracle Autonomous AI Database also fell short.

At the end of day one, DEVCORE Research Team sits atop the leaderboard with $205,000, a commanding lead built almost entirely on Orange Tsai’s Edge chain. Valentina Palmiotti follows in second place at $70,000.

Day two brings a new set of targets to the stage, including Microsoft SharePoint, Microsoft Exchange, Apple Safari, Mozilla Firefox, Cursor, Anthropic Claude Code, and additional attempts against Windows 11, Red Hat Enterprise Linux, and several AI platforms. The full prize pool across all categories exceeds $1,000,000. Per competition rules, all targets run the latest available software versions, and vendors receive 90 days to patch any zero-days demonstrated on stage before public disclosure.

Last year’s Berlin edition paid out $1,078,750 for 29 vulnerabilities. With $523,000 already awarded after a single day and a full schedule still ahead, 2026 is shaping up to exceed it.


(SecurityAffairs – hacking, Pwn2Own Berlin 2026)










Researchers at the University of Washington have developed a new prototype system that could change how people interact with artificial intelligence in daily life. Called VueBuds, the system integrates tiny cameras into standard wireless earbuds, allowing users to ask an AI model questions about the world around them in near real time.

The concept is simple but powerful. A user can look at an object, such as a food package in a foreign language, and ask the AI to translate it. Within about a second, the system responds with an answer through the earbuds, creating a seamless, hands-free interaction.

A Different Approach To AI Wearables

Unlike smart glasses, which have struggled with adoption due to privacy concerns and design limitations, VueBuds takes a more subtle approach. The system uses low-resolution, black-and-white cameras embedded in earbuds to capture still images rather than continuous video.

These images are transmitted via Bluetooth to a connected device, where a small AI model processes them locally. This on-device processing ensures that data does not need to be sent to the cloud, addressing one of the biggest concerns around wearable cameras.

To further enhance privacy, the earbuds include a visible indicator light when recording and allow users to delete captured images instantly.

Engineering Around Power And Performance Limits

One of the biggest challenges the research team faced was power consumption. Cameras require significantly more energy than microphones, making it impractical to use high-resolution sensors like those found in smart glasses.

To solve this, the team used a camera roughly the size of a grain of rice, capturing low-resolution grayscale images. This approach reduces battery usage and allows efficient Bluetooth transmission without compromising responsiveness.

Placement was another key consideration. By angling the cameras slightly outward, the system achieves a field of view between 98 and 108 degrees. While there is a small blind spot for objects held extremely close, researchers found this does not affect typical usage.

The system also combines images from both earbuds into a single frame, improving processing speed. This allows VueBuds to respond in about one second, compared to two seconds when handling images separately.
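The paper's pipeline is not published, but the frame-combining step described above can be sketched in a few lines of Python. This is a minimal illustration, assuming NumPy and placeholder grayscale frames; the actual resolutions and model interface of VueBuds are not specified in the article.

```python
import numpy as np

# Hypothetical sketch of the frame-combining step: two low-resolution
# grayscale frames (one per earbud) are stitched side by side, so the
# AI model runs one inference pass per query instead of two.
left = np.zeros((240, 320), dtype=np.uint8)   # placeholder left-earbud frame
right = np.zeros((240, 320), dtype=np.uint8)  # placeholder right-earbud frame

# Horizontally concatenate into a single 240x640 frame for the model.
combined = np.hstack([left, right])
print(combined.shape)
```

Feeding the model one wider image rather than two separate ones halves the number of inference calls, which is consistent with the roughly one-second versus two-second response times the researchers report.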

Performance Compared To Smart Glasses

In testing, 74 participants compared VueBuds with smart glasses such as Meta’s Ray-Ban models. Despite using lower-resolution images and local processing, VueBuds performed similarly overall.

The report showed participants preferred VueBuds for translation tasks, while smart glasses performed better at counting objects. In separate trials, VueBuds achieved accuracy rates of around 83–84% for translation and object identification, and up to 93% for identifying book titles and authors.

Why This Matters And What Comes Next

The research highlights a potential shift in how AI-powered wearables are designed. By embedding visual intelligence into a device people already use, the system avoids many of the barriers faced by smart glasses.

However, limitations remain. The current system cannot interpret color, and its capabilities are still in early stages. The team plans to explore adding color sensors and developing specialised AI models for tasks like translation and accessibility support.

The researchers will present their findings at the Association for Computing Machinery Conference on Human Factors in Computing Systems in Barcelona, offering a glimpse into a future where everyday devices quietly become intelligent assistants.


