Sereact raises $110 million to scale its AI that makes any robot adaptable


The round is led by Headline, with new investors Bullhound Capital, Felix Capital, and Daphni. Valuation is undisclosed. Sereact’s vision language action models already run at BMW, Daimler Truck, and logistics customers. The $110M is more than four times the €25M Series A raised just 15 months ago.


Sereact, the Stuttgart-based AI robotics software company, has raised $110 million in a Series B round led by Headline, the international venture firm with offices in Berlin, San Francisco, and Paris. New investors Bullhound Capital, Felix Capital, and Daphni joined alongside several existing backers.

The company declined to disclose its valuation. Funds will be used to develop Sereact’s core AI model, one that “makes robots smarter and more adaptable to different tasks”, and to scale deployment across logistics, manufacturing, and, increasingly, humanoid robot platforms.

Sereact was founded in 2021 by Ralf Gulde (CEO) and Marc Tuscher (CTO), both former AI researchers at the University of Stuttgart.


The company’s technical approach is grounded in Vision Language Action Models (VLAMs): AI systems that combine computer vision, natural language understanding, and action planning into a single model, allowing robots to perceive their environment, interpret instructions, and execute physical tasks without requiring complex programming or environment-specific pre-training.
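
For readers who want a concrete picture of what a VLAM does, the sketch below is a deliberately simplified, hypothetical PyTorch example, not Sereact’s architecture: image patches and instruction tokens are fused by a shared transformer that decodes directly into a low-level robot action. The class name, dimensions, and action format are assumptions for illustration only.

```python
# Toy VLAM sketch (illustrative only): camera frame + language instruction -> action command.
import torch
import torch.nn as nn

class ToyVLAM(nn.Module):
    def __init__(self, vocab_size=32000, dim=256, action_dim=7):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=16, stride=16)   # vision tokens
        self.text_embed = nn.Embedding(vocab_size, dim)                    # language tokens
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)         # joint reasoning over both modalities
        self.action_head = nn.Linear(dim, action_dim)                      # e.g. a 7-DoF end-effector/gripper command

    def forward(self, image, instruction_tokens):
        img = self.patch_embed(image).flatten(2).transpose(1, 2)   # (B, N_patches, dim)
        txt = self.text_embed(instruction_tokens)                  # (B, N_tokens, dim)
        fused = self.backbone(torch.cat([img, txt], dim=1))
        return self.action_head(fused.mean(dim=1))                 # next action, no task-specific programming

# One forward pass: the current camera frame plus a tokenised instruction yields an action.
model = ToyVLAM()
action = model(torch.randn(1, 3, 224, 224), torch.randint(0, 32000, (1, 12)))
```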

A robot picking a fragile object can, in principle, evaluate whether its planned grip will cause damage before its gripper closes. That capability is the meaningful differentiator in a market where most industrial robotics still operate on pre-programmed sequences that assume a controlled, predictable environment.

Warehouses, manufacturing floors, and logistics facilities are not controlled environments: objects arrive in unpredictable orientations, packaging varies, and edge cases are constant.

Sereact’s software-first approach, explicitly positioned against the hardware-first strategies of most robotics companies, is designed to make robots adaptable to this variation without requiring engineers to reprogram them for each new object type or layout change.

In Gulde’s formulation from the Series A announcement: “with our technology, robots act situationally rather than following rigidly programmed sequences.”

The commercial record behind the Series B is substantive. Customers include BMW Group, Daimler Truck, the Dutch e-commerce fulfilment company Bol, and logistics specialists MS Direct and Active Ants.

The deployment at automotive OEMs is particularly significant: BMW and Daimler Truck are not running pilots or proofs of concept; these are production environments where the economic cost of a robot failure is measured in line stoppages.

Sereact’s technology reaching production at that tier of customer is the validation signal that distinguishes it from the large number of AI robotics companies still operating at the demonstration stage.

The funding trajectory makes the ambition of the round clear. Sereact raised $5 million in seed funding in 2023, €25 million (approximately $26 million) in a Series A led by Creandum in January 2025, and now $110 million in April 2026, a more than fourfold step-up from the Series A in fifteen months.

Creandum’s Johan Brenner captured the investment thesis at the Series A: “most AI robotics companies are currently hardware-first. What sets Sereact apart is their software-first, foundational approach which means they have the potential to become the brain of any robot that requires vision and autonomous capabilities.”

That thesis, a software-first robotics intelligence layer deployable across any hardware platform, is essentially the same thesis that has made Mobileye valuable in autonomous vehicles and that Nvidia is pursuing through its Isaac robotics platform: the idea that the highest-margin position in robotics is not the robot itself but the intelligence running it.

The broader market context is accelerating. Humanoid robot deployments by Figure AI, Boston Dynamics, and Unitree are moving from controlled tests to commercial production at warehouse and manufacturing customers.

The global humanoid robot market, valued at under $1 billion in 2023, is projected to exceed $38 billion by 2030. Tesla’s Optimus production ramp, targeting volume output from July 2026, will require robotics intelligence software at scale.

Sereact’s explicit intention, stated at the Series A, to expand beyond logistics into humanoid robot platforms positions it to compete for that market. The $110 million Series B is the capital raise that makes that expansion credible.




Researchers at the University of Washington have developed a new prototype system that could change how people interact with artificial intelligence in daily life. Called VueBuds, the system integrates tiny cameras into standard wireless earbuds, allowing users to ask an AI model questions about the world around them in near real time.

The concept is simple but powerful. A user can look at an object, such as a food package in a foreign language, and ask the AI to translate it. Within about a second, the system responds with an answer through the earbuds, creating a seamless, hands-free interaction.

A Different Approach To AI Wearables

Unlike smart glasses, which have struggled with adoption due to privacy concerns and design limitations, VueBuds takes a more subtle approach. The system uses low-resolution, black-and-white cameras embedded in earbuds to capture still images rather than continuous video.

These images are transmitted via Bluetooth to a connected device, where a small AI model processes them locally. This on-device processing ensures that data does not need to be sent to the cloud, addressing one of the biggest concerns around wearable cameras.

To further enhance privacy, the earbuds include a visible indicator light when recording and allow users to delete captured images instantly.

Engineering Around Power And Performance Limits

One of the biggest challenges the research team faced was power consumption. Cameras require significantly more energy than microphones, making it impractical to use high-resolution sensors like those found in smart glasses.

To solve this, the team used a camera roughly the size of a grain of rice, capturing low-resolution grayscale images. This approach reduces battery usage and allows efficient Bluetooth transmission without compromising responsiveness.
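
A back-of-envelope comparison makes the trade-off concrete; the resolutions below are assumptions for illustration, not figures from the paper.

```python
# Rough payload comparison (assumed resolutions): why a low-resolution grayscale
# still image is far friendlier to Bluetooth and battery than a high-resolution colour frame.
GRAYSCALE = (240, 240, 1)    # assumed earbud capture: 240x240 px, 1 byte per pixel
COLOUR_HD = (1920, 1080, 3)  # typical smart-glasses-class frame, 3 bytes per pixel

def raw_bytes(shape):
    width, height, channels = shape
    return width * height * channels

print(raw_bytes(GRAYSCALE))                          # 57,600 bytes (~56 KB) per capture
print(raw_bytes(COLOUR_HD))                          # 6,220,800 bytes (~5.9 MB) per frame
print(raw_bytes(COLOUR_HD) / raw_bytes(GRAYSCALE))   # ~108x more data to move per capture
```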

Placement was another key consideration. By angling the cameras slightly outward, the system achieves a field of view between 98 and 108 degrees. While there is a small blind spot for objects held extremely close, researchers found this does not affect typical usage.

The system also combines images from both earbuds into a single frame, improving processing speed. This allows VueBuds to respond in about one second, compared to two seconds when handling images separately.
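
As a rough illustration of that stitching step (an assumed interface, not the VueBuds code), the snippet below pads two grayscale captures to a common height and places them side by side, so one model call covers both views.

```python
# Hypothetical sketch of the frame-combining step: two low-resolution grayscale
# earbud captures become a single frame, enabling one inference pass (~1 s reported)
# instead of two sequential passes (~2 s reported).
import numpy as np

def combine_earbud_frames(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    height = max(left.shape[0], right.shape[0])
    pad = lambda img: np.pad(img, ((0, height - img.shape[0]), (0, 0)))
    return np.hstack([pad(left), pad(right)])

left = np.zeros((240, 240), dtype=np.uint8)    # assumed capture size
right = np.zeros((240, 240), dtype=np.uint8)
combined = combine_earbud_frames(left, right)  # shape (240, 480)
# answer = vision_language_model(combined, "What does this label say?")  # hypothetical model call
```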

Performance Compared To Smart Glasses

In testing, 74 participants compared VueBuds with smart glasses such as Meta’s Ray-Ban models. Despite using lower-resolution images and local processing, VueBuds performed similarly overall.

The study found that participants preferred VueBuds for translation tasks, while smart glasses performed better at counting objects. In separate trials, VueBuds achieved accuracy rates of around 83–84% for translation and object identification, and up to 93% for identifying book titles and authors.

Why This Matters And What Comes Next

The research highlights a potential shift in how AI-powered wearables are designed. By embedding visual intelligence into a device people already use, the system avoids many of the barriers faced by smart glasses.

However, limitations remain. The current system cannot interpret color, and its capabilities are still in early stages. The team plans to explore adding color sensors and developing specialised AI models for tasks like translation and accessibility support.

The researchers will present their findings at the Association for Computing Machinery Conference on Human Factors in Computing Systems in Barcelona, offering a glimpse into a future where everyday devices quietly become intelligent assistants.


