Shein accuses Temu of industrial-scale copyright theft


The two-week trial opened Monday. Shein’s complaint covers around 2,300 product images; Temu has abandoned its defence of those images and is countering with anti-competition claims.

Shein accused Temu of copyright infringement “on an industrial scale” as a two-week trial opened at London’s High Court on Monday.

Shein’s barrister, Benet Brandreth, told the court Temu used around 2,300 product photographs created by Shein employees to advertise look-alike clothing on Temu’s website.

“This was an attempt to steal a march on an existing participant in the market and Temu has sought to obtain, we say, an unfair advantage,” Brandreth said.

Temu has effectively abandoned its defence of the disputed images, leaving the court to focus on damages, injunctive relief and the broader competition arguments.

Temu’s counter-claim alleges that Shein engages in anti-competitive practices by locking suppliers into exclusive manufacturing agreements that prevent the same workshops from selling to competing platforms.

The proceedings sit in the Business and Property Courts. The judge has scheduled two weeks for evidence; a ruling is unlikely before late summer.

The London case is the latest in a multi-jurisdictional litigation war between the two Chinese-founded platforms, which have also sued each other in the US, the EU and Singapore.

PDD Holdings, Temu’s parent company, is listed on the Nasdaq. Shein is preparing a public listing whose venue has shifted between Hong Kong and London under regulatory pressure.

The dispute is set against rising regulatory scrutiny on both sides. The UK Competition and Markets Authority has been investigating both companies on consumer-protection and pricing-transparency grounds since 2024.

The Financial Conduct Authority has separately reviewed Shein’s IPO disclosures. The Trump administration moved in February 2025 to close the US de minimis exemption, which had been central to both companies’ US growth model.

Shein’s commercial argument, as Brandreth framed it, is that copying the images allowed Temu to publish complete product pages faster and more cheaply than it could otherwise have managed.

Temu’s competition counter-claim asks the court to consider whether Shein’s exclusive-supplier arrangements constitute a restraint of trade under UK law, and whether they should be unwound.

Neither company commented further outside court on Monday.

The hearing continues. The case is being followed by competition regulators in Brussels, Washington and London, and by both companies’ bankers in advance of Shein’s listing process.



Researchers at the University of Washington have developed a new prototype system that could change how people interact with artificial intelligence in daily life. Called VueBuds, the system integrates tiny cameras into standard wireless earbuds, allowing users to ask an AI model questions about the world around them in near real time.

The concept is simple but powerful. A user can look at an object, such as a food package in a foreign language, and ask the AI to translate it. Within about a second, the system responds with an answer through the earbuds, creating a seamless, hands-free interaction.

A Different Approach To AI Wearables

Unlike smart glasses, which have struggled with adoption due to privacy concerns and design limitations, VueBuds takes a more subtle approach. The system uses low-resolution, black-and-white cameras embedded in earbuds to capture still images rather than continuous video.

These images are transmitted via Bluetooth to a connected device, where a small AI model processes them locally. This on-device processing ensures that data does not need to be sent to the cloud, addressing one of the biggest concerns around wearable cameras.

To further enhance privacy, the earbuds include a visible indicator light when recording and allow users to delete captured images instantly.

Engineering Around Power And Performance Limits

One of the biggest challenges the research team faced was power consumption. Cameras require significantly more energy than microphones, making it impractical to use high-resolution sensors like those found in smart glasses.

To solve this, the team used a camera roughly the size of a grain of rice, capturing low-resolution grayscale images. This approach reduces battery usage and allows efficient Bluetooth transmission without compromising responsiveness.

Placement was another key consideration. By angling the cameras slightly outward, the system achieves a field of view between 98 and 108 degrees. While there is a small blind spot for objects held extremely close, researchers found this does not affect typical usage.

The system also combines images from both earbuds into a single frame, improving processing speed. This allows VueBuds to respond in about one second, compared to two seconds when handling images separately.
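The article does not publish the team's actual pipeline, but the batching idea it describes is straightforward to illustrate: stitching the two grayscale frames into one image lets a vision model answer in a single inference pass instead of two. A minimal sketch, assuming the frames arrive as NumPy arrays (the function name and resolutions here are illustrative, not from the paper):

```python
import numpy as np

def combine_frames(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Stitch two grayscale earbud frames side by side into one image.

    Sending one combined frame to the model means one inference pass
    rather than two, which is the kind of latency saving the
    researchers report (~1 second versus ~2 seconds).
    """
    if left.shape != right.shape:
        raise ValueError("frames must share the same resolution")
    # Horizontal concatenation: widths add, height stays the same.
    return np.hstack([left, right])

# Illustrative low-resolution grayscale frames (8-bit, values 0-255).
left = np.zeros((120, 160), dtype=np.uint8)    # all-black left view
right = np.full((120, 160), 255, dtype=np.uint8)  # all-white right view

combined = combine_frames(left, right)
print(combined.shape)  # (120, 320)
```

The combined frame would then go to the on-device model as a single input; whether the real system concatenates pixels this way or batches at another layer is not specified in the article.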

Performance Compared To Smart Glasses

In testing, 74 participants compared VueBuds with smart glasses such as Meta’s Ray-Ban models. Despite using lower-resolution images and local processing, VueBuds performed similarly overall.

The study found that participants preferred VueBuds for translation tasks, while smart glasses performed better at counting objects. In separate trials, VueBuds achieved accuracy rates of around 83–84% for translation and object identification, and up to 93% for identifying book titles and authors.

Why This Matters And What Comes Next

The research highlights a potential shift in how AI-powered wearables are designed. By embedding visual intelligence into a device people already use, the system avoids many of the barriers faced by smart glasses.

However, limitations remain. The current system cannot interpret color, and its capabilities are still in early stages. The team plans to explore adding color sensors and developing specialised AI models for tasks like translation and accessibility support.

The researchers will present their findings at the Association for Computing Machinery Conference on Human Factors in Computing Systems in Barcelona, offering a glimpse into a future where everyday devices quietly become intelligent assistants.


