How Chinese short dramas became AI content machines


The scene is from Carrying the Dragon King’s Baby, one of the hundreds of short dramas that appear on apps like DramaWave and ReelShort. But something about this one isn’t quite right. The lighting is glossy and cinematic, yet the show has an odd visual texture, somewhere between a movie and a video-game cutscene.

That’s because Carrying the Dragon King’s Baby is part of a new trend for making these shows entirely with AI: no actors, camera operators, cinematographers, or CGI specialists required.

China’s short drama industry has boomed since it emerged in 2018. These ultrashort, melodramatic, and often smutty shows are designed for smartphone viewing, with episodes often running just one or two minutes: viewers can finish an entire series in 30 minutes to an hour. The shows are made for endless scrolling, packed with emotional confrontations and dramatic plot twists. The trend’s growth is driven by apps that bombard TikTok, Instagram, and Facebook with cliffhanger-heavy ads designed to lure viewers into buying subscriptions. In 2024, China’s short drama market reached roughly $6.9 billion in revenue, surpassing the country’s annual box office earnings for the first time.

Since 2022, Chinese short drama companies have aggressively expanded overseas, translating existing hits and producing localized series with local actors. Globally, short drama apps have approached a billion cumulative downloads. The United States is the biggest market outside of China, accounting for around 50% of revenue, according to research firm DataEye.

Now the industry is reinventing itself. Chinese short drama companies, already masters of low-budget, algorithmically optimized entertainment, are embracing generative AI to produce content faster and cheaper than ever. An average of 470 AI-generated short dramas were released every day in January, according to DataEye. Companies like Kunlun Tech are ramping up AI productions, shrinking film crews, and reorganizing the labor pipeline from the ground up. For some studios, AI has moved from being a supporting tool to providing the backbone of production itself.

Infinite stories, infinite tropes

Short dramas are already famously low-budget, but AI has made them dramatically cheaper and faster to mass-produce. Production timelines have collapsed: conceptualization, scriptwriting, casting, shooting, and editing used to take three to four months, while with AI the process can take less than a month, says Tang Tang, vice president at the short-drama platform FlexTV. Producing a short drama in North America once cost roughly $200,000; AI can cut that cost by 80% to 90%, according to Tang.

After expanding into the US market, Chinese short drama companies largely followed the same playbook they used in China: Buy traffic aggressively on TikTok, Facebook, and YouTube; offer a handful of free episodes; then charge viewers to unlock the rest inside the companies’ apps. Decisions about what to produce next are often driven less by creative instinct than by performance data. “We look at what themes, plotlines, and writers resonate with audiences, then quickly adjust,” says Tang.



Researchers at the University of Washington have developed a new prototype system that could change how people interact with artificial intelligence in daily life. Called VueBuds, the system integrates tiny cameras into standard wireless earbuds, allowing users to ask an AI model questions about the world around them in near real time.

The concept is simple but powerful. A user can look at an object, such as a food package in a foreign language, and ask the AI to translate it. Within about a second, the system responds with an answer through the earbuds, creating a seamless, hands-free interaction.

A Different Approach To AI Wearables

Unlike smart glasses, which have struggled with adoption due to privacy concerns and design limitations, VueBuds takes a more subtle approach. The system uses low-resolution, black-and-white cameras embedded in earbuds to capture still images rather than continuous video.

These images are transmitted via Bluetooth to a connected device, where a small AI model processes them locally. This on-device processing ensures that data does not need to be sent to the cloud, addressing one of the biggest concerns around wearable cameras.

To further enhance privacy, the earbuds include a visible indicator light when recording and allow users to delete captured images instantly.

Engineering Around Power And Performance Limits

One of the biggest challenges the research team faced was power consumption. Cameras require significantly more energy than microphones, making it impractical to use high-resolution sensors like those found in smart glasses.

To solve this, the team used a camera roughly the size of a grain of rice, capturing low-resolution grayscale images. This approach reduces battery usage and allows efficient Bluetooth transmission without compromising responsiveness.

Placement was another key consideration. By angling the cameras slightly outward, the system achieves a field of view between 98 and 108 degrees. While there is a small blind spot for objects held extremely close, researchers found this does not affect typical usage.

The system also combines images from both earbuds into a single frame, improving processing speed. This allows VueBuds to respond in about one second, compared to two seconds when handling images separately.
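The paper’s code is not published, but the idea of merging the two captures is straightforward: stitching the left and right grayscale frames into one array means the vision model runs once instead of twice. A minimal sketch of that step (in NumPy, with a hypothetical helper name; the actual resolutions and layout are assumptions):

```python
import numpy as np

def combine_earbud_frames(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Stitch two grayscale captures side by side so a vision model
    can process both earbuds' views in a single inference pass."""
    if left.shape[0] != right.shape[0]:
        raise ValueError("frames must share the same height")
    return np.hstack([left, right])

# Two low-resolution grayscale frames (height x width, 8-bit), stand-ins
# for the left- and right-earbud camera captures.
left = np.zeros((240, 320), dtype=np.uint8)
right = np.full((240, 320), 255, dtype=np.uint8)

combined = combine_earbud_frames(left, right)
print(combined.shape)  # one 240 x 640 frame, one model call
```

Halving the number of model calls per query is what makes a roughly one-second response plausible on a phone-class device, versus two seconds when each frame is processed separately.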

Performance Compared To Smart Glasses

In testing, 74 participants compared VueBuds with smart glasses such as Meta’s Ray-Ban models. Despite using lower-resolution images and local processing, VueBuds performed similarly overall.

In the study, participants preferred VueBuds for translation tasks, while smart glasses performed better at counting objects. In separate trials, VueBuds achieved accuracy rates of around 83–84% for translation and object identification, and up to 93% for identifying book titles and authors.

Why This Matters And What Comes Next

The research highlights a potential shift in how AI-powered wearables are designed. By embedding visual intelligence into a device people already use, the system avoids many of the barriers faced by smart glasses.

However, limitations remain. The current system cannot interpret color, and its capabilities are still at an early stage. The team plans to explore adding color sensors and developing specialized AI models for tasks like translation and accessibility support.

The researchers will present their findings at the Association for Computing Machinery Conference on Human Factors in Computing Systems in Barcelona, offering a glimpse into a future where everyday devices quietly become intelligent assistants.
