Moonshot AI’s new Kimi K2.6 swarms your complex tasks with 1,000 collaborating agents


Information for Moonshot AI's Kimi chatbot arranged on a computer

Bloomberg / Contributor via Bloomberg / Getty Images



ZDNET’s key takeaways

  • Moonshot AI pushes autonomous coding to new limits.
  • AI designs and builds full-stack apps from prompts.
  • Persistent agents run for days, handling real operations.

Yesterday, Moonshot AI announced Kimi K2.6, the latest version of its open-source AI model. This release has enhanced coding capabilities, long multi-step operation execution, and agent swarm capabilities (which doesn’t sound terrifying at all).

Also: The best free AI for coding – only 3 make the cut now

The company is doubling down on what it calls a “seamless AI coworker experience,” reinterpreting the OpenClaw AI assistant approach to automating complex, real-world workflows.

Improvements in long-horizon coding performance

At the core of the Kimi K2.6 release is a substantial improvement in long-horizon coding performance. Long-horizon coding is another way of saying that the AI can do a very long series of steps without human oversight.

Think of the difference between short-horizon and long-horizon as analogous to the difference between having an employee you have to check on every 15 minutes, and an employee to whom you can just give an assignment and know that what you need will be on your desk tomorrow morning without fuss or hassle.

Also: 7 AI coding techniques I use to ship real, reliable products – fast

Moonshot uses a SysY compiler project as an example of a long-horizon assignment. SysY is a minimalist C-like language used for teaching compiler design to students. Kimi K2.6 designed and built a full SysY compiler from scratch in 10 hours, passing 140 functional tests without human input. The company says this work is the equivalent of having four engineers working for two months.

Without a doubt, this is a considerable accomplishment. But Moonshot is not alone in using AI to build compilers. Anthropic reported in February that it built a full C compiler (not just a cut-down training wheels version) using its Opus 4.6 model.

The Anthropic project did fairly well, but it did run into a snag when the agents hit the complex task of compiling the Linux kernel, causing them to get stuck on the same bugs, overwrite each other’s work, and break existing functionality as new features were added.

I’m guessing the Kimi developers chose SysY to keep the overall complexity down, and that this new model would probably hit a similar set of snags to those Anthropic encountered.

Moonshot says that the K2.6 model demonstrates strong generalization (meaning it’s able to handle new and unexpected situations across languages including Rust, Go, and Python). It also reports that the new model demonstrates reliability across front-end, DevOps, and performance optimization tasks.

Expanding from coding into design and creation

Coding output isn’t Kimi K2.6’s only big trick. The model is capable of doing user interface design work and then producing coding output from that design. This enables non-coders to build full web applications from prompts, including the look and feel. It provides an assist to developers who may not have design expertise.

Also: I tried to save $1,200 by vibe coding for free – and quickly regretted it

Going back to the long-horizon claim discussed earlier, Moonshot demonstrated the full-scale project capability by building a series of websites. The company reported that Kimi K2.6, “Identified 30 restaurants in Los Angeles without official websites, then automatically generated high-converting landing pages for each. These pages include booking functionality, with all information seamlessly synchronized to their database.”

Agent swarms, proactive agents, and persistent execution

According to Moonshot AI founder Zhilin Yang, “By orchestrating 100 or even 1,000 sub-agents in parallel, we can accomplish complex tasks within a timeframe that is tolerable for the real world.” Moonshot calls this approach “agent swarms.”

I don’t know. I’ve probably seen Terminator too many times, but while I can see the practical benefit, the very idea of swarms of AI agents is freaky as heck.

The company reports, “It seamlessly coordinates heterogeneous agents to combine complementary skills and broad search capabilities layered with deep research, plus large-scale document analysis fused with long-form writing, and multi-format content generation executed in parallel.”

It says that, “This compositional intelligence enables the swarm to deliver end-to-end outputs spanning documents, websites, slides, and spreadsheets within a single autonomous run.”
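Moonshot hasn't published the internals of its swarm orchestration, but the basic pattern it describes is familiar: a coordinator fans tasks out to many sub-agents in parallel, caps concurrency so the backend isn't overwhelmed, and merges the results. A minimal sketch, assuming a hypothetical `run_subagent` stand-in for a real model API call:

```python
import asyncio

async def run_subagent(agent_id: int, task: str) -> str:
    # Hypothetical stand-in: a real system would call a model API here.
    await asyncio.sleep(0)  # yield control, as a network call would
    return f"agent-{agent_id}: completed {task}"

async def swarm(tasks: list[str], limit: int = 100) -> list[str]:
    # Cap concurrency so 1,000 tasks don't all hit the backend at once.
    sem = asyncio.Semaphore(limit)

    async def guarded(i: int, task: str) -> str:
        async with sem:
            return await run_subagent(i, task)

    # gather() runs the sub-agents concurrently and preserves task order.
    return await asyncio.gather(*(guarded(i, t) for i, t in enumerate(tasks)))

results = asyncio.run(swarm([f"task-{n}" for n in range(1000)]))
```

The interesting engineering in a real swarm is everything this sketch omits: retrying failed sub-agents, resolving conflicting outputs, and keeping 1,000 workers from duplicating each other's work.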

The Kimi K2.6 model now supports autonomous agents operating continuously across applications and workflows. This release also improves API interpretation, long-running stability, and safety awareness.

The company demonstrated a K2.6-backed agent that, “Operated autonomously for 5 days, managing monitoring, incident response, and system operations, demonstrating persistent context, multi-threaded task handling, and full-cycle execution from alert to resolution.”

Also: AI agents are fast, loose, and out of control, MIT study finds

Another capability added to Kimi K2.6 is what the company calls “Claw Groups,” enabling multiple OpenClaw-style agents running across devices to collaborate with a shared context. There is a central coordinator that dynamically assigns tasks and resolves failures.

Moonshot AI says this all becomes a form of collective intelligence. It says, “We are moving beyond simply asking AI a question or assigning AI a task, and entering a phase where human and AI collaborate as genuine partners–combining strengths to solve problems collectively.”

As long as the agents don’t go and invent time travel, we’re probably safe. For now.

Would you feel comfortable letting an AI agent run continuously for days, managing systems on your behalf? Let us know in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.







Recent Reviews


After being teased in the second beta, the new “Bubbles” feature is finally available in Android 17 Beta 3. This is the biggest change to Android multitasking since split-screen mode. I had to see how it worked—come along with me.

Now, it should be mentioned that this feature will probably look a bit familiar to Samsung Galaxy owners. One UI also allows for putting apps in floating windows, and they minimize into a floating widget. However, as you’ll see, Google’s approach is more restrained.

App Bubbles in Android 17

There’s a lot to like already

First and foremost, putting an app in a “Bubble” allows it to be used on top of whatever’s happening on the screen. The functionality is essentially identical to Android’s older feature of the exact same name, but now it can be used for apps in addition to messaging conversations.

To bubble an app, simply long-press the app icon anywhere you see it. That includes the home screen, app drawer, and the taskbar on foldables and tablets. Select “Bubble” or the small icon depicting a rectangle with an arrow pointing at a dot in the menu.

Bubbles on a phone screen

The app will immediately open in a floating window on top of your current activity. This is the full version of the app, and it works exactly how it would if you opened it normally. You can’t resize the app bubble, but on large-screen devices, you can choose which side it’s on. To minimize the bubble, simply tap outside of it or do the Home gesture—you won’t actually go to the Home Screen.

Multiple apps can be bubbled together—just repeat the process above—but only one can be shown at a time. This is a key difference compared to One UI’s pop-up windows, which can be resized and tiled anywhere on the screen. Here is also where things vary depending on the type of device you’re using.

If you’re using a phone, the current bubbled apps appear in a row of shortcuts above the window. Tap an app icon, and it will instantly come into view within the bubble. On foldables and tablets, the row of icons is much smaller and below the window.

Another difference is how the app bubbles are minimized. On phones, they live in a floating app icon (or stack of icons) on the edge of the screen. You are free to move this around the screen by dragging it. Tapping the minimized bubble will open the last active app in the bubble. On foldables and tablets, the bubble is minimized to the taskbar (if you have it enabled).

Bubbles on a foldable screen

Now, there are a few things to know about managing bubbles. First, tapping the “+” button in the shortcuts row shows previously dismissed bubbles—it’s not for adding a new app bubble. To dismiss an app bubble, you can drag the icon from the shortcuts row and drop it on the “X” that appears at the bottom of the screen.

To remove the entire bubble completely, simply drag it to the “X” at the bottom of the screen. On phones, there’s also an extra “Manage” button below the window with a “Dismiss bubble” option.

Better than split-screen?

Bubbles make sense on smaller screens

That’s pretty much all there is to it. As mentioned, there’s definitely not as much freedom with Bubbles as there is with pop-up windows in One UI. The latter allows you to treat apps like windows on a computer screen. Bubbles are a much more confined experience, but the benefit is that you don’t have to do any organizing.

Samsung One UI pop-up windows

Of course, Android has supported using multiple apps at once with split-screen mode for a while. So, what’s the benefit of Bubbles? On phones, especially, split-screen mode makes apps so small that they’re not very useful.

If you’re making a grocery list while checking the store website, you’re stuck in a very small browser window. Bubbles essentially let you use two apps at full size at the same time—it’s even quicker than swiping the gesture bar to switch between apps.

If you’d like to give App Bubbles a try, enroll your qualified Pixel phone in the Android Beta Program. The final release of Android 17 is only a few months away (Q2 2026), but this is an exciting feature to check out right now.

A desktop setup featuring an Android phone, monitor, and mascot, surrounded by red 'missing' labels


Android’s new desktop mode is cool, but it still needs these 5 things

For as long as Android phones have existed, people have dreamed of using them as the brains inside a desktop computing setup. Samsung accomplished this nearly a decade ago, but the rest of the Android world has been left out. Android 17 is finally changing that with a new desktop mode, and I tried it out.


