Microsoft’s own ToS calls Copilot ‘entertainment only’ amid adoption slump



In short: Microsoft has spent billions building Copilot into every corner of its product lineup, pitching it as an indispensable AI co-worker. Its own Terms of Use tell a different story. A clause quietly buried in the document labels Copilot “for entertainment purposes only” and warns users not to rely on it for important advice. The gap between the marketing and the fine print has drawn fresh scrutiny as adoption figures reveal that fewer than one in 30 eligible users is actually paying for the tool.

Somewhere between Satya Nadella’s earnings calls and the product pages promising to “transform the way you work,” Microsoft inserted a sentence into Copilot’s Terms of Use that reads rather differently from the rest of its AI pitch. Updated in October 2025 and surfacing widely in early April 2026, the clause appears under a section in bold capital letters labelled “IMPORTANT DISCLOSURES & WARNINGS.” It says: “Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.”

The same document states that Microsoft makes no warranty or representation of any kind about Copilot, that users should not assume its outputs are free from copyright, trademark, or privacy rights infringement, and that users are solely responsible for any Copilot content they choose to share or publish. The terms apply to consumer Copilot products; the enterprise-facing Microsoft 365 Copilot is excluded from the clause.

What Microsoft has been saying publicly

The disclaimer sits in sharp contrast to years of aggressive promotion. Since integrating Copilot across Windows 11 and the Microsoft 365 suite in 2023, the company has positioned the tool as a productivity multiplier, an “AI companion” for workers in Word, Excel, PowerPoint, and Outlook. Nadella has described Copilot as “becoming a true daily habit” and told investors that daily active users had grown nearly threefold year on year. The company spent approximately $80 billion on AI-related capital expenditure in fiscal year 2025, on top of a $13 billion investment in OpenAI, whose models underpin Copilot’s core capabilities.

Microsoft 365 Copilot is priced at $30 per user per month as an enterprise add-on, with a business tier at $18 per user per month. Premium consumer tiers carry costs that reach into the tens of dollars monthly. “Entertainment purposes only” is not language typically associated with a product charging at those rates.

The legal logic behind the clause

Legal analysts who reviewed the language offered a measured interpretation. The most widely cited read is that the clause represents a lawyer’s attempt to limit liability in circumstances where the product fails, an overcorrection that has become embarrassing because of how bluntly it contradicts the marketing. OpenAI, Google, and Anthropic all include similar advisories in their terms of service, acknowledging inaccuracy and placing responsibility for verifying outputs on users. None of them, however, uses the phrase “entertainment purposes only,” which Android Authority noted is “the same disclaimer that a psychic uses to avoid getting sued.”

The broader legal context matters. Microsoft has faced litigation over Copilot’s outputs before: a class-action suit in a US federal court in San Francisco challenged the legality of GitHub Copilot over alleged open-source licence violations, and a separate dispute in Australia concerned customers who were moved to more expensive plans with Copilot bundled in. The consumer Copilot ToS language, on this reading, is corporate defensiveness made explicit, an attempt to establish in writing that the product never warranted the reliance users might have placed on it.

The adoption numbers that give context

The disclaimer arrives at an awkward moment for Copilot’s commercial trajectory. Data published in early 2026 showed that only 3.3% of Microsoft 365 and Office 365 users who have access to Copilot Chat actually pay for it. Of roughly 450 million Microsoft 365 seats, 15 million are paid Copilot subscribers, a conversion rate that reflects the difficulty of persuading existing users to pay a significant premium for AI they find unreliable.

Research from Recon Analytics traced the problem in part to accuracy. Its tracking of Copilot’s accuracy Net Promoter Score found it at -3.5 in July 2025, deteriorating to -24.1 by September 2025, and only partially recovering to -19.8 by January 2026. In surveys of lapsed Copilot users, 44.2% cited distrust of answers as the primary reason they had stopped using the tool. Separately, the US paid subscriber market share fell from 18.8% in July 2025 to 11.5% in January 2026, a 39% contraction in six months. When users are given a choice between Copilot, ChatGPT, and Gemini, just 8% of workers opt for Copilot.

The hallucination record has not helped. In August 2024, Copilot falsely accused German court reporter Martin Bernklau of the crimes he had covered for years, describing him as a convicted child abuser and fraudster and providing his home address. Microsoft was forced to block queries about Bernklau after a data protection complaint. In January 2026, Copilot generated false claims about football-related violence, triggering further coverage of the tool’s reliability problem. The “entertainment purposes only” clause looks rather less like a legal technicality in that context, and rather more like an accurate description.

Microsoft’s pivot and what it means

Nadella’s response to Copilot’s uneven performance has been to assume direct control over AI product development, reportedly delegating other responsibilities from September 2025 onward to focus personally on the roadmap. The company has also begun building its own models. Microsoft’s launch of MAI-Transcribe-1, MAI-Voice-1, and MAI-Image-2 in April 2026, its first proprietary AI model releases since renegotiating its contract with OpenAI in September 2025, signals a strategic intent to reduce dependency on the models that currently sit under Copilot’s hood.

The irony is that Copilot’s limitations are well understood inside Microsoft. The company’s own leaked internal feedback, as reported by several outlets, described integrations that “don’t really work.” The ToS language is, in a sense, the legal department’s way of saying what the product team has been grappling with in private. The expectation that AI tools be trustworthy, verifiable, and fit for purpose has moved from aspiration to regulatory reality across multiple jurisdictions, making the gap between Copilot’s marketing and its terms of service harder to sustain.

None of this means Copilot is uniquely unreliable by the standards of the current generation of AI assistants. Its primary competitor, ChatGPT, has its own well-documented accuracy problems even as OpenAI pushes into commercialisation. The difference is that Microsoft bet earlier, louder, and more money on the proposition that AI assistants were ready to become essential workplace tools. The fine print in its own terms of service suggests the company is hedging on that bet while the marketing continues to double down on it. Competitors raising billions on promises of AI reliability will have noticed the opening. The race that defined 2025 is entering a phase where the gap between “for entertainment purposes only” and genuinely trustworthy AI is the most valuable real estate in the industry.








There’s something oddly brilliant about outsourcing your curiosity to an AI that doesn’t get tired or awkward. After all, if an AI agent can call thousands of pubs and build a Guinness price index, why stop there? Why not send one loose into the wild to track the cost of your daily caffeine fix or your late-night ramen cravings?

I’m sold — I want one of those

That’s exactly the kind of domino effect sparked by a recent experiment inspired by Rachel Duffy from The Traitors. A developer built an AI voice agent that sounded natural enough to chat up bartenders and casually ask for Guinness prices, compiling the data into a public index. It worked so well that most people on the other end didn’t even clock that they were speaking to a machine. And just like that, a slightly chaotic, very clever idea turned into something surprisingly useful.

Now imagine applying that same idea to coffee and ramen. Because if there are two things people are oddly loyal and sensitive about, it’s how much they’re paying for a flat white or a bowl of tonkotsu.

A “CaffIndex,” for instance, could map out the price of cappuccinos across cities, highlighting everything from overpriced aesthetic cafés to hidden gems that don’t charge $3 for foam. Similarly, a “Ramen Radar” could track where you’re getting the most bang for your broth, whether it’s a premium bowl or a spot that somehow gets everything right. Don’t giggle; I’m serious.

The appeal isn’t just novelty. It’s scale. Calling up a handful of places yourself is tedious. Getting real-time, city-wide data? Nearly impossible. But an AI agent doesn’t mind dialing a thousand numbers, repeating the same question, and logging every answer with monk-like patience. What you get in return is a living, breathing map of prices.

It’s not all sunshine and roses

Of course, it is not all smooth sipping and slurping. There is a slightly uneasy side to this, too. Questions around consent and transparency start to creep in, and you cannot help but wonder if every business would be okay with being surveyed by an AI that sounds just a little too real. In the original experiment, the AI was designed to be honest when asked directly, but let’s be real: most people aren’t going to question a friendly voice casually asking about prices. It feels harmless in the moment, and that is exactly what makes it a bit tricky.

Still, there is something genuinely exciting about the idea. Not in a scary, robots-are-taking-over kind of way, but in a way that makes you pause and think, this could actually be useful if handled right. Prices are creeping up everywhere, from your rent to that comforting bowl of ramen you treat yourself to after a long day. Having something that keeps track of it all feels like a small win.

Maybe that is the real takeaway here. Today it is Guinness. Tomorrow it could be your morning coffee or your go-to ramen spot. It makes you wonder how long it will be before your phone steps in, calls up a café, asks about their espresso, and saves you from spending more than you should. Because honestly, if AI is willing to do the boring work for you, the least it can do is make sure your next cup and your next bowl actually feel worth it.


