Spotify’s new green checkmark separates real artists from AI content



The green checkmark, rolling out over the coming weeks, requires consistent listener engagement, platform policy compliance, and a real-world, identifiable presence. Content farms and AI-generated artist profiles are explicitly excluded at launch.


Spotify has introduced a Verified by Spotify badge, a green checkmark that will appear on artist profiles and next to artist names in search, signalling that the account has passed the company’s review for authenticity and trust. The feature was announced on the Spotify Newsroom and will roll out progressively over the coming weeks.

The timing is not accidental. Spotify has faced sustained criticism over the past year for allowing AI-generated music to accumulate on the platform under fake or misleading artist profiles.

As we wrote in June 2025, The Velvet Sundown, an AI-generated band whose tracks appeared on users’ Discover Weekly playlists, had no label distinguishing it from a human act. The following month, we covered further outrage after AI-generated songs appeared on the official Spotify pages of deceased artists, including artists murdered decades ago, uploaded without consent from their estates.

While rivals like Deezer introduced AI-generated content tagging, Spotify stayed quiet. The Verified badge is its most substantive response to date.

To receive the badge, artist profiles must meet three criteria: consistent listener activity and intentional engagement over a sustained period (not one-time spikes); compliance with Spotify’s platform policies; and signals of a real-world artist presence, including concert dates, merchandise, and linked social accounts.

Critically, profiles that “appear to primarily represent AI-generated or AI-persona artists are not eligible for verification” at launch. Spotify says it will pair algorithmic standards with human review to identify “real artists behaving in good faith, not just filtering out bad actors.”

At launch, Spotify says more than 99% of artists that listeners actively search for will be verified, representing hundreds of thousands of artists, the majority independent, spanning genres, career stages, and geographies.

The company is explicitly deprioritising “functional music creators and content farms whose content is primarily designed for passive or background listening.”

Alongside the badge, Spotify is introducing artist detail sections (in beta) on all profiles, regardless of verification status. These surface career milestones, release activity, and touring history, described as “nutrition facts” for music that give listeners context about an artist’s authentic activity on the platform.

Artist Profile Protection, also in beta, gives artists greater control over what appears on their own profiles.

An artist who lacks the badge at launch is not permanently excluded; Spotify says verification will continue on an ongoing basis across its catalogue of millions of profiles.






Researchers at the University of Washington have developed a new prototype system that could change how people interact with artificial intelligence in daily life. Called VueBuds, the system integrates tiny cameras into standard wireless earbuds, allowing users to ask an AI model questions about the world around them in near real time.

The concept is simple but powerful. A user can look at an object, such as a food package in a foreign language, and ask the AI to translate it. Within about a second, the system responds with an answer through the earbuds, creating a seamless, hands-free interaction.

A Different Approach To AI Wearables

Unlike smart glasses, which have struggled with adoption due to privacy concerns and design limitations, VueBuds takes a more subtle approach. The system uses low-resolution, black-and-white cameras embedded in earbuds to capture still images rather than continuous video.

These images are transmitted via Bluetooth to a connected device, where a small AI model processes them locally. This on-device processing ensures that data does not need to be sent to the cloud, addressing one of the biggest concerns around wearable cameras.

To further enhance privacy, the earbuds include a visible indicator light when recording and allow users to delete captured images instantly.

Engineering Around Power And Performance Limits

One of the biggest challenges the research team faced was power consumption. Cameras require significantly more energy than microphones, making it impractical to use high-resolution sensors like those found in smart glasses.

To solve this, the team used a camera roughly the size of a grain of rice, capturing low-resolution grayscale images. This approach reduces battery usage and allows efficient Bluetooth transmission without compromising responsiveness.

Placement was another key consideration. By angling the cameras slightly outward, the system achieves a field of view between 98 and 108 degrees. While there is a small blind spot for objects held extremely close, researchers found this does not affect typical usage.

The system also combines images from both earbuds into a single frame, improving processing speed. This allows VueBuds to respond in about one second, compared to two seconds when handling images separately.
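The merged-frame idea can be sketched as follows. The frame dimensions, dtype, and side-by-side layout below are illustrative assumptions, not details from the paper; the point is simply that stitching both captures into one image lets the vision model run a single inference pass instead of two.

```python
import numpy as np

def combine_frames(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Stitch two grayscale frames side by side into a single frame.

    Merging the left and right earbud captures means the AI model is
    invoked once on the combined image rather than once per earbud.
    """
    if left.shape[0] != right.shape[0]:
        raise ValueError("frames must share the same height")
    return np.hstack([left, right])

# Two hypothetical low-resolution grayscale captures (height x width, uint8).
left = np.zeros((240, 320), dtype=np.uint8)
right = np.full((240, 320), 255, dtype=np.uint8)

combined = combine_frames(left, right)
print(combined.shape)  # (240, 640): one model call covers both views
```

Halving the number of model invocations is one plausible way a system like this could cut response time from roughly two seconds to one, since per-call overhead is paid once rather than twice.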

Performance Compared To Smart Glasses

In testing, 74 participants compared VueBuds with smart glasses such as Meta’s Ray-Ban models. Despite using lower-resolution images and local processing, VueBuds performed similarly overall.

The report showed participants preferred VueBuds for translation tasks, while smart glasses performed better at counting objects. In separate trials, VueBuds achieved accuracy rates of around 83–84% for translation and object identification, and up to 93% for identifying book titles and authors.

Why This Matters And What Comes Next

The research highlights a potential shift in how AI-powered wearables are designed. By embedding visual intelligence into a device people already use, the system avoids many of the barriers faced by smart glasses.

However, limitations remain. The current system cannot interpret color, and its capabilities are still in early stages. The team plans to explore adding color sensors and developing specialised AI models for tasks like translation and accessibility support.

The researchers will present their findings at the Association for Computing Machinery Conference on Human Factors in Computing Systems in Barcelona, offering a glimpse into a future where everyday devices quietly become intelligent assistants.
