Meta ends Sama contract after Kenyan workers report seeing intimate footage from Ray-Ban smart glasses users


TL;DR

Meta ended its contract with Sama after Kenyan data annotation workers told Swedish journalists they had viewed intimate footage, including people having sex, undressing, and using the toilet, captured by Meta’s Ray-Ban smart glasses. The 1,108 workers received six days’ notice. A class action lawsuit, UK and Kenyan regulatory investigations, and an EFF advisory followed. The case exposes the human infrastructure beneath AI: the workers who train the models see everything, own nothing, and lose their jobs when they talk about it.

In February 2026, workers at Sama, a Nairobi-based outsourcing company contracted by Meta, told Swedish newspapers Svenska Dagbladet and Göteborgs-Posten that they had been reviewing footage captured by users of Meta’s Ray-Ban smart glasses. The footage included people having sex, going to the toilet, undressing, and handling bank details. The workers’ job was to label the content so that Meta’s AI systems could learn to interpret what the glasses see. Less than two months after the investigation was published, Meta ended its contract with Sama, and on 16 April the company issued formal redundancy notices to 1,108 employees. Meta said Sama “don’t meet our standards.” Sama rejected the characterisation and said it had received no notification of any failure. Naftali Wambalo, co-founder of the Africa Tech Workers Movement, alleged the real reason was simpler: Meta was retaliating against the workers who spoke out. Meta has not responded to that allegation. The people who trained the AI saw what the glasses see. Then they lost their jobs.

The glasses

Meta sold more than seven million pairs of Ray-Ban smart glasses in 2025, more than tripling its previous year’s volume. The product line has since expanded to include prescription models designed to reach the billions of people who already buy corrective eyewear, converting what was a novelty into something closer to a default. The glasses record video, capture photos, stream audio, and route queries through Meta AI, which processes images and voice commands either on-device or in the cloud. A small LED on the frames illuminates when the camera is active, which Meta has described as a privacy safeguard. The light is designed for the people around the wearer, not for the wearer themselves. It tells strangers that they are being recorded. It does not tell them that the recording may be reviewed by a human being in a different country, sitting at a desk in Nairobi, labelling what they see so that an algorithm can learn the difference between a kitchen and a bedroom, a handshake and an embrace, a document and a face.

Meta’s privacy policy for the glasses states that users who opt into sharing data for AI training purposes allow their footage to be processed by the company’s AI systems. The policy does not dwell on the human layer between the camera and the algorithm. AI training data does not label itself. Before a model can learn to interpret a scene, a person must first watch the scene and describe it. The Swedish investigation revealed what that process looks like in practice: workers in Kenya, employed by a third-party contractor, viewing the most private moments of strangers’ lives, cataloguing them, and moving on to the next clip. The footage was not anonymised before review. The workers could see faces, bodies, and personal documents. They had no way to contact the people being filmed, no mechanism to flag footage they believed had been captured without consent, and no authority to refuse the work without risking their employment.
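The annotation step the investigation describes can be pictured as a simple data structure: before any model sees a clip, a human reviewer watches it and attaches labels. The sketch below is purely illustrative — the field names are invented, and Meta's actual annotation schema is not public. The `flagged` field is included only to underline the article's point that, per the workers, no such escalation path existed in practice.

```python
# Hypothetical sketch of a video-annotation record, illustrating the human
# labelling step described above. All field names are invented for
# illustration; Meta's actual schema is not public.
from dataclasses import dataclass, field

@dataclass
class AnnotationRecord:
    clip_id: str                  # identifier for the captured footage
    annotator_id: str             # the human reviewer who watched the clip
    scene_labels: list = field(default_factory=list)  # e.g. ["kitchen"]
    objects: list = field(default_factory=list)       # e.g. ["stove", "pan"]
    contains_faces: bool = False  # personal data visible in frame
    flagged: bool = False         # per the workers, no flagging mechanism existed

record = AnnotationRecord(
    clip_id="clip-0001",
    annotator_id="worker-042",
    scene_labels=["bedroom"],
    contains_faces=True,
)
```

Every record like this implies a person who watched the footage — which is the layer the privacy policy does not dwell on.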

The workers



Sama was founded in 2008 as a social enterprise with the stated mission of providing dignified digital work to people in low-income communities. The company has operations in Kenya, Uganda, and India, and has provided data annotation services to some of the largest technology companies in the world, including Google, Microsoft, and Meta. The contract with Meta for smart glasses data annotation was one of several Sama held with the company. Workers were tasked with labelling images and video captured by the glasses to train Meta’s AI models, a process that required them to view, categorise, and describe whatever the cameras had recorded.

The Swedish investigation, published in late February 2026, reported that workers described seeing users engaged in sexual activity, using the toilet, undressing, and displaying financial information on screen. The content was not exceptional. It was the ordinary residue of a camera worn on someone’s face throughout the day, capturing whatever the wearer happened to be looking at. The workers told the journalists that the experience was distressing but that they had limited options: the work paid better than most available alternatives, and Sama’s contracts typically included non-disclosure agreements that discouraged public discussion of the content they reviewed. When the Swedish publications broke the story, they gave the workers a voice they had not previously been permitted to use.

On 16 April, less than seven weeks after the investigation was published, Sama notified 1,108 employees that their positions were being made redundant. The workers received six days’ notice. Meta’s statement attributed the termination to Sama’s failure to meet its standards, but declined to specify which standards had been breached or when the assessment was made. Sama said it was “surprised and disappointed” by Meta’s decision and that it had not been informed of any performance shortfalls prior to the termination. The timing was noted by labour advocates, regulators, and the workers themselves. Wambalo, whose organisation represents data workers across the continent, described Meta’s reasoning as a cover for retaliation: the company, he said, was enforcing “standards of secrecy” rather than standards of quality.

The pattern

This is not the first time Sama’s relationship with Meta has ended in controversy. Between 2019 and 2023, Sama employed content moderators in Nairobi who reviewed posts flagged as potentially violating Facebook’s community standards. The work required moderators to view graphic violence, sexual abuse, hate speech, and other disturbing material for hours each day, often at wages as low as $1.50 per hour. A 2022 investigation by Time magazine found that 81 per cent of 144 Sama content moderators who underwent clinical assessment were diagnosed with “severe” or “extremely severe” symptoms of post-traumatic stress disorder. Former workers filed lawsuits in Kenya alleging that Sama and Meta had subjected them to conditions amounting to human trafficking and had interfered with their attempts to form a union. Sama later said publicly that it “regretted” taking on the content moderation work, and exited the business in 2023 to focus on what it described as less harmful data annotation services.

The smart glasses contract was supposed to be different. Data annotation, labelling images and video to train AI, is generally considered less traumatic than content moderation, which requires workers to confront the worst material humans produce. But the Swedish investigation revealed that the distinction depends entirely on what the AI is being trained to see. When the AI is attached to a camera worn on someone’s face throughout the day, the training data is their life. The workers who labelled Meta’s smart glasses footage were not reviewing content that users had chosen to upload to a platform. They were reviewing content that a camera had passively captured, often without the knowledge or meaningful consent of the people being filmed. The nature of the work had changed, but the structural dynamic had not: a Silicon Valley company outsourcing the human cost of its AI ambitions to workers in East Africa who lack the bargaining power to set the terms of their own labour.

The response

The regulatory and legal response has been swift by the standards of technology enforcement. The UK Information Commissioner’s Office wrote to Meta in early March, calling the Swedish report “concerning” and requesting information about how data captured by the glasses is processed, stored, and reviewed. The Office of the Data Protection Commissioner in Kenya announced an investigation into whether the glasses’ data collection practices comply with Kenyan data protection law. In the United States, the Clarkson Law Firm filed a class action lawsuit on behalf of consumers, alleging that Meta engaged in false advertising by marketing the glasses as “designed for privacy, controlled by you” while routing user footage through a human review pipeline in a country with weaker data protection enforcement than the markets where the glasses are sold. The Electronic Frontier Foundation published an advisory titled “Think Twice Before Buying or Using Meta’s Ray-Bans,” warning that the glasses’ AI features allow “all parts of their life to be recorded, and then reviewed, either by the AIs or by humans behind it.”

Privacy complaints against Meta for using personal data to train AI have been mounting across the European Union, where noyb filed 11 simultaneous complaints with national data protection authorities alleging that Meta’s AI training practices violate the General Data Protection Regulation. The complaints focus on Meta’s decision to process user data under a “legitimate interest” basis rather than seeking explicit consent. The smart glasses controversy adds a physical dimension to what had been a largely digital dispute: it is one thing to train AI on posts users wrote on Facebook, and another to train it on footage of people in their bedrooms, captured by a device and reviewed by a stranger. Meta has argued that European privacy regulations are “stifling” AI innovation and that pre-emptive regulation of “theoretical harms” will prevent European businesses from benefiting from AI advances. The harms documented by the Swedish investigation are not theoretical. They are workers in Nairobi who watched strangers undress and were then told their jobs no longer existed.

The infrastructure beneath the intelligence

Meta’s AI ambitions require an enormous volume of human-labelled training data. The company is building an AI clone of Mark Zuckerberg for its employees, developing the Muse Spark model to power its platforms, and expanding the glasses’ AI capabilities to include real-time visual understanding, object identification, and conversational assistance. Each of these products depends on the same pipeline: humans look at data, describe what they see, and their descriptions become the instructions that teach the model what the world looks like. When that pipeline involves a contractor, the humans become invisible. They do not appear in Meta’s product announcements, earnings calls, or marketing materials. They appear only when something goes wrong, when a Swedish newspaper publishes an investigation, or when a contractor breach exposes the fragility of the training operation.
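The pipeline described above is, at its core, supervised learning: human descriptions become the training targets, so a model can only ever "know" what the annotators told it. The toy sketch below makes that dependency explicit — it is a deliberately minimal stand-in, not Meta's pipeline, and the frame names and labels are invented.

```python
# Minimal, purely illustrative sketch of the supervised-learning dependency
# described above: the model's entire knowledge of the world is whatever the
# human annotators wrote down. Not Meta's pipeline; all data is invented.

# Human annotators map raw footage to descriptions.
human_labels = {
    "frame_001": "kitchen",
    "frame_002": "bedroom",
    "frame_003": "kitchen",
}

def train(labelled_data):
    """A stand-in 'model' that simply memorises the annotators' work.

    A real model generalises from the labels rather than memorising them,
    but the source of its knowledge is the same: human descriptions.
    """
    model = {}
    for frame, label in labelled_data.items():
        model[frame] = label
    return model

model = train(human_labels)
print(model["frame_002"])  # "bedroom" — exactly what the annotator wrote
```

Remove the human labels and the pipeline produces nothing, which is why the invisibility of the labelling workforce is a structural problem rather than a staffing detail.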

Mercy Mutemi, the Kenyan human rights lawyer who leads the Oversight Lab, told the BBC that the pattern of outsourcing AI’s human costs to East African workers represents a structural failure, not an aberration. “This is a very flimsy foundation to build your entire industry on,” she said. The industry she is describing is worth trillions of dollars. The foundation she is describing is a workforce paid data annotation wages in Nairobi, given six days’ notice when the contract ends, and prevented by non-disclosure agreements from telling anyone what they saw. Meta’s smart glasses are designed for privacy, controlled by the user. The question the Swedish investigation answered is which user: the person wearing the glasses, or the person in Nairobi who watched the footage and lost their job for talking about it.






Recent Reviews


The Samsung Keyboard supports glide typing, voice dictation, multiple languages, and deep customization through Good Lock. On paper, it’s a very capable and perfectly functional keyboard. However, it’s only when I started using it that I realized great features don’t necessarily translate to a great user experience. Here’s every problem I faced with the Samsung Keyboard, and why I’m permanently sticking with Gboard as my main Android keyboard.

I have been using Gboard and the Samsung Keyboard on a recently bought Galaxy S24, which I got at a massive discount.

Google’s voice typing doesn’t cut me off mid-sentence

Fewer corrections, fewer cutoffs, faster dictation

I might be a professional writer, but I hate typing—whether it’s on a physical keyboard or a virtual one. I type slower than I think, which I suspect is true for most people. That becomes a problem when I have multiple ideas in my head and need to get them down fast. It’s happened far too often: I start typing one idea and forget the other. Since jacking my brain into a computer isn’t an option (yet), I’ve been leaning more and more on voice typing as the fastest way to capture my thoughts.

Now, both Samsung Keyboard and Gboard support voice typing, but I’ve noticed that Gboard with Google’s voice engine is just better at transcription accuracy. It picks up on accents flawlessly and manages to output the right words. In my experience, it also seems to have a more up-to-date dictionary. When I mention a proper noun—something recently trending like a video game or a movie name—Samsung’s voice typing fails to catch it, but Google nails it.

That said, you can choose Google as your preferred voice typing engine inside Samsung Keyboard, but it’s a buggy experience. I’ve noticed that the transcription gets cut off while I’m in the middle of talking—even when I haven’t taken a long pause. This can be a real problem when I’m transcribing hands-free.

Gboard offers a more accurate glide typing experience

Google accurately maps my swipe gestures to the right words

Voice typing isn’t always possible, especially when you’re in a crowded place and want to be respectful (or secretive). At times like these, I settle for glide (or swipe) typing. It’s generally much faster than tapping on the keyboard—provided the prediction engine maps your gestures to the right word. If it doesn’t, you have to delete that word, draw that gesture again, or worse—type it out manually.

Now, both Samsung Keyboard and Gboard support glide typing, but I’ve noticed Gboard is far more accurate. That said, when I researched this online, I found a 50-50 divide—some people say Gboard is more accurate, others say Samsung is. I do have a theory on why this happens.

Before my Galaxy S24, I used a Pixel 6a, before that a Xiaomi, and before that a Nokia 6.1 Plus. All of my past smartphones came with Gboard by default. I believe Gboard learned my typing patterns over time—what word correlates to what gesture, which corrections I accept, and which ones I reject. After a decade of building up that prediction model, Gboard knows what I mean when my thumb traces a particular shape. Samsung Keyboard, on the other hand, is starting from zero on this Galaxy S24—leading to all the prediction errors. At least that’s my working theory.

There’s also the argument for muscle memory. While glide typing, you need to hit all the correct keycaps for the prediction engine to work. If you’re even off by a slight amount, the prediction model might think you meant to hit “S” instead of “W.” Now, because of my years of typing on Gboard, it’s likely that my muscle memory is optimized for its specific layout and has trouble adapting to Samsung’s.
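The muscle-memory point can be made concrete: glide engines resolve each touch point to the nearest keycap, so even a small shift in key positions between layouts changes which key a practised gesture lands on. The sketch below is a toy nearest-key lookup with invented coordinates — it is not how either keyboard's prediction engine actually works, just an illustration of why the same thumb position can read as "W" on one layout and "S" on another.

```python
# Toy nearest-key lookup illustrating why a small layout shift changes which
# keycap a practised swipe point resolves to. Coordinates are invented.
import math

# Hypothetical keycap centres (x, y) for two keys on a familiar layout.
layout_a = {"W": (2.0, 1.0), "S": (2.0, 2.0)}
# The "same" keys on a second layout, shifted slightly downward.
layout_b = {"W": (2.0, 0.4), "S": (2.0, 1.4)}

def nearest_key(layout, point):
    """Resolve a touch point to the closest keycap centre."""
    return min(layout, key=lambda k: math.dist(layout[k], point))

touch = (2.0, 1.2)  # where years of muscle memory put the thumb
print(nearest_key(layout_a, touch))  # "W" on the familiar layout
print(nearest_key(layout_b, touch))  # "S" on the shifted one
```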


I mix three languages in one message, and Gboard just gets it

Predictive multilingual typing doesn’t get any better than this

I’m trilingual—I speak English, Hindi, and Bengali. When I’m messaging my friends and family, we’re basically code-mixing—jumping between languages in the same sentence using the Latin alphabet. Now, my friends and I have noticed that Gboard handles code-mixing much more seamlessly than Samsung Keyboard.

If you just have the English dictionary enabled, neither keyboard can guess that you’re trying to transliterate a different language into English. It’ll always try to autocorrect everything, which breaks the flow. The only way to fix this is by downloading a transliteration dictionary like Hinglish (Hindi + English) or Bangla (Latin). Both Samsung Keyboard and Gboard support these dictionaries, but the problem with Samsung Keyboard is that it can only use one dictionary at a time.

Let’s say I’m writing something in Latinized Bangla and suddenly drop a Hindi phrase. Samsung Keyboard will attempt to autocorrect those Hindi words. Gboard is more context-aware. Since my Hinglish keyboard is already installed, I don’t have to manually switch to it. Gboard can detect that I’m using a Hindi word even with the English or Bangla keyboard enabled, and it won’t try to autocorrect what I’m writing. This also works flawlessly with glide typing, which is a huge quality-of-life improvement over Samsung Keyboard.
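The single-dictionary versus multi-dictionary difference boils down to one check: is a typed word flagged as a typo when only one dictionary is consulted, or left alone because any active dictionary recognises it? The sketch below is a toy autocorrect gate with invented word lists — not either keyboard's actual engine.

```python
# Toy autocorrect gate illustrating the single- vs multi-dictionary
# behaviour described above. Word lists are invented examples.
english = {"the", "how", "are", "you"}
hinglish = {"kaise", "ho", "theek"}
bangla_latin = {"kemon", "acho", "bhalo"}

def needs_correction(word, active_dictionaries):
    """A word is flagged only if no active dictionary recognises it."""
    return not any(word in d for d in active_dictionaries)

# Single-dictionary behaviour: a transliterated Hindi word gets flagged.
print(needs_correction("kaise", [english]))            # True -> autocorrected
# Multi-dictionary behaviour: the same word passes untouched.
print(needs_correction("kaise", [english, hinglish]))  # False -> left alone
```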

This isn’t just an India-specific thing either. Code-mixing is how billions of people type every day—Spanglish in the US, Taglish in the Philippines, Franglais across parts of Europe and Africa.

Gboard looks good without me spending an hour on it

I don’t have time for manual customization

Samsung Keyboard is hands down the more customizable option, especially if you combine it with the Keys Cafe module inside Good Lock. You get granular control over almost every aspect of the keyboard—key colors, keycaps, gesture animations, and a whole lot more. For some users, this is heaven; I just find it overcomplicated and a massive time sink.

I don’t have the patience to sit and adjust every visual detail of my keyboard. Sure, it gets stale after a while, and you’d want to freshen it up, but I don’t want to spend the better part of an hour tweaking a virtual keyboard. This is where Gboard wins (at least for me) by doing less.

Android 16 brings Material 3 Expressive, which automatically themes your system apps using your wallpaper’s color scheme. With Gboard, all you have to do is change the wallpaper, and the keyboard updates to match—no Good Lock, no manual color picking. It’s a cleaner, more seamless way to keep your phone looking good without putting in the extra legwork.


The keyboard you don’t think about is the one that’s working

I didn’t switch to Gboard because Samsung Keyboard was broken. I switched because Gboard made typing feel effortless. If you’re a Samsung user who’s never tried it, it’s a free download and a five-second switch. You might not go back either.



