An investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten has revealed that contract workers in Kenya, hired by the outsourcing firm Sama to provide data annotation services that help train Meta's AI systems, are routinely exposed to personal images and videos captured by users of the company's Ray-Ban smart glasses.
The report, published on February 27, shines a light on the hidden human labour behind Meta’s push into wearable artificial intelligence, and raises fresh questions about data protection, cross-border data transfers, and the psychological toll on content moderators working for Sama in Nairobi.
Meta’s Ray-Ban smart glasses, developed in partnership with EssilorLuxottica, are marketed as an AI-powered assistant that can translate languages, describe surroundings, capture hands-free photos and videos, and answer questions about what a user is seeing.
However, beyond the futuristic pitch, Svenska Dagbladet's interviews with current and former Sama and Meta employees revealed that footage recorded through the glasses ends up thousands of kilometres away in Kenya, where data annotators review and label it to improve the system's performance.
Privacy, quietly broken
Several Kenyan workers told the Swedish newspaper that they regularly encounter sensitive material in the course of their work, ranging from ordinary household scenes to intimate moments that users may not have realised were being captured.
In some cases, workers said, footage includes financial information such as bank cards visible in the frame, or recordings made in private spaces like bedrooms and bathrooms.
“In some videos, you can see someone going to the toilet, or getting undressed,” one Sama worker told the reporters. “I don’t think they know, because if they knew, they wouldn’t be recording.”
Another contractor claimed they reviewed footage showing the wearer of the glasses setting them down on a bedside table, only for their wife to walk into the room and undress, presumably unaware she was being watched. Other footage reportedly showed wearers watching pornography or even recording themselves having sex.
According to the investigation, there was little transparency about how the wearables handle data. Retailers in Europe reportedly gave inconsistent information about whether data captured by the glasses remains on the device or is transmitted to Meta's servers. Independent testing cited in the report indicated that many of the glasses' AI features require cloud connectivity, meaning images and voice inputs can be processed remotely rather than locally on the device.
The Sama connection
Sama, formerly Samasource, provides data annotation services to large technology companies like Meta and OpenAI. The company has been accused in the past of labour violations in some of its contracts, particularly with OpenAI.
Sama requires strict confidentiality agreements that limit what employees can publicly disclose. But the accounts published by the Swedish newspapers suggest that the promise of frictionless AI is powered by a labour system in which human reviewers sift large volumes of raw, unfiltered data so algorithms can learn to recognise objects, environments, and context.
Meta states in its privacy policies that user content may be subject to human review to improve products and ensure safety. For European users, the company’s Irish subsidiary is responsible for compliance with the EU’s General Data Protection Regulation (GDPR).
However, the investigation raises questions about how data collected in Europe or the United States is transferred and processed in countries such as Kenya, which do not have an EU adequacy decision recognising their data protection regimes as equivalent to GDPR.
While data annotation, content moderation, and AI training have become critical to Nairobi’s tech ambitions, these jobs—primarily for college students and young graduates—come with low pay, heavy workloads, and exposure to disturbing material.
Meta has defended its practices in previous public statements, saying it invests in privacy safeguards and minimises the amount of data used for training. Still, the accounts published by the Swedish papers suggest that the line between automated intelligence and human oversight is blurrier than many consumers assume.