The Hidden Workforce Behind Artificial Intelligence

Artificial intelligence may seem like a triumph of algorithms and code, but behind every smart system lies a vast human workforce. These unseen workers, often in developing countries, perform the gruelling tasks that AI requires to function: reviewing endless streams of violent or explicit content, annotating images and videos, and cleansing training data. This invisible labour underpins machine learning models, yet the individuals performing it remain largely hidden from view.

This global army of data labellers and content moderators pays a steep price for the digital progress we celebrate. Many spend their days immersed in traumatic imagery: scenes of abuse, violence, and other disturbing content, as they teach AI models how to recognise and filter such material. They do this work for low wages (sometimes only a few dollars per hour) and often without adequate mental health support. Yet, to add insult to injury, most are contractually forbidden from speaking about what they endure.

Major tech companies routinely require these workers to sign strict non-disclosure agreements (NDAs) that bar them from discussing their jobs or the content they see. In effect, the very firms that tout ethics and transparency in AI are using legal gag orders to silence their frontline workers, creating a climate of fear and self-censorship.

NDAs: The Machinery of Silence in AI Work

NDAs are intended to protect proprietary information, but in the AI industry, they have become a tool for silencing. These agreements commonly extend far beyond any trade secrets; workers report being warned not to talk about anything related to their job, even with close family or therapists. The possibility of breaching an NDA (and facing immediate termination or legal action) looms constantly over them. Much like an all-seeing surveillance camera, the NDA’s threat is often enough to enforce compliance without anyone having to watch.

Workers internalise this pressure. They carefully weigh every word, wondering: What can I safely say, to whom, and when? The result is a pervasive culture of silence. Data labellers have described the NDAs they signed as feeling like traps, admitting they suffer nightmares from the horrific content they review, yet fearing even to discuss those traumas with a therapist lest they violate the agreement.

From the very start, new hires are made to understand that breaking the NDA will have severe consequences. This chills any impulse to seek help or solidarity. An interview-based investigation by the human rights group Equidem illustrated how deeply this fear runs. Researchers reached out to hundreds of content moderators and data annotators in countries like Kenya, Colombia, Ghana, and the Philippines, and a majority refused to speak at all. In Colombia, 75 out of 105 workers declined to be interviewed; in Kenya, 68 out of 110 would not talk. The reason was clear and consistent: workers had been cowed into silence by NDAs and the constant reminder that “you could be fired or sued if you say anything.”

As one Kenyan labour organiser noted, many moderators are shaking with fear over what they have signed. Some won’t even utter the acronym “NDA” aloud, worried that doing so could get them in trouble. This self-imposed gag order shows how effective the machinery of silence has become.

The Invisible Workforce and Its Trauma

The suppression of speech might be somewhat defensible if these jobs were mundane and harmless, but they are far from it. In truth, content moderation and data labelling can be psychologically brutal work. Each day, an AI content moderator might have to watch hundreds or even thousands of pieces of extreme content to filter out the worst of the internet.

Studies have found moderators often review up to 1,000 disturbing videos in a single shift. These could include graphic violence, sexual abuse, torture, and other traumatising scenes. They must make split-second decisions on each item, with minimal breaks, under intense pressure to meet accuracy and speed targets. Over time, this exposure takes a severe toll on mental health, yet NDAs ensure that many workers suffer in silence.

Multiple investigations have documented widespread psychological injuries among this hidden workforce. In Equidem’s report “Scroll. Click. Suffer,” researchers recorded over 60 cases of serious mental health issues, such as depression, anxiety, post-traumatic stress disorder (PTSD), insomnia, and even suicidal thoughts, among content moderators. In addition, at least 76 workers reported physical symptoms believed to be stress-related: chronic fatigue, debilitating migraines, panic attacks, and other signs of burnout.

These figures likely understate the problem, given that they come only from those brave enough to speak up despite the NDAs. The true scale of trauma is undoubtedly larger. Front-line workers describe experiencing intrusive flashbacks of horrific videos and enduring constant anxiety. Some turn to substances or develop health problems, and many feel isolated because their NDAs forbid even a conversation with friends, family, or professional counsellors about what they’re going through.

NDAs also prevent them from getting help. For instance, a moderator might desperately want to tell a therapist about disturbing images that haunt his dreams, but he’s unsure if describing them would breach his contract. This kind of gag clause effectively forces people to bottle up their trauma. “I have kept everything bottled inside of me,” one young data labeller from Kenya said, explaining that he couldn’t even confide in his own family about the things he’d seen at work.

Such isolation exacerbates the psychological harm. It’s a cruel irony: the technology sector prides itself on innovation and well-being, yet it has created a subclass of workers who quietly absorb the worst of humanity online and must suffer alone.

A System of Outsourcing and Impunity

Why do tech companies lean so heavily on NDAs and third-party contractors? The answer lies in how the AI industry has structured its labour supply chain. Most content moderators and annotators aren’t hired directly by tech giants. Instead, they are employed through layers of outsourcing: specialised subcontractors or Business Process Outsourcing (BPO) firms based in regions with cheaper labour and often weaker labour protections. This setup isn’t incidental; it’s strategic.

By outsourcing the most gruelling tasks to external vendors in the Global South, big tech companies shield themselves from liability and public scrutiny. The outsourcing firms handle the dirty work (both literally and figuratively), while the platforms can claim ignorance or distance if abuses occur. NDAs reinforce this arrangement by preventing workers from revealing any “dirty laundry” about working conditions.

This architecture of impunity allows the AI profit machine to hum along with minimal accountability. Tech corporations reap the benefits of human moderation, cleaner platforms, and well-curated training data, but if something goes wrong, responsibility is fragmented. For example, consider the tragic case of Ladi Anzaki Olubunmi, a content moderator in Kenya reviewing TikTok videos under contract with the outsourcing giant Teleperformance. In 2022, after repeatedly complaining of extreme workloads and exhaustion, Ladi collapsed and died at work.

Her grieving family said she had been pushed beyond human limits by relentless quotas and stress. Yet TikTok’s parent company, ByteDance, faced essentially no consequences. ByteDance could point out that Ladi was not its direct employee; she worked for a contractor, and NDAs ensured that co-workers and staff could not speak freely about the incident.

It’s a stark illustration of how layered subcontracting plus strict NDAs create a vacuum of accountability. Platforms get to distance themselves from the very real human costs of keeping their services “clean” and operational.

Meanwhile, on a day-to-day level, the work conditions imposed on these moderators are often inhumane. Time and again, investigations have uncovered facilities where moderators are expected to meet punishing targets under constant surveillance and pressure. They might have to sign waivers acknowledging the job could affect their mental health, yet receive inadequate counselling or time off.

Some report being denied breaks or pushed to meet higher quotas even after traumatic incidents. Efforts to organise for better conditions are stifled not only by the threat of losing one’s job in economies with scarce alternative employment, but also by NDAs that prohibit discussing workplace issues with colleagues or labour organisers.

In essence, NDAs help keep workers isolated from one another, undercutting collective action. By banning talk of wages, workloads, or stress, these agreements make it extremely difficult for workers to unionise or even collectively demand improvements, since doing so could be construed as “disparaging the company” or leaking confidential information.

This systematic silencing is no accident; it’s highly profitable. It lets tech firms extract maximum value from a hidden workforce while minimising the risk of public backlash or legal liability.

The status quo is grim, but it is not going entirely unchallenged. Increasingly, labour advocates and even some governments are recognising that NDAs have been abused to cover up exploitation.

Likewise, campaigns are emerging to demand that NDAs be scaled back to their original purpose of protecting genuine trade secrets, and no longer be used to bar workers from talking about basic working conditions or illegal conduct. In some countries, there are pushes to strengthen whistleblower protections and to ensure that subcontracted workers in digital industries have the same free speech and organising rights as any other employee.

These efforts aim to pierce the veil of secrecy that tech companies have draped over their labour practices.

The core of what these workers demand is modest: the freedom to speak about their lives without fear. They want to be able to tell their families what they do at work, to debrief with a therapist about the horrors they have witnessed, or to meet with coworkers to discuss forming a union, all without risking their livelihoods or facing a lawsuit. “All we want is to share the burden of what we’ve seen without being punished for it,” one moderator explained. That is not an unreasonable ask.

Granting this freedom would mean revising NDAs so they cannot be used to conceal potential rights violations or mental health crises. It would mean tech companies taking responsibility for the well-being of all the people who make their AI products possible, even if those people are on another continent. It might also mean consumers and regulators demanding more transparency about how AI systems are built, including the labour conditions behind the algorithms we interact with. When the people training an AI are too afraid to reveal what that training entails, society loses insight into the true costs of our technology. We should all be concerned about that.

Content moderators and data labellers are not footnotes in the AI story; they are essential authors of it, and their well-being matters. Ensuring that they can talk about their experiences safely is a critical first step toward aligning the tech industry with basic human rights. In the end, an AI-driven future that is built on transparency, accountability, and respect for labour will be better for everyone. A digital future rooted in secrecy and suppression is no future at all.
