To make artificial intelligence sound more human, tech companies are imposing increasingly inhuman working conditions on the people who train it. The pursuit of answers that "sound more human" has produced a system in which thousands of contract workers face intense pressure, psychological distress, and a growing sense of futility.
The job of an AI rater is a mental grind. Many initially enjoy the challenge of refining the AI's responses, but as deadlines tighten, the work becomes a frantic rush. A task that once allowed 30 minutes of thoughtful consideration must now be completed in a fraction of that time, undermining the very quality control it is meant to provide. This relentless pace has led to widespread burnout.
For many, the psychological burden goes beyond stress. Workers hired for writing or analysis roles find themselves unexpectedly moderating violent and sexually explicit content generated by the AI. This exposure, coupled with a lack of institutional support, has resulted in anxiety and panic attacks, a clear sign of the hidden human cost of creating “safe” AI.
The irony is that the people closest to the technology are the ones who trust it the least. They see the shortcuts, the unverified information, and the ethical compromises made in the name of progress. The result is a workforce that feels complicit in building a product they believe is not ready or safe for the public, a heavy burden for those tasked with making AI more “human.”