The Ghosts in the Machine
You’ve just asked a chatbot for a recipe. Your car has braked on its own. Impressive, right? The tech magic feels seamless. But this convenience has a dark side, and a human price. Behind the intelligent algorithms stands an army of people who make the AI revolution work. This is their story.
We tend to imagine AI as purely autonomous. That is a dangerous fantasy. In reality, these systems are trained on millions of data points, every one of which has to be classified by a human hand. This is known as data labeling, and it is the industry’s dirty secret.
More Than Clicks: The Tech Work No One Talks About
What does this work actually look like? Imagine staring at a screen for nine hours. Your task is to go through a thousand street scenes and draw boxes around every pedestrian. That is how a self-driving car’s AI learns to see. Now imagine the same task, but with violent or explicit content that you must label so the AI learns to filter it. This is the daily reality for thousands of people.
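To make that concrete, here is a minimal sketch of what a single bounding-box annotation might look like once a worker finishes one image. The field names and structure are hypothetical, purely for illustration; real labeling platforms each use their own schemas.

```python
# Hypothetical example of one labeled street scene, roughly as a worker might produce it.
# Field names and structure are illustrative only; real annotation tools differ.
annotation = {
    "image_id": "street_scene_00421.jpg",
    "labeler_id": "worker_7313",        # the human behind the label
    "time_spent_seconds": 94,           # often tracked to enforce quotas
    "objects": [
        {"label": "pedestrian", "bbox": [312, 188, 58, 131]},  # x, y, width, height in pixels
        {"label": "pedestrian", "bbox": [540, 201, 49, 122]},
        {"label": "cyclist",    "bbox": [102, 240, 77, 140]},
    ],
}

# A self-driving model consumes millions of records like this one,
# each drawn by hand, box by box.
print(f"{len(annotation['objects'])} objects labeled in {annotation['time_spent_seconds']}s")
```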
It is psychologically grueling work, and it is digitally fragmented. Workers face impossible quotas, constant algorithmic surveillance, and pay tied to the number of clicks per hour. The result is a high-pressure digital assembly line: the foundational, but brutal, labor of the modern tech industry.
The Global Labor Supply Chain
So where is this workforce? Major tech companies outsource this labor at scale through layers of subcontractors, a practice that distances them from responsibility. The work often ends up in countries such as Kenya, Venezuela, and the Philippines, where people are in desperate need of the income.
This system gives the tech giants cheap labor and plausible deniability. They set the punishing data quotas, yet they do not dictate the labor standards. This is the ethical blind spot of the global AI supply chain.
A project manager at an American AI startup admitted:
“We don’t ask how the data gets labeled. We just need it fast and cheap. The model is all that matters.”
A Case Study in Trauma
Let’s get specific. In 2023, Time magazine reported on a company called Sama, which held a contract with OpenAI. The Kenyan workers it employed earned less than two dollars an hour. Their task? Reading and labeling extremely graphic text to make ChatGPT safer.
These workers pored over detailed accounts of sexual violence, hate speech, and murder for hours on end. The psychological toll was devastating: many reported chronic nightmares and acute anxiety. The company provided minimal counseling. This case is not an outlier; it is how the industry routinely operates.
Why Can’t AI Fix This Itself?
You might ask: won’t AI automate this away at some point? That’s the promise, isn’t it? The irony is that progress usually makes the problem worse. More complex models require ever more sophisticated training data, and that data demands nuanced human judgment.
Think about explaining sarcasm to a child: you have to spell out the context. AI is no different. It needs high-quality, human-annotated examples to learn subtlety. As a result, demand for this human labor is skyrocketing, and the global workforce already numbers in the millions.
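As a rough sketch of what “human-coded examples” means in practice, here is what a handful of labeled training records for a sarcasm filter might look like. The texts and label names are invented for illustration; real datasets contain millions of such rows, each one a judgment call made by a person.

```python
# Hypothetical labeled examples for teaching a model subtlety such as sarcasm.
# Every label below represents a human judgment; texts and labels are invented.
training_examples = [
    {"text": "Oh great, another Monday. Living the dream.", "label": "sarcastic"},
    {"text": "I genuinely love starting the week early.", "label": "sincere"},
    {"text": "Wow, brilliant idea. What could possibly go wrong?", "label": "sarcastic"},
    {"text": "That's a brilliant idea, let's try it.", "label": "sincere"},
]

# The model never sees *why* a line is sarcastic; it only sees
# thousands of these human verdicts and learns to imitate them.
counts = {}
for example in training_examples:
    counts[example["label"]] = counts.get(example["label"], 0) + 1
print(counts)  # {'sarcastic': 2, 'sincere': 2}
```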
The Expert Opinion: A Broken System
I interviewed a digital ethics researcher who asked to remain anonymous because they work at one of the large tech companies. Their assessment was bleak.
They explained:
“The tech industry has built itself a perfect system. It takes the credit for the AI magic, while an invisible, distributed workforce carries all the risk. This isn’t an accident. It’s a business model.”
The researcher compared it to factory conditions during the Industrial Revolution. The owners reaped enormous profits; the laborers paid with their health and dignity. The same lopsided power structure, now digital and outsourced, is what today’s AI boom is built on.
Pathways to a Fairer Future
This feels overwhelming. What can we possibly do? The solutions demand effort from all of us. To begin with, tech companies should be pushed toward radical transparency: Where does their data come from? Who labels it? Under what conditions?
We can also demand direct employment and fair benefits for these workers. Mental health support must be non-negotiable. We should strengthen unions and worker organizations in the digital economy. And imagine an ethical data certification, a kind of Fair Trade for data. It becomes achievable the moment we create market demand for it.
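To illustrate what such transparency and certification could look like in practice, here is a hypothetical provenance record that might accompany a dataset. Nothing like this is standardized today; the vendor name, fields, and thresholds are purely a sketch of the questions a “Fair Trade for data” label would have to answer.

```python
# Hypothetical dataset provenance record -- a sketch of what an ethical
# certification for training data might require. No such standard exists yet.
dataset_provenance = {
    "dataset": "street_scenes_v3",
    "labeling_vendor": "ExampleAnnotateCo",   # invented name
    "country_of_labeling": "Kenya",
    "direct_employment": False,
    "hourly_wage_usd": 1.80,
    "mental_health_support": "none",
    "worker_representation": False,
}

# A certification scheme could check records like this against minimum standards.
MINIMUM_WAGE_USD = 4.00  # illustrative threshold, not an official figure
compliant = (
    dataset_provenance["hourly_wage_usd"] >= MINIMUM_WAGE_USD
    and dataset_provenance["mental_health_support"] != "none"
)
print("Meets the sketch standard:", compliant)  # False
```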
Our Collective Reckoning with AI
The next time you use a seamless AI tool, pause and ask: who trained this? The future of tech and AI must not be founded on exploitation. We are at a crossroads.
We can choose to look away. Or we can insist that the dignity of every human being in the chain be encoded into the very design of the technology. Intelligence in our machines will mean little if we lose our own humanity in the process. The hidden human cost of AI is the tech crisis of our time. We must not ignore it.


