Kala, a 43-year-old mother of two, sits alone at her home office, in the corner of a tiny bedroom, and studies her laptop screen, trying to determine if what she’s reading is offensive.

Her part-time job, which she picked up as a way to feel “fruitful” between making meals and helping her kids with their homework, involves “content moderation.” As an independent contractor on UHRS, a crowdsourcing platform run by Microsoft, she spends hours every day studying text and photos for internet companies like Google, Microsoft, Facebook and Twitter, helping them determine what should stay and what should go.

“Do you know what this word means?” she yells to her teenage sons in the next room. “Is it something you shouldn’t say?”

As she reads the text out loud to them, they giggle as she tries to pronounce “chick flick.” They assure her that it isn’t offensive, and she clicks “no” on the screen.

“They are more qualified to recognize these words than me,” she says. “They help me keep the internet clean and safe for other families.”

Kala is just one of dozens of workers providing human services for internet and technology companies whom researchers Mary L. Gray and Siddharth Suri interviewed for their new book, “Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass” (Houghton Mifflin Harcourt), out now.

These are the men and women who work off the radar, providing so-called “human intelligence tasks” to help train the artificial intelligence we use every day, from Google searches to “smart” digital assistants like Amazon Echo.

It may not sound like demanding work, but as Kala learned after three years at UHRS, “words can mean many different things depending on who is reading and writing them,” the authors point out.

The same applies to images. “The software used by Google, Microsoft, and Facebook can’t always tell the difference between a thumb and a penis, let alone hate speech and sarcasm.”


Joan, a 39-year-old who lives in Houston with her 81-year-old mother, makes the majority of her income from MTurk, a popular crowdsourcing platform launched by Amazon in 2005 that today has more than 100,000 active users who, the authors note, are hired to “do tasks that are beyond a computer’s capacity.”

This wasn’t Joan’s original plan — she has a master’s degree in communications and previously worked full-time as a technical writer. But as her mother’s health worsened, she looked for work she could do from home, and in 2013 she fell into on-demand jobs.

Turning a spare bedroom into a home office, she assesses content on platforms like Twitter and Match.com. Although she does a variety of tasks, the best money, Joan says, comes from flagging offensive photos. Or what she calls “dollars for d–k pics.” After years of practice, these tasks earn her an average of $40 for a 10-hour day.

“I’ve worked harder at this than I ever did at any office job,” she says.

Across the globe, tens of thousands of workers just like Kala and Joan are working from home, using crowdsourcing platforms to “foster our belief in the magical promise of technology,” the authors write.

“That’s how we build algorithms,” Gray told The Post. “You build out a predictive system that’s going to anticipate what somebody will type or ask for before they do it. It needs training data so the algorithms know what to do. And that requires thousands of people annotating pictures, telling the algorithms, ‘That’s a picture of a dog.’ ”
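To make the annotation process concrete: crowd platforms commonly show the same item to several workers and reconcile their answers by majority vote before the label enters a training set. The following is a minimal illustrative sketch, not code from any platform named here; the function name `aggregate_labels` is hypothetical.

```python
from collections import Counter

def aggregate_labels(annotations):
    """Majority-vote the labels several crowd workers gave one item."""
    counts = Counter(annotations)
    label, _ = counts.most_common(1)[0]
    return label

# Three hypothetical workers label the same photo.
votes = ["dog", "dog", "cat"]
print(aggregate_labels(votes))  # dog
```

Thousands of such aggregated judgments become the “training data so the algorithms know what to do” that Gray describes.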

Gray and Suri compare ghost work to that classic scene from the movie “The Wizard of Oz,” when Dorothy and her friends realize that the Great and Powerful Oz is really just a guy pulling levers behind a curtain.

MTurk’s name was inspired by the original Mechanical Turk, a purportedly autonomous chess-playing machine created in the 18th century by Hungarian engineer and inventor Wolfgang von Kempelen.

The original Turk appeared to be entirely mechanical, its interior filled with gears, cogs and levers. But it was soon revealed to be a hoax. Hidden inside was a real human being, creating the illusion that a robot was in control.

The modern Mechanical Turk, although not quite so devious, is a similar sort of smoke-and-mirrors deception: The goal is to make consumers believe in the autonomous intelligence of robots, when really there’s a human workforce crouching behind the cogs and gears.

The rise of ghost work goes against the assumption that technology is rapidly making human workers irrelevant. A recent Associated Press headline warned that a “quarter of US workers [are] at risk” of losing work to robots. But Gray and Suri don’t believe the future is quite so dire.

“We talk about all the jobs that automation and robotics take away,” they write. “Yet these numbers confound how many jobs are created by automation.”

The great paradox of computer automation is that the desire to eliminate human labor “always generates new tasks for humans.”

While these new tasks might bring steady employment, they’re hardly lucrative. A Pew Research Center survey found that 25% of workers who earn the majority of their income from platforms like MTurk and TaskRabbit do so only because they lack other employment options. And almost 40% of them can’t afford health insurance.

For Joan, the typical pay for text categorization is 2 cents for every data point. During her first year on MTurk, she earned $4,400. That may seem woefully low, but as Joan is quick to point out to the authors, “$4,400 is a meaningful amount when your previous income was zero.”

But not everybody drawn to ghost work does it for the financial rewards. Lakshya, 34, was a mechanical engineer in East Delhi, India, before an auto rickshaw accident several years ago left him paralyzed from the waist down. Although he lives with a large family that’s willing to carry him up and down the stairs of their home, commuting to an office became impossible for him.

So two years ago he started doing ghost work on UHRS, because “at least no one online would see his disability.” He works tirelessly, averaging 150 tasks every hour and logging 200 or more hours every month, doing everything from reviewing adult content to categorizing words used in Bing searches.

“I do it to keep my mind active,” he told the authors, with a noticeable urgency in his voice. “I have to do this. I have to keep busy.”

The companies that employ these contract workers — Pew Center’s best-guess estimate is that 20 million people globally are currently involved in ghost work — are in no hurry to funnel more money toward the people who step in when AI falls short. In fact, many of them are loath to admit that these ghost workers exist at all.

The most glaring example of this happened in the summer of 2016, when it surfaced that the trending topics on Facebook, supposedly generated by unbiased algorithms, were in fact being determined by a small in-house “editorial team.” Even though Facebook investigated the claims and determined there was “no evidence of systematic bias,” the curtain had been pulled back.

Acknowledging these ghost workers can be “fraught with danger when it comes to market response,” says Prayag Narula, the founder and CEO of LeadGenius, a marketing automation service. “Startups get valued higher by venture capitalists if they highlight automated ‘tech’ rather than the hard work of building a community of human data miners.”

Gray appreciates the optimism of technologists who believe that complete automation is on the horizon. But she worries that “they don’t see the limits of what they’re offering. They believe so much in the software, that it’s so powerful and doing such amazing things — which it is, of course — but they have a very difficult time saying, ‘Oh, there are humans involved too.’ ”

The companies selling the products and services that tout AI autonomy “don’t fully get how constrained they are by the need to have humans in the mix,” Gray says. “They’re not used to thinking that way. Selling software will get you a lot further in Silicon Valley than selling a service that has people in the background.”


That reluctance to invest in human labor, or at least value it more than autonomous algorithms, is one of the reasons why dangerous content, like livestreaming violence and fake news, still manages to slip through the cracks.

“AI tends to struggle with context and nuance,” says Dean Jansen, the co-founder and executive director of PCF/Amara, a company that provides subtitles for websites like YouTube and Vimeo. “In the US, we have lots of graphic depictions of violence in our fiction and AI is much more likely to hit a wall when we ask it to distinguish between film violence and real violence.” What, for instance, is the difference to the Facebook algorithm between a “Game of Thrones” clip and a livestream of the New Zealand shooting?

Perhaps someday there will be artificial intelligence that fully displaces human judgment. “But as we’re arguing about when and how that happens,” Gray says, “there’ll be generations of people doing this work who won’t be noticed.”

And as the volume of murky internet content continues to increase, a rising number of people will be needed to take on the task.

Joan is among that growing generation of ghost workers, and she’s feeling more hopeful about her future than ever. After two years, her annual MTurk earnings almost quadrupled, up to $16,000 a year. She’s now among the less than 4% of MTurk workers who earn more than $7.25 an hour on average.

It’s not much, but she enjoys the work. And when her job begins to feel boring or repetitive, she sometimes listens to techno music or watches TV to pass the time.

“People talk about ‘Netflix and chill,’ ” she says. “But I watch Netflix and MTurk.”

HOW GHOSTS WORK

Ghost workers identify things “that should be easy for technology to catch but isn’t,” says LeadGenius founder Prayag Narula. Here are four common algorithm shortcomings they tackle every day.

1. Pornography: Algorithms have a problem not just with context “but literally seeing if a photo is a d–k pic or nipple pic,” says Narula. Humans still beat out AI when asked to identify X-rated penis selfies.

2. Understanding language: The nuances of human language are confounding to most algorithms. “Is something written in jest, or is it sarcastic?” says Narula.

3. Mislabeling: Algorithms often make simple mistakes about images, as Google Photos once infamously did back in 2015, classifying black people as “gorillas.”

4. Hate speech: Remember Microsoft’s AI-powered bot “Tay,” which in 2016 was goaded by users into becoming a Hitler-loving, sexist troll within a day of its launch? Algorithms need help recognizing when speech turns ugly.