SAN FRANCISCO — There’s a dirty little secret about artificial intelligence: It’s powered by an army of real people.

From makeup artists in Venezuela to women in conservative parts of India, people around the world are doing the digital equivalent of needlework – drawing boxes around cars in street photos, tagging images, and transcribing snatches of speech that computers can’t quite make out.

PHOTOS: CrowdFlower's Human-in-the-Loop tool allows a person to label and structure every part of a photo, left, and convert it into "training data," right, that an AI system can understand and interpret. (CrowdFlower via AP) TOP PHOTO: Jessica McShane monitors person-to-computer communications, helping computers understand what a human is saying, in the "intent analysis" room at Interactions Corp.'s headquarters in Franklin, Mass.

Such data feeds directly into “machine learning” algorithms that help self-driving cars wind through traffic and let Alexa figure out that you want the lights on. Many such technologies wouldn’t work without massive quantities of this human-labeled data.

These repetitive tasks pay pennies apiece. But in bulk, this work can offer a decent wage in many parts of the world – even in the U.S. And it underpins a technology that could change humanity forever: AI that will drive us around, execute verbal commands without flaw, and – possibly – one day think on its own.

For more than a decade, Google has used people to rate the accuracy of its search results. More recently, investors have poured tens of millions of dollars into startups like Mighty AI and CrowdFlower, which are developing software that makes it easier to label photos and other data, even on smartphones.

Venture capitalist S. “Soma” Somasegar says he sees “billions of dollars of opportunity” in servicing the needs of machine learning algorithms. His firm, Madrona Venture Group, invested in Mighty AI. Humans will be in the loop “for a long, long, long time to come,” he says.

Accurate labeling could mean the difference between a self-driving car telling the sky apart from the side of a truck – a distinction that Tesla's Model S failed to make in 2016, in the first known fatality involving self-driving systems.

“We’re not building a system to play a game, we’re building a system to save lives,” said Mighty AI CEO Daryn Nakhuda.

Marjorie Aguilar, a 31-year-old freelance makeup artist in Maracaibo, Venezuela, spends four to six hours a day drawing boxes around traffic objects to help train self-driving systems for Mighty AI. She earns about 50 cents an hour, but in a crisis-wracked country with runaway inflation, just a few hours’ work can pay a month’s rent in bolivars.

“It doesn’t sound like a lot of money, but for me it’s pretty decent,” she said. “You can imagine how important it is for me getting paid in U.S. dollars.”

Aria Khrisna, a 36-year-old father of three in Tegal, Indonesia, says that adding word tags to clothing pictures on websites such as eBay and Amazon pays him about $100 a month, roughly half his income.

And for 25-year-old Shamima Khatoon, her job annotating cars, lane markers and traffic lights at an all-female outpost of data-labeling company iMerit in Metiabruz, India, represents the only chance she has to work outside the home in her conservative Muslim community.

“It’s a good platform to increase your skills and support your family,” she said.

The benefits of greater accuracy can be immediate. At InterContinental Hotels Group, every call that its digital assistant Amelia can take from a human saves $5 to $10, says information technology director Scot Whigham.

When Amelia fails, the program listens while a call is rerouted to one of about 60 service desk workers. It learns from their response and tries the technique out on the next call, freeing up human employees to do other things.

When a computer can’t make out a customer call to the Hyatt Hotels chain, an audio snippet is sent to AI-powered call center Interactions in an old brick building in Franklin, Massachusetts. There, while the customer waits on the phone, one of a roomful of headphone-wearing “intent analysts” transcribes everything from misheard numbers to profanity and quickly directs the computer how to respond.

That information feeds back into the system. “Next time through, we’ve got a better chance of being successful,” said Robert Nagle, Interactions’ chief technology officer.

Researchers have tried to find workarounds to human-labeled data, often without success.

In a project that used Google Street View images of parked cars to estimate the demographic makeup of neighborhoods, then-Stanford researcher Timnit Gebru tried to train her AI by scraping Craigslist photos of cars for sale that were labeled by their owners.

But the product shots didn’t look anything like the car images in Street View, and the program couldn’t recognize them. In the end, she says, she spent $35,000 to hire auto dealer experts to label her data.

Trevor Darrell, a machine learning expert at the University of California Berkeley, says he expects it will be five to 10 years before computer algorithms can learn to perform without the need for human labeling. His group alone spends hundreds of thousands of dollars a year paying people to annotate images.
