As her fellow patients read dog-eared magazines or swipe through Instagram, Shari Forrest opens an app on her phone and gets busy training artificial intelligence.

Forrest isn’t an engineer or programmer. She writes textbooks for a living. But when the 54-year-old from suburban St. Louis needs a break or has a free moment, she logs on to Mighty AI and whiles away her time identifying pedestrians and trash cans and other things you don’t want driverless cars running into. And Mighty AI even pays her a few cents for her trouble. “If I am sitting waiting for a doctor’s appointment and I can make a few pennies, that’s not a bad deal,” she says.

The work may be a pleasant distraction for Forrest, but it's absolutely essential to the dawning age of the driverless car. The volume of data needed to train the AI underpinning those vehicles staggers the imagination. The Googles and GMs of the world rarely mention it, but their shiny machines and humming data centers rely on a growing and global army of people like Forrest to help provide it.

You've probably heard by now that almost everyone expects AI to revolutionize almost everything. Automakers in particular love this idea, because robocars promise to increase safety, reduce congestion, and generally make life easier. “The automotive space is one of the hottest and most advanced fields applying machine learning,” says Matt Bencke, CEO of Mighty AI. He won't name names but claims his company is working with at least 10 automakers.

The challenge lies in teaching a computer how to drive. The DMV rule book provides a good place to start, because it covers rudimentary things like “yield to pedestrians.” Ah, but what does a pedestrian look like? Well, a pedestrian usually has two legs. But a skirt can make two legs look like one. What about a guy in a wheelchair or a mother pushing a stroller? Is that a small child or a large dog? Or a trash can? Any artificial intelligence controlling a two-ton chunk of steel must learn how to identify such things and to make sense of an often confusing world. This is second nature for humans but utterly foreign to a computer.

Cue Forrest and 200,000 other Mighty AI users around the world.

The cameras mounted on today's robocars photograph almost every environment and circumstance you can imagine. Automakers and tech companies send those photos by the millions to an outfit like Mighty AI, which makes a game of identifying everything in those photos. It sounds tedious, but Mighty AI turns it into a 10-minute task with points, skills, and level-ups to keep it engaging. “It’s more like Candy Crush than a labor farm,” Bencke says. The monetary rewards, although small, help, too.

Forrest carefully draws a box around every person in each picture, then around every approaching car, and then around the tires on each car. That done, she zooms in and, working pixel by pixel, meticulously outlines things like trees. Click, click, click. She selects a different color pointer and highlights traffic lights, a telephone pole, a safety cone. When she’s finished, the scene is annotated in a language a computer understands. Engineers call it a “semantic segmentation mask.”
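Under the hood, that finished annotation is less exotic than it sounds: it is simply an image-sized grid in which every pixel carries a class label instead of a color. A minimal sketch in Python of the idea (the class IDs and the `paint_box` helper are hypothetical, for illustration only; real labeling taxonomies and tools vary by vendor):

```python
import numpy as np

# Hypothetical class IDs -- real annotation taxonomies differ by vendor.
BACKGROUND, PERSON, CAR, TREE = 0, 1, 2, 3

def paint_box(mask, cls, top, left, bottom, right):
    """Stamp a rectangular annotation into the mask as a class ID."""
    mask[top:bottom, left:right] = cls
    return mask

# A tiny 8x8 stand-in for a camera frame: every pixel starts as background.
mask = np.zeros((8, 8), dtype=np.uint8)
paint_box(mask, PERSON, 1, 1, 4, 3)  # a box drawn around a pedestrian
paint_box(mask, CAR, 4, 4, 7, 8)     # a box drawn around an approaching car

# The result is a semantic segmentation mask: one class label per pixel,
# which is what the driving model is ultimately trained against.
print(int((mask == PERSON).sum()))  # prints 6: pixels labeled "person"
```

Irregular shapes like trees work the same way, just with polygon outlines filled in rather than rectangles; the output is still one class ID per pixel.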

The need for accuracy makes for painstaking work, but Forrest, who makes a few cents per picture, enjoys it. “It’s like why some adults color,” she says. “It’s become a relaxing task.”