Netscher’s startup makes software that monitors video feeds from elderly-care homes to detect when a resident has fallen. People with dementia often can’t remember why or how they ended up on the floor. In 11 facilities around California, Safely You’s algorithms help staff quickly find the place in a video that will unlock the mystery.

Safely You was soliciting faked falls like mine to test how broad a view its system has of what a toppled human looks like. The company’s software has mostly been trained on video of elderly residents from care facilities, annotated by staff or contractors. Mixing in photos of 34-year-old journalists and anyone else willing to lie down for 7 cents should force the machine-learning algorithms to widen their understanding. “We’re trying to see how well we can generalize to arbitrary incidents or rooms or clothing,” says Netscher.
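For the sake of illustration, here is a minimal sketch of the kind of generalization check that test implies: scoring a trained fall detector against a held-out batch of staged falls. The function name, tensor shapes, and class index are assumptions for this sketch, not Safely You’s actual system.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def staged_fall_recall(model: nn.Module, staged_clips: torch.Tensor) -> float:
    """Fraction of staged-fall clips a binary detector flags as falls.

    staged_clips: (N, 3, frames, H, W); class 1 = 'fall' (assumed).
    A low score on unfamiliar bodies, rooms, or clothing would signal
    that the detector has not generalized beyond its training footage.
    """
    preds = model(staged_clips).argmax(dim=1)
    return (preds == 1).float().mean().item()
```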

The startup that paid for my Whole Foods performance, Twenty Billion Neurons, is a bolder bet on the idea of paying people to perform for an audience of algorithms. Roland Memisevic, cofounder and CEO, is in the process of trademarking a term for what I did to earn my $3.50—crowd acting. He argues that it is the only practical path to give machines a dash of common sense about the physical world, a longstanding quest in AI. The company is gathering millions of crowd-acting videos and using them to train software it hopes to sell clients in industries such as automotive, retail, and home appliances.

Games like chess and Go, with their finite, regimented boards and well-defined rules, are well-suited to computers. The physical and spatial common sense we learn intuitively as children to navigate the real world is mostly beyond them. To pour a cup of coffee, you effortlessly grasp and balance cup and carafe, and control the arc of the pouring fluid. You draw on the same deep-seated knowledge, and a sense for the motivations of other humans, to interpret what you see in the world around you.

How to give some version of that to machines is a major challenge in AI. Some researchers think the techniques that work so well for recognizing speech or images won’t be much help, and argue that new approaches are needed. Memisevic took leave from the prestigious Montreal Institute for Learning Algorithms to start Twenty Billion because he believes that existing techniques can do much more for us if trained properly. “They work incredibly well,” he says. “Why not extend them to more subtle aspects of reality by forcing them to learn things about the real world?”

To do that, the startup is amassing giant collections of clips in which crowd actors perform different physical actions. The hope is that algorithms trained to distinguish those actions will “learn” the essence of the physical world and human behavior. That’s why, when crowd acting in Whole Foods, I not only took items from shelves and refrigerators but also made near-identical clips in which I only pretended to grab the product.
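To make that idea concrete, here is a hedged PyTorch sketch of why those paired clips matter: a classifier asked to separate “taking” from “pretending to take” can only succeed by attending to what actually happens to the object, not to the scene or the actor. The tiny network, label names, and random stand-in tensors are all illustrative assumptions, not Twenty Billion’s pipeline.

```python
import torch
import torch.nn as nn

# Hypothetical two-class label set; the real collections span many actions.
LABELS = ["taking an item from a shelf",
          "pretending to take an item from a shelf"]

class TinyClipClassifier(nn.Module):
    """A toy 3D-convolutional network over short video clips."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),  # RGB x frames x H x W
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # global space-time pooling
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(clips).flatten(1))

model = TinyClipClassifier(len(LABELS))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in batch: 4 clips of 8 RGB frames at 64x64. Real training would
# load the crowd-acted videos, alternating real grabs with pretend ones.
clips = torch.randn(4, 3, 8, 64, 64)
targets = torch.tensor([0, 1, 0, 1])

loss = nn.functional.cross_entropy(model(clips), targets)
loss.backward()
opt.step()
```

Because the real and pretend clips are otherwise near identical, the only signal that separates the two classes is whether the object actually moves—exactly the physical regularity the training is meant to extract.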

Twenty Billion’s first dataset, now released as open source, is Physical Reality 101. Its more than 100,000 clips depict simple manipulations of everyday objects. Disembodied hands pick up shoes, place a remote control inside a cardboard box, and push a green chili along a table until it falls off. Memisevic deflects questions about the client behind the casting call that I answered, which declared, “We want to build a robot that assists you while shopping in the supermarket.” He will say that automotive applications are a big area of interest; the company has worked with BMW. I saw jobs posted to Mechanical Turk, with only Twenty Billion’s name attached, describing a project aimed at teaching software to identify what people are doing inside a vehicle. Workers were asked to sit in chairs and feign snacking, dozing off, or reading. Software that can detect those actions might help semi-automated vehicles know when a human isn’t ready to take over the driving, or pop open a cupholder when you enter holding a drink.
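As a rough illustration of that in-cabin application, the sketch below classifies a short clip of the driver and reports whether they look ready to take over. The labels, the readiness rule, and the dummy stand-in model are hypothetical; nothing here reflects Twenty Billion’s or BMW’s actual software.

```python
import torch
import torch.nn as nn

# Hypothetical in-cabin action labels, echoing the Mechanical Turk prompts.
CABIN_LABELS = ["attentive", "snacking", "dozing off", "reading"]
READY = {"attentive"}

@torch.no_grad()
def driver_ready(model: nn.Module, clip: torch.Tensor) -> bool:
    """clip: (1, 3, frames, H, W). True if the top-scoring action
    suggests the human could take over the driving."""
    label = CABIN_LABELS[int(model(clip).softmax(dim=1).argmax())]
    return label in READY

# Dummy stand-in model just to show the call; a real system would use a
# video network trained on crowd-acted clips like those described above.
dummy = nn.Sequential(nn.Flatten(), nn.LazyLinear(len(CABIN_LABELS)))
print(driver_ready(dummy, torch.randn(1, 3, 8, 64, 64)))
```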