Ask Geordie Rose and Suzanne Gildert, co-founders of the startup Kindred, about their company’s philosophy, and they’ll describe a bold vision of the future: machines with human-level intelligence. Rose says these will be perhaps the most transformative inventions in history — and they aren’t far away. More intriguing than this prediction is Kindred’s proposed path for achieving it. Unlike some of the most cash-flush corporations in Silicon Valley, Kindred is focusing not on chatbots or game-playing programs, but on automating physical robots.

Gildert, a physicist who conceived Kindred in 2013 while working with Rose at quantum computing company D-Wave, thinks giving AI a physical body is the only way to make real progress toward a true thinking machine. “If you want to build intelligence that conceptually thinks in the same way a human does… it needs to have a similar sensory motor as humans do,” Gildert says. The trick to achieving this, she thinks, is to train robots by having them collaborate with humans in the physical world. Rose, who co-founded D-Wave in 1999, stepped back from his role as chief technology officer to work on Kindred with Gildert.

Kindred wants to train robots by having them collaborate with humans in the physical world

The first step toward their new shared goal is an industrial warehouse robot called the Orb. It’s a robotic arm that sits inside a hexagonal glass encasement, equipped with a bevy of sensors to help it see, feel, and even hear its surroundings. The arm is operated using a mix of human control and automated software. Because so many warehouse workers today spend a significant amount of time sorting products and scanning barcodes, Kindred developed a robotic arm that can perform some of those steps automatically. Meanwhile, humans step in when needed to manually operate the robot for tasks that are difficult for machines, like gripping a single product from a cluster of different items.

Workers can even operate the arm remotely using an off-the-shelf HTC Vive headset and virtual reality motion controllers. It turns out that VR is great for gathering data on depth and other information humans intuitively use to grasp objects.
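What that data gathering might look like can be sketched in a few lines. This is purely illustrative — Kindred has not published its pipeline, and names like `ViveFrame` and `log_session` are hypothetical — but it shows the kind of per-frame pose and grip information a VR teleoperation session could record for later training:

```python
# Hypothetical sketch of logging VR teleoperation data; Kindred's actual
# format and field names are not public.
from dataclasses import dataclass, asdict
import json

@dataclass
class ViveFrame:
    """One sample of what a VR teleoperation session might record."""
    t: float           # timestamp in seconds
    hand_pos: tuple    # controller position (x, y, z) in meters
    hand_quat: tuple   # controller orientation as a quaternion
    grip_closed: bool  # whether the operator is squeezing the trigger

def log_session(frames):
    """Serialize a teleop session so it can later train a grasping policy."""
    return [json.dumps(asdict(f)) for f in frames]

session = [
    ViveFrame(0.00, (0.1, 0.5, 0.3), (0, 0, 0, 1), False),
    ViveFrame(0.05, (0.1, 0.4, 0.3), (0, 0, 0, 1), True),
]
records = log_session(session)
```

Each record pairs where the operator's hand was with what the gripper was doing — exactly the depth and grasp cues the headset makes easy to capture.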

Kindred is now focused on getting its finished Orb into warehouses, where it can begin learning at an accelerated pace by sorting vastly different products and observing human operators. Because the company gathers data every time a human uses the Orb, engineers can improve its software over time using techniques such as reinforcement learning, in which software refines its behavior through repeated trial and error guided by reward signals. Down the line, the Orb should slowly take over more responsibility and, ideally, learn to perform new tasks.
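To make the reinforcement-learning idea concrete, here is a minimal tabular Q-learning sketch on a toy one-dimensional "corridor" task. It illustrates the general technique — repeated episodes, a reward signal, and incremental value updates — not Kindred's actual training system:

```python
# Minimal tabular Q-learning on a 5-state corridor: start at state 0,
# reward for reaching state 4. Illustrative only.
import random

N_STATES, GOAL = 5, 4          # states 0..4, reward at state 4
ACTIONS = (-1, +1)             # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):           # repeated episodes: learning by repetition
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the current estimates, sometimes explore
        a = random.choice(ACTIONS) if random.random() < EPS else \
            max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # nudge the estimate toward reward + discounted future value
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy walks straight toward the goal.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)]
```

The same loop — act, observe a reward, update an estimate — underlies far larger systems, with the lookup table replaced by a neural network and the corridor replaced by sensor data from a real robot.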

But Kindred’s ultimate goal is much more ambitious. It may sound counterintuitive, but Rose and Gildert think warehouses are the perfect place to start on the path toward human-level artificial intelligence. Because the US shipping marketplace is already rife with single-purpose robots, thanks in part to Amazon, there are plenty of opportunities for humans to train AI. Finding, handling, and sorting products while maneuvering in a fast-moving environment is a data gold mine for building robots that can operate in the real world.

Rose and Gildert believe the next generation of AI won’t be in the form of a disembodied voice living in our phones. Rather, they believe the greatest strides will come from programs running inside a physical robot that can gain knowledge about the world and itself from the ground up, like a human infant does from birth.

Kindred is working toward what’s known as artificial general intelligence, or software capable of performing any task a human being can do. Artificial general intelligence, or AGI, is sometimes referred to as “strong” or “full” AI because it exists in contrast to AI programs, like DeepMind’s AlphaGo system, with very specific applications. Other more conventional forms of “weak” or “narrow” AI include the underlying software behind Netflix and Amazon recommendations, Snapchat camera effects that rely on facial recognition, and Google’s fast and accurate language translations.

These algorithms are developed by applying deep learning techniques to large-scale neural networks until they can, say, differentiate between an image of a dog and a cat. They perform one task, or in some cases a handful, far better than humans can. But they are extremely limited and don’t learn or adapt the way humans do. The software that recognizes a sunset can’t predict whether you’ll like a Netflix movie or translate a sentence into Japanese. Right now, you can’t ask AlphaGo to play chess — it doesn’t know the rules and wouldn’t know how to begin learning them.

Kindred thinks our physical body is intrinsic to the secrets of human cognition

This is the fundamental challenge of AGI: how to create an intelligent system, the kind we know only from science fiction, that can truly learn on its own without needing to be fed thousands of examples and trained over the course of weeks or months.

The biggest names in AI research, like DeepMind, are focused on game-playing because it seems to be the most viable path forward. After all, if you can teach software to play Pong, perhaps it can take the lessons learned and apply them to Breakout? This applied knowledge approach, which mimics the way a human player can quickly intuit the rules of a new game, has proven promising.

For instance, AlphaGo Master, DeepMind’s latest Go system that just bested world champion Ke Jie, now effectively teaches itself how to play better. “One of the things we’re most excited about is not just that it can play Go better, but we hope that this’ll actually lead to technologies that are more generally applicable to other challenging domains,” DeepMind co-founder and CEO Demis Hassabis said after the match.

Yet for Kindred’s founders, the quest to crack the secret of human cognition can’t be separated from our physical bodies. “Our founding belief was that in order to make real progress toward the original objectives of AI, you needed to start by grounding your ideas in the physical world,” Rose says. “And that means robots, and robots with sensors that can look around, touch, hear the world that surrounds them.”

This body-first approach to AI is based on a theory called embodied cognition, which suggests that the interplay between our brain, body, and the physical world is what produces elements of consciousness and the ability to reason. (A fun exercise here is thinking about how many common metaphors have physical underpinnings, like thinking of affection as warmth or something inconceivable as being “over your head.”) Without understanding how the brain developed to control the body and guide functions like locomotion and visual processing, the theory goes, we may never be able to reproduce it artificially.

The body-first approach to AI is based on a theory called embodied cognition

Other than Kindred, work on AI and embodied cognition mostly happens in the research divisions of large tech companies and academia. For example, Pieter Abbeel, who leads development on the Berkeley Robot for the Elimination of Tedious Tasks (BRETT), aims to create robots that can learn much like young children do.

By giving its robot sensory abilities and motor functions and then using AI training techniques, the BRETT team devised a way for it to acquire knowledge and physical skills much faster than with standard programming — and with the flexibility to keep learning. Much like how babies are constantly adjusting their behavior when attempting something new, BRETT also approaches unique problems, fails at first, and then adjusts over repeated attempts and under new constraints. Abbeel’s team even uses children’s toys to test BRETT’s aptitude for problem solving.

OpenAI, the nonprofit funded by SpaceX and Tesla CEO Elon Musk, is working on both general purpose game-playing algorithms and robotics, under the notion that both avenues are complementary. Helping the team is Abbeel, who is on leave from Berkeley to help OpenAI make progress fusing AI learnings with modern robotics. “The interesting thing about robotics is that it forces us to deal with the actual data we would want an intelligent agent to deal with,” says Josh Tobin, a graduate student at Berkeley who works on robotics at OpenAI.

Applying AI to real-world tasks like picking up objects and stacking blocks involves tackling a whole suite of new problems, Tobin says, like managing unfamiliar textures and replicating minute motor movements. Solving them is necessary if we’re to ever deploy intelligent robots beyond factory floors.

Wojciech Zaremba, who leads OpenAI’s robotics work, says that a holy grail of sorts would be a general-purpose robot powered by AI that can learn a new task — scrambling eggs, for instance — by watching someone do it just once. This is why OpenAI is working on teaching robots new skills that are first demonstrated by a human in a simulated VR environment, much like a video game, where it’s much easier and less costly to produce and collect data.
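The "watch once, then replicate" idea Zaremba describes can be sketched with a deliberately simple stand-in. OpenAI's actual one-shot imitation systems are neural networks trained in simulation; this nearest-neighbor version, with hypothetical names, just shows the core notion of replaying a demonstrator's action from the most similar recorded state:

```python
# Illustrative sketch of learning from a single demonstration via
# nearest-neighbor lookup. Real systems use learned policies, not this.
def make_policy(demo):
    """demo: list of (state, action) pairs from one recorded demonstration."""
    def policy(state):
        # pick the action whose recorded state is closest to the current one
        nearest = min(demo, key=lambda sa: abs(sa[0] - state))
        return nearest[1]
    return policy

# A 1-D demonstration: move right until position 3, then stop.
demo = [(0, "right"), (1, "right"), (2, "right"), (3, "stop")]
policy = make_policy(demo)
```

The appeal of the demonstration-file approach is visible even here: the "training data" is just a recording, and swapping in a new behavior means swapping in a new file.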

“You could imagine that, as a final outcome, if it’s doable, you have files online of recordings of various tasks,” Zaremba says. “And then if you want the robot to replicate this behavior, you just download the file.”

When I first operated the Orb, on an April afternoon in Kindred’s San Francisco warehouse space, a group of six or so engineers were scattered about testing the robotic arms with various pink-colored bins of products — vitamin bottles, soft plastic cylinders of Lysol cleaning wipes, rolls of paper towels.

The Orb is designed to help sort these objects in a large heap inside its glass container, while the arm sits affixed to the roof of the container. First, an operator wearing a VR headset moves the arm to a desired object, lowers the gripper, and adjusts the two clamps until a firm grip is established. Then the human can simply let go. Kindred has already automated the process of lifting the object in the air, scanning the barcode, and sorting it into the necessary bin.
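The handoff described above — human-guided grasp, then automated lift, scan, and sort — can be written out as a simple sequence of steps. The step names here are illustrative, not Kindred's API:

```python
# Sketch of the human/automation handoff in an Orb pick cycle.
# Step names are hypothetical.
AUTOMATED_STEPS = ["lift", "scan_barcode", "sort_into_bin"]

def run_pick(human_grasp_ok):
    """Return the ordered steps executed for one pick."""
    steps = ["human_guides_arm", "human_closes_gripper"]
    if not human_grasp_ok:
        return steps + ["abort_and_retry"]
    # once the operator lets go, the automated pipeline takes over
    return steps + AUTOMATED_STEPS
```

The division of labor is the point: the hard perceptual judgment (establishing a firm grip) stays with the human, while everything downstream of the grasp is already automated.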

Using the Orb resembles operating a video game version of a toy claw machine

“In any gigantic warehouse, people have to walk around and pick up things,” says George Babu, Kindred’s chief product officer. “The most efficient way to do that is to pick up a whole bunch of different things at the same time. Those go to someplace where you have them separated. Our robot does that job in the middle.” The idea is that warehouse workers can dump a bunch of products into the Orb, while a remote operator works with the robot to sort them.

Amazon is working on something similar, and the company now holds an annual “picking” challenge to spur development of industrial robots capable of handling and sorting physical items. Kindred is quick to recognize Amazon’s prowess in this department. “In the fulfillment world, Amazon uses a different set of approaches than all of the other fulfillment provisioners. They have the scale, the scope, and the know-how to implement end-to-end systems that are very effective at what they do,” Rose says. But he thinks Amazon is likely to keep this technology to itself. “The advancements that Amazon makes toward doing this job well don’t benefit all of their competitors.”

Kindred’s system, on the other hand, is designed to integrate into existing warehouse tools. Last month, Kindred finished its first deployable devices, and it “created more demand than we anticipated,” according to Jim Liefer, Kindred’s chief operating officer, though he won’t disclose any initial customers.

Using the Orb with a Vive headset, I was surprised by just how much it resembles a video game. Think of a toy claw machine, where the second the clamp touches down on an object, the automated process takes over and the arm springs to life with an uncanny jerkiness. It makes sense, considering Kindred built its depth-sensing system using the game engine Unity.

Kindred imagines future versions of the Orb being affixed to sliding rails or bipedal roaming robots

Max Bennett, Kindred’s robotics product manager, says that the process is designed so that human warehouse workers can operate multiple Orbs simultaneously, gripping objects and letting the software take the reins before cycling to the next setup. Kindred imagines future versions of the robotic arm being affixed to sliding overhead rails or maybe even to bipedal robots that roam the floor. There is also a point at which the Vive is no longer necessary. “Nobody’s going to want to use a VR headset all day,” Bennett tells me, suggesting that an Xbox controller or even just a computer mouse will do in the future.

As for how the Orb might impact jobs, Babu says there will be a need for human labor for quite some time. He’s partly right: Amazon hired 100,000 workers in the last year alone, and plans to hire 100,000 more this year, mostly in warehouse and other fulfillment roles. But systems like the Orb raise the possibility that fewer workers will be needed as the work becomes more a matter of assisting and operating robots.

“My view is that the humans will all move on to different work in the stream,” Babu says.

Still, Forrester Research predicts that automation will result in 25 million jobs lost over the next decade, with only 15 million new jobs created. The end goals of automation have always been to reduce costs and improve efficiency, and that will inevitably mean the disappearance of certain types of labor.

Kindred is unique in the AI field not just for its robotics focus, but also because it’s diving head first into the industrial world with a commercial product. Many of the big tech companies working on AI are doing so with huge research organizations, like Facebook AI Research and Google Brain. These teams are filled with academics and engineers who work on abstract problems that then help inform real software features that get deployed to millions of consumers.

Kindred, as a startup, can’t afford this approach. “Day one we said: ‘We’re going to find a big market. We’re going to build a wildly successful product for that initial market, and build a business by executing along that path first with one vertical and then maybe others,’” Rose explains. He adds that his experience with D-Wave, which raised more than $150 million over the course of more than a decade just to release its first product, inspired him to seek out a different approach to tackling big-picture problems.

Gildert and Rose don’t want to rely solely on venture capital funding to build Kindred

“You have this quandary that doing it right is going to take a long time, on the order of decades. How do you sustain that organization for that length of time without all the negative side effects of raising a lot of rounds of VC?” Rose says. “The answer is that you have to create a real business that is cash-flow positive very early.” Kindred has raised $15 million in funding thus far from Eclipse, GV, Data Collective, and a number of other investors. But Rose stresses that the company’s focus is to become profitable with the Orb, and that will help it in its main objective.

That objective, since the beginning, has been human-level AI with a focus on what Gildert calls “in-body cognition,” or the type of thought processes that only arise from giving AI a physical shell. “Intelligence absent a body is not what we think it means,” she says. “Intelligence with a body brings to it a number of constraints that are not there when you think about intelligence in a virtual environment. We certainly don’t believe you can build a chatbot without a human-like body and expect it to pass [for a human].”

“Brains evolved to control bodies,” Rose adds. “And all these things that we think about as being the beautiful stuff that comes from cognition, they’re all side effects of this.”