OpenAI, a non-profit co-founded by Elon Musk, recently unveiled its newest trick: A robot hand that can ‘solve’ Rubik’s Cube. Whether this is a feat of science or mere prestidigitation is a matter of some debate in the AI community right now.

In case you missed it, OpenAI posted an article on its blog last week titled “Solving Rubik’s Cube With a Robot Hand.” Based on this title, you’d be forgiven if you thought the research discussed in said article was about solving Rubik’s Cube with a robot hand. It is not.

Don’t get me wrong, OpenAI built a software and machine-learning pipeline by which a robot hand can physically manipulate a Rubik’s Cube from an unsolved state to a solved one. But the truly impressive bit here is that a robot hand can hold an object and move it around (to accomplish a goal) without dropping it.

The robot, which is just a hand, doesn’t actually figure out how to solve the puzzle. An old-school, non-AI algorithm does the math using sensor data on the cube’s state, then transmits each step to the hand in succession. The hand just follows directions.
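The division of labor described above can be sketched in a few lines. This is purely illustrative: the function names and the toy “solver” are made up for the example, and a real pipeline would use an actual cube-solving algorithm and a learned control policy.

```python
# Sketch of the split between the classical solver and the hand.
# All names here are hypothetical stand-ins, not OpenAI's code.

def classical_solver(cube_state):
    """Stand-in for the non-AI solver: maps a sensed cube state to a
    sequence of face rotations. A real solver would search for the
    move sequence; here we just pretend the state *is* the move list."""
    return list(cube_state)

def execute_on_hand(move):
    """Stand-in for the learned control policy, which is responsible
    only for physically performing one face rotation without dropping
    the cube."""
    return f"hand performs {move}"

def solve_with_hand(cube_state):
    # The "intelligence" about the puzzle lives entirely in the solver;
    # the hand just follows the directions, one step at a time.
    moves = classical_solver(cube_state)
    return [execute_on_hand(m) for m in moves]

print(solve_with_hand(["R", "U'", "F2"]))
```

The point of the structure is that swapping in a different task (juggling, piano) would only change the “solver” half; the hard, novel part is the execution half.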

It’s relatively simple to create a purpose-built machine designed to perform a specific function in a perfect environment. There are machines that can “solve” the Rubik’s Cube in less than a second.

But such a machine can’t do it with one hand, under unpredictable, adverse conditions.

OpenAI had to develop a training method that constantly challenges the AI to figure out new ways of solving a problem. As soon as the AI settled on a method that worked well, becoming complacent, the researchers would change things up. This kept it ready for situations it had never encountered across more than 10,000 hours of simulated training.
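The “change things up whenever the AI gets comfortable” loop can be sketched as follows. Everything here is illustrative and assumed, not OpenAI’s actual training code: the randomized environment parameters, thresholds, and the stand-in success rate are invented for the example.

```python
# Minimal sketch of a training loop that keeps widening the range of
# randomized conditions whenever the policy starts coping well.
import random

class Trainer:
    def __init__(self):
        # Start with a narrow range for each randomized property of the
        # simulator (parameter names here are made up).
        self.ranges = {"cube_size": 0.01, "friction": 0.01}

    def sample_env(self):
        # Each training episode gets a randomly perturbed environment.
        return {k: random.uniform(-r, r) for k, r in self.ranges.items()}

    def update(self, success_rate, threshold=0.9, step=0.01):
        # As soon as the policy handles the current spread of conditions
        # well, make the conditions harder so it never gets complacent.
        if success_rate >= threshold:
            for k in self.ranges:
                self.ranges[k] += step

trainer = Trainer()
for episode in range(100):
    env = trainer.sample_env()
    success_rate = 0.95  # stand-in for actually evaluating the policy in env
    trainer.update(success_rate)

print(trainer.ranges)  # the ranges have widened over the course of training
```

The upshot is that the policy only ever trains at the edge of what it can already handle, which is what prepares it for conditions it never saw explicitly.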

In the real world, the researchers physically messed with the robot, pushing and shoving the hand while it tried to work.

These perturbations resulted in an AI that can cope with all the physics (gravity, friction, and so on) involved in keeping the hand on task.

And it isn’t as robust at performing this task as you might think. Consider this bit at the end of OpenAI’s blog post:

Our method currently solves the Rubik’s Cube 20% of the time when applying a maximally difficult scramble that requires 26 face rotations. For simpler scrambles that require 15 rotations to undo, the success rate is 60%.

Remember, when it “fails,” that doesn’t mean it can’t figure out the puzzle. It means it either dropped the cube or fumbled its attempts to spin the cube’s sides until time ran out. It’s still impressive nonetheless.

The Rubik’s Cube puzzle is just a placeholder for “whatever problem you need a robot hand to solve.” It could just as easily be tasked with juggling tomatoes without squishing them or playing a piano while people throw beer bottles at it, and it would essentially be the same kind of accomplishment.

Unfortunately, the language OpenAI used to describe this incredible cutting-edge AI research makes it look like the AI ‘solved’ a Rubik’s Cube using deep learning neural networks. That would not only be, as far as we know, the first time anyone’s done such a thing – it would also be largely pointless, given the machine above that can do it in less than a second.

The point is: a robot hand that can do random stuff is a much, much more impressive accomplishment than using an old algorithm to align the colors on a Rubik’s Cube. In machine learning, “general” or “broad” tasks are typically much harder to pull off than “specific” or “narrow” ones. It’s easier to build a Rubik’s Cube solver than a hand that doesn’t drop stuff.

Gary Marcus, CEO of Robust AI and author of the just-released “Rebooting AI: Building Artificial Intelligence We Can Trust,” took immediate exception to OpenAI’s blog post. He called it misleading and intimated that OpenAI was, once again, causing the media to print overzealous, hyperbolic headlines and stories.

Since @OpenAI still has not changed misleading blog post about "solving the Rubik's cube", I attach detailed analysis, comparing what they say and imply with what they actually did. IMHO most would not be obvious to nonexperts. Please zoom in to read & judge for yourself. pic.twitter.com/R7HgnyyNRj — Gary Marcus (@GaryMarcus) October 19, 2019

Credit: Gary Marcus

Ilya Sutskever, Chief Scientist at OpenAI, took umbrage at Marcus’ assertion and seems to think there’s an ulterior motive:

Surprised and saddened by all the bad faith criticism of our robotic manipulation result https://t.co/POljoP8jog — Ilya Sutskever (@ilyasut) October 21, 2019

Marcus outright dismisses the criticism of his criticism:

.@openAI It's not bad faith, it's facts. You may feel I made these points because I have a book out, but fact is I have been puncturing hype & championing nativism & hybrid models for 30 years. I will always stand up for what I believe. https://t.co/MinHbnubJH — Gary Marcus (@GaryMarcus) October 22, 2019

And everyone else seems torn between calling Marcus (and those who agree with him) pedantic and calling OpenAI intentionally misleading. Carnegie Mellon’s Zachary Lipton called the research “interesting” and the PR behind it “weapons-grade.”

So to elaborate a bit because the @nytimes included just a tiny snippet, I think *this @OpenAI research is interesting* and I generally think *demos are terrific*. However, I don't like this pattern where weapons-grade press releases are primary vector for disseminating research. https://t.co/MYnF6Ufy8k — Zachary Lipton (@zacharylipton) October 15, 2019

And perhaps he has a point; after all, the Washington Post published an article titled “This Robotic Hand Learned to Solve a Rubik’s Cube on its Own – Just Like A Human.” That’s a headline that straddles the border between poor interpretation and outright poppycock.

In OpenAI’s defense, it doesn’t control the media. But, to OpenAI’s detriment, it kinda does control the media. It’s a non-profit co-founded by Elon Musk (no longer involved) that recently received a billion dollars from Microsoft in a much-ballyhooed “partnership” to “develop human-level AI” – one that looks a lot like a marketing deal for Azure. It’s no stranger to press coverage.

Furthermore, OpenAI recently stirred up controversy after choosing to withhold the models for an open-source AI text-generator over concerns it wouldn’t be ethical to release it to the public (to be fair, it’s pretty scary). OpenAI knows exactly what kind of controversy it’s courting when it presents these stories to the media. Critics claim it doesn’t do quite enough to head these stories off or disabuse the general public of hyperbolic notions.

OpenAI says it contacts journalists when articles don’t get the story right.

And that’s a shame. The real story, the amazing robot hand that can manipulate physical objects in a non-optimized environment, is a great one. Unfortunately, it’s been largely swallowed up in the noise.

For more information on OpenAI’s cool robot hand, check out the team’s research paper here.
