Facebook CTO Mike Schroepfer ran a demo for a Fortune journalist to showcase how quickly the firm's AI system can pick up harmful content.

To do this, he made the system distinguish between broccoli and cannabis.

Schroepfer said AI is more accurate and much faster than humans at making the distinction.

Facebook's CTO Mike Schroepfer showed off Facebook's AI software by making it distinguish between cannabis and broccoli.

Fortune journalist Michal Lev-Ram spoke to Schroepfer as part of a lengthy piece about Facebook's attempts to move on after a year riddled with scandal.

Schroepfer showed Lev-Ram two photographs, and asked her to determine which was broccoli and which was weed. According to Lev-Ram, it was not obvious which was which.

"Both pictures look[ed] convincingly cannabis-like—dense, leafy-green buds that are coated with miniature, hair-like growths, or perhaps mold," she wrote.

Lev-Ram eventually guessed correctly, but the challenge was Schroepfer's way of talking up Facebook's AI tools for spotting harmful content.

He told Lev-Ram that AI is more accurate than humans, and that Facebook's system had identified the marijuana photo with 93.77% certainty and the broccoli photo with 88.39% certainty.


He also said the system was far faster than a human. Lev-Ram had taken more than a second to guess, while the system can make such calls in "hundredths of milliseconds, billions of times a day."

Facebook CTO Mike Schroepfer. Greg Sandoval/Business Insider

Critics and politicians have called out Facebook over the proliferation of harmful content on its platform, so the social network has been keen to point to the AI tools it is developing to combat problems as diverse as illegal drugs, suicide, and hate speech.

Facebook's automated systems are not foolproof, however. In July, Facebook's systems automatically flagged and blocked a post containing a line from the US Declaration of Independence, because the section in question contained racist language referring to "Indian Savages."