Marco della Cava

USA TODAY

AUSTIN — So here’s the bottom line on robots and artificial intelligence.

Don’t be too concerned about the rise of humanoid robots, because they’re still not ready for prime time.

But you might want to keep a wary eye on the machine-learning systems that power them. Researchers are concerned that in the wrong hands, artificial intelligence (AI) could power the return of fascism.

SXSW Interactive's spotlight on AI makes sense given the technology’s growing presence in our smartphones, cars and home devices — and the associated concerns about the hacking of our device data.

This year, attendees at the Texas tech confab got a close look at just how hard it is to program a sentient robot, and how dangerous it may be to trust one.

Osaka University roboticist Hiroshi Ishiguro returned to the show, this time bringing two robots. One looked like C-3PO on an extreme diet, while the female android featured an unyielding creepiness that wouldn't have been erased if she broke out into Adele's "Hello."

The demo for a packed audience got off to an inauspicious start. The two robots were supposed to enter into a dialog but minutes of dead air passed. A dozen men with laptops frantically worked to get the system back online.

Finally, a conversation started about sushi, and whether it was better than ramen.

Creepy android robot says that “sushi is better for dating,” presumably because the diner is not rudely slurping. Other robot wonders if ramen is even healthy. The voices were robotic. The pacing was too slow for a toddler. Snooze.

But another AI-related lecture suggested that the power of machines is truly something worth fearing —because they’re run by humans.

In her talk “Dark Days: AI and the Rise of Fascism,” Microsoft Research scholar Kate Crawford laid out how data has historically been misused by the powerful, from the 18th-century practice of phrenology, which tried to link head measurements to intelligence, to Nazi Germany’s use of Hollerith tabulating machines to isolate its Jewish citizens.

“AI has the power to check as well as help along (authoritarian) regimes,” she said, adding there already is evidence “that the tricks of fascist regimes of the past are getting a re-run.”

Some of the present transgressions actually come from the private sector, she said.

Crawford mentioned Uber’s development of Greyball, the ghost app that allowed the ride-hailing company to thwart municipal officials in cities trying to crack down on unauthorized expansion. Uber has agreed to stop using it to block regulators, while defending its purpose in weeding out fraudulent customers.

She also cited Palantir Technologies, the data mining start-up. According to a report in The Verge, which cited public records, Palantir has helped customs officials with ways to track and assess immigrants. Palantir is backed by Peter Thiel, the tech billionaire who is now a key adviser to President Trump. CEO Alex Karp told Forbes in January that the company would decline requests by the administration to create a Muslim Registry.

Crawford allows that data-crunching revelations in health and science can come from AI. But she’s worried about the human biases inherent in such machines.

“Always be suspicious if you hear that some machine is ‘free from bias’ if it was trained through human-generated data,” she said. “Because as our biases show, machines could create a terrifying system in the hands of an autocrat.”

Crawford cited an experiment by Chinese researchers that fed facial characteristics about criminals into a computer, which then was able to scan a random photo and “tell within 90% accuracy” if that person was likely to become a criminal.

“We’re at a volatile moment, one where data could be attached to unaccountability,” she said.

Crawford urged attendees not to despair, but rather to stay vigilant about how AI data is used in the coming years. To that end, she and a few colleagues have launched AI Now, an ACLU-backed initiative that encourages researchers to join together to monitor AI’s social impact.

“What these dark days do is challenge us to be prepared,” she said. “Done right, AI can be used to keep power structures in check. But in the wrong hands …”

Follow USA TODAY tech reporter Marco della Cava on Twitter.