“We understand the human much better than other humans understand each other,” said Faception chief executive Shai Gilboa. “Our personality is determined by our DNA and reflected in our face. It’s a kind of signal.”

Faception has built 15 different classifiers, which Gilboa said evaluate certain traits with 80 percent accuracy. The start-up is pushing forward, seeing tremendous power in a machine’s ability to analyze images.

Yet experts caution there are ethical questions and profound limits to the effectiveness of technology such as this.

“Can I predict that you’re an ax murderer by looking at your face and therefore should I arrest you?” said Pedro Domingos, a professor of computer science at the University of Washington and author of “The Master Algorithm.” “You can see how this would be controversial.”

Gilboa said he also serves as the company’s chief ethics officer and will never make his classifiers that predict negative traits available to the general public.

The danger lies in the computer system’s imperfections. Because of them, Gilboa envisions governments weighing his findings alongside other sources to better identify terrorists. Even so, the use of the data is troubling to some.

“The evidence that there is accuracy in these judgments is extremely weak,” said Alexander Todorov, a Princeton psychology professor whose research includes facial perception. “Just when we thought that physiognomy ended 100 years ago. Oh, well.”

Faception recently showed off its technology at a poker tournament organized by a start-up that shares investors with Faception. Gilboa said that, before the tournament, Faception predicted which four of the 50 amateur players would perform best. When the dust settled, two of those four were among the event’s three finalists. To make its prediction, Faception analyzed photos of the 50 players against its database of professional poker players.

There are challenges in trying to use artificial intelligence systems to draw conclusions such as this. A computer that is trained to analyze images will only be as good as the examples it is trained on. If the computer is exposed to a narrow or outdated sample of data, its conclusions will be skewed. Additionally, there’s the risk the system will make an accurate prediction, but not necessarily for the right reasons.

Domingos, the University of Washington professor, shared the example of a colleague who trained a computer system to tell the difference between dogs and wolves. Tests proved the system was almost 100 percent accurate. But it turned out the computer was successful because it learned to look for snow in the background of the photos. All of the wolf photos were taken in the snow, whereas the dog pictures weren’t.
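That failure mode can be reproduced in miniature. The sketch below is hypothetical (the feature names and data are invented for illustration, not drawn from the system Domingos described): a one-feature “decision stump” is trained on a biased sample in which every wolf photo contains snow. The stump latches onto the snow feature, scores perfectly on the training set, and then misclassifies a dog photographed in snow.

```python
# Minimal sketch of a spurious-correlation failure (hypothetical data).
# Each "photo" is a dict of binary features; in this biased training set,
# the presence of snow perfectly tracks the "wolf" label.
train = [
    {"snow": 1, "pointed_ears": 1, "label": "wolf"},
    {"snow": 1, "pointed_ears": 0, "label": "wolf"},
    {"snow": 0, "pointed_ears": 1, "label": "dog"},
    {"snow": 0, "pointed_ears": 0, "label": "dog"},
]

def fit_stump(data, features):
    """Pick the single feature that best separates the training labels."""
    best = None
    for f in features:
        correct = sum((ex["label"] == "wolf") == bool(ex[f]) for ex in data)
        acc = correct / len(data)
        if best is None or acc > best[1]:
            best = (f, acc)
    return best

feature, train_acc = fit_stump(train, ["snow", "pointed_ears"])
print(feature, train_acc)  # picks "snow" with 100 percent training accuracy

# A dog photographed in snow is now confidently misclassified.
dog_in_snow = {"snow": 1, "pointed_ears": 1}
prediction = "wolf" if dog_in_snow[feature] else "dog"
print(prediction)  # prints "wolf"
```

The stump is “almost 100 percent accurate” by the only measure it was given, yet for entirely the wrong reason; the same trap scales up to deep networks trained on biased photo sets.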

Also, an artificial intelligence system might zero in on a trait that could be changed by a person — such as the presence of a beard — limiting its ability to make an accurate prediction.