I attended a viewing and panel discussion for the final episode of the Jeopardy! IBM Challenge, in which IBM’s Watson supercomputer beat reigning champions Ken Jennings and Brad Rutter over three nights of standard Jeopardy! games. Check out my previous post to read more about the tech behind Watson and what IBM hopes to do with this impressive technology. This post will focus on the public perception of Watson, and on what it means to have a technology credited with producing very accurate conclusions from complex data.

The panel discussion and viewing event was hosted at Rensselaer Polytechnic Institute in Troy, NY. In the interest of full disclosure: I am currently enrolled as an M.S./Ph.D. student in RPI’s Science and Technology Studies Department. It’s also worth noting that principal investigator David Ferrucci, research scientist Chris Welty, and senior software engineer Adam Lally are all RPI alumni.

During the preceding reception, I asked a few RPI students two questions: 1) what they foresaw as the first application of Watson, and 2) whether or not they were afraid of Watson. A group of three guys, Alex, Sean, and Thomas, wanted to see this technology replace the Google algorithm or WebMD. Two women, Anna and Karen, repeated what they’d heard the previous night: that Watson would be a tool to deal with “information overload.” The second question puzzled all five students. How could a machine that sorts data be evil?

To me, Watson is dangerous because of how people react to it, not because of what it does. The panel, which consisted of Adam Lally, Chris Welty, and several others, was asked whether humans are capable of “trusting a machine.” Lally’s response was, “The confidence values build trust.” Welty elaborated by noting that Watson provides a precise percentage of certainty, while a human will most likely just say, “I’m certain.” The panel even collectively considered the possibility of government officials asking a Watson-like computer how to solve the economic crisis.

When someone came to the microphone and asked the panel whether or not this was a technology they should be making, the panelists looked generally confused. Welty jokingly (I think) said, “Make sure I’m the one in power.” The head of RPI’s Cognitive Science Department concluded that as long as an embodied Watson-like machine in a combat scenario had some sort of “acceptable risk” algorithm, it would be ethically sound. None of the panelists believed that “the singularity” was a realistic possibility worthy of discussion.

Overall, I think we need to be worried about who has access to what I would call a “Truth Machine.” As long as it retains its reputation as a fount of 98%-certain Truth, access to the machine (and the credibility it bestows) will remain with power elites. Will these machines prove “trickle-down economics”? Or will the most efficient solution be a command economy with a central executive? A less radical question, perhaps: will this machine be used to end poverty, or to calculate optimal mutual fund portfolios?