Turing ended a BBC interview in 1951 by discussing various ways in which the whole idea of machines thinking is unsettling to some people. Those qualms exist to this day, and there are even more arguments against the possibility now than there were then. Turing's final words in this interview echo still. For me, they express one of the most important reasons for continuing the quest. He says, "The whole thinking process is still rather mysterious to us, but I believe that the attempt to make a thinking machine will help us greatly in finding out how we think ourselves."

The sentence in the Mind paper I like best, though, is this one: "Conjectures are of great importance since they suggest useful lines of research." At a time when so much focus in education, industry and even research is on the short term, on following paths sure to deliver results, it is crucial to look far afield, to conjecture about what might be and to imagine different futures. With this in mind, I have been asking myself and others in computer science what question Turing might pose today were he around to see the vastly increased power of computing machines and the many ways in which computers are deployed now, which are so very different from his day's one-on-one person-computer interactions focused on mathematical computations.

I've made one proposal for this new conjecture, a challenge that reflects the great advances since 1950 in computer science, neuroscience and the behavioral sciences, as well as the highly networked ways in which ordinary people now use computers daily: Is it imaginable that a computer-agent team member could behave, over the long term and in uncertain, dynamic environments, in such a way that people on the team will not notice it is not human?

Unlike Turing's original question, this question asks not that a computer agent be indistinguishable from a person, but that it behave reasonably, that its mistakes make sense and are not noticeably non-human. It is here that even Watson and Siri reveal their nature: their errors make most evident that such systems are not yet thinking as Turing imagined they might. This question raises the challenge of building systems that work better for people because they are smarter about matching their abilities with people's. It lets us imagine health care systems that help physicians, nurses, social workers, and patients' families do the work of caring better, rather than imposing on them data-entry tasks or taxing interfaces. It suggests a vision of computer agents that support children in exploring biological, physical and mathematical worlds, so that learning is fun and integrated with life. It imagines a future in which computer systems make us feel smarter, not dumber, and work seamlessly with us, like a good human partner.
