German philosopher Arthur Schopenhauer saw three stages in the revelation of any truth: First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as self‐evident. In the case of quantum cognition, an emerging field that advocates using quantum theory to explain the human mind, different assessments can be given depending on your perspective (i.e., how the wave function collapses, or more precisely, which subspace of your mental Hilbert space you choose to project your mental state vector onto). To a certain extent, however, we think that “confused” is probably a more fitting description of people's perception of the field. Here, the best footnote may be Stephen Hawking's critical comment on Roger Penrose's quantum‐theoretical approach to human consciousness: “His argument seemed to be that consciousness is a mystery and quantum gravity is another mystery so they must be related” (Penrose, 1997, p. 171). It is in this context that we applaud this special issue of topiCS on quantum cognition. The target papers showcase some of the newest results in using quantum theory to build models of human cognition. More important, they stimulate people to ponder some of the fundamental questions in cognitive modeling.

Different versions of quantum cognition

Quantum theory, the brainchild of a few of the greatest minds of the early 20th century, offers a very accurate description of how the physical world works. For example, quantum field theory, the combination of quantum mechanics and the special theory of relativity, is known to be accurate to about one part in 10¹¹ (Penrose, 2005). Although the idea that quantum theory may also offer a credible description of metaphysical aspects of reality goes back to the earliest days of the theory (e.g., Schrödinger, 1944/1958), using quantum theory to seriously tackle issues related to the human mind is a more recent development (for recent reviews, see Aaronson, 2013; Busemeyer & Bruza, 2012; Hameroff, 2012; Koch & Hepp, 2006). By “seriously” we mean those efforts that literally, rather than philosophically or metaphorically, believe in quantum theory as the road to mental reality and hold that the human mind results from quantum computation. There is a difference, however, in the degree of seriousness; we see at least two levels of commitment. The strong claim, represented by, among others, the Penrose–Hameroff theory of “orchestrated objective reduction” (Orch OR) of human consciousness (Hameroff, 1998, 2007; Penrose, 1997), argues that the human brain is a quantum computer and that quantum computations occur in the brain materially and literally. More important, it is exactly this kind of quantum computation in the brain that gives rise to the mind in general and consciousness in particular. Much effort has been devoted to pinpointing how quantum computations are carried out neurophysiologically, for example, through entangled microtubules in neurons connected and synchronized by gap junctions. When the entanglement collapses by “orchestrated objective reduction,” a fundamental effect of quantum gravity, consciousness arises.
Recently, this Orch OR state reduction has been linked to the gamma‐band EEG signal in the brain (~40 Hz), suggesting a ~25‐ms rhythm of conscious progression (Hameroff, 2012).

On the other hand, there is the weak claim, the one adopted by the editors and many of the authors in this special issue. In the call for commentaries, it was claimed that “this special issue is not interested in physics, and neither does the work presented claim the brain is a quantum computer.” Rather, “our approach applies abstract, mathematical principles of quantum theory to inquiries in cognitive science” (Wang, Busemeyer, Atmanspacher, & Pothos, 2013, p. 3).

It is not clear, however, to what extent one can treat the two levels of commitment as completely separate and still proclaim a complete theory of human cognition. In a seminal analysis of cognitive theory development, John Anderson suggests that any credible cognitive theory has to pass two tests, discovery and uniqueness (Anderson, 1993). The discovery test has to do with how a theory is found and identified among what are often many potential candidates, and the uniqueness test deals with how to demonstrate that the discovered theory is the right one. One of the important criteria underlying the uniqueness test is the demonstration of biological realism and implementation. Without such a demonstration, a theory is no more than a principled framework and is therefore often incomplete and unfalsifiable.

Similar criticisms exist for the classical probabilistic (CP) approach to human cognition (Bowers & Davis, 2012; McClelland et al., 2010), with which the current quantum probability (QP)‐based approach is both compared and contrasted. In a recent Science review, Tenenbaum and colleagues justify the Bayesian approach to modeling cognition and suggest that “the claim human minds learn and reason according to Bayesian principles is not a claim that the mind can implement any Bayesian inference” (Tenenbaum, Kemp, Griffiths, & Goodman, 2011, p. 1280). At the end of the article, however, they recognize the problem of pushing Bayesian models down through the algorithmic and implementation levels into neural circuits as one of the key open questions and acknowledge that “the project of reverse‐engineering the mind must unfold over multiple levels of analysis” (p. 1284). From this perspective, the QP approach has an advantage over the CP approach in providing a more complete theory of the human mind, given its closer link to physics and biology: a unified set of quantum mechanisms such as superposition, entanglement, decoherence, and interference may be used to describe both physical and psychological realities.

We would like to point out another interesting and related observation concerning the difference between the CP and QP approaches to modeling cognition. In justifying the appeal of the Bayesian approach, Tenenbaum and colleagues further delimit its scope, noting that “only those inductive computations that the mind is designed to perform well, where biology has had time and cause to engineer effective and efficient mechanisms, are likely to be understood in Bayesian terms” (p. 1280). They claim that this is why the Bayesian approach enjoys great success in modeling rapid, intuitive, low‐level, and unconscious processes but notoriously fails in various high‐level, explicit judgment and decision‐making tasks (e.g., “biases and heuristics”). It is interesting to note that it is in exactly these explicit judgment and decision‐making tasks that QP models excel. The QP model of the conjunction fallacy in the Linda problem is not only elegant but also insightful (Busemeyer & Bruza, 2012; Wang et al., 2013). Similar examples, as demonstrated by papers in this special issue, include the order effect (Wang & Busemeyer, 2013) and vagueness judgments (Blutner, Pothos, & Bruza, 2013). It is then quite puzzling that QP theory enjoys its greatest success in modeling exactly the tasks for which biology has not engineered “effective and efficient” mechanisms, whereas CP models justifiably give them up because the mind is not “designed to perform them well.”
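The quantum account of the conjunction fallacy and of the order effect can be illustrated with a toy calculation. In a two-dimensional real Hilbert space, judging “feminist” and then “bank teller” amounts to two successive projections of the belief state, and the resulting sequential probability can exceed the direct probability of “bank teller” alone. The angles below are hypothetical, chosen only to make the effects visible; they are not fitted to data from any of the cited papers. A minimal sketch:

```python
import math

def p(angle_deg):
    """Probability of projecting a unit state vector onto a 1-D subspace
    whose basis vector lies angle_deg away from the state."""
    return math.cos(math.radians(angle_deg)) ** 2

# Hypothetical, illustrative angles (degrees) between the belief state
# |psi> and the "feminist" and "bank teller" axes.
FEMINIST, TELLER = 20, 85

# Direct judgment: one projection of |psi> onto the teller axis.
p_teller = p(TELLER)

# Sequential judgments: project onto the first axis, then from that
# axis onto the second (the law of total probability need not hold).
p_fem_then_teller = p(FEMINIST) * p(TELLER - FEMINIST)
p_teller_then_fem = p(TELLER) * p(TELLER - FEMINIST)

print(f"P(teller)           = {p_teller:.4f}")
print(f"P(feminist, teller) = {p_fem_then_teller:.4f}")  # conjunction fallacy
print(f"P(teller, feminist) = {p_teller_then_fem:.4f}")  # order effect
```

With these angles the sequential probability of “feminist, then bank teller” is far larger than the direct probability of “bank teller,” reproducing the conjunction fallacy, and reversing the question order changes the answer, reproducing the order effect; both are impossible under a single classical probability distribution.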