Given that computational approaches to moral decision making, GWT, and the LIDA model are subjects that may not be familiar to all readers, the initial sections of this study provide brief overviews of these topics. The next section of the study introduces several approaches to computerizing ethics, GWT, and LIDA. The following section provides a description of the LIDA model and various theories and research that support this approach to human cognition. A discussion of the manner in which the LIDA model might be used to make moral decisions and some concluding comments follow.

Both as a set of computational tools and an underlying model of human cognition, LIDA is one attempt to computationally instantiate Baars’ global workspace theory (GWT). Such a computational instantiation of GWT, which attempts to accommodate the psychological and neuroscientific evidence, will be particularly helpful in thinking through an array of challenges with a high degree of specificity. In this study, we will explore how the LIDA model of GWT can be expected to implement a higher‐order cognitive task, specifically the kind of decision making involved in the resolution of a moral dilemma.

Despite significant gaps in scientific understanding, it is feasible to design systems that try to emulate the current best understanding of human faculties, even if those systems do not perform exactly as the brain functions. Computational models of human cognition are built by computer scientists who wish to instantiate human‐level faculties in AI, and by cognitive scientists and neuroscientists formulating testable hypotheses compatible with empirical data from studies of the nervous system, and mental and behavioral activity.

A central fascination with AI research has been the opportunity it offers to test computational theories of human cognitive faculties. AGI does not require that the computational system emulate the mechanisms of human cognition in order to achieve a comparable level of performance. However, human cognition is the only model we currently have for general intelligence or moral decision making (although some animals demonstrate higher‐order cognitive faculties and prosocial behavior). The cognitive and brain sciences are bringing forth a wealth of empirical data about the design of the human nervous system and about human mental faculties. This research suggests a host of new theories for specific cognitive processes that can, at least in principle, be tested computationally.

Nevertheless, we feel it is important to recognize that moral judgment and behavior are not the products of one or two dedicated mechanisms. Nor do we feel it is helpful to merely underscore the complexity of moral decision making. Therefore, we offer this model in hope of stimulating a deeper appreciation of the many cognitive mechanisms that contribute to the making of moral decisions, and to provide some insight into how these mechanisms might work together.

In proposing a comprehensive model for moral decision making, we are fully aware that other scholars will criticize this model as being inadequate. For example, neuroscientists might argue that a modular system such as LIDA does not capture the full complexity of the human neural architecture. Moral philosophers might contend that the agent we will describe is not really engaged in moral reflection because it lacks Kantian “autonomy” or “will.” The computer scientist Drew McDermott (unpublished data) asserts that appreciating the tension between self‐interest and the needs of others is essential for moral decisions and will be extremely difficult to build into computational agents. There are many criticisms that can be made of AGI models, as well as many arguments as to why computational agents are not capable of “true” moral reflection.

One goal of this study is to demonstrate that many moral decisions can be made using the same cognitive mechanisms that are used for general decision making. In other words, moral cognition is supported by domain‐general cognitive processes. Certainly, some kinds of moral decisions may require additional mechanisms, or may require that the kinds of mechanisms described in this study be modified to handle features peculiar to moral considerations. Elucidation of such mechanisms and their probable design is beyond the scope of this study.

Another goal is to outline a comprehensive approach to moral decision making. Philosophers and cognitive scientists have stressed the importance of particular cognitive mechanisms, for example, reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, in the making of moral decisions. But there has been very little work on thinking comprehensively about the broad array of cognitive faculties necessary for moral decision making. In analyzing how a moral machine might be built from the ground up, it becomes apparent that many cognitive mechanisms must be enlisted to produce judgments sensitive to the considerations humans accommodate when they respond to morally charged situations (Wallach & Allen, 2009).

So, if demonstrated success in either of these pursuits is so far in the future, what do we expect to achieve in this study? Our goals are two‐fold:

Artificial general intelligence and machine morality have emerged as distinct fields of inquiry. The intersection between their agendas has been minimal and primarily focused on Friendly AI ( Yudkowsky, 2001 ), the concern that future super‐intelligent machines be friendly to humans. But let us be clear at the outset. No AGI systems have been completed, nor do any computer systems exist that are capable of making sophisticated moral decisions. However, some computer scientists believe such systems can be built relatively soon. Ben Goertzel (personal communication, 2009) estimates that, with adequate funding, scientists could complete an AGI within 10 years. Certainly, sophisticated moral machines will require at least a minimal AGI architecture.

This interest in building computer systems capable of making moral decisions (“moral machines”) has been spurred by the need to ensure that increasingly autonomous computer systems and robots do not cause harm to humans and other agents worthy of moral consideration ( Wallach & Allen, 2009 ). Although the goals of this new research endeavor are more practical than theoretical, an interest in testing whether consequentialist, deontological, and virtue‐based theories of ethics can be implemented computationally has also attracted philosophers and social scientists to this new field. Most of the research to date is directed at either the safety of computers that function within very limited domains or at systems that serve as advisors to human decision makers.

Human‐level intelligence entails the capacity to handle a broad array of challenges, including logical reasoning, understanding the semantic content of language, learning, navigating around the obstacles in a room, discerning the intent of other agents, and planning and decision making in situations where information is incomplete. The prospect of building “thinking machines” with the general intelligence to tackle such an array of tasks inspired the early founders of the field of Artificial Intelligence. However, they soon discovered that tasks such as reasoning about physical objects or processing natural language, where they expected to make rapid progress, posed daunting technological problems. Thus, the developers of AI systems have been forced to focus on the design of systems with the ability to intelligently manage specific tasks within relatively narrow domains, such as playing chess or buying and selling currencies on international markets. Despite the fact that many tasks such as visual processing, speech processing, and semantic understanding present thresholds that have yet to be crossed by technology, there has been in recent years a transition back to the development of systems with more general intelligence. Such systems are broadly referred to as having artificial general intelligence (AGI; Wang, Goertzel, & Franklin, 2008 ).

In the section that follows we describe the LIDA model, its architecture, its antecedents, its relationship to other cognitive architectures, its decision making, and its learning processes. Then, we return to discussing how the LIDA model might be used for moral decision making. In particular, we offer hypotheses for how the LIDA model answers each of the questions raised in the six issues listed above. Through this exercise, we hope to demonstrate the usefulness of a computational model of GWT, and how a computer system might be developed for handling the complexity of human‐level decision making and, in particular, moral decision making. Whether a fully functioning LIDA would be judged to demonstrate the moral acumen necessary for moral agency is, however, impossible to determine without actually building and testing the system.

When a resolution to the challenge has been determined, how might the LIDA model monitor whether that resolution is successful? How might LIDA use this monitoring for further learning?

LIDA is a model of human cognition, inspired by findings in cognitive science and neuroscience, which is able to accommodate the messiness and complexity of a hybrid approach to decision making. Our task here is not to substantiate one formal approach to ethics in LIDA. Rather, we will describe how various influences, such as feelings, rules, and virtues, on ethical decisions might be represented within the mechanisms of the LIDA model. The resulting agent may not be a perfect utilitarian or deontologist, and it may not live up to ethical ideals. A LIDA‐based AMA is intended to be a practical solution to a practical problem: how to take into account as much ethically relevant information as possible in the time available to select an action.

The LIDA model describes how an agent tries to make sense of its environment and decides what to do next. An action is selected in every LIDA cognitive cycle (see below), of which there may be 5–10 in every second. More complex decisions require deliberation over many such cycles. The challenge for a model of cognition such as LIDA is whether it can truly describe complex higher‐order decision making in terms of sequences of bottom‐up, single‐cycle action selection.

Given that GWT is a leading model of human cognition and consciousness, it is valuable to explore whether a computational model of GWT can accommodate higher‐order mental processes. Three different research teams, led by Stanislas Dehaene, Murray Shanahan, and Stan Franklin, have developed models for instantiating aspects of GWT computationally. In this study, we focus on the LIDA model developed by Franklin and his team. In doing so, we do not mean to suggest that LIDA, or for that matter any computational model of cognition based on GWT, is the only AGI model capable of modeling human‐level decision making. We merely consider LIDA to be a particularly comprehensive model and one that includes features similar to those built into other AGI systems.

Global workspace theory ( Baars, 1988 ) was originally conceived as a neuropsychological model of consciousness, but it has come to be widely recognized as a high‐level theory of human cognitive processing, which is well supported by empirical studies ( Baars, 2002 ). GWT views the nervous system as a distributed parallel system with many different specialized processes. Some coalitions of these processes enable the agent to make sense of the sensory data coming from the current environmental situation. Other coalitions incorporating the results of the processing of sensory data compete for attention. The winner occupies what Baars calls a global workspace, whose contents are broadcast to all other processes. These contents of the global workspace are presumed to be conscious, at least from a functional perspective. This conscious broadcast serves to recruit other processes to be used to select an action to deal with the current situation. GWT is a theory of how consciousness functions within cognition. Unconscious contexts influence this competition for consciousness. In GWT, and in its LIDA model, learning requires and follows from attention, and occurs with each conscious broadcast.
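To make the competition-and-broadcast idea concrete, the following minimal Python sketch runs a single GWT-style step: coalitions of specialized processes compete on an activation value, and the winner's content is broadcast to every other process. All names, activation values, and example contents are our own illustrative stand-ins; they are not drawn from Baars' theory or from any existing implementation.

```python
from dataclasses import dataclass

@dataclass
class Coalition:
    """A coalition of specialized processes proposing content for the workspace."""
    content: str
    activation: float  # salience assigned by the processes that formed it

@dataclass
class Process:
    """A specialized, unconscious process that receives the global broadcast."""
    name: str
    def receive(self, broadcast: str) -> None:
        print(f"{self.name} received broadcast: {broadcast!r}")

def global_workspace_step(coalitions, processes):
    """One GWT step: coalitions compete; the winner's content is broadcast globally."""
    winner = max(coalitions, key=lambda c: c.activation)
    for p in processes:  # the broadcast recruits all other processes
        p.receive(winner.content)
    return winner

if __name__ == "__main__":
    coalitions = [Coalition("oncoming truck", 0.9), Coalition("song on the radio", 0.4)]
    processes = [Process("motor planning"), Process("episodic memory"), Process("language")]
    global_workspace_step(coalitions, processes)
```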

To date, the experimental systems that implement some sensitivity to moral considerations ( Anderson et al., 2006 ; Guarini, 2006 ; McLaren, 2006 ) are rudimentary and cannot accommodate the complexity of human decision making. Scaling any approach to handle more and more difficult challenges will, in all likelihood, require additional mechanisms.

Work has begun on the development of artificial mechanisms that complement a system’s rational faculties, such as affective skills ( Picard, 1997 ), sociability ( Breazeal, 2002 ), embodied cognition ( Brooks, 2002 ; Glenberg, 1997 ), theory of mind (ToM; Scassellati, 2001 ), and consciousness ( Holland, 2003 ), but these projects are not specifically directed at designing systems with moral decision‐making faculties. Eventually, there will be a need for hybrid systems that maintain the dynamic and flexible morality of bottom‐up systems, which accommodate diverse inputs, while subjecting the evaluation of choices and actions to top‐down principles that represent ideals we strive to meet. Depending on the environments in which these artificial moral agents (AMAs) operate, they will also require some additional supra‐rational faculties. The design of such hybrid systems must also specify just how the bottom‐up and top‐down processes interact.

Furthermore, even agents who adhere to a deontological ethic or are utilitarians may require emotional intelligence as well as other “supra‐rational” faculties ( Wallach & Allen, 2009 ). A sense of self, a theory of mind (ToM), an appreciation for the semantic content of information, and functional (if not phenomenal) consciousness ( Franklin, 2003 ) are probably also prerequisites for full moral agency. A complete model of moral cognition will need to explain how such faculties are represented in the system.

Bottom‐up approaches, if they use a prior theory at all, do so only as a way of specifying the task for the system, but not as a way of specifying an implementation method or control structure. A bottom‐up approach aims at goals or standards that may or may not be specified in explicit theoretical terms. Evolution, development, and learning provide models for designing systems from the bottom up. Alife (artificial life) experiments within computer environments, evolutionary and behavior‐based robots, and genetic algorithms all provide mechanisms for building sophisticated computational agents from the bottom up. Bottom‐up strategies influenced by theories of development are largely dependent on the learning capabilities of artificial agents. The bottom‐up development of moral agents is limited given present‐day technologies, but breakthroughs in computer learning or Alife, for example, might well enhance the usefulness of these platforms for developing artificial moral agents (AMAs; Wallach & Allen, 2009 ).
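As one illustration of the bottom-up strategy, the toy genetic algorithm below evolves a vector of action-preference weights against a fitness function. The fitness function is a placeholder; in a genuine experiment it would score an agent's behavior in an environment, and nothing about the population size, mutation rate, or target profile is drawn from the literature.

```python
import random

def fitness(genome):
    """Placeholder evaluation: in a real Alife or robotics experiment this would
    score the agent's behavior in an environment; here we simply reward closeness
    to an arbitrary target profile of action-preference weights."""
    target = [0.8, 0.1, 0.6]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(pop_size=30, generations=50, mutation=0.1):
    population = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # selection
        children = [
            [(x + y) / 2 + random.gauss(0, mutation)   # crossover plus mutation
             for x, y in zip(*random.sample(parents, 2))]
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    print("best evolved weights:", [round(w, 2) for w in evolve()])
```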

It is helpful, although somewhat simplistic, to think of implementing moral decision‐making faculties in AI systems in terms of two approaches: top‐down and bottom‐up ( Allen, Smit, & Wallach, 2006 ; Allen et al., 2000 ; Wallach & Allen, 2009 ; Wallach et al., 2008 ). A top‐down approach entails the implementation of rules or a moral theory, such as the Ten Commandments, Kant’s categorical imperative, Mill’s utilitarianism, or even Asimov’s laws. Generally, top‐down theories are deliberative and even metacognitive, although individual duties may be implemented reactively. A top‐down approach takes an antecedently specified ethical theory and analyzes its computational requirements to guide the design of algorithms and subsystems capable of implementing the theory.
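A crude sketch of the top-down approach follows: explicit rules act as hard constraints on candidate actions, and a simple utility score ranks whatever survives. The action annotations, the single rule, and the scoring are hypothetical stand-ins chosen only to illustrate how an antecedently specified theory can shape a control structure.

```python
from typing import Callable, Iterable, Optional

# A candidate action annotated with predicted consequences (hypothetical fields).
Action = dict  # e.g. {"name": str, "harms_human": bool, "expected_utility": float}

def top_down_select(actions: Iterable[Action],
                    rules: Iterable[Callable[[Action], bool]]) -> Optional[Action]:
    """Apply explicit rules as hard constraints, then rank the permissible survivors.

    Each rule returns True if the action is permissible under that rule."""
    rules = list(rules)
    permissible = [a for a in actions if all(rule(a) for rule in rules)]
    if not permissible:
        return None  # every option is ruled out; a real system would escalate or refuse
    return max(permissible, key=lambda a: a["expected_utility"])

if __name__ == "__main__":
    rules = [lambda a: not a["harms_human"]]  # a crude Asimov-style constraint
    actions = [
        {"name": "swerve", "harms_human": False, "expected_utility": 0.2},
        {"name": "brake",  "harms_human": False, "expected_utility": 0.6},
        {"name": "ignore", "harms_human": True,  "expected_utility": 0.9},
    ]
    print(top_down_select(actions, rules))  # selects "brake"
```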

Following Sloman (1999), we note that moral behavior can be reflexive, or the result of deliberation, and at least for humans, also includes metacognition when criteria used to make ethical decisions are periodically reevaluated. Successful responses to challenges reinforce the selected behaviors, whereas unsuccessful outcomes have an inhibitory influence and may initiate a reinspection of one’s actions and behavior selection. Thus, a computational model of moral decision making will need to describe a method for implementing reflexive value‐laden responses, while also explaining how these responses can be reinforced or inhibited through learning, top‐down deliberative reasoning, and metacognition.
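The reinforcement and inhibition described above can be pictured with a very small sketch in which each behavior carries a base-level activation that successful outcomes nudge upward and unsuccessful ones nudge downward. The update rule and learning rate are arbitrary stand-ins, not LIDA's actual learning mechanism.

```python
class Behavior:
    """A reflexive, value-laden response with a learnable base-level activation."""
    def __init__(self, name: str, base_activation: float = 0.5):
        self.name = name
        self.base_activation = base_activation

    def update(self, success: bool, rate: float = 0.1) -> None:
        """Reinforce on success, inhibit on failure (illustrative rule only)."""
        target = 1.0 if success else 0.0
        self.base_activation += rate * (target - self.base_activation)

if __name__ == "__main__":
    offer_help = Behavior("offer help")
    for outcome in (True, True, False, True):    # outcomes of past responses
        offer_help.update(outcome)
    print(round(offer_help.base_activation, 3))  # drifts upward with mostly successes
```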

Commonly, ethics is understood as focusing on the most intractable of social and personal challenges. Debate often centers on how to prioritize duties, rules, or principles when they conflict. But ethical factors influence a much broader array of decisions than those we deliberate upon as individuals or as a community. Values and ideals are instantiated in habits, normative behavior, feelings, and attitudes. Ethical behavior includes not only the choices we deliberate upon but also the rapid choices that substantiate values—choices that might be modeled in LIDA as single‐cycle, consciously mediated responses to challenges. Given this broad definition of ethical decisions, values play an implicit role, and sometimes an explicit role, in the selection of a broad array of actions.

Ethical decisions are among the more complex decisions that agents face. Ethical decision making can be understood as action selection under conditions where constraints, principles, values, and social norms play a central role in determining which behavioral attitudes and responses are acceptable. Many ethical decisions entail selecting an action when information is unclear, incomplete, confusing, or even false, where the possible results of an action cannot be predicted with any significant degree of certainty, and where conflicting values can inform the decision‐making process.

3. LIDA

3.2. The LIDA cognitive cycle

The LIDA model and its ensuing architecture are grounded in the LIDA cognitive cycle. Every autonomous agent (Franklin & Graesser, 1997), human, animal, or artificial, must frequently sample (sense) its environment and select an appropriate response (action). Sophisticated agents such as humans process (make sense of) the input from such sampling in order to facilitate their decision making. Neuroscientists call this three‐part process the action‐perception cycle. The agent’s “life” can be viewed as consisting of a continual sequence of these cognitive cycles. Each cycle consists of a unit of sensing, attending, and acting. A cognitive cycle can be thought of as a cognitive “moment.” Higher‐level cognitive processes are composed of many of these cognitive cycles, each a cognitive “atom.” Just as atoms have inner structure, the LIDA model hypothesizes a rich inner structure for its cognitive cycles (Baars & Franklin, 2003; Franklin, Baars, Ramamurthy, & Ventura, 2005).

During each cognitive cycle, the LIDA agent first makes sense of (see below) its current situation as best it can by updating its representation of both external and internal features of its world. By a competitive process to be described below, it then decides what portion of the represented situation is most in need of attention. This portion is broadcast, making it the current contents of consciousness and enabling the agent to choose an appropriate action and execute it. Fig. 1 shows the process in more detail. It starts in the upper left corner and proceeds roughly clockwise.

Figure 1. LIDA cognitive cycle diagram.

The cycle begins with sensory stimuli from external and internal sources in the agent’s environment. Low‐level feature detectors in sensory memory begin the process of making sense of the incoming stimuli. These low‐level features are passed on to perceptual memory, where higher‐level features such as objects, categories, relations, situations, and so on are recognized. These entities, which have been recognized preconsciously, make up the percept that is passed to the workspace, where a model of the agent’s current situation is assembled. This percept serves as a cue to two forms of episodic memory, transient and declarative. Responses to the cue consist of local associations, that is, remembered events from these two memory systems that were associated with the various elements of the cue.

In addition to the current percept, the workspace contains recent percepts and the models assembled from them that have not yet decayed away. A new model of the agent’s current situation is assembled from the percepts, the associations, and the undecayed parts of the previous model. This assembly process will typically be carried out by structure‐building codelets. These structure‐building codelets are small, special‐purpose processors, each of which has some particular type of structure it is designed to build. To fulfill their task, these codelets may draw upon perceptual memory and even sensory memory to enable the recognition of relations and situations. The newly assembled model constitutes the agent’s understanding of its current situation within its world. It has made sense of the incoming stimuli. For an agent operating within a complex, dynamically changing environment, this current model may well be too much for the agent to consider all at once in deciding what to do next. It needs to selectively attend to a portion of the model.
Portions of the model compete for attention. These competing portions take the form of coalitions of structures from the model. Such coalitions are formed by attention codelets, whose function is to bring certain structures to consciousness. One of the coalitions wins the competition. In effect, the agent has decided what to attend to. The purpose of this processing is to help the agent decide what to do next. To this end, a representation of the contents of the winning coalition is broadcast globally, constituting a global workspace (hence the name global workspace theory).

Although the contents of this conscious broadcast are available globally, the primary recipient is procedural memory, which stores templates of possible actions, including their contexts and possible results. It also stores an activation value for each such template that attempts to measure the likelihood of an action taken within its context producing the expected result. Templates whose contexts intersect sufficiently with the contents of the conscious broadcast instantiate copies of themselves with their variables specified to the current situation. Instantiated templates remaining from previous cycles may also continue to be available. These instantiations are passed to the action‐selection mechanism, which chooses a single action from one of these instantiations. The chosen action then goes to sensory‐motor memory, where it is executed by an appropriate algorithm. The action taken affects the environment, external or internal, and the cycle is complete.

The LIDA model hypothesizes that all human cognitive processing occurs via a continuing iteration of such cognitive cycles. These cycles occur asynchronously, with each cognitive cycle taking roughly 300 ms. The cycles cascade; that is, several cycles may have different processes running simultaneously in parallel. This cascading must, however, respect the serial nature of conscious processing necessary to maintain the stable, coherent image of the world it provides (Franklin, 2005b; Merker, 2005). Together with the asynchrony, the cascading allows a rate of cycling in humans of 5–10 cycles/s. A cognitive “moment” is thus quite short! There is considerable empirical evidence from neuroscience suggestive of and consistent with such cognitive cycling in humans (Massimini et al., 2005; Sigman & Dehaene, 2006; Uchida, Kepecs, & Mainen, 2006; Willis & Todorov, 2006). None of this evidence is conclusive, however.
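The skeleton below compresses the cycle just described into a few dozen lines of Python. Every module body is a toy stand-in (random activations, string percepts); only the ordering of the stages, from sensing through the conscious broadcast to action selection, follows the description above.

```python
import random
from dataclasses import dataclass

@dataclass
class Coalition:
    contents: dict
    activation: float

class ToyLidaAgent:
    """Single-cycle skeleton; all module internals are illustrative stand-ins."""

    def sense(self, stimuli):                   # sensory memory: low-level features
        return [s.lower() for s in stimuli]

    def recognize(self, features):              # perceptual memory: objects, categories, ...
        return {"percept": features}

    def cue_episodic(self, percept):            # transient and declarative episodic memory
        return {"associations": ["similar past event"]}

    def form_coalitions(self, model):           # attention codelets build coalitions
        return [Coalition({k: v}, random.random()) for k, v in model.items()]

    def instantiate_schemes(self, broadcast):   # procedural memory templates
        return [("act-on-" + key, random.random()) for key in broadcast]

    def cycle(self, stimuli):
        features = self.sense(stimuli)
        percept = self.recognize(features)
        associations = self.cue_episodic(percept)
        model = {**percept, **associations}     # workspace: current situational model
        winner = max(self.form_coalitions(model), key=lambda c: c.activation)
        broadcast = winner.contents             # the conscious broadcast
        schemes = self.instantiate_schemes(broadcast)
        action, _ = max(schemes, key=lambda s: s[1])   # action selection
        return action                           # sensory-motor memory would execute it

if __name__ == "__main__":
    print(ToyLidaAgent().cycle(["Oncoming truck", "Song on the radio"]))
```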

3.4. Feelings and emotions in the LIDA model

The word “feeling” may be associated with external haptic sense, such as the feeling in fingertips as they touch the keys while typing. It is also used in connection with internal senses, such as the feeling of thirst, of fear of a truck bearing down, of the pain of a pinprick, of pressure from a full bladder, of shame at having behaved ungraciously, and so on. Here, we are concerned with feelings arising from internal senses. Following Johnston (1999), in the LIDA model, we speak of emotions as feelings with cognitive content, such as the joy at the unexpected meeting with a friend, or the embarrassment at having said the wrong thing. The pain in one’s arm when scratched by a thorn is a feeling that is not an emotion, because it does not typically involve any cognitive content. Thirst is typically a feeling but not an emotion. Although the boundary between emotions and feelings is fuzzy, the distinction will prove important to our coming discussion of how feelings and emotions motivate low‐level action selection and higher‐level decision making.

Every autonomous agent must be equipped with primitive motivators, drives that motivate its selection of actions. In humans, in animals, and in the LIDA model, these drives are implemented by feelings (Franklin & Ramamurthy, 2006). Such feelings implicitly give rise to values that serve to motivate action selection. Douglas Watt (1998, p. 114) describes well the pervasive role of affect, including feelings, hypothesized by the LIDA model, as seen from the perspective of human neuroscience:

Taken as a whole, affect seems best conceptualized as a highly composite product of distributed neural systems that together globally organize the representation of value. As such, it probably functions as a master system of reference in the brain, integrating encodings done by the more modular systems supported in various relatively discrete thalamocortical connectivities. Given the central organizing nature of affect as a system for the global representation of value, and given evidence that virtually all stimuli elicit some degree of affective “valence tagging,” it would be hard to overestimate the importance of this valence tagging for all kinds of basic operations. The centrality of affective functions is underlined by the intrinsic interpenetration of affect, attentional function, and executive function, and it certainly makes sense that these three global state functions would be highly interdependent. It is logically impossible to separate representation of value from any neural mechanisms that would define attentional foci or that would organize behavioral output.

Watt’s emphasis on “representation of value” and “valence” will be important later for our discussion of the role emotions play in moral decision making. This section will be devoted to an explication of how feelings are represented in the LIDA model, the role they play in attention, and how they act as motivators, implicitly implementing values. (Feelings also act as modulators to learning, as we describe below.) Referring back to the LIDA cognitive cycle diagram in Fig. 1 may prove helpful to the reader.

Every feeling has a valence, positive or negative. Also, each feeling must have its own identity; we distinguish between the pains of a pinprick, a burn, or an insult, and we distinguish pains from other unpleasant feelings, such as nausea.
From a computational perspective, it makes sense to represent the valence of a single feeling as either positive or negative, that is, as greater or less than zero, even though it may be simplistic to assume that the positive and negative sides of this scale are commensurable. Nevertheless, it may be a viable working hypothesis that, in biological creatures, feelings typically have only positive valence or negative valence (Heilman, 1997). For example, the feeling of distress at having to over‐extend holding one’s breath at the end of a deep dive is a different feeling from the relief that ensues with the taking of that first breath. Such distress is implemented with varying degrees of negative valence, and the relief with varying positive valence. Each has its own identity. For complex experiences, multiple feelings with different valences may be present simultaneously, for example, the simultaneous fear and exhilaration experienced while on a roller coaster.

Feelings are represented in the LIDA model as nodes in its perceptual memory (Slipnet). Each node constitutes its own very specific identity; for example, distress at not having enough oxygen is represented by one node, relief at taking a breath by another. Each feeling node has its own valence, always positive or always negative, with varying degrees. The current activation of the node measures the momentary value of the valence, that is, how positive or how negative. Although feelings are subjected to perceptual learning, their base‐level activation would soon become saturated and change very little.

Those feeling nodes with sufficient total activations, along with their incoming links and object nodes, become part of the current percept and are passed to the workspace. Like other workspace structures, feeling nodes help to cue transient and declarative episodic memories. The resulting local associations may also contain feeling nodes associated with memories of past events. These feeling nodes play a major role in assigning activation to coalitions of information to which they belong, helping them to compete for attention. Any feeling nodes that belong to the winning coalition become part of the conscious broadcast, the contents of consciousness.

Feeling nodes in the conscious broadcast that also occur in the context of a scheme in procedural memory (the scheme net) add to the current activation of that scheme, increasing the likelihood of it instantiating a copy of itself into the action‐selection mechanism (the behavior net). It is here that feelings play their first role as implementation of motivation by adding to the likelihood of a particular action being selected. A feeling in the context of a scheme implicitly increases or decreases the value assigned to taking that scheme’s action. A feeling in the conscious broadcast in LIDA also plays a role in modulating the various forms of learning. Up to a point, the higher the affect, the greater the learning in the LIDA model. Beyond that point, more affect begins to interfere with learning.

In the action‐selection mechanism, the activation of a particular behavior scheme, and thus its ability to compete for selection and execution, depends on several factors. These factors include how well the context specified by the behavior scheme agrees with the current and very recent past contents of consciousness (i.e., with the contextualized current situation).
This agreement with the contextualized current situation constitutes the environmental influence on action selection. As mentioned earlier, the activation of this newly arriving behavior also depends on the presence of feeling nodes in its context and their activation as part of the conscious broadcasts. Thus, feelings contribute motivation for taking action to the activation of newly arriving behavior schemes. On the basis of the resulting activation values, a single behavior is chosen by the action‐selection mechanism.

The action ensuing from this behavior represents the agent’s current intention in the sense of Freeman (1999, p. 96ff), that is, what the agent intends to do next. The expected result of that behavior can be said to be the agent’s current goal. Note that the selection of this behavior was affected by its relevance to the current situation (the environment), the nature and degree of associated feelings (the drives), and its relation to other behaviors, some of these being prerequisite for the behavior. The selected behavior, including its feelings, is then passed to sensory–motor memory for execution. There the feelings modulate the execution of the action (Zhu & Thagard, 2002). Feelings may bias parameters of action such as speed or force. For example, an angry person picking up a soda may squeeze it harder than he would if he were not angry.
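The arithmetic below is a deliberately simple rendering of the roles just described: feeling nodes carry a fixed-sign valence and a current activation, they raise a coalition's salience in the competition for attention, they bias the value of schemes whose contexts they occupy, and affect modulates learning in an inverted-U fashion. The particular formulas and numbers are our own placeholders, not those of the LIDA implementation.

```python
from dataclasses import dataclass

@dataclass
class FeelingNode:
    """A perceptual-memory (Slipnet) node for one specific feeling."""
    name: str
    valence: float     # fixed sign: always > 0 or always < 0
    activation: float  # current intensity, in [0, 1]

def coalition_boost(feelings):
    """Feelings raise a coalition's salience in the competition for attention."""
    return sum(abs(f.valence) * f.activation for f in feelings)

def scheme_value(base, feelings_in_context):
    """Feelings in a scheme's context implicitly raise or lower its value."""
    return base + sum(f.valence * f.activation for f in feelings_in_context)

def learning_rate(affect, peak=0.6):
    """Toy inverted-U: learning rises with affect up to a point, then declines."""
    return max(0.0, affect * (2 * peak - affect)) / (peak * peak)

if __name__ == "__main__":
    fear = FeelingNode("fear of oncoming truck", valence=-0.9, activation=0.8)
    relief = FeelingNode("relief at braking in time", valence=0.7, activation=0.3)
    print(coalition_boost([fear, relief]))        # salience contributed to a coalition
    print(scheme_value(0.5, [fear]))              # fear lowers the value of "keep driving"
    print(learning_rate(0.4), learning_rate(0.9))
```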

3.5. Higher‐level cognitive processes and levels of control

Higher‐level cognitive processing in humans includes categorization, deliberation, volition, metacognition, reasoning, planning, problem solving, language comprehension, and language production. In the LIDA model, such higher‐level processes are distinguished by requiring multiple cognitive cycles for their accomplishment. In LIDA, higher‐level cognitive processes can be implemented by one or more behavior streams, that is, streams of instantiated schemes and links from procedural memory.

Cognitive processes have differing levels of control. Sloman (1999) distinguishes three levels that can be implemented by the architecture of an autonomous agent—the reactive, the deliberative, and the metacognitive. The first of these, the reactive, is the level that is typically expected of many insects, that is, a relatively direct connection between incoming sensory data and the outgoing actions of effectors. The key point is the relatively direct triggering of an action once the appropriate environmental situation occurs. Though direct, such a connection can be almost arbitrarily intricate, requiring quite complex algorithms to implement in an artificial agent. The reactive level is perhaps best defined by what it is not. “What a purely reactive system cannot do is explicitly construct representations of alternative possible actions, evaluate them and choose between them, all in advance of performing them” (Sloman, 1999). Reactive control alone is particularly suitable for agents occupying relatively simple niches in reasonably stable environments, that is, for agents requiring little flexibility in their action selection. Such purely reactive agents typically require relatively few higher‐level, multicyclic cognitive processes.

In contrast, deliberative control typically employs such higher‐level cognitive processes as planning, scheduling, and problem solving. Such deliberative processes in humans, and in some other animals, are typically performed in an internally constructed virtual reality. Such deliberative information processing and decision making allows an agent to function more flexibly within a complicated niche in a complex, dynamic environment. An internal virtual reality for deliberation requires a short‐term memory in which temporary structures can be constructed with which to try out possible actions “mentally” without actually executing them. In the LIDA model, the workspace serves just such a function. In the earlier IDA software agent, the action selected during almost all cognitive cycles consisted of building or adding to some representational structures in the workspace during the process of some sort of deliberation. Structure‐building codelets, the subprocesses that create such structures, modify, or compare them, and so on, are typically implemented as internal reactive processes. Deliberation builds on reaction. In the LIDA model, deliberation is implemented as a collection of behavior streams, each behavior of which is an internal reactive process (Franklin, 2000a). According to the LIDA model, moral decision making will employ such processes.

As deliberation builds on reactions, metacognition typically builds on deliberation. Sometimes described as “thinking about thinking,” metacognition in humans and animals (Smith & Washburn, 2005) involves monitoring deliberative processes, allocating cognitive resources, and regulating cognitive strategies (Flavell, 1979).
Metacognition in LIDA will be implemented by a collection of appropriate behavior streams, each with its own metacognitive task. Metacognitive control adds yet another level of flexibility to an agent’s decision making, allowing it to function effectively in an even more complex and dynamically changing environmental niche. Metacognition can play an important role in the moral decision making of humans, who may reflect on the assumptions implicit in the values and procedures they apply. However, it would be necessary to implement a fully deliberative architecture before tackling metacognition for any artificial agents, including LIDA.

Deliberation in humans often involves language. Of course, metacognition and language have proved to be very difficult challenges for artificial intelligence. Although the LIDA model suggests an experimental approach to the challenge posed by language and cognition, detailing that approach is beyond the scope of this study. Let it suffice to say that in the conceptual LIDA model, language comprehension is dealt with by word nodes and appropriate links in perceptual memory, leading to structures in the workspace that provide the semantic content of the words. We believe that language generation can be accomplished by schemes in procedural memory whose instantiations produce words or phrases. Given the complexity that language and language creation introduce to the cognitive architecture, the designers of LIDA have tabled this problem until the comprehensive LIDA model has been fully implemented computationally.
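A toy layering of the three levels of control might look like the sketch below: a reactive layer maps situations directly to actions, a deliberative layer constructs and compares alternatives before acting, and a metacognitive layer watches the deliberation and can cut it short. The rules, options, and cut-off are invented for illustration and do not reflect how LIDA's behavior streams are actually organized.

```python
REACTIVE_RULES = {"obstacle ahead": "swerve"}   # direct situation -> action mapping

def deliberate(options, evaluate):
    """Try out alternatives 'mentally' and pick the best (internal virtual reality)."""
    return max(options, key=evaluate)

def metacognitive_stop(history, limit=3):
    """Thinking about thinking: notice that deliberation has gone on long enough."""
    return len(history) >= limit

def select_action(situation, options, evaluate, history):
    if situation in REACTIVE_RULES:             # reactive: no explicit alternatives
        return REACTIVE_RULES[situation]
    if metacognitive_stop(history):             # metacognitive override
        return history[-1]
    choice = deliberate(options, evaluate)      # deliberative control
    history.append(choice)
    return choice

if __name__ == "__main__":
    print(select_action("obstacle ahead", [], None, []))
    print(select_action("route planning", ["highway", "back roads"],
                        lambda o: {"highway": 0.7, "back roads": 0.5}[o], []))
```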

3.6. Volitional decision making

Volitional decision making (volition for short) is a higher‐level cognitive process for conscious action selection. To understand volition, it must be carefully distinguished from (a) consciously mediated action selection, (b) automatized action selection, (c) alarms, and (d) the execution of actions. Each of the latter three is performed unconsciously.

Consciously planning a driving route from a current location to the airport is an example of deliberative, volitional decision making. Choosing to turn left at an appropriate intersection along the route requires information about the identity of the cross street acquired consciously, but the choice itself is most likely made unconsciously—the choice was consciously mediated, even though it was unconsciously made. While driving along a straight road with little traffic, the necessary slight adjustments to the steering wheel are typically automatized actions selected completely unconsciously. They are usually not even consciously mediated, although unconscious sensory input is used in their selection. If a car cuts in front of the driver, often he or she will have turned the steering wheel and pressed the brake simultaneously with becoming conscious of the danger. An alarm mechanism has unconsciously selected appropriate actions in response to the challenge. The actual turning of the steering wheel, how fast, how far, the execution of the action, is also performed unconsciously, though with very rapid sensory input.

Although heavily influenced by the conscious broadcast (i.e., the contents of consciousness), action selection during a single cognitive cycle in the LIDA model is not performed consciously. A cognitive cycle is a mostly unconscious process. When speaking, for example, a person usually does not consciously think in advance about the structure and content of the next sentence, and is sometimes even surprised at what comes out. When approaching the intersection in the earlier example, no conscious thought need be given to the choice to turn left. Consciousness serves to provide information on which such action selection is based, but the selection itself is done unconsciously after the conscious broadcast (Negatu & Franklin, 2002). We refer to this very typical single‐cycle process as consciously mediated action selection.

A runner on an unobstructed sidewalk may only pay attention to it occasionally to be sure it remains safe. Between such moments he or she can attend to the beauty of the fall leaves or the music coming from the iPod. The running itself has become automatized, just as the adjustments to the steering wheel in the earlier example. In the LIDA model, such automatization occurs over time, with each stride initiating a process that unconsciously chooses the next. With childhood practice, the likelihood of conscious mediation between each stride and the next diminishes. Such automatization in the LIDA model (Negatu, McCauley, & Franklin, unpublished data) is implemented via pandemonium theory (Jackson, 1987).

Sloman (1998) has emphasized the need for an alarm mechanism such as that described in the earlier driving example. A neuroscientific description of an alarm entails a direct pathway, the “low road,” from the thalamus to the amygdala, bypassing the sensory cortices, the “high road,” and thereby consciousness (Das et al., 2005).
The LIDA model implements alarms via learned perceptual memory alarm structures, bypassing the workspace and consciousness, and passing directly to procedural memory. There the appropriate scheme is instantiated directly into sensory–motor memory, bypassing action selection. This alarm mechanism runs unconsciously in parallel with the current, partly conscious, cognitive cycle.

The modes of action selection discussed above operate over different time scales. Volition may take seconds, or even much, much longer. Consciously mediated actions are selected roughly 5–10 times every second, and automatized actions as fast as that, or faster. Alarm mechanisms seem to operate in the sub 50‐ms range. In contrast, the execution of an action requires sensory–motor communication at roughly 40 times a second, all done subconsciously (Goodale & Milner, 2004). The possibility of hitting a 90‐mph fastball coming over the plate, or of returning a 140‐mph tennis serve, makes the need for such sensory–motor rates believable.

We now return to a consideration of deliberative, volitional decision making, having distinguished it from other modes of action selection and execution. William James (1890) introduced his ideomotor theory of volition. James uses an example of getting out of bed on a cold winter morning to effectively illustrate his theory, but in this age of heated homes we will use thirst as an example. James postulated proposers, objectors, and supporters as actors in the drama of acting volitionally. He might have suggested the following scenario in the context of dealing with a feeling of thirst. The idea of drinking orange juice “pops into mind,” propelled to consciousness by a proposer motivated by a feeling of thirst and a liking for orange juice. “No, it’s too sweet,” asserts an objector. “How about a beer?” says a different proposer. “Too early in the day,” says another objector. “Orange juice is more nutritious,” says a supporter. With no further objections, drinking orange juice is volitionally selected.

Baars (1988, Chapter 7) incorporated ideomotor theory directly into his GWT. The LIDA model fleshes out volitional decision making via ideomotor theory within GWT (Franklin, 2000b) as follows. An idea “popping into mind” in the LIDA model is accomplished by the idea being part of the conscious broadcast of a cognitive cycle, that is, part of the contents of consciousness for that cognitive moment. These contents are the information contained within the winning coalition for that cycle. This winning coalition was gathered by some attention codelet. Ultimately, this attention codelet is responsible for the idea “popping into mind.” Thus, we implemented the characters in James’ scenario as attention codelets, with some acting as proposers, others as objectors, and others as supporters. In the presence of a thirst node in the workspace, one such attention codelet, a proposer codelet, wants to bring drinking orange juice to mind, that is, to consciousness. Seeing a let’s‐drink‐orange‐juice node in the workspace, another attention codelet, an objector codelet, wants to bring to mind the idea that orange juice is too sweet. Supporter codelets are implemented similarly.

But how does the conscious thought of “let’s drink orange juice” lead to a let’s‐drink‐orange‐juice node in the workspace? Like every higher‐order cognitive process in the LIDA model, volition occurs over multiple cycles and is implemented by a behavior stream in the action‐selection module.
This volitional behavior stream is an instantiation of a volitional scheme in procedural memory. Whenever a proposal node in its context is activated by a proposal in the conscious broadcast, this volitional scheme instantiates itself. The instantiated volitional scheme, the volitional behavior stream, is incorporated into the action‐selection mechanism, the behavior net. The first behavior in this volitional behavior stream sets up the deliberative process of volitional decision making as specified by ideomotor theory, including writing the let’s‐drink‐orange‐juice node to the workspace.

Our fleshing out of ideomotor theory in the LIDA model includes the addition of a timekeeper codelet, created by the first behavior in the volitional behavior stream. The timekeeper starts its timer running as a consequence of a proposal coming to mind. When the timer runs down, the action of the proposal contends in the behavior net to be the next selected action, with the weight (activation) of deliberation supporting it. The proposal is most likely to be selected barring an objection or an intervening crisis. The appearance of an objection in consciousness stops and resets the timer, whereas that of a supporter or another proposal restarts the timer from a new beginning. Note that a single proposal with no objection can be quickly accepted and acted upon.

But might this volitional decision‐making process not oscillate with continuing cycles of proposing and objecting, as in Eric Berne’s (1964) “what if” game? Indeed it might. The LIDA model includes three means of reducing this likelihood. First, the activation of a proposer codelet is reduced each time it succeeds in coming to consciousness, thus decreasing the likelihood of its winning during a subsequent cognitive cycle. The same is true of objector and supporter codelets. The LIDA model hypothesizes that supporting arguments help in decision making in part by giving the supported proposal more time in consciousness, allowing more time off the timer. As a second means of preventing oscillation, impatience is built into the timekeeper codelet: each restart of the timer is for a little less time, thus making a decision easier to reach. Finally, a metacognitive process can watch over the whole volitional procedure, eventually decide that it has gone on long enough, and simply choose an alternative. This latter process has not yet been implemented.
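The small simulation below is one way to picture these dynamics. Proposals and supporting thoughts (re)start a timekeeper whose duration shrinks with each restart (impatience), objections stop it, and an unchallenged proposal is adopted when the timer runs out. The event encoding, the timer lengths, and the cycle cap standing in for the not-yet-implemented metacognitive override are all our own assumptions.

```python
class Timekeeper:
    """Timer with built-in impatience: each restart runs for a little less time."""
    def __init__(self, initial=4, impatience=1):
        self.duration = initial
        self.impatience = impatience
        self.remaining = None                 # None means the timer is stopped

    def start(self):                          # a proposal or supporter (re)starts it
        self.remaining = self.duration
        self.duration = max(1, self.duration - self.impatience)

    def stop(self):                           # an objection stops and resets it
        self.remaining = None

    def tick(self):
        if self.remaining is None:
            return False
        self.remaining -= 1
        return self.remaining <= 0            # True: adopt the current proposal

def volition(events, max_cycles=20):
    """events: one ('propose' | 'object' | 'support', content) per cognitive cycle."""
    clock, proposal = Timekeeper(), None
    for kind, content in list(events) + [("idle", None)] * max_cycles:
        if kind in ("propose", "support"):    # proposers and supporters bring an idea to mind
            proposal = content
            clock.start()
        elif kind == "object":
            clock.stop()
        if proposal is not None and clock.tick():
            return proposal
    return None                               # a metacognitive process would intervene here

if __name__ == "__main__":
    print(volition([("propose", "orange juice"), ("object", "too sweet"),
                    ("propose", "beer"),         ("object", "too early"),
                    ("support", "orange juice")]))   # -> 'orange juice'
```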