Brain cells, electrodes, and tiny Peter Singer.

Pain is something the brain does. Nociception sends information about tissue damage (1) through the spinal cord, where such information can be modulated (2). However, pain doesn't really become that nasty, unpleasant experience until it weasels its way into your limbic system (4).

Is pain merely a mathematical construct?

Is "do cockroaches feel pain?" a question for neuroscientists studying cockroaches, or for social scientists studying humans?

50,000 cultured brain cells sit in a petri dish. Through a combination of electronic sensors, software engineering, and robotic sculpture, the physiology of the cells interacts with the psychology of some patrons of an art gallery [1]. From this transaction, judgments arise: the audience might report feelings of being watched, of play, or simply of remotely observing an oblivious 'seizure machine.' One particular type of audience member, the Animal Ethicist, might even wonder if we should be worried that the culture of brain cells (as a former animal) might be in pain.

While in most cases it is fairly straightforward to determine that a human is in pain, when one starts to ask whether non-humans (or even humans with severe communication problems, such as locked-in patients) are in pain, it is common to turn to neuroscience for help. The idea is that while mental states (such as pain and suffering) can only be indirectly deduced from behavior, provided the behaviors are 'wired up correctly,' mental states are directly (or non-contingently, to borrow the language of Dr. Martha Farah [2]) related to brain states. Thus, the tools of neuroscience can give us direct access to the amount of pain an organism is experiencing, bypassing a body that might hide this information from us (which could happen because of injury, or because the body was never equipped with a human face). We are obligated to perform this scientific investigation, as we have an obligation to prevent pain and suffering. At this point, we are working with the following assumption:

1) "Neural systems are the substrate of private, internal 'mental' events (such as 'suffering'), which can be a source of moral value."

Amusingly enough, this assumption can lead us to a mathematical formulation of suffering. The logic is as follows: the human brain, as well as perhaps other brains, has the capacity for suffering.
We can be somewhat precise about this, and say that this capacity is defined as the 'natural' ability to orchestrate behaviors that have been identified as correlates of suffering (such as freezing, favoring limbs, calling for help, and 'pained' facial expressions), in response to stimuli that are 'naturally' associated with suffering (extended bouts of pain, either physical or social). The circuitry within the brain that provides this capacity, while not yet fully understood, should in principle be a physical, deterministic system. As part of a physical, deterministic system, this circuitry should be describable as a set of mathematical relationships. Furthermore, as we earlier decided that mental states are non-contingently related to brain states, it is this mathematical relationship (which the 'suffering-behaviors' merely point to) that is the essence of suffering [3].

The implications of this are delightfully absurd: if suffering is a mathematical relationship, then any system that implements that relationship (a rat, a culture of brain cells, or a deviously engineered toaster) should qualify as being able to suffer, and under some ethical systems (here I am alluding to those of the animal ethicists Peter Singer and Richard Ryder) thus enter into the realm of morally relevant beings. That is, it becomes a moral obligation to prevent these entities from suffering, no matter how silly or alien their "suffering" might appear, simply due to the physical laws that govern part of their behavior.

Before we get too carried away (and start passing laws against posting such equations in public), let us carefully revisit assumption number 1. This statement can be problematized from a variety of perspectives (it's Cartesian, for God's sake!), but let's stick to a neuroscientific perspective.
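The multiple-realizability claim above can be caricatured in a few lines of code. This is purely my own toy illustration, not anything from neuroscience: two entirely different mechanisms (a 'neuron' and a 'toaster') that happen to compute the same input-output relationship are, on a strict functionalist reading, implementing the same state.

```python
# Toy sketch of multiple realizability (a hypothetical illustration, not a
# model of any real neural circuit): if "suffering" were just a mathematical
# relationship, any substrate computing that relationship would count.

def neuron_like(stimulus: float, threshold: float = 1.0) -> bool:
    """'Biological' substrate: respond when the stimulus crosses a threshold."""
    return stimulus >= threshold

def toaster_like(stimulus: float, threshold: float = 1.0) -> bool:
    """'Toaster' substrate: a different mechanism, same input-output mapping."""
    return not (stimulus < threshold)

# Both substrates realize the identical relationship for every input we try,
# so the functionalist has no principled way to privilege one over the other.
for s in [0.0, 0.5, 1.0, 2.0]:
    assert neuron_like(s) == toaster_like(s)
```

Of course, the absurdity the post points at is exactly this: nothing in the mathematics distinguishes the rat from the toaster.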
First, we've supposed the existence of private, internal events ("qualia") that can't be directly measured, and that have to be accounted for in addition to the (public) neural events that we can measure. Secondly, we must also deal with the fact that we've given these mysterious, ghostly events control over value, effectively tying one mystery to another. Compare the above assumption, then, to the one below:

2) "Neural systems are the substrate of public, embedded 'social' events (such as 'suffering'), which can be a source of moral value."

This small change in wording has solved the two problems outlined with assumption 1. First, we no longer have to contend with an awkward metaphysics that describes two kinds of events (internal, unmeasurable "mental" events and our normal, measurable "physical" events); instead, all events are now public and capable of being measured (which, as scientists, allows us a sigh of relief). Secondly, we no longer have value popping out of nowhere. Instead, suffering has moral value because it is a social event: a social, 'embedded' subject is needed to judge that suffering has occurred, and in doing so judges this suffering to be 'bad.'

This can be seen as emphasizing (after Dr. Grant Gillett [4] and Dr. Daniel Goldberg [5]) the social components of subjectivity (specifically, handing the definitions of subjective experiences to the social realm), while denying that subjectivity exists outside of the social realm (after Dr. Daniel Dennett [6]).

One disadvantage of this perspective is that it limits what we can expect to learn about pain if neuroscience focuses exclusively on "pain pathways" while ignoring empathy. This social perspective on suffering suggests that multiple entities (or at least multiple systems in the same entity) must be interacting before we can talk about suffering, or subjective states at all [7].
Instead, whatever abstractions or circuits we come across within a single brain must be considered as the building blocks of subjectivity and morality, and not equivalent to such. This implies an obligation to use value-neutral descriptions of neural states and circuits, or risk confusing "the substrate for the reality," in the words of Dr. Gillett [4]. Thus, we can't look at a culture of rat neurons sitting in a dish and evaluate their subjective state by using the techniques of neuroscience: the question is one for the audience.

Lastly, one advantage of this conception of suffering: by not rejecting the social component of suffering, we are forced to accept suffering as a thick, value-laden concept that escapes reduction to a sterile set of equations, or a carnal set of 'brain states.' Suffering is instead seen as a function of not just the individual, but also the ever-changing culture the individual exists within.

[1] Zeller-Townson, RT. (2012). Why use Brain Cells in Art? The Neuroethics Blog. Retrieved on April 29, 2013, from http://www.theneuroethicsblog.com/2012/09/why-use-brain-cells-in-art.html

[2] Farah, Martha J. "Neuroethics and the problem of other minds: implications of neuroscience for the moral status of brain-damaged patients and nonhuman animals." Neuroethics 1.1 (2008): 9-18.

[3] Unless you hold that the fact that these equations are being expressed through neurons, rather than other physical systems, is morally salient; but that is another blog post. I'm adopting a functionalist stance in this post.

[4] Gillett, Grant R. "The subjective brain, identity, and neuroethics." The American Journal of Bioethics 9.9 (2009): 5-13.

[5] I'd like to thank Dr. Daniel Goldberg for pointing out these particular accounts of pain (via Twitter, of all places!). Goldberg, Daniel. "Subjectivity, consciousness, and pain: The importance of thinking phenomenologically." The American Journal of Bioethics 9.9 (2009): 14-16.

[6] Dennett, Daniel C. Consciousness Explained. ePenguin, 1993.

[7] This isn't to say that someone can't suffer (or experience other, more pleasant qualia) in isolation. I am merely suggesting that the judgment this subject makes, "I am suffering," is as much a function of the subject's brain state as it is of the social environment that shaped the subject's operational definition of 'suffering.' However, I am saying that if no one ever applies the label "suffering" to a state (say, the struggles of a virion as it battles your immune system), then it hasn't suffered.

Zeller-Townson, R. (2013). A Social Account of Suffering. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/04/a-social-account-of-suffering.html#more