Think about how medical treatment currently works—it invariably involves some kind of intervention from the outside. Of course, there’s preventive care, or simply taking care of yourself by paying attention to diet, exercise and so on—taking care of yourself relies upon the body’s own metabolic interactions: ingesting too much sugar induces certain biochemical reactions that ultimately lead to weight gain or diabetes (which in turn affects the operation of certain organs such that…). Intervention, whether surgical or pharmaceutical, starts with the assumption that the body can no longer manage itself—decisions about lifestyle can no longer prevent certain metabolic interactions, or the failure of certain organs to “process” the results of such interactions (which itself would be a metabolic failure). And so some “cause” is introduced from the outside that targets certain metabolic interactions in order to suppress or enhance them. A lot of this seems to be ad hoc—quite often, it seems that no one is sure why a particular treatment works as it does—we can just verify, within some margin of error, that it usually has the desired effect. And then it becomes necessary to monitor the body and run separate tests checking for “side effects,” i.e., consequences for metabolic processes other than the one being targeted for suppression or enhancement.

In other words, there is, in current medical research and practice, no totalizing engineering approach to human health, an approach that would transcend the natural/artificial distinction and make the organic metabolisms self-regulating even in response to breakdowns in normal metabolic operations. In fact, if we had such an approach, there wouldn’t really be “breakdowns”—the “mechanism” introduced into or, better, elicited from, the metabolic organization one is born with would include sensors that detect, well in advance, the signs that such “breakdowns” were likely in a given organ or process—and trigger, automatically, counteracting metabolic activity that, furthermore, is coordinated with metabolic activity throughout the body so that effects in one place don’t disrupt satisfactory operations elsewhere. Such mechanisms would most likely be “placed” or “induced” near the genetic level of human functioning, somewhere along the line where genotype produces phenotypes. Maybe these mechanisms would work, in part, by leading the human organism to spontaneously reject the unhealthy and embrace the healthy—for example, by inducing disgust at those foods it would be worst for you to eat right now and hunger for those foods that would be most beneficial.
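The detect-in-advance, counteract-proportionately loop described here is, in engineering terms, an anticipatory control loop. A minimal sketch, with entirely invented biomarker names, thresholds and numbers (none of this is real physiology), might look like:

```python
# Hypothetical sketch of the self-regulating loop: sensors report a biomarker's
# recent readings, the controller projects the current trend forward, and a
# counter-action is triggered before the "breakdown" threshold is ever reached.
# All values and the counter-action itself are illustrative assumptions.

def drift(readings):
    """Average change per step across a short window of readings."""
    return (readings[-1] - readings[0]) / (len(readings) - 1)

def regulate(readings, threshold, horizon=5):
    """Return a (possibly empty) list of counter-actions.

    Triggers when the current trend would cross `threshold` within
    `horizon` future steps -- i.e., well before actual failure.
    """
    current = readings[-1]
    rate = drift(readings)
    projected = current + rate * horizon
    if projected >= threshold:
        # Counter-action sized to cancel only the projected overshoot, so the
        # correction stays proportionate and minimally disruptive elsewhere.
        return [("suppress", projected - threshold)]
    return []
```

The point of the sketch is the timing: nothing here waits for a failure; the trigger fires on a projection, which is what would dissolve the category of "breakdown" altogether.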

The kind of mechanism I am discussing here, and which is really not all that hard to imagine becoming reality in the coming decades, would essentially be an “app.” An app is an interface that creates a relation between a user and the cloud. The kind of biological app that is the object of my speculations here would place the individual human body in relation to the entire, continually updated, database of biological, chemical and medical research produced by global research; included in this database would be the archived information about all human bodies, past and present, upon which information (gathered by the very apps that are plugged into our bodies, which also become part of the archive) the individual app would draw in controlling the totality of internal metabolic activity. It’s hard to see how one could be against such developments—both in the sense that it’s hard not to see it as a tremendous improvement in human well-being and in the sense that it’s hard to imagine what could stop it. We might still die, because organs and functions might still “wear down” beyond the capacity of our total health app to completely reverse, but even death would be modulated so that we “ease into it” (as people occasionally do now, after a long, well-lived life) in a relatively painless, predictable way.

We could say that things get more complicated when we take into account that “health” includes “mental health,” and “mental health” is always going to be assessed by criteria that are at least in part historical, cultural and therefore political. But it may be that advances in research connecting brain states to mental conditions can help limit the abuse of treating people outside of the norm in terms of taste, interests or opinions as thereby “abnormal.” We do, apparently (suspending, for now, the justified skepticism regarding what anyone claims to know to a scientific certainty right now), know that schizophrenia, for example, is very directly correlated with, and therefore likely “caused by,” identifiable abnormal brain states. (Otherwise, how could we have drugs that modify the experience of schizophrenics?) Anyway, it’s hard to imagine resisting developments in this area either. I recently had occasion to read a paper, written by a student at a fairly elite liberal arts college, in which the issue of mental illness came up, and noticed that where the word “normal” (or “healthy”) would previously have appeared, the word “neuro-typical” now appears. One can see how the distinction between “neuro-typical” and “neuro-atypical” would replace the distinction between “normal” and “abnormal” (or “healthy” and “sick”) in a victimary as well as more strictly medical framework. If we locate different behaviors within a range of brain activity somewhere upon a bell curve, then judgment is removed while the question of treatment can be made more “consensual.” Perhaps a highly neuro-atypical individual can be made cognizant of how his brain activity contributes to his idiosyncratic behavior or thinking and, as long as that individual is not unduly disruptive or dangerous, he might not only be permitted to “embrace” his neuro-atypicality but compel others to respect it as well.
It’s easy to see the emergent ethic here: if you’re not ready to enforce medical intervention, or if, after the total health app has been installed, the cloud finds this person safe enough to leave to his habitual functioning, then you need to adjust to him just as much as you expect him to adjust to you.
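The bell-curve framing can be made concrete with a toy calculation: “atypicality” as distance from the population mean measured in standard deviations, with judgment replaced by a threshold that, in the terms used above, would be set at the “cloud” level. The data and the cutoff here are invented for illustration; nothing in them corresponds to any real diagnostic practice.

```python
# Toy model: locate an individual measurement on a population bell curve.
# The population values and the 2-sigma cutoff are illustrative assumptions.

from statistics import mean, stdev

def atypicality(value, population):
    """How many standard deviations `value` sits from the population mean."""
    return abs(value - mean(population)) / stdev(population)

def neuro_typical(value, population, cutoff=2.0):
    # Within `cutoff` standard deviations counts as "typical"; the cutoff is
    # exactly the kind of parameter decided above the individual app's level.
    return atypicality(value, population) <= cutoff
```

Note what the sketch makes visible: the word “typical” does no evaluative work at all; everything normative has migrated into the choice of `cutoff`.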

All questions about “the good,” then, would become questions of designing apps that would “materialize” or “concretize” the cloud in a particular way. I’ve been using health care as a particularly illustrative example, but all of human life is taking shape along these lines: all practices are becoming apps, or at least are, or could easily be, “appified.” Perhaps there will be decisions made on the cloud level and those made on the app level. At the cloud level, it would be determined, say, how neuro-atypical people can be permitted to be (depending upon a whole range of “factors”); on the app level, individuals would coordinate their neuro-atypicality with other individual types and institutional imperatives. Of course, there are apps that establish a very simple relation between the user and his environment—e.g., an app that lets you know the nutritional content of all the food in your refrigerator. But a lot of apps take the form of social experiments. Traffic apps fall into this category: if you use something like Waze to navigate, you not only rely on other people (who must be incentivized in some way) to warn you of speed traps, accidents up ahead, road work, potholes, etc., but you participate in a kind of paradoxical activity, because the more people are aware of present traffic conditions, the more their knowledge will transform those conditions. So, a good traffic app would have to know how many people use that app, how they adjust their driving behavior accordingly, and what the consequences are of a certain number of people coming to learn that traffic will be lighter along one path than another an hour from now. Won’t that make traffic heavier along that path, and wouldn’t the app need to account for that?
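The paradox has a standard shape: the app’s advice changes the very conditions it reports, so a naive app chases its own tail, while a “good” app would look for the split of drivers at which neither route is faster, an equilibrium. A minimal sketch, with invented routes and invented travel-time numbers (this is not how Waze works, just the shape of the problem):

```python
# Toy model of the feedback paradox: two routes whose travel times grow with
# the number of drivers using them. Instead of sending everyone to the route
# that is lighter *now*, the app nudges drivers toward the split at which the
# two travel times match. All parameters are illustrative assumptions.

def travel_time(base, load, drivers):
    # Travel time rises linearly with the number of drivers on the route.
    return base + load * drivers

def equilibrium_split(total, base_a, load_a, base_b, load_b, steps=1000):
    """Iteratively move a small share of drivers to the currently faster
    route until the two travel times (approximately) equalize."""
    on_a = total / 2
    for _ in range(steps):
        t_a = travel_time(base_a, load_a, on_a)
        t_b = travel_time(base_b, load_b, total - on_a)
        on_a += 0.01 * total * (1 if t_a < t_b else -1)
        on_a = max(0.0, min(float(total), on_a))
    return on_a
```

At the equilibrium split, telling one more driver “route A is lighter” is no longer true once he acts on it, which is exactly the self-undermining knowledge the paragraph above describes.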

Cloud policies will be determined by those in the clouds, but we can think about app policy as providing the feedback any cloud policy would depend on. What I want to bring out here is the difference between app practices and “normal” political practices. Normal politics can be seen, by analogy, as a very crude version of the interventionist practices characteristic of contemporary medicine. Something has “gone wrong,” and you try to apply some arbitrary principle (“equality,” “democracy,” “freedom,” “balance of powers,” some institutional “best practice,” whatever) to “fix” it. Needless to say, the situation in politics is far worse than in medicine—no one really knows what the cause and effect relations, along with all the “side effects,” of any policy (always introduced, it should be noted, in a highly compromised, proviso-ridden way—almost as if you couldn’t prescribe a drug to lower cholesterol without that drug also including some mood-adjustment “amendment”) are—certainly not beyond the very short term (if you give people money, they will have that money, at least for 5 minutes; if you bomb a building, you will destroy the building, etc.).

“Appy” practices, meanwhile, would mimic the kind of self-regulation we could see to be already at work, or possible, within existing practices in some enhanced and explicit form. The goal is to act on the “genetic,” or generative, or scenic level and help bring order into the institutional “stack.” Much of this kind of work will have a satiric character. Take, for example, the way interventionist politics deals with the media—both sides, left and right, complain that it is “biased” and influenced by (or “in the pocket” of) “special interests” of one kind or another. Of course, there’s a lot of truth in such accusations. But they are always predicated upon a fantasy of a disinterested press serving a general public sharing the same perspectives and interests. The more media companies and institutions are independent centers of power, the less they can be anything other than information laundering extortion rackets whose sole purpose is to wield power over and on behalf of selected enemies and friends. Direct mouthpieces, whether of the government or specific institutions, would be more reliable, because at least you could imagine why that source wants to provide you with this information. But there’s little point to simply saying this, either—it doesn’t really help in filtering the vast swarm of information and disinformation swirling around us.

A more “appy” approach would be to attribute a plausible purpose to the media—say, to help people decide more intelligently whom they should vote for—and then translate all media pronouncements into something like “this source says that you do—or should—vote for politicians for X reason.” Let’s say some cable news anchor accuses a particular politician of “lying,” or purports to prove that he has “lied” (always at some specific distance from some “truth”—attested to by someone, whom we either know or don’t know much about, who has “attested” to other things of varying degrees of truthfulness; always in a specific context, always in a relation to other things said which may be more or less true, always given certain assumptions about what the “liar” actually knows and doesn’t; always with specific presumed consequences, and so on). Well, then, that source is telling you that you’re the type of person who is less likely to vote for someone who lies in that way, to that degree, in that context, with those consequences, and so on.

Our “app,” then, would generate a mapping of all politicians (maybe past as well as present) who have told that precise “type” of lie (of course, what counts as a particular “type” of lie is the kind of question the app would draw upon the cloud to answer), along with, perhaps, politicians who have told “types” of lies that more or less closely approximate that type, with varying criteria introduced to determine which kinds of “lies” are “worse” (in which conditions, etc.); along with which “types” of voters voted for and against all those different “types” of lying politicians. Now, the point of an “appy” practice of political pedagogy is not to produce such a map but to allude to or indicate or enact its possibility and its necessity if that particular report of that politician’s “lie” is to mean something. In this way we learn to introduce, to use the idiom of contemporary media and politics, as “poisonous” a “pill” as possible into the relations between rulers, media and public. In the end, we would want to narrow down the question into a very specific “slice” of the “stack”: what are people with specific responsibilities saying, how and why, and how are those of us further downstream of those responsibilities listening to what they say—and how can we do so in a way that brings the way we listen into closest possible conformity with what our own modicum of power (the reach of our apps) best enables us to do. This is the app we try to install; or, this is the mode of being as an app we wish to install. The politics of planetary computation is the politics of converting users into interfaces. I’m a political app insofar as, in listening to me, you become an interface yourself by creating a slice of the stack as a grammatical stack: a failed imperative, prolonged into a question, issuing in a declarative revealing a new ostensive from which you derive a new imperative.
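Since the point is the map’s possibility rather than the map, a deliberately crude sketch is enough to enact its shape: each reported “lie” as a tuple of features (context, degree, consequence), politicians grouped by exact type, and other types ranked by how many features they share. The features and records below are invented examples; what the sketch shows is only the form of the query a real “cloud” would answer.

```python
# Crude sketch of the lie-type map: index politicians by lie-feature tuple,
# then rank other lie types by feature overlap. All data here is hypothetical.

from collections import defaultdict

def lie_index(records):
    """records: iterable of (politician, lie_features) pairs."""
    index = defaultdict(set)
    for politician, features in records:
        index[features].add(politician)
    return index

def similar_types(index, features):
    """Rank the other lie types by number of features shared with `features`."""
    def overlap(other):
        return sum(a == b for a, b in zip(features, other))
    others = [t for t in index if t != features]
    return sorted(others, key=overlap, reverse=True)
```

Even this toy makes the pedagogical point: the anchor’s accusation only “means something” relative to such an index, i.e., relative to how this lie type compares with every other type and with who voted for whom.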
At the very least you can generate a wider range of responses (hear a broader range of imperatives) to being told a politician has lied (why was he obliged to tell the truth on that occasion, anyway?) well beyond the reactive interventionist one, demanding he be “held accountable”—and into appier regions, as we start to build up self-regulatory inhibitors and activators that come to take in more of the system, in its totality of utterances.