Despite what some philosophers will tell you, morality is clearly a work in progress. It changes over time. My father was born into a world in which “fornication” was considered immoral. Now, not only do most people not regard it as immoral, many have trouble even understanding how anyone could ever have regarded it as immoral. Such is the way things change.

There is a complex relationship between our moral code and the present state of technology. It is surely no accident that the seismic shifts in sexual mores occurred in the wake of the discovery of safe, effective birth control technology.

There is another technological change looming on the horizon, which it seems to me stands a good chance of changing everyday morality in fundamental ways. We can already see the technology at work in apps like Airbnb and Uber, which seek to eliminate the “trust” problem between contracting parties by allowing them to rate one another. Uber allows passengers to rate drivers, but just as importantly, allows drivers to rate passengers. Act like a boor in the back seat of an Uber driver’s car, and you may find that fewer drivers are willing to pick you up next time you try to summon a ride (actually, what happens is that the highly rated drivers will pass on you, leaving you to the drivers who themselves have a low rating). Similarly, if you make a mess of an apartment that you’ve rented through Airbnb, fewer people will be willing to rent to you in the future. This is sometimes called an online reputation system.
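For readers curious about the mechanics, the dynamic described in parentheses above can be sketched as a toy matching rule: each driver has a rating of their own and a minimum passenger rating they will accept, so low-rated passengers sort down to low-rated drivers. The names, numbers, and thresholds here are purely illustrative assumptions, not anything the companies have disclosed.

```python
def match_driver(passenger_rating, drivers):
    """Return the best-rated driver willing to accept this passenger.

    Each driver is a (driver_rating, minimum_acceptable_passenger_rating)
    pair. Purely a hypothetical sketch of the sorting dynamic, not an
    actual dispatch algorithm.
    """
    willing = [d for d in drivers if passenger_rating >= d[1]]
    if not willing:
        return None  # nobody will take this passenger
    return max(willing, key=lambda d: d[0])

drivers = [
    (4.9, 4.5),  # top-rated driver, picky about passengers
    (4.2, 3.5),
    (3.1, 0.0),  # low-rated driver, takes anyone
]

# A well-behaved passenger gets the best driver; a boorish one
# is left with the low-rated driver.
print(match_driver(4.8, drivers))  # → (4.9, 4.5)
print(match_driver(2.0, drivers))  # → (3.1, 0.0)
```

The point of the sketch is that no central authority has to punish anyone: the sorting falls out of each driver’s individual acceptance threshold.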

Now imagine an online reputation system, not for cars and apartments, but for life generally, where you get a “score” based on any and all aspects of your behavior. What is this guy like? Is he nice, or nasty? Is he cooperative, or is he a jerk? What’s he like on a date? Did he do well in school? Does he show up for work on time? (Substitute “she” throughout as well.) Imagine a system that tracks this, with certain inputs from teachers, employers, etc., and others from the public, like Yelp reviews. There is already a dating app that allows women to rate men (Lulu), but it still requires the man’s permission in order to create a profile (which limits its usefulness at filtering out rapists, one of the most obvious potential applications). But there’s no way to stop someone from creating a system that permits an involuntary profile to be created, and it seems to me it’s just a matter of time before someone does.

Of course, it may not be private individuals that do it. The government of China has announced its intention to create a “social credit” system, with a rating for all individuals – like a credit score, blended with a secret police file, then generalized to include all sorts of information about pro-social behaviour (not just repayment of loans). The pitch is that it can be used for dating, as well as employment (but also, of course, by police and other authorities). Right now the government is letting private firms run the pilot projects, but the plan is ultimately to create a unified system under state control. Most Chinese, it should be noted, do not appear to be overly fussed about this (which I don’t find particularly surprising, although it has attracted some incredulous commentary in the Western media).

The question is, should we be fussed about this? Like most Westerners, my natural reaction is negative; I find the whole idea shockingly intrusive. On the other hand, it would obviously be useful, and it would eliminate an enormous number of collective action problems. As Uber and Airbnb have shown, the problem of trust between strangers created enormous transaction costs, which were preventing billions of dollars’ worth of valuable transactions from taking place. So many apartments sitting empty! So many cars sitting idle! Lots of people want places to stay, or need rides. The problem was that people couldn’t trust one another, and so the transactions didn’t take place. Solve the trust problem (through an online payment system, as well as the rating system), and suddenly an entire market appears where previously there was none.

Now just imagine how many other transactions aren’t taking place, right now, because people can’t trust one another… Think of how dramatically different your life would be, how many different activities you would engage in, if you had some way of knowing immediately, upon meeting a stranger, whether you could trust that person, and if you had the power to alert everyone else, should that person abuse your trust. Imagine that your phone simply displayed the “social credit” score of everyone in proximity to you… in the bar, on the subway, in the classroom. I don’t think it would be an exaggeration to say that social life would be completely transformed.

There is also, it should be noted, a traditional left-wing fantasy about a cashless economy that runs entirely on reciprocity, with something like a “social credit” score (or like reddit’s “karma”) to track each individual’s contributions. Many of these socialist schemes, particularly those that aim to reduce or eliminate “consumerism,” are also shockingly intrusive – consider, for instance, the “Parecon” proposal, which suggests that all of your consumption demands should be subject to deliberative approval by your neighbours. The charitable interpretation of these schemes is that their supporters have not really thought through very carefully what it would actually be like to live under them. In any case, I am curious what supporters of such schemes think of the incipient Chinese social credit system.

Incidentally, this sort of information-sharing about people’s character is a structural feature of small-scale societies, where it is accomplished largely through gossip. This went hand-in-hand with very intrusive forms of social control. In the small-scale societies in which humans evolved, everyone knew everything about your business, and your reputational “score” was common knowledge in all interactions. Gossip, however, as a social control mechanism, does not survive the transition to large-scale societies and urban living. It’s just too hard to keep track of what everyone is up to (this is the point of Robin Dunbar’s claims about the cognitive limits on group size). So you wind up interacting with all sorts of people that you don’t really know anything about.

In this way, modern technology is really just recreating the conditions of small-scale societies on a mass scale. In retrospect, it may turn out that we lived in an enchanted time, in which humanity had figured out how to make the leap from small-scale to large-scale societies, but had not yet figured out how to reimpose the social control systems that operate on the small scale. This created something of a golden age of individual freedom – one in which you could act in anti-social ways without serious consequence. When I contemplate the possibility of a comprehensive social ranking system, I’m struck by the sentimental attachment I have to the ability to act immorally without anyone knowing.

Recall the passage in John Stuart Mill’s On Liberty, in which he deals with the issue of alcohol sales. He is arguing against a temperance “Alliance,” which asserted that the state could prohibit conduct whenever it violated the “social rights” of others:

A theory of “social rights,” the like of which probably never before found its way into distinct language: being nothing short of this—that it is the absolute social right of every individual, that every other individual shall act in every respect exactly as he ought; that whosoever fails thereof in the smallest particular, violates my social right, and entitles me to demand from the legislature the removal of the grievance. So monstrous a principle is far more dangerous than any single interference with liberty; there is no violation of liberty which it would not justify; it acknowledges no right to any freedom whatever, except perhaps to that of holding opinions in secret, without ever disclosing them: for, the moment an opinion which I consider noxious passes any one’s lips, it invades all the “social rights” attributed to me by the Alliance. The doctrine ascribes to all mankind a vested interest in each other’s moral, intellectual, and even physical perfection, to be defined by each claimant according to his own standard.

Mill’s argument here proceeds through a sort of reductio, but I wonder how much of that reductio depends on the undesirability of the outcome he describes, and how much rather on its infeasibility. In other words, I wonder how much of the “liberty” we have enjoyed has been a consequence of technological limitations, which did not so much limit the state as prevent people from effectively supervising and controlling each other.

Another possibility, of course, is that morality may simply become less strict. Right now what we have is a relatively strict moral code, with extremely lax enforcement. Perhaps it’s because the code is so hard to enforce that we make it so strict. When the probability of apprehension was very low, punishment was stricter, and excusing conditions were fewer. But as the probability of apprehension increases dramatically, it seems to me this puts a lot of pressure on the code to become laxer (just as our tolerance for “youthful folly” is under pressure to expand, now that we have a generation for whom a perfect record of the thoughts and opinions of one’s 17-year-old self is being preserved for eternity).