As artificial intelligence makes its way into all areas of life, the most prominent people aiming to explain the technology and translate it for the public tend to be scientists, businessmen and, often, Americans. Gaspard Koenig is different. A French philosopher, he runs GenerationLibre, a think-tank that promotes classical liberal values of individual freedom. And he brings vibrant intellectual energy to the debate.

In his latest book, “The End of the Individual: A Philosopher’s Journey to the Land of Artificial Intelligence” (Éditions de l'observatoire, 2019), Mr Koenig argues that society should be cautious about the power of AI not because it will destroy humanity (as some argue) but because it will erode our capacity for critical judgement. He already sees that happening, as people blindly follow algorithmic recommendations, be it to watch a film or use a map. He frets this will only get worse.

What can we do? Mr Koenig defies the mantra of Silicon Valley and believes we should not be afraid to unplug from the network or to stray from the aggregated data that funnel us into a new form of utilitarianism, which presumes that what is best for the majority of consumers is right for us as individuals.

An excerpt from “The End of the Individual”, translated into English from the original French, appears below. After that follows a short interview with Mr Koenig to explore what AI means for liberalism.

***

Machine decisions, human responsibility

From “The End of the Individual: A Philosopher’s Journey to the Land of Artificial Intelligence” by Gaspard Koenig (Éditions de l'observatoire, 2019). English translation below from the original French.

The central question of AI is not superintelligence or the end of work but the capacity of AI to render autonomous judgments, as society delegates more decisions to the machine. It is a renunciation of the concept of free will and it reflects, in effect, a new scientific consensus that stretches from psychology and neuroscience to behavioural economics. Yet it jeopardises the very foundations of liberal societies in terms of law, the economy and of course democracy.

If I seek to rehabilitate the idea of free will, it is not as a mysterious force, which would be placed in an untenable position against modern science. Rather, free will is more like the ability to exercise an inner deliberation that, little by little, constitutes the individual in his or her originality. A decision is worth less by its result than by the process by which it was made. Our decisions over time form our personality. In the age of the algorithm, we still have to make deliberate choices that are essential to forming our individuality. […] The act of reaching a decision creates a “moral responsibility” that makes it possible to avoid manipulation, as Daniel Dennett, a philosopher of cognition, explains. This idea is crucial to help resist simplistic algorithmic recommendations. “I refuse Amazon’s book suggestion not because I won’t like it, but because I am me!” This reflex of pride is intimately linked to a desire for autonomy. While Yuval Noah Harari, a historian of humanity and technology, argues that AI knows us better than we know ourselves, I believe it prevents us from becoming ourselves. While Harari prides himself on renouncing the individual and abolishing the self in meditation, I propose that we reaffirm the unique beings that we are—or at least, leave open the possibility of becoming one in the future. These metaphysical considerations are indispensable to guide our relationship with technology.

In political philosophy, the drive for autonomy is at the heart of the criticism by John Stuart Mill against Jeremy Bentham, whose influence is all over Silicon Valley. In his book “Utilitarianism”, Mill introduces at the very outset a qualitative difference among pleasures, thus freeing himself from the Benthamian model that calculates utility in a uniform way.

There are different ways to seek happiness based on various ends, which makes it impossible to set a common standard. In his essay on John Stuart Mill, Isaiah Berlin proposes to establish a "right to err" as a corollary of the ability of each person to seek, to transform and to improve oneself.

In the context of AI, the right to err is the right to abstain from social networks, the right to not accept recommendations, the right to refuse notifications—in short, it is the right to not obey the common definition of utility and instead, to follow one's own path, including against one’s self-interest. This is the profound meaning of the much-discussed “privacy”: it is not merely to protect individuals from the prying eyes of others (after all, most of the time the data are anonymous and processed by blind algorithms). Instead, it is to allow people to situate themselves outside the network, indifferent to “nudges”. We can assume that the right to err will have short-term negative effects on the community, though it is the basis for real progress in the future. […]

The rehabilitation of free will depends on us re-appropriating our data. Today we are digital serfs, giving up the data we produce in exchange for free services, of questionable value, provided by our new overlords. We post a billion photos a day on Facebook. Yes, a billion. Once processed by algorithms (that increasingly include facial recognition), this treasure trove of data generates quarterly profits on the order of billions of dollars for Facebook.

What percentage goes to the original producer? Zero. Not only can we not negotiate with our lord over the use of our data, but like the peasants of yore, we are prohibited from selling it on the market, too. In May 2018 Oli Frost, an ingenious British millennial, put ten years of his Facebook posts up for auction on eBay—but was forced to withdraw the listing because it violated Facebook’s terms of service. […]

It is surprising that we passively accept this digital feudalism. Doubtless the 12th-century serfs did not think for a minute that they could challenge the rights of a lord under his dais, as sacred as today’s tech entrepreneur on a TED Talk stage. But since everything is accelerating, the revolution might also come faster. I advocate that we establish a property right on personal data, which is something that currently does not exist anywhere in the world, to put an end to this plunder. It puts individuals who produce the data into the value-chain of the digital economy, allowing them to monetise—or not—the data according to contractual terms they can choose, using intermediaries that negotiate with the platforms on their behalf.

The key is to treat individual data as a form of personal asset. Just as a landowner is free to cultivate weeds, we could refuse to submit information to the AI grinder. A property right, after all, includes the right to misuse one’s property! Everyone could choose what they hide and for how long, what they hand over freely and to whom, what they sell and at what price. By creating this zone of individual sovereignty, one can return to being oneself.

In terms of AI, the ownership of personal data will lead to a loss of efficiency for the community as a whole. If everyone can rejoice that Facebook’s profits are winnowed by the loss of precision of targeted advertising, we have to accept that this sub-optimal outcome will affect public services too. The smart city will not be so smart, the autonomous cars not so autonomous, the intelligent meters not so intelligent. By introducing the possibility that individuals can disengage, data ownership will lead to more traffic accidents, carbon emissions and romantic breakdowns. It will prevent society’s smooth advancement.

Yet in this way it will restore the possibility of evolution—as well as regression. It is a heavy responsibility for legislators to accept immediate and concrete hazards in exchange for a vague promise of future progress. But it is the price to pay to choose autonomy over nudges and the Enlightenment over a blessed servitude.

____________________

From “The End of the Individual: A philosopher’s journey to the land of artificial intelligence.” Copyright © 2019 by Gaspard Koenig. Used with permission of Éditions de l'observatoire. All rights reserved. Translated from the original French: “La Fin de l'individu: Voyage d'un philosophe au pays de l'intelligence artificielle” (Éditions de l'observatoire, 2019).

***

An interview with Gaspard Koenig

The Economist: Will AI erode human freedom or enhance it?

Gaspard Koenig: As a technology, AI undoubtedly represents an advance, one that has been in the making for the past 70 years and can now provide tools for personal emancipation, broadening our horizons. Far from replacing human intelligence, which consists of biological mechanisms deeply ingrained in our flesh and blood, it merely automates the way our own intellectual outputs are processed.

But as AI is deployed commercially today, with deep-learning systems fed by personal data and nudging human behaviours, I find it deeply infantilising. We increasingly feel like pawns governed, willy-nilly, by algorithms using parameters we cannot understand (nor modify, obviously) and issuing recommendations “for our own good”. Peter Thiel goes so far as to say that “AI is communist”. I would argue that it takes the milder form of Tocqueville’s “democratic despotism,” where we have become the despots of ourselves and the slaves of efficiency. This is a deliberate commercial choice, not something pre-destined. Innovations based on the blockchain, among others, could rebalance the technology towards the individual.

The Economist: You argue that AI undermines free will. But three centuries ago it was said that science would destroy religion—and that didn't happen. Perhaps free will will be just fine in the age of AI?

Mr Koenig: Technological revolutions always have deep cultural implications. Take the printing press. It greatly affected religious dogma by leading the way to the Reformation (and arguably to the Enlightenment). But it also prompted calls for regulation, such as Beaumarchais in 18th-century France campaigning for authorship rights. I fear that today, the effects of AI on society are underestimated. Philosophers and computer scientists need to work together much more. The rejection of free will is part of an academic consensus forged by experimental psychology, behavioural economics and neuroscience. If we want to get a handle on AI, we have to address this. What really matters is not whether our minds are “pre-determined” but how we maintain the capacity for internal deliberation. Only then will we be able to draw regulatory conclusions. As with printing, I believe the key element is the extension of the domain of property rights.

The Economist: AI seems to strengthen the dominance of the nation-state relative to the individual. What can be done to narrow this asymmetry of power?

Mr Koenig: AI strengthens the dominance of whoever controls the data. In the West, giant platforms are making the nation-state nearly obsolete. They are becoming the ultimate vehicle of social norms. Isn’t it striking that rules concerning freedom of speech are practically entrusted to social networks, with the consent of powerless governments? Algorithms are eroding the principle of collective deliberation.

In China, however, the centralisation of data into public hands reinforces the power of the Communist Party. I was struck to see how explicitly the Chinese tech giants known as the BATX—for Baidu, Alibaba, Tencent and Xiaomi—are working for the government, implementing social-control policies and happily sharing data with the authorities.

As a liberal, I am not satisfied with either model. I want to find ways to redistribute power. And it starts with the re-appropriation of our personal data.

The Economist: If we need a new liberalism for an age of AI, what principles would that entail?

Mr Koenig: Silicon Valley is obsessed with utilitarianism. The governing principle of most apps is to maximise the happiness of the greatest number of users. That’s why the concept of “community” is so dear to them: what counts is not to offer the best product to a given client, as in the industrial age, but to nudge users in a way that satisfies most of them.

True liberals in the humanist tradition should understand the threats posed to liberty by this paradigm. If Facebook implements its Libra currency, it would become the most powerful entity that ever existed—and should be fought as such. The fascination for buzzwords such as “start-up” or “disruption” seems to anaesthetise our critical thinking. Moreover, the blatant faults of today’s governments lead us to resist public policy in general, thus forgetting that Adam Smith was the founder of political economy. Eight decades after the Walter Lippmann Colloquium of 1938, which sought to reorient liberalism amid the depression, it is time to reinvent liberalism again and to recalibrate the state to enable individual autonomy and discourage oligopolies. That should entail, among other things, consideration of a universal basic income.

The Economist: You believe we need to establish a property right on personal data. Won't that just fuel a hyper-financialisation in every corner of human activity?

Mr Koenig: Property rights classically entail three elements: usus, fructus, abusus—that is, the rights to use, profit from and dispose of property. The principle of fructus would allow us to be compensated for the value of our data, thus forcing Facebook and others to pay us for the raw material we provide. This is not so much “financialisation” as it is a fair rebalancing of the economic value chain. We would move from today’s digital feudalism, where the lord gives us free services in exchange for all the data that is harvested, to proper capitalism based on contractual terms. As always, property rights protect the individual against the abuse of central power.

But then there is also usus and abusus. Property rights allow individuals to ignore the market. Nobody forces you to sell your house, even if you underexploit it. The same applies to data: through a personal data wallet, we would decide which data we are willing to share, with whom, to which end and under what conditions. Platforms would have to accept our terms and conditions, not the other way around.