Greenfield : My emphasis would be away from regulation, to education. You can regulate ‘til you’re blue in the face; it doesn’t make it any better. Although I sit in the House of Lords, as you know, and although we have had debates on all the various regulations for how we might ensure a more benign and beneficial society, what we really should be doing is thinking proactively about how, for the first time, we can shape an environment that stretches individuals to their true potential.

Schmemann : Picking up a bit where Susan was, Evgeny, in your book you talk a lot about the political uses and misuses of the Internet. You talk about cyber-utopianism, Internet-centrism, and you call for cyber-realism. What does that mean?

Morozov : For me, Internet-centrism is a very negative term. By it I mean that many of our debates about important issues essentially start revolving around the question of the Internet, and we lose sight of the analytical depths we need to be plumbing.

The problem in our cultural debate in the last decade or so is that a lot of people think the Internet has an answer to the problems that it generates. People use phrases like, “This won’t work on the Internet,” or, “This will break the Internet,” or, “This is not how the Internet works.” I think this is a very dangerous attitude because it tends to oversimplify things. Regulation is great when it comes to protecting our liberties and our freedoms — things like privacy or freedom of expression or hate speech. No one is going to cancel those values just because we’re transitioning online.

But when it comes to things like curation, or whether we should have e-readers distributed in schools, this is not something that regulation can handle. This is where we will have to make normative choices and decisions about how we want to live.

Popova : I think for the most part I agree with Evgeny. I think much, if not all, of it comes down to how we choose to engage with these technologies. Immanuel Kant had three criteria for defining a human being: One was the technical predisposition for manipulating things. The second was the pragmatic predisposition — to use other human beings and objects for one’s own purposes. I think these two can, to some degree, be automated, and we can use the tools of the so-called Digital Age to maximize and optimize those.

His third criterion was what he called moral predisposition, which is this idea of man treating himself and others according to principles of freedom and justice. I think that is where a lot of fear comes with the Digital Age — we begin to worry that perhaps we’re losing the moral predisposition or that it’s mutating or that it’s becoming outsourced to things outside of ourselves.

I don’t actually think this is a reasonable fear, because you can program an algorithm to give you news and information, and to analyze data in ways that are much more efficient than a human could. But I don’t believe you could ever program an algorithm for morality.