0:33 Intro. [Recording date: February 12, 2019.] Russ Roberts: My guest is futurist and author Amy Webb.... Her latest book is The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity.... Your book is a warning about the challenges we face--that we're going to face--dealing with the rise of artificial intelligence. What is special about the book, at least in my experience reading about AI [Artificial Intelligence] and worries about artificial intelligence, is that it doesn't talk about AI in the abstract but actually recognizes the reality that AI is mostly being developed within very specific institutional settings in the United States and in China. So, let's start with what you call the Big Nine. Who are they? Amy Webb: Sure. So, what's important to note is that when it comes to AI, there's a tremendous amount of misplaced optimism and fear. And so, as you rightly point out, we tend to think in the abstract. In reality, there are 9 big tech giants who overwhelmingly are funding the research--building the open-source frameworks, developing the tools and the methodologies, building the data sets, doing the tests, and deploying AI at scale. Six of those companies are in the United States--I call them the G-Mafia for short. They are Google, Microsoft, Amazon, Facebook, IBM [International Business Machines], and Apple. And the other three are collectively known as the BAT. And they are based in China. That's Baidu, Alibaba, and Tencent. Together, those Big Nine tech companies are building the future of AI. And, as a result, are helping to make serious plans and determinations, um, for, I would argue, the future of humanity. Russ Roberts: And, just out of curiosity: I don't think you say very much in the book at all about Europe. Is there anything happening in Europe, in terms of research? Amy Webb: Sure. So, the--you know, there's plenty happening in France. Certainly in Canada.
Montreal is one of the global hubs for what's known as Deep Learning. So this is not to say that there aren't pockets of development and research elsewhere in the world. And it also isn't to say that there aren't additional large companies that are helping to grow the ecosystem. Certainly Salesforce and Uber are both contributing. However, when we look at the large systems, and the ecosystems and everything that plugs into them, overwhelmingly these are the 9 companies that we ought to be paying attention to.

3:18 Russ Roberts: So, I want to start with China. I had an episode with Mike Munger on the sharing economy and what he calls in his book Tomorrow 3.0. And, in the course of that conversation, we joked about people getting rated on their social skills and that those would be made public--how nice people were to each other. And we had a nice laugh about that. And I mentioned that I didn't think that that was an ideal situation--that people would be incentivized that way to be good people: despite my general love of incentives, that made me uneasy. And in response to that episode, some people mentioned an episode of Black Mirror[?]--the video series--and also some things that were happening in China. And I thought, 'Yeh, yeh, yeh, whatever.' But, what's happening in China--it's hard to believe. But, tell us about it. Amy Webb: Sure. And, let me give you a quick example of one manifestation of this trend, and then sort of set that in the broader cultural context. So, there's a province in China where a new sort of global system is being rolled out. And it is continually mining and refining the data of the citizens who live in that area. So, as an example, if you cross the street when there's a red light and you are not able to safely cross the street at that point--if you choose to anyway, that is, to jaywalk--cameras that are embedded with smart recognition technology will automatically not just recognize that there's a person in the intersection when there's not supposed to be, but will actually recognize that person by name. So they'll use facial recognition technology along with technologies that are capable of recognizing posture and gait. It will recognize who that person is. Their image will be displayed on a nearby digital--not bulletin board; what do you call those--digital billboard, where their name and other personal information will be displayed. And it will also trigger a social media mention on a network called Weibo.
Which is one of the predominant social networks in China. And that person, probably, some of their family members, some of their friends, but also their employer, will know that they have--they have infracted--they have caused an infraction. So, they've crossed the street when they weren't supposed to. And, in some cases, that person may be publicly told--publicly shamed--and publicly told to show up at a nearby police precinct. Now, this is sort of important because it tells us something about the future of recognition technology and data, which is very much tethered to the future of artificial intelligence. Now, this is better known as the Social Credit Score. China has been experimenting with it for quite a while; and they are not just tracking people as they cross the street. They are also looking at other ways that people behave in society, and that ranges from whether or not bills are paid on time, to how people perform in their social circles, to disciplinary actions that may be taken at work or at school, to what people are searching on--you know, on the Internet. And the idea is to generate some kind of a metric to show people definitively how well they are fitting in to Chinese society as Chinese people. This probably sounds, to the people listening to the show, like a horrible, Twilight Zone episode-- Russ Roberts: It sounds like 1984, is what it sounds like to me. It's not like, 'I wonder if that's a good idea.' It's more like, 'Are you kidding me?' Amy Webb: Yeah. And so like, when I first heard about this, my initial response was not abject horror. I was curious. I was very curious. Russ Roberts: [?] Amy Webb: But like, here's what made me curious: Why bother? I mean, China has 1.4 billion people. And if the idea is to deploy something like this at scale, that is a tremendous amount of data. And you have to stop and say to yourself, 'Well, what's the point?' So, this is where some cultural context comes into play. So, I used to live in China.
And I also used to live in Japan. And, they are very different cultures, very different countries. One distinctive feature of China is a community-reporting mechanism that is sort of embedded into society, going back many thousands of years--you know, China is an enormous--it's a huge piece of land. And you've got people living throughout it; in fact, they are so spread apart, you have, you know, significantly different dialects being spoken. So, one way to sort of maintain control over vast masses of people spread out geographically was to develop a culture--sort of a tattle-tale culture. And so, throughout villages, if you were doing something untoward or breaking some kind of local custom or rule, that would get reported--you would get reported. Sort of in a gossipy way. But, you would get reported; and ultimately the person that heard the information would report that on up to maybe a precinct or a feudal manager of some kind, who would then report that up to whoever was in charge of the village or town; and then you would get into some kind of actual trouble. This was a way of maintaining social control. And so if you talk to people in China today, a lot of people are aware of monitoring. What I find so interesting is that at the moment, the outcry that we see outside of China does not match the outcry--or actually the lack of outcry--that I have observed in China. Now, there's one other piece of this that's really important: using AI in this way ties in to China's Belt and Road Initiative [BRI]. And you might have heard about the BRI. This is sort of a master plan--it's a long-term strategy that helps China optimize what used to be the Silk Road trading route. But it's sort of built around infrastructure. What's interesting is that there's also a digital version of this--the sort of digital BRI--where China is partnering with a lot of countries that are in situations where social stability is not a guarantee.
And so, they are starting to export this technology into societies and places where that cultural context isn't in place. And so, you have to stop and wonder and ask yourself, 'What does it mean for 58 pilot countries to have in their hands a technology capable of mining and refining and learning about all of their citizens, and reporting any infractions on up to authorities?' You know, in places like the Philippines, where free speech right now is questionable, this kind of technology--which does not make sense to us as Americans, and may make slightly more sense to people in China--becomes a dangerous weapon in the hands of an authoritarian regime elsewhere in the world.

11:14 Russ Roberts: It reminds me, when you talk about the tattle-tale culture--of course, the Soviets did the same thing. They encouraged people to inform on each other--'tattle-tale' sounds like a child reporting an insult. It's a monitoring mechanism by which authoritarian governments keep people in line. And you talk about the lack of outcry. Well, one reason is that you are worried that your social score is going to be low. Outcrying is probably not a good idea. Amy Webb: That's right. That's right. Russ Roberts: You should mention also, which I got from your book, that it's not just that it's awkward or kind of embarrassing to have a low score. These scores are going to be used--or are being used?--to determine whether people get credit, whether they can travel. Is that correct? Amy Webb: Right. So, again. It's China. So, we can't be 100% sure of the information that's coming out, because it's a controlled-information ecosystem. But from what we've been able to gather, in all of the research that I've done, you know--I would suggest that it's already being used. It's certainly being used against ethnic minorities like the Uighurs. But we've seen instances of scoring systems being used to make determinations about the schools that kids are able to get into. You know, kids who, through no fault of their own, may have parents that have run afoul, you know, in some way, and earned demotions and demerits on their social credit scores. So, it would appear as though this is already starting to affect people in China. And, again, my job is to quantify and assess future risk. So, as I was doing all of this research, my mind immediately went to: What are the longer term downstream implications? I think some of them are pretty obvious. Right?
Like, some people in China are going to wind up having a miserable life as a result of the social credit score. The social credit score, as it grows and is more widely adopted, to some extent could lead to better social harmony, I guess; but it also leads to, you know, quashing individual ideas and certain freedoms and expressions of individual thinking. But, the flip side of this is: If it's the case that China has the BRI--and it's investing in countries around the world not just in infrastructure but in digital infrastructure like fiber and 5G and communications networks and small cells and all the different technologies, in addition to AI and data--isn't it plausible that some time in the near future, our future trade wars aren't just rhetoric but could wind up in a retaliatory situation where people who don't have a credit score can't participate in the Chinese economy? Or businesses that don't have credit scores can't do business, can't trade. Or countries that don't have--if we think about something like a Triple-A bond rating, you know, what happens if this credit scoring system evolves and China does business only with countries that have a high-enough score? We could quite literally get locked out of part of the global economy. It seems far-fetched, but I would argue that the signals are present now that point to something that could look like that in the near future.

15:03 Russ Roberts: Well, this is going to be a pretty paranoid show--episode--of EconTalk. So, I'm okay with that kind of fear-mongering, because it strikes me as quite worrisome. And I think we have to be, as you hinted at, you have to be open-minded that maybe this will make a better Chinese society, as defined by them. You know, the Soviets wanted to create a new Soviet man--and woman. They failed. But now, with these tools maybe there will be a new Chinese man and woman who will be harmoniously living with their neighbor, never jaywalking, and never gossiping, and smiling more often. Who knows? But, it's not my first, default thought about how this is going to turn out. I think that-- Amy Webb: No, but you kind of--you have to start with--I want to point out that I am not like a dystopian fiction writer. I'm a pragmatist. So, this--I am not studying all of this for the purpose of scaring people. What I would argue is, I have studied all of this, and used data, and modeled out plausible outcomes; and it is scary. It really is. Because you have to, again, connect the dots between all of this and other adjacent areas that are important to note. The CCP [Chinese Communist Party] in China is-- Russ Roberts: the Communist Party-- Amy Webb: yep--is facing some huge opportunities but also big problems. The Chinese economy may technically be slowing, but it's not a slow economy. There's plenty of growth ahead. And, if that holds--and there's no reason why at the moment it wouldn't--you know, Chinese society is about to go through social mobility at a scale never seen before in modern human history. And as that enormous group of people moves up, they are going to want to buy stuff. They are going to want to travel. So, you know, that potentially causes some problems, because the more wealth that is earned, the more agency people feel, the more opinions they start having about how the government ought to be run. 
And, you know, the CCP made the current President of China, Xi Jinping, effectively President for life. And 2049--which seems far off but in the grand scheme of things isn't really that far into the future--is the 100th anniversary of the founding of the People's Republic of China. China is very good at long-term planning. Now, they've not always made good on fulfilling promises. But they are good at planning. Russ Roberts: Yes, they are. Amy Webb: Right? So, I don't see all of this as flashes in the pan, and 'AI's kind of a hot buzzy topic right now.' I'm looking at the much longer term and the much bigger picture. That's what makes me kind of concerned.

18:02 Russ Roberts: I think that's absolutely right. One other institutional detail to make clear for listeners is that the Chinese Internet is roped off, to some extent--to quite a large extent. They are developing their own tools and apps. And, talk about the three companies in China that are working on AI and how they work together in a way that American companies are not. Amy Webb: So, here's another interesting facet of the Big Nine: AI is on a sort of dual developmental track. In China, Baidu, Alibaba, and Tencent were all formed sort of in the late 1990s, early 2000s; and their origin stories are not all that different from those of our big, modern tech giants like Amazon and Google and Apple. The key distinction is that our big tech companies were formed, for the most part, in Seattle, Redmond, and Cupertino--in Washington and California--where the ecosystem was able to blossom: there was plenty of competition, and there was plenty of talent. California has fairly lenient--in some ways--fairly lenient employer/employee laws, which has made it very easy for talent to move between companies. And, if you are somebody who studies innovation, you know, the sort of lack of--the limited or lack of regulation, the ability for people to move around-- Russ Roberts: letting people make enormous amounts of money when they succeed and losing all of it when they fail-- Amy Webb: Right. Right. Right. But, the lack of safety net, the lack of a central, federal authority, if you will, is partly what enabled these companies to grow. And to grow fast. And to grow big. Which is why we also see a lot of overlap. So, Google, Microsoft, Amazon, and IBM [International Business Machines] own and maintain the world's largest cloud infrastructure. So, if you own a website or you are a business owner or you are making a phone call, at some point you are accessing one of their clouds.
You know--we have competing, for the most part, we have competing operating systems for our mobile devices. For the most part, we still have competing email systems. And that's because, without a central authority dictating which of the companies was going to do which thing, they all sort of did it: they each went off on their own and built their own things. So, now we have tremendous wealth concentrated among just a few companies who own the lion's share of patents; who are funding most of the research. And, for the most part, Silicon Valley and Washington, D.C. have an antagonistic relationship. That is not the case in China. So, in China, when the big tech companies were being formed there, you don't do anything in China without also in some way creating that business in concert with Beijing--with the government. You've got to pull patents--I'm sorry--you've got to pull permits. You have to abide by various regulations and laws. People are checking in on you. So, while Baidu, Alibaba, and Tencent may be independent financial organizations, in practical terms they are very much working in lockstep with Beijing. Alibaba, for those of you not familiar with the company, is very similar to Amazon. So, it's a retail operation. Tencent is very similar to our social media: so, sort of Twitter meets gaming and chat. And Baidu is sort of search--the Google-esque company of the bunch. When China--when the Chinese government decided that AI was going to be a central part of its future plans--and this was decided years ago--it also decided that Tencent was going to focus on health; that Alibaba was going to focus on cloud and various different data aspects; and that Baidu was going to focus on AI and transportation. So, it's not as though these companies came to these additional areas of research and work on their own. It was centrally coordinated.
And that's a really, really important thing to keep in mind. If we've got a central government--a powerful government that has this long-term vision and is centrally coordinating, at a top level, the research and the movements of these companies--suddenly you have a streamlined system where you don't have arguments about regulation; you don't have the companies at each other's throats--like we've seen in the United States, with Apple suddenly calling for sweeping privacy regulations because, to be fair, they are already far ahead and it gives them a competitive advantage. You don't see all that infighting in China. So, we have some fundamental differences. And the real challenge is that while we're trying to sort all this out in the United States, you have a streamlined central authority with three very powerful companies who are all now collaborating in some way on the future. In addition to a bunch of other top-level government initiatives to repatriate academics--to bring back top AI people--but also to do things like start educating kindergarteners about AI. There is a textbook that is going to roll out this year throughout China teaching kindergarteners the fundamentals of machine learning. I mean, you know--whereas in the United States, you know, some of our government officials, you know, up until very recently denied AI's capabilities; and only yesterday--so this is February 11th--President Trump issued an Executive Order to, I guess--I mean, there's a handful of bullet points on what AI ought to be, but it wasn't a policy paper. There's no funding. There's no government structure set up. There's not--I mean--you see what I'm getting at?

25:07 Russ Roberts: Well, yeah--let me push back against that a little bit. You know, China is growing tremendously; as you point out, presumably, they are already in one of the greatest transformations in human history: from the countryside to the city, from a low standard of living to a much higher standard of living. And most of that's wonderful, and I'm happy about it. We don't know exactly what their ambitions will be or are outside of their own borders, and therefore what the repercussions are for us. As you suggest, they are doing a bunch of stuff. But the fact that they are top down and planning and organized, and we are chaotic and disorganized--so, just to take an example, you know, there's n companies in America, more than 4; I don't know how many there are--working on various aspects of driverless autonomous vehicles. There's Uber; there's Lyft; Apple; Google; there's Waymo. There's a lot going on here. And a lot of that won't turn out. That's the nature of creative destruction, and capitalism. Some of those investments won't pan out. The gambles will fail, and people will lose all their money. And, in general, historically, that chaotic soup of competition serves the average person, and the people who are innovators, quite well. The fact that China has, say, Baidu focusing on that and no one else having to worry about it could be a bug, not a feature. I'm not convinced that China teaching kindergarteners machine learning is going to turn out well. Could be a mistake. Could be an enormous blunder. They are not allowing the kind of experimentation, trial and error, that in my view is central to innovation. So, I think it remains to be seen how their walled garden--with top-down gardening going on from the government's vision of what they want AI to serve--is going to work out. It might. It could. And the outcomes might be really bad, not just for the Chinese but for other people.
But it might just kind of fail. And, I'm not even convinced that their growth path is going to continue the way it has in the past. A lot of people just assume that because they have grown dramatically over the last 25 years they'll keep growing dramatically. There's a lot of ghost cities in China; there's a lot of overbuilding. I'm not so sure they have everything under control. So, I think you have to have that caveat as a footnote to those concerns. Amy Webb: I completely agree with you. I would say that, for years, especially in the United States, we've been indoctrinated into thinking that China is a copy-paste culture rather than a culture that understands how to innovate; and to some extent I think that that is the result of that heavy-handed, top-down approach to business. What I'm concerned about is not whether China succeeds financially. Here's what I'm concerned about. The challenge with artificial intelligence is that it's already here. It is not--there's no event horizon. There's no single thing that happens. It's already here. And it's been here for a while. And, in fact, it powers--you know, artificial [?] intelligence now powers our email; it powers the anti-lock brakes in our cars. You know. And essentially, this is a new Third Era of computing that we are in--if we assume that the First Era was tabulation, so that would have been Ada Lovelace in the mid-1800s; and the Second Era was programmable systems, which would have been those early IBM mainframes on up to the, you know, desktop computers that we use today. This next Era is AI. And AI, while we've seen it anthropomorphized in movies like Her and on shows like Westworld, at its heart is simply systems that make decisions on our behalf. And they do that using tools to optimize. So, the challenge is that, right now, systems are capable of making fairly narrow decisions.
And the structures of those systems, and which data they were trained on, and how they make decisions and under what circumstances--those decisions were made by a relatively small number of people working at the BAT [Baidu, Alibaba, Tencent] in China and at the G-Mafia here in the United States. And the problem is that these systems aren't static. They continue to learn. And they--you know--they join, literally, millions and millions of other algorithms that are all working in service of optimizing things on our behalf. Which is why I agree with you that if we are talking about a self-driving future, it's good to have competition, because--for all the usual reasons. Right? We get better form factors[?]; we get better vehicles; we get better price points. But we are talking about systems that are continuing to evolve, that grow more and more powerful the more data they have access to and the more compute--more computing power--they are given. And as we move into the more technical aspects, there are things like Generative Adversarial Networks, which are specifically designed to play tricks, to help systems learn more quickly. We are talking about slowly but surely ceding control over to systems to make these decisions on our behalf. And that is what concerns me. What concerns me is that we do not have a singular set of guardrails that are global in nature. We don't have norms and standards. I'm not in favor of regulation. On the other hand, we don't have any kind of agreed-upon ideas for who and what to optimize for, under what circumstances. Or even what data sets to use. And China has a vastly different approach than we do in the United States, in part because China has a completely different viewpoint on what details of people's private lives should be mined, refined, and productized. And here in the United States, a lot of these companies have obfuscated when and how they are using our data. And the challenge is that we all have to live with the repercussions.

32:10 Russ Roberts: Yeah, I'd agree with that. Up to a point. I want to give you a chance to talk about some scary examples. I'll just say, up front, that for me, underlying this whole problem--there are many different proximate causes and concerns--but there is, it seems to me, a very significant lack of competition. We can talk about how much competition there is in the United States relative to China. But certainly the concern for me here in the United States is that the Big Six[Big Nine?] here in the United States will stay the Big Six[Big Nine?]. Which will give them leverage to do a bunch of things that you or I might not like. I do want to add that whatever we do to regulate or constrain them, via culture or whatever, allows for the possibility that they don't stay the Big Six[Big Nine?]. And I think one of the challenges of any way to deal with these problems is that, if you're not careful, you are going to end up creating a cartel--it's de facto right now, but that can change. But if you make it de jure, you're going to end up with much worse outcomes than I think we're going to have. But, to concede your point about concern: I do think the Silicon Valley ethos is ask for forgiveness rather than permission--because right now there's no one you have to ask permission of, generally. Users are not paying much attention. There's very little regulation of how your private data is being used. Obviously something happened on January 1st, 2019, because I get a lot of annoying bars on websites saying 'Will you accept cookies?' and I stupidly always click 'Yes,' like I'm sure most people do. And now they've complied with whatever required them to do that, and they're moving along. So, you know, I do think that there are some serious issues here. And you give some examples in the book of where these corporations--or China--have done things, and they never really pay a price for it. They just keep going. The Facebook/Cambridge Analytica problem.
The example you give of China pressuring Marriott over the way its website was designed, in terms of territorial recognition of China's sovereignty over various places that are somewhat up in the air. Those are serious issues, I think. And, more importantly, they are just the tip of the iceberg. So, talk about a couple of those things that you are worried about, that I think are alarming. And, normally, the marketplace would punish these folks; but not much does. Amy Webb: So, I love what you just said--that the market--so it's curious, right? Why has the marketplace not punished the Big Nine? Or at least the G-Mafia, right? Or at least Facebook? Russ Roberts: They've been punished a little bit. I think their users are down. I'm thinking about deleting my Facebook page. And I'm sure--and I've switched to DuckDuckGo for my searching. It's a really small step. But these are things that maybe people are starting to do in slightly bigger numbers. Amy Webb: Maybe. But, again, like, I don't have access to the whole world's data. Thank God. But--and let's just reveal our biases: like, you and I are digitally savvy people. Russ Roberts: You're kind, Amy. Amy Webb: Well, but you are. I think the fact that you even know what DuckDuckGo is, that you are somebody who is using it, I think is quite telling. But, for how long have we continued to hear--like, how many breaches of our trust have we heard about, right, over the past 12 months? And we continue to hear outcries, and people continue to be really upset. And we just don't see significant drops in numbers that would suggest the marketplace is punishing companies the way that it might in other circumstances. I think that's curious. And I think the reason is not that Google, Amazon, Apple, IBM, Facebook, and Microsoft make our lives a little bit better, but rather that our lives don't work without these companies.
Now, it's possible--you could argue that maybe Facebook could quietly go away, and for some organizations and companies that run part of their businesses using that platform it would be pretty annoying. But life would go on. We don't function--modern society in America literally does not function without Amazon, Google, and Microsoft. Huge parts of the business world do not function without IBM. And, if you look at mobile phone and personal device usage, like, most Americans are using, in some way, Apple. So, the problem is: We can get all angry--like, we can get as angry as we want. But we don't have a choice, which is-- Russ Roberts: But is that true? I've got to challenge that. Just for a second-- Amy Webb: Yeh-- Russ Roberts: Sorry for interrupting. Let me give you an example. I just bought an Apple XR--I don't know how you pronounce it--10R--the phone. I love it. It's fantastic. When I bought it, I forgot that, actually, my earbuds that I like are not going to work with the new phone because it doesn't have a jack. So, when I was at the store, I asked if they had an adaptor, and they did. To my relief, it was under $10. I was expecting--Apple, in the old days, it was the kind of thing they charged $32 for; and you'd go, 'I've got a habit, I'll just pay $32.' I was kind of thrilled: I think it was $7.95. I was shocked at how reasonable it was. But, of course, the other view is, 'You are telling me they are going to force you to buy an adaptor? Because you can't use your old earbuds?' And the answer is, 'Yeah. They're going to do that.' And I was happy to pay the $7.95. In fact, you could argue that people who don't have earbuds, or are just going to use the ones that Apple provides with the phone, shouldn't have to pay the implicit $7.95. So, it's all okay. And most of us, most of the time, are happy with the deal. Right? We're happy with--we don't care. That's the problem for me. One of the problems, besides the competition.
My problem with your claim is that, most of us, just: 'It's fine. Okay, it's not great. [?] ask occasionally, we card data.' But most of us just live with it. Like you, I'm increasingly alarmed-- Amy Webb: and-- Russ Roberts: but I think it's hard for the average person listening: 'What's all the fuss about? I like Facebook. I love Google. I love--' These are companies that we don't just, like, 'Yeah, it's pleasant.' They make our lives sing. And most of the time, we are happy. So, what's the worry? Amy Webb: I hear you. And so this is honestly--this is not just about privacy. I would argue this is about future competition and choice. And that is one of the things that concerns me most. So, let me paint a picture for you. A couple of months ago Amazon had a big press announcement. They were talking about Alexa and the developer kit--they were making a bunch of highly technical announcements. And at the very, very tail end of this press event, they, almost as a footnote, revealed a brand new product. And that was an Amazon Basics Microwave. Did you hear about this? Russ Roberts: Only in your book. Keep going. I had not heard about it. Amy Webb: Right. Well, because it didn't make news. And, the couple of places--like, it showed up on Gizmodo, and a couple of, like, super tech blogs. And the big deal about the $60 Amazon Microwave was that it has Alexa. And so that you can talk to Alexa. And, for the most part, that elicited snark. Right? Russ Roberts: Yeah. Who needs it? Amy Webb: What--right. 'Typical Americans: we can't bring ourselves to push the buttons on our microwave to pop our popcorn. We are so lazy, we need to talk to it.' And again, this was one of those--this was one of those times when I said, 'But wait a minute. Why would they do that? Why go through the headache and the heartache?' I mean, it's hard to launch a product.
It's hard to launch a product that exists already in the marketplace that has a fairly significant twist which is going to cause you to have to educate consumers? Like, 'Why bother?' Right? And here's where I arrived. If one of Amazon.com's core functions at the moment is selling us stuff--like popcorn, right--we've noticed that lately you can subscribe to all different types of things. Why would Amazon do that? Because people tend to run out of things, and this helps them not run out. However, it also ensures, if I am subscribing to popcorn, that I'm not going to buy it at my local grocery store. So, now let's think this through. If I'm somebody who buys microwave popcorn, and I pop that popcorn in my Amazon, Alexa-powered microwave, one of the pieces of data that I'm revealing to Amazon is not just that I am a subscriber to popcorn, but that I, you know--how much popcorn I popped. So that Amazon can track how many bags I've gone through. And rather than sending me a monthly box of popcorn, which may not be enough, or may be too much, depending on the month, this is a way for Amazon to mine and refine my data in order to optimize that popcorn delivery specifically for me. And how magical would it be if Amazon knew exactly the moment that I was about to run out of popcorn and sent me a replenishment? Now, again, this doesn't sound like a bad thing, on the face of it, right? Russ Roberts: Sounds pretty good. Amy Webb: Like, it would be pretty amazing, if Amazon knew when I was going to run out of all my stuff and it just showed up for me. Russ Roberts: It's the end of suffering. We never have to go through that popcornless night at the movies on that big-screen TV at home. Amy Webb: So, now let's connect some other dots. Amazon has entered into a joint venture with JP Morgan and Berkshire Hathaway. And, it's no secret that Amazon, and Google, and Apple also, as well as IBM, are all looking at health care. They are all somehow involved in the health space. 
So, isn't it plausible that some day in the future, with all of my Amazon devices, Amazon has looked at my Fitbit or whatever fitness device I've been wearing--has been monitoring my caloric intake, has seen that I haven't gotten on my, you know, fancy bicycle-- Russ Roberts: Amazon Basics Bicycle-- Amy Webb: That's right. And I put that bag of popcorn in the microwave; and guess what? The microwave won't pop it. Because it has determined that I don't get to eat that popcorn today. Again, that's the kind of thing where I really do think it's going to show up; it's going to sneak up on us. And, I don't think that Amazon is hell-bent on making sure that all Americans are thin and svelte. I don't think that's what this is about. I think, again, we've got small groups of people trying to optimize decisions on behalf of us all; and these are the kinds of things that don't get thought through in advance. They are the kinds of decisions that people make and then ask for forgiveness later on. And as long as we're on this topic: Currently, our voice-based systems, as well as some of these other AI systems, are not interoperable--to some extent because they use different programming languages, to some extent because they are literally on different types of silicon--and they are parts of different ecosystems. So, if you are somebody who currently has a house full of Google home-connected devices and you try to introduce an Amazon device, they don't necessarily talk to each other. Conversely, if you are an Amazon home with a bunch of Alexa devices--which, I now realize, if you are listening to this in your house, I've probably set off your devices 15 times in the past three minutes-- Russ Roberts: 'Alexa,' 'Alexa,' 'Alexa'-- Amy Webb: I apologize. But, like, think this through. Isn't it plausible that in our lifetimes, in the very near future, because we didn't have some kind of forethought, we're going to wind up in Amazon homes? or Google homes?
or, you'll be an Apple household, where all your devices are with just one of those ecosystems and our data are tethered to them. I mean, think of how much of a pain in the neck it is to change mobile operating systems: If you've ever tried to go from Android to Apple or vice versa, it's hard. Now we're talking about all this other data--the ambient data that's part of your daily life. All of it. Plus, we didn't even talk about health and diagnostics and all of these other things that are all tied into these systems. And if those data sets become heritable, you know, we're talking about a future situation in which your family could be an Apple family, or an Amazon family, or a Google family. And your children may decide they want to marry into other Google families, or other Apple families, because it's too much of a pain in the neck to swap otherwise. I know that sounds like science fiction, but it's very much within the realm of plausibility. Russ Roberts: So, I just have to add--digress for a second here. Having said 'Alexa' a few times, I'm just going to mention Marty Feldman and 'Blucher,' for people who are Young Frankenstein fans; and if you want to look that up, folks at home--we'll probably put a link up to it, I guess; we'll deal with that.

46:19 Russ Roberts: So, I want to take your example seriously. It sounds comical, but I don't think it is. And I think it's actually quite important. I'm going to give you a version of it that you refer to in the book and see if you think it's of this nature. So, right now, I use Gmail--even though I use DuckDuckGo for search. I do use Gmail; and I use Google calendar, and I have said this before--I love that when I make a plane reservation, it puts it on my calendar automatically. I'm a sucker for that. Like talking to the microwave. I'm embarrassed, but I do like it. I think it's cool. And it's convenient. And it saves a little bit of time. The other thing that happens with Gmail that I happen to really like is it started adding these possible responses: 'Thanks so much!' 'No, I don't think so.' 'Oh, great!' And, about 1 out of 5, I just click the box that automates the response to an email; and I think, 'Well, that's pleasant. That's exactly what I would have said.' Sometimes I click the box and then I add a few words, or I take away the exclamation point or add the exclamation point. And, you know, sometimes I think, 'Well, that's not exactly what I want to say but I'm going to say it anyway. I'll just click the box.' And, I think this kind of--I would call it corporate nudging, which you reference in the book quite a bit--is what's--it's the slippery slope. So, it starts off: 'You sure you want popcorn today? You've had 3 bags this week.' And you're still able to hit 'Yes' and override it. But, is it possible that there would be a day where, because of my health care payments, and I've got a bargain on my health care insurance if I allow Google to cut me off from, or Amazon to cut me off from popcorn and pay an extra fee for that--there's all kinds of things there that strike at the heart of how we live our lives. So I definitely agree with you. Where I think I'm a little more optimistic than you is that I imagine our culture is going to change. 
Now, of course, it's going to change in ways that--it's already changed an enormous amount. I think young people feel very differently about, say, privacy, than older people. They feel very differently about digital life, virtual life, relative to brick-and-mortar life, real life. So, it's already changing. A lot of these things that you and I might find alarming thinking about them, maybe people in the future will just go, 'Ehh. So they cut me off from my popcorn. It's for my own good.' Now, I look at that, and I think that's a diminution--reduction; I can't say the word, 'diminuition'?--diminution of human agency and life and choice. And I really don't want AI making my decisions about who to date and what career to take and how I ought to spend my weekend, right? So, right now they might say, 'Here are some restaurants you might like,' or, 'Here's a movie you might enjoy,' or 'Here's a book.' And most of those I love, because I find out about books and movies I didn't know about. But are we really going down a path where it controls what I do? You could argue, I guess, it already does. Amy Webb: Well, so that's the--so again: How did we wind up at this point? Why would somebody think to make this? See, I constantly ask these questions. So, why would somebody have thought to make that? And you could argue that one of the things that the modern Internet brought us was the tyranny of choice. Right? And we have access--you know, when I think of when I first moved to Japan in the, '*ahem, ahem, ahem,*' mid-1990s--long time ago--you know, there was no Internet where you could buy stuff. There was an Internet; but e-commerce was very, very early. And if I wanted Crest toothpaste, I had to fax my request to the foreign buyers' club and wait for a month. The fact that you can now order that on Amazon--you know, as well as, like, any other thing that you-- Russ Roberts: Express. You've got a lot of choices in some cities-- Amy Webb: Right.
You could argue that using AI to make recommendations was simply an antidote to the tyranny of choice which we created for ourselves in the early days. And, one could certainly argue that that's not necessarily a bad thing. I mean, the big joke at Netflix now is that Netflix will literally green-light everything. Right? Which is why there's sooo much stuff on Netflix. Russ Roberts: It looks that way. Amy Webb: To the point now where--if you compare Netflix now to 3 or 4 years ago, it's hard to surface great content. So, that's one side of the coin. The other side of the coin is: Nobody asked me what I wanted. And somebody somewhere made a decision that this nudging is best for me. And, let me give you a concrete example of how that manifests in my life, in the real world. I have not been in a car accident. I think I'm a pretty safe driver. I don't tend to break the rules. The car that I drive, when I back into my driveway, the sound automatically turns itself down. So, I have a-- Russ Roberts: On the radio-- Amy Webb: On the radio. So, I have a parking pad. I don't live on a busy street. I have a garage that's tucked pretty far away. And I always back in. And, somebody decided that it was best for me, as the driver, to automatically turn that radio down, regardless of what I'm listening to, regardless of what kind of driver I am--any time that I've got my car in reverse. There's no law saying--there's no Federal mandate or law requiring that. There's no statistic--as far as I know, there's not enough data saying that an accident will be prevented or some huge number of accidents will be-- Russ Roberts: [?] Amy Webb: you know what I mean? So, like, just somebody thought that would be a good feature. And I can't override it. That may seem like--that may not seem very important. To me, this is like a paper cut. And, the challenge with paper cuts is that you get one or two, and you don't sort of notice them.
Maybe they are annoying for 5 seconds, and then you kind of just learn to live with it? Right? And you don't kind of notice it any more? What we are talking about with AI and these systems built by relatively small groups of homogenous people who are making decisions intended to govern us all, working at 6 companies in the United States and 3 in China--the problem is that we are going to start experiencing paper cuts at a fairly rapid clip. You have one or two cuts, not a big deal. Suddenly your entire body is covered in paper cuts, and your life is very different. You know, you may still be alive; but, I mean, stop and like visualize and think about what that would feel like. Suddenly life is nothing like it was before. You are miserable. And you don't have any way to override those paper cuts, because they just keep coming back, seemingly out of nowhere. That's the kind of future that I'm hoping to prevent.

53:27 Russ Roberts: So, the normal way--I want to--I have two thoughts on that. And I'm not sure they are right. But two thoughts. One is, my thought about how culture changes. You know, if you put my grandfather, born in 1898, into the modern world, he'd find it very difficult. There would be a lot of things he wouldn't recognize. In just a hundred years--when he was a young man of 20 in 1918, roughly a hundred years ago--being 20 now is really different. It would be weird for him to watch people walking down the street looking at their phones all the time. He'd think they were probably mentally disturbed. Many of them would be talking while they are walking, with their earbuds. And it would be jarring. And, more than just jarring--you could explain some of it to him--just the things that gave him pleasure would be different. And maybe not available. Which is part of your point, right? The freedom to do all kinds of things. Some of them small, like listening to the music as you back into your driveway at the same volume. Some of them large, like you say: It's coming. There will be things coming. So, one of your points is that, 'If they come, maybe people aren't as bothered by them as we are,' in thinking about them. The second issue is, if you make a really bad decision--and a lot of your book echoes some of the concerns of Cathy O'Neil in her book, Weapons of Math Destruction; and she was a guest on EconTalk; we'll put a link up to that episode. As you say, it's a very homogenous group. Mostly white men designing these things. But if you don't--historically, in a capitalist system, if you don't design things well and take into account that people aren't like you, you don't do very well. Right? If you think everybody is like you and likes to sit and code all night in your room, you are going to be a lousy designer of products for people.
What's scary to me--and the concern that I share with you--is that I'm not sure that the profit-and-loss motive is doing a really good job of constraining those choices. And I see it in lots of ways. Some of which you talk about in your book; some of which I see elsewhere. The freedom that Amazon and Google and Apple have to do things that are, just kind of, funky--I can't even describe them. The things that--normally a company couldn't do that, because they'd lose so much money, they'd go out of business. But there's an enormous cushion for these companies, in terms of their profitability. And so, let's turn--it's--I'll just tell listeners, before we started this, Amy, that I said, 'We'll spend the first half on what the problem is, and the second half on what to do about it.' And we are now, oh, 55 minutes in. And so if we can go a little over an hour, that would be great, talking about what to do about it. So, normally you wouldn't do anything about it. You'd say the profit motive and competition will constrain these kinds of ridiculous mistakes, and forms of arrogance, and tribal weirdness that this culture has produced out of Silicon Valley and Redmond and elsewhere. But, I don't see it happening. So, what I naturally look to is: How do I inject a little more competition into the system? How do I change the incentives that these folks face to do a better job? Taking account of what I want, not what they want. Amy Webb: Yeah. So this is where things get a little complicated. And, you know, I just want to be very clear: I don't think Big Tech is the enemy. I don't think that the G-Mafia are the villains. In fact, I think they are our best hope for the future. You know. And, introducing competition at this point may not elicit the same type of responses that you might see in other market sectors, in other industries.
And I think part of the reason for that is that the technology that these companies build and maintain is the invisible infrastructure powering everyday life. It's not a single widget, or even a series of widgets-- Russ Roberts: Fair enough-- Amy Webb: And I think the challenge is that if you try to, for example, introduce competition in the cloud space--or even try to break up Amazon, a la the Baby Bells from years ago-- Russ Roberts: right-- Amy Webb: And I've actually heard that suggested before--the challenge is that the infrastructure and the technology that that entire system, AWS [Amazon Web Services], relies on--and therefore huge parts of the government and our largest businesses that are customers--the challenge is that that technology bleeds over into other aspects of Amazon's core functions. There aren't solid walls. And so, if it's the case that at this point competition is not possible, then what are some other ways forward? You know, this March--so, very--I think it's March--pretty soon from now, is the 30th anniversary of Tim Berners-Lee's seminal paper and suggestion to CERN [Conseil Européen pour la Recherche Nucléaire, European Organization for Nuclear Research] that sort of outlined the core premise of the Internet. And everybody at the time who saw that thought it was a kind of boring but interesting idea. And the challenge is that nobody thought through what happens if the Internet becomes something beyond universities connecting to each other to share research--if it becomes something else. And technology always becomes something else. Right? Then, how do we mitigate that? How do we prevent against plausible risk? Right? And, one way, I think, that we could think about the future of AI is to treat it, you know, similar to a public good, the way that we might treat air. Right?
And I know that's complicated, and I know it sends some shock waves into economists who would argue with me that I'm totally off base and you can't possibly apply that. But the public good concept I think works because it, first of all, tells us that we all have a stake: that we are not just going with the flow. And it also then helps us think about global guardrails. And that, then--you know, I know it sounds like I'm angling for regulation. I'm not. I'm angling for widespread collaboration, with some very specific, agreed-upon tenets. So, you know, principles that go beyond the obvious, like 'make sure that AI is safe.' But that, you know, everybody on the planet would agree to things like: whenever an investor invests money in AI, for whatever reason, a part of that investment must be allocated to making safety a priority. Or, cleaning up one of the training databases. Things like that. And having some kind of global body--again, I'm not usually in favor of huge government and big bureaucracies, but I think in this particular case, we can't just assume that these companies, who have motivations that I don't think are always in line with what's best for humanity--we can't assume that they are going to take care of this stuff on their own. I'm sure your listeners know--like, a couple of weeks ago, Google had to reassure investors that its enormous spend on R&D was worthwhile. Like, people got spooked. You know, when we're talking about game-changing, huge technologies and research areas like AI--we have no Federal funding. We have no basic research funding, or not anywhere near enough, in some of these areas, outside of military expenditures. Somebody has got to do it.
And the challenge is that investors expect some kind of return on investment, or some kind of shiny new widget that gets revealed, you know, on a quarterly basis--as though you can schedule big R&D breakthroughs. So, if there was some global agency that acted a little bit more like the IAEA [International Atomic Energy Agency]--with the caveat that I am not saying AI is a weapon--you know, then we would have some mechanism to think this through. We would need some kind of--you know, going back to those questions on tribalism and culture--I think we need to have some kind of global human culture or values atlas, one that is going to take time to build and is not static, that describes how we interpret things culturally, how we relate to each other. Because, ultimately, these systems don't just live within the geographic boundaries of our countries. They travel. So, yeah: I think that there are a lot of solutions that are, you know, top down. But we individuals have to take some responsibility as well. Which means we have to get smarter about what data we are shedding and when and how and where and why. We have to demand transparency. And I think it's possible for the big tech companies to be more transparent without sacrificing IP [intellectual property]. You know? And our universities, I think, have to take more responsibility and shift their curricula to include difficult questions--not just a single ethics class--so that they weave questions and worldviews and, you know, other things into their core curricula. So this is like a--there is no single fix here. The good news is that there is something for all of us to do; and collectively, if we can get it together to shift the developmental track of AI, I think the optimistic scenarios are possible. I really do. My concern is that everybody is going to say, 'I don't feel the pain all that much yet, so I'm cool waiting.'

1:04:04 Russ Roberts: Well, the first step is to pay attention. And I love your book for encouraging me to pay attention. And for others--anyone else who is listening--I think it does a great job of that. I think the solution--challenge, is quite--this is where it gets complicated: There's no--I can't think of a single example where this kind of global collaboration works out well. To me, it's like the United Nations. It's a really great idea; it's a beautiful idea--you have a nice quote from Isaiah on the front, about beating your swords into ploughshares. And it just--the distance between the ideal and how it works in practice is so vast that my view is it's probably better not to have it at all. But I can understand that you can debate that. But, I'm not optimistic that a "global collaborative effort" would work in any way that would make you happy at the end of the day. I want to try to suggest--well, maybe I'm wrong. But I want to try to suggest a different approach and see if you think there's anything to it. So, you said it's like a public good. You're talking to an economist. I don't have any problem with that language. I think what certainly has public-good aspects to it is the role of digital stuff in our lives. You can't say, 'Well, I want a digital world like mine,' and your digital world would be like yours. We kind of consume that one air that you are talking about; and I think that's very à propos about how to think about this. But is it possible, is it imaginable, that we could have a different way of interacting with each other digitally than the current way, that would allow a little more of what we might call privatization? Or more choice? Or more options? So, right now, underlying all of this is the idea that some really bright people figured out some really clever ways to use knowledge about us to make money. And it's especially clever because it's free to us. On the surface. It's not literally free.
It's not free in lots of ways, by the way. So, I used to say all the time, 'Well, Google's free. What's the big deal?' Well, it's not literally free in any sense. It's true I don't make a payment each time I do a search. But it turns out that of course Google uses the information that I reveal when I search, and access to me, in all kinds of ways, to charge people for access to me--instead of me getting to charge for access. And it's their pipe; so I kind of get it. So that's the way it's worked out. But we could imagine a different world--either through regulation--not my first choice either, obviously--but, I think, technologically. I want to come back to something Arnold Kling said in a blog post recently. He said, 'You don't like Facebook, how they handle privacy? Make a better one.' And you could say, 'Well, that's really hard to do. It's almost impossible. Everybody's already locked in.' And network effects. And blah, blah, blah. But we have a lot of really smart people. And one way to get around these kinds of scary, dystopian concerns is for people to say, 'I don't like the way the Internet is designed. I want a different one.' And, 'People smarter than me--I can't figure it out. People write about it occasionally, that blockchain could be the basis for a different kind of Internet. I try to read the articles; they don't make sense to me. My fault.' But, I imagine that that could happen. And it seems to me that's the right way to fix this problem, and to build in a different relationship between me and these companies that create services for me but that are actually exploiting me. Amy Webb: I think there is, for--if we are talking about the realm in which we as individuals have personal relationships with parts of the Big Nine, then yes.
I think it is plausible--not impossible, but certainly challenging--for somebody to develop an alternative to Facebook--you know, that promises initially to somehow get around a lot of the challenges that Facebook has had. At the end of the day, though, we are still humans. And the parts of the digital infrastructure that we seem to complain about will follow us. This is the same reason why I don't think that colonizing--like, everybody who wants to colonize Mars--it's like, 'That's terrific. It's a wonderful idea. It's not going to solve your problems.' The problems that you have on this planet are going to follow you to the next planet. Right? So, I think if we are talking about the realm of personal technology, sure. Some of these issues can be solved. Somebody can certainly start another Twitter. I would welcome somebody starting another Twitter that has a different approach to speech. So, that's fine. I'm actually concerned about these systems that mine our data in a much broader sense. And not just our personal data, but our companies' data, our local traffic data. You know--all of these systems that are learning from us in real time. And, ultimately, these narrow artificial intelligence applications are beginning to gain some momentum. There is some terrifically interesting research out of a group called DeepMind, which is part of Google. And, you know, I read one of their most recent papers. They've trained a system called AlphaGo--AlphaZero is the new version of the algorithm--which is now capable of going from zero knowledge to learning how to play several games at once. And that may not seem all that thrilling to listeners. But what it portends--and what's really, truly remarkable about this research--is that, without humans working hard to train systems, these systems are now capable of training themselves. And also creating child AIs to perform some of the tasks for them. And they are doing them in ways that defy our understanding.
When we say 'artificial intelligence,' I think that that's actually a misnomer, because it assumes that the systems that we are building--that are now propagating on their own--remotely resemble the way that we think. We don't actually understand enough of our own human brains. What's probably a better term is 'alien intelligence,' not 'artificial intelligence.' And semantics matter. 'Artificial intelligence' makes us feel as though we still have some agency. My concern is that, as these systems propagate, they become more and more alien to us in ways that we don't understand. And at some point they start making more important decisions, where the stakes are higher, on behalf of us all. And there is a God in that system; and that is the original group of people who created it, upon which the foundation was built and all the learning took place. So, if it's the case that we are in the midst of that transition at the moment, I'm hoping that enough people wake up: that we do not close our eyes just as the machines are gaining awareness. And that we ourselves wake up, and that we demand a change in the developmental track. And that doesn't mean that these companies can't make plenty of money. And it certainly doesn't mean that the companies are evil, or even that the people who work in these companies have some kind of nefarious plan. I believe--you know, the Chinese government notwithstanding--I believe that the people who are in this are working on trying to solve humanity's grandest challenges. But they are doing so within ridiculous constraints that have to do with the market, and the whims of investors, and what direction the wind is blowing in Washington, D.C., and who has decided maybe this is the year for regulation. Those are my concerns. The personal relationship that we have to Facebook is of course a piece of this. But it's the bigger picture that ought to concern us all.