Ask a layman about artificial intelligence and they might point to sci-fi villains such as HAL from 2001: A Space Odyssey or the Terminator. But the co-founders of the AI Now Institute, Meredith Whittaker and Kate Crawford, want to change the conversation.

Instead of talking about far-flung super-intelligent AI, they argued on the latest episode of Recode Decode, we should be talking about the ways AI is affecting people right now, in everything from education to policing to hiring. Rather than killer robots, you should be concerned about what happens to your résumé when it hits a program like the one Amazon tried to build.

“They took two years to design, essentially, an AI automatic résumé scanner,” Crawford said. “And they found that it was so biased against any female applicant that if you even had the word ‘woman’ on your résumé that it went to the bottom of the pile.”

That’s a classic example of what Crawford calls “dirty data.” Even though people think of algorithms as being fair and free of human bias, Whittaker explained, biased humans are the ones who create the data sets and the code that decides how that data should be evaluated; that doesn’t mean AI is useless, but she and Crawford said we need to be interrogating how it is being made and deployed in the real world.

“The harms are not evenly distributed, but this is in our lives, right?” Whittaker asked. “There are license-plate profiling AIs that are sort of tracking people as they go over different bridges in New York. You have systems that are determining which school your child gets enrolled in. You have automated essay scoring systems that are determining whether it’s written well enough. Whose version of written English is that? And what is it rewarding or not? What kind of creativity can get through that?”

You can listen to Recode Decode wherever you get your podcasts, including Apple Podcasts, Spotify, Google Podcasts, Pocket Casts, and Overcast.

Below, we’ve shared a lightly edited full transcript of Kara’s conversation with Kate and Meredith, recorded in front of a live audience at the Studio Theatre in Washington, DC.

Kara Swisher: We’re having a competitive leather coat situation here. Yours is cooler.

Meredith Whittaker: We both win.

We both win, that’s true. The audience wins. So I wanna start off talking about, let’s talk about what you guys do, because you all are studying AI in a much more critical fashion and where it goes. So first of all, talk about what the AI Now Institute is and how you look at what your goals are.

Kate Crawford: Yeah, happy to. I mean, essentially we started really about four years ago by looking around internationally and realizing there wasn't a single AI institute that was focused on the social, political, and ethical implications of these tools. And so Meredith and I realized that we had to make our lives a lot harder and actually do it ourselves, and so now we head the AI Now Institute at NYU, and it's really the world's first institute to really center these concerns. We created it essentially as an interdisciplinary institute, because we can't resolve these issues just from computer science and engineering departments; we actually need a much bigger lens.

We need to be drawing on social science, on humanistic disciplines, on law, philosophy, as well as anthropology, sociology, criminal justice. If you actually wanna build tools that affect social institutions, you need to have experts in the room, but you also need affected communities. People who are likely to see the downsides. So that was the inspiration for creating AI Now and it’s been keeping us pretty busy.

All right. Meredith, you also work at Google?

Meredith Whittaker: Yes, I’m here in my AI Now capacity today, but I do have a dual affiliation.

All right, explain what you do in both places. So how did you look at starting this? Because you’re at a company which pretty much controls AI right now at this point.

Meredith Whittaker: It’s one of the big players.

One of the big players, yeah.

Meredith Whittaker: We can say ... I mean, my path was through industry, right? I had been at Google for over a decade. I ran a research group there and I think Kate and I came to very similar conclusions that were fairly heterodox during my days in industry, through very different paths.

So Kate has been in academics. She’s one of the founders of the field. She’s been setting this up for over a decade. I worked on large-scale measurement systems, so I was really at the ... How do you deploy servers across the globe and create the kind of data that would be meaningful, right? How do you make data that has a certain type of meaning? And then how do you ensure that meaning?

So I was right at, as Kate and I joked, the sort of epistemic guts of these questions. What is the ground truth? And it was constantly slipping. And I had the dumb luck of being in a place where I was watching the ascent of AI. I was watching people take data that I knew was faulty or fallible or incomplete, and begin to pump it into AI systems and make claims about the world that I didn't believe were actually credible or verified.

So basically, not to get too technical on you, but the crap in, crap out rule, right?

Meredith Whittaker: Yeah. Crap in, even weirder crap out.

I did not go to computer school for that, but go ahead. Move along.

Meredith Whittaker: So Kate and I met, and I was so relieved. We met on a bus on the way to a conference, and suddenly there was someone who was speaking this language, and helping me think through ideas that I had felt fairly alone in thinking about, and we started talking about this, and we shared a similar set of concerns, right? If these technologies are being threaded through some of our most sensitive social institutions, what are the guardrails?

What are the guardrails when we begin to automate criminal justice based on the assumptions of people in a conference room in Silicon Valley? What are the guardrails when we begin to automate education? When we begin to do automatic essay scoring and eye tracking for students to determine attentiveness or intelligence, right? How do you make sure these aren’t replicating patterns of discrimination?

Right. So let’s talk about that issue: data. Because you just said something in a room in Silicon Valley by a certain group of people, which is typically the same group of people that are putting them in. I speak of pretty much ... the data is there. It’s mostly white men. Younger. Correct? Is that correct?

Kate Crawford: Yeah. Still to this day.

To this day. So here you have this issue where the data’s going in, and whether the data’s correct... Let’s talk about the issue of data and data as gold in Silicon Valley now. Talk about the systems and how they’re created and how you can get faulty ... how it moves that way.

Kate Crawford: Yeah. This is actually one of the big areas of research at AI Now, really lifting up the hood on AI systems and looking at the sometimes quite weird and sticky and gooey training data that goes into the pipes. And some of the ways you do that is really by looking at where does that training data get sourced from? So I'll give you an example. One of the studies we recently published looked specifically at predictive policing data. And we thought, well ...

Explain this. Predictive policing...

Kate Crawford: Yeah. So for those of you who've seen how there are these kinds of heat maps that sort of isolate areas in cities where police can predict that crime might occur, or in some cases it's a person-based list to say, "This person looks like they're the sort of person who might commit a crime," looking at their social network. We can ask really hard questions about whether these things work, but one of the things we wanted to really dig into was, "Where's the data coming from? Where do they get the data?"

The original data.

Kate Crawford: Yeah. The original data to train these systems. To say, “Hey, check out this person,” or, “Check out this neighborhood.” So we ended up looking at 13 jurisdictions across the US that were specifically under legal orders because of biased or illegal or unconstitutional policing. So that means essentially courts have already said, “You got a real problem with what’s going on here. You should actually be really changing your police practices.” But guess what? The data that was being created by things like planting evidence, or racially biased policing, was being piped into predictive policing systems. We found multiple cases — Chicago being one of the most obvious — where you could see that the data coming from what was essentially corrupt police practices was informing supposedly neutral and objective predictive policing platforms.

So bad policing data was going into creating more bad information.

Kate Crawford: So if we have dirty data actually forming our predictive policing systems, you're ingraining the sort of bias and discrimination that we've seen over decades into these systems that in many ways are treated as beyond reproach. Because people say, "Oh, well, it's neutral, so it must be completely fine." And so you see these vicious circles emerging because essentially the training data itself ...

Explain training data, again … Training data is, you teach the systems, and then they learn, right? So what you teach them with at the beginning is how they learn at the end. It’s like people, I guess.

Meredith Whittaker: Yeah. It’s like people, but people have a bit more nuance and complexity. Taking the most basic and canonical example, right? You show a machine learning system 100 million pictures of cats, but you’ve only shown this machine learning system cats that were colored white, right? That system would then recognize cats, but probably misrecognize darker-colored cats, right? You can show the machine learning system any large corpus of data. It models the world through that data. That’s all it will ever know. That’s all it can ever see.
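Whittaker's cat example can be sketched in a few lines. The "images" here are reduced to a single made-up brightness feature, and the classifier is a toy nearest-centroid model; the numbers are hypothetical, chosen only to show that a model trained exclusively on light-colored cats treats darkness itself as evidence of "not a cat."

```python
# Toy illustration: a model only knows the data it was shown.
# "Images" are reduced to one hypothetical brightness feature (0=dark, 1=light).
# The training set contains only light-colored cats, so the learned decision
# boundary misreads darkness as evidence of "not a cat."

def train_nearest_centroid(samples):
    """Compute one centroid (mean feature value) per class label."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Classify x by whichever class centroid is nearest."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# Skewed training data: every cat example is light, every non-cat is dark.
training = [(0.9, "cat"), (0.85, "cat"), (0.95, "cat"),
            (0.2, "not_cat"), (0.1, "not_cat")]
model = train_nearest_centroid(training)

print(predict(model, 0.9))   # a light cat: classified correctly
print(predict(model, 0.15))  # a dark cat: misclassified as "not_cat"
```

The model isn't "wrong" by its own lights; it faithfully reflects a skewed sample. That's all it will ever know, which is Whittaker's point.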

Which has already been a problem with basic search at Google, for example.

Meredith Whittaker: It is not infallible.

Right.

Meredith Whittaker: It only reflects what’s in the data, which is why this question of, is the data coming through biased policing practices that have a record of arrest, that is actually a record of corruption, is really important because once that is filtered through one of these systems, people take it as the product of a smart computer, that it’s infallible, that it is sort of mathematical wizardry and probably not to be contested. Whereas if they looked at the practices that were creating that data, you would realize there was something really wrong with those and you actually need to change it.
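The vicious circle Crawford and Whittaker describe can be simulated in miniature. Everything here is hypothetical (the neighborhood names, the arrest counts, the multipliers), but the mechanism is the one they outline: arrest records shaped by biased policing drive predictions, predictions drive patrols, and patrols generate more arrests where the record already points.

```python
# Sketch of the dirty-data feedback loop: two neighborhoods with the SAME
# underlying crime rate, but one starts with an inflated arrest record from
# historically biased policing. All numbers are hypothetical.

historical_arrests = {"neighborhood_a": 100, "neighborhood_b": 20}  # biased record
true_crime_rate = {"neighborhood_a": 10, "neighborhood_b": 10}      # actually identical

for year in range(5):
    # The "predictive" model sends patrols wherever past arrests are highest.
    patrolled = max(historical_arrests, key=historical_arrests.get)
    for hood, rate in true_crime_rate.items():
        # Arrests scale with police presence, not with underlying crime.
        historical_arrests[hood] += rate * (3 if hood == patrolled else 1)

print(historical_arrests)  # the gap widens despite identical true rates
```

After five simulated years the arrest gap has more than doubled, even though the two neighborhoods never differed in actual crime: the system is confirming its own training data.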

All right, so that’s one example of dirty data. And so what you do is cleanse it, or the right term, you have a data cleansing?

Kate Crawford: Well, see, that’s currently one of the things the industry’s really contending with right now, which is, how to create what are called fairness fixes. How do we clean up the data? How do we make neutral and fair AI? Well, the more we’ve been doing this research, the more concerns we have about this sort of idea of a simplistic tech fix, because in the end, you’re talking about cultures of data production and if that data is historical, then you are importing the historical biases of the past into the tools of the future.

So essentially, training data from the past is deciding how decisions will be made by AI systems. That’s gonna be a real problem. First of all, if you can’t see into the training data, if it’s a proprietary company that doesn’t have any transparency protocols, you can’t see the data.

Black box, that's what that is. Proprietary companies like Google, and the two or three leaders in AI right now really would be ...

Kate Crawford: Sometimes I say the big five and sometimes people say the big seven, the number keeps changing.

But it’s Google, Facebook, Amazon ...

Meredith Whittaker: IBM, Apple.

... Microsoft, right? Yeah. And then China, we’ll get to that.

Meredith Whittaker: And then China, it's a whole other story. We'll get there.

They got a lot of issues there.

Kate Crawford: Exactly. So yeah, and I loved your example of search, because search data is a really good thing to look at. Some of you might’ve seen one of the classic search tricks you could do, this is what, a couple of years ago.

“CEO.”

Kate Crawford: Yeah. We ran the CEO example in the lab, and I was like, "Wow, it's all white dudes." And in fact the first female CEO that came up ...

And it actually is, but move along.

Kate Crawford: Did you have a look ...

No but it actually is in real life. But go ahead and move on.

Kate Crawford: Yeah. What is it, we have like 9 percent female CEOs? You didn’t even have that in search data, right?

Right.

Kate Crawford: And the first female CEO that came up in these searches that we were running at the time was Barbie CEO, and you're like, "Okay, that's a problem." And it's funny because it's like a whack-a-mole problem right now. Right? So industry is like, "Oh, okay, we see a problem." So that if you look up "physicist" right now, you'll still see some differences. So again, around professions, around these cliche stereotypes around gender and race, you keep seeing them get reflected, and people keep trying to fix it.

That's because what you search for shapes what you get.

Kate Crawford: Well, that's actually a really complex set of issues. Partly it's that feedback effect, that people are searching for a generic image and they might choose a male doctor, for example. But sometimes it's because of where those images are coming from, if you're scraping them from very particular types of photo sets. Getty, for example, really pushed for more diverse images of people in these classic photo sets, because it had become really cliched in terms of what you could get.

So long story short, search is really complicated and people are trying to fix it, but it’s much harder than you might imagine. And there keeps being more and more layers of the onion that really have to be looked at.

So instead, I think we have to ask different questions around, “Okay, how do we think about data construction practices? How do we think about how we represent the world and the politics of AI?” Because these systems are political, they’re not neutral. They’re not objective. They are actually made by people in rooms. And that’s why it matters who’s in the room, who’s making the system, and what types of problems they’re trying to solve.

Talk about the "who's in the room" part, Meredith, because that's one of the issues that has been brought up again and again around what happens with AI. Probably, you'd argue AI is the biggest growth area for tech going forward. One of them besides ... Self-driving would be one, there's a whole bunch of things, automation, robotics, but AI is the really big next direction of the future.

Meredith Whittaker: Yeah.

Talk about that.

Meredith Whittaker: Well, we don’t have much data, but the data we do have is unequivocal. And our daily experience as women in tech confirms this data and then some. There was a Wired study that came out last year, and it said that around 12 percent of the papers that were submitted to the big machine learning AI conferences were submitted by women. Right? So you’re looking at a field that is even less diverse than the very un-diverse computer science field.

Right now we have about 15 percent women getting computer science degrees. This is down from 10 years ago. It’s down from 30 years ago, when you had rough parity, right? So you’ve actually seen the field as it has grown in power and prominence ...

So that’s just gender. Then there’s people of color, there’s all kinds of things.

Meredith Whittaker: Again, one of the issues is that we aren't seeing enough data on this and enough sort of emphasis on the urgency of this problem, but one piece of anecdotal evidence comes from Timnit Gebru, who is a preeminent machine vision researcher. She's a woman of color, and when she first went to NeurIPS, which is the biggest machine learning conference, she said she was one of six black people out of 8,000.

So she was the co-founder of Black in AI, she’s been doing a huge amount of work, sort of spearheading this with a couple of colleagues to make a lot more space for black people to participate in machine learning. And there’ve been sort of initiatives that have grown out of that, but that is emblematic of a huge problem because ...

I’m gonna play the Silicon Valley people I cover all the time. “Meredith, why does that matter? It’s really about standards and quality.”

Meredith Whittaker: Who gets to define quality, Kara?

All right. I mean, but they say that. It’s a meritocracy, right?

Meredith Whittaker: And I would say ...

I mean, come on, it’s a meritocracy, only the best rise to the top, and it just happens to be 79 percent white men.

Meredith Whittaker: ... drive the Tesla to the $2 million yacht.

What is the problem? Why is that not happening, from your perspective? Why isn’t it more diverse? Yeah.

Meredith Whittaker: I think there are a lot of ways we could diagnose that, but I think the cost of this lack of diversity is pretty clear, right? The people who bear the costs of discrimination, of exclusion, of racism within these companies are the same people who bear the costs of bias, of errors, and of, I would say, oppressive uses of AI outside of these companies. So making a direct causal link is something that we're gonna need more research to begin to put together.

But there ... it is very clear that the people who are benefiting from these systems match a specific demographic profile, and the people who are being harmed by these systems are those who have historically been marginalized.

Can you talk about those benefits and harms, in the new society with AI making decisions and in some cases, it does notice inefficiencies and things like crops or weather, it’s hard to have bias in those. Those things have benefits, correct? You all have an AI Now Institute, so you must like AI. Talk about the benefits. Where does it work really well and where doesn’t it?

Meredith Whittaker: Well, I would say, again, the answer to that question — and this is not to be cagey — is how are you measuring benefit? And that’s one of the key areas I think we need to look at more closely, right? So in increasing crop yield, that might be a huge benefit, but is that coming at the expense of soil health? Is that coming at the expense of broader ecological concerns? Is that displacing communities that used to live on that land? I’m sort of making up these examples as questions you’d want to ask before you sort of claim blanket benefits from these technologies.

Similarly, if you’re looking at harms, who’s measuring harms to whom, right? A lot of the issues come from setting one objective function. So this is sort of one goal for the AI system, and one we’ve seen a lot lately that is fairly problematic is engagement in social media. That is the only goal that is sort of sought after in a lot of these board rooms by people who are creating these systems. Right? How do we get more engagement? Well, the collateral damage of considering engagement as a sole goal has been falling around us for some years now.
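The single-objective problem Whittaker raises is easy to make concrete. This sketch uses entirely hypothetical posts and scores: ranking a feed on predicted engagement alone floats the most inflammatory item to the top, while blending in a second signal (a made-up "quality" score standing in for any counterweight) produces a different ranking from the same data.

```python
# Sketch of a single objective function vs. a blended one.
# Posts and their engagement/quality scores are hypothetical.

posts = [
    {"title": "measured local news report", "engagement": 0.3, "quality": 0.9},
    {"title": "outrage-bait conspiracy",    "engagement": 0.9, "quality": 0.1},
    {"title": "friend's vacation photos",   "engagement": 0.5, "quality": 0.7},
]

def rank(posts, objective):
    """Return post titles sorted best-first under the given objective."""
    return [p["title"] for p in sorted(posts, key=objective, reverse=True)]

# Objective 1: engagement is the only goal.
engagement_only = rank(posts, lambda p: p["engagement"])

# Objective 2: engagement weighed against a second signal.
blended = rank(posts, lambda p: 0.4 * p["engagement"] + 0.6 * p["quality"])

print(engagement_only[0])  # "outrage-bait conspiracy"
print(blended[0])          # "measured local news report"
```

Nothing about the data changed between the two rankings; only the objective did. Which is why "how are you measuring benefit?" is the question that matters.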

Right.

Meredith Whittaker: I think we ...

This is when you’re talking about the architecture of, I had a really great podcast with Nicole Wong, who used to be chief legal counsel at Twitter and Google, and she was involved with building these or helping build this architecture. And one of the things she talked about was the architecture of, say, a Google search or something on Facebook. And one of the things that was interesting is you can build on ... Initially, you can build on context, accuracy, and speed and you get pretty good results when you do that.

But when you start to engage ... When the pillars you build are engagement, virality, and speed, we end up with Alex Jones. That’s where we go. That’s where we go.

Meredith Whittaker: Exactly.

I mean, it does because that’s what, as you can see, there’s a very good article today in Bloomberg about that. They build it for that, to create it, and therefore that’s what happens. And then they’re surprised that it happens. I do wanna get to the benefits because there are benefits to not having these systems run just by humans or not. Just not at all.

Kate Crawford: Well, this is the big question right now. So people say, "Look, if we have AI in, say, the criminal justice system, won't that be less biased? We've got real problems in terms of our court systems, in terms of policing. Won't this make things better?" And the question we ask is, "Okay, let's look at the evidence for that." What we need is a research baseline. So the reason we exist is to go and do that empirical research so that those claims are tested.

One of the things that sort of keeps us up at night is if you think about the way that we check that our current systems are fair in, say, criminal justice is that we have a system of appeals. We have a system of rulings. You actually have a thing called due process, which means you can check the evidence that’s being brought against you. You can say, “Hey, this is incorrect.” You can change the data. You can say, “Hey, you’ve got the wrong information about me.”

This is actually not how AI works right now. In many cases, decisions are gonna be made about you. You’re not even aware that an AI system is working in the background. Let’s take HR for a classic case in point right now. Now, many of you have probably tried sending CVs and résumés in to get a job. What you may not know is that in many cases, companies are using AI systems to scan those résumés, to decide whether or not you’re worthy of an interview, and that’s fine until you start hearing about Amazon’s system, where they took two years to design, essentially, an AI automatic résumé scanner.

And they found that it was so biased against any female applicant that if you even had the word “woman” on your résumé that it went to the bottom of the pile. I mean, it was extraordinary. And it tells us two things. One, it’s actually much harder to automate these tools than you might imagine, because Amazon’s got some pretty great engineers. It’s not like they don’t know what they’re doing.
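The failure Crawford describes boils down to a scorer whose weights were learned from past hires. This toy version is not Amazon's system; the words and weights are hypothetical, picked only to mirror the reported behavior, where a model trained on a mostly male hiring history learns gendered words as negative signals.

```python
# Toy résumé scorer with weights "learned" from a skewed hiring history.
# The vocabulary and weights are hypothetical, not Amazon's actual model.

learned_weights = {
    "engineer": 2.0,
    "python":   1.5,
    "captain":  1.0,
    # Words correlated with female applicants in the training data end up
    # with negative weight, e.g. "women's chess club" in the reported case.
    "women's": -3.0,
    "woman":   -3.0,
}

def score_resume(text):
    """Sum the learned weight of every known word; unknown words score 0."""
    return sum(learned_weights.get(word, 0.0) for word in text.lower().split())

resume_a = "Engineer Python captain chess club"
resume_b = "Engineer Python captain women's chess club"  # identical plus one word

print(score_resume(resume_a))  # 4.5
print(score_resume(resume_b))  # 1.5 -- same qualifications, lower rank
```

Two résumés with identical qualifications get different scores because of a single word, which is exactly the "bottom of the pile" behavior Crawford describes.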

Right. And really, why would they necessarily wanna do that, right?

Kate Crawford: Can save a lot of money. A lot of people wanna create ...

No, not why they wanna put in AI, but why would they want that to be the outcome?

Kate Crawford: Well, they didn’t. They definitely did not want that to be the outcome, which is why they didn’t release the tool, but it also tells you something about the pile of résumés that they had. What were they training it on? What was the training data? Surprise, surprise: a lot of white dudes in basically their entire engineering pool. So these tools tell us something.

But of course, the next big thing in HR is gonna be even weirder. So now if you do an interview, this is a really weird system called HireVue. You might've heard of this one. A lot of companies are using it. Goldman Sachs is using it, Unilever's using it, to my knowledge. And essentially, while you're being interviewed, there's a camera that's recording you, and it's recording all of your micro facial expressions and all of the gestures you're using, the intonation of your voice, and then pattern-matching those things it can detect against their highest performers. Might sound like a good idea, but think about how you're basically just hiring people who look like the people you already have. And even that's not gonna help.

Right, Goldman Sachs.

Kate Crawford: We might have a problem with that. Right? So this is one of those things where you're like, no one even gets to look at that system because they're like, "Oh no, it's proprietary. Sorry guys." "It's a perfectly good system, it's neutral, it's objective." We have to be much more critical of these systems. I mean, that's really what Meredith and I do, and what we stand for, is saying, "We will do the research to actually test these systems." Which is why it's so important that we can audit ...

There’s also efforts on the behalf of industry to test themselves, right? Now, Google has started its own advisory panel, right? Facebook has its content moderation panel that they’re hoping will save them. It won’t, newsflash.

Meredith Whittaker: Newsflash.

Newsflash.

Kate Crawford: Psychic.

That’ll be a column in the New York Times in 23 minutes. Talk about that. You’re not on the Google advisory panel, and you’re at Google, and you’re an expert in this. Is that correct?

Kate Crawford: I’m not on the panel.

And why is that?

Kate Crawford: You’d have to ask the people who put together the panel.

All right. Talk about these panels. Maybe you can’t. I mean, what ... go ahead. You start.

Kate Crawford: I’m happy to talk about it. If you feel ...

No, go ahead. I won’t say ...

Meredith Whittaker: I’m getting the ray... So, it’s not just Google, right? You were saying that ...

No, it’s all of them, they want to create these panels.

Meredith Whittaker: Axon, the creator of police tech like AI-enhanced body cameras and police surveillance drones, has an ethics board. Salesforce constituted something along the lines of an ethics board, right in the wake of a kind of crisis where a lot of their workers and a lot of other people were asking them not to sell tech to ICE, right? Facebook is sort of creating these ethics panels in the wake of massive global ... "Controversy" is a very diplomatic word for what's going on at Facebook.

But, in a sense, what you’re seeing is that these panels are ... We use the term ethics washing, where there are serious and significant questions that are at the doorstep of this industry right now. Are you going to harm humanity and, specifically, historically marginalized populations, or are you going to sort of get your act together and make some significant structural changes to ensure that what you create is safe and not harmful?

I think, in the wake of these controversies, there has been kind of ethics theater, almost. We actually look at this in our 2018 report, where we looked into these a little bit. All of these questions around, “What do these boards actually do,” right? Are product decisions run by them? Can they cancel a product decision? Do they have veto power otherwise? Is there any documentation on whether their advice was taken or whether it was not?

And, are they qualified?

Meredith Whittaker: And, are they qualified? Who chooses who’s on the board? There’s a kind of recursive question here about who’s guarding whom. I think, ultimately, it’s a great step that we’re seeing these issues be taken seriously.

I will say, four years ago when we started doing this, it was a lonely room. There weren’t that many people who were concerned. There were a lot of people who would argue that these were not problems. Now, that is not the case. These issues are serious and they’re being taken seriously. But what we don’t see is real accountability. What we don’t see are mechanisms of oversight that actually bring the people who are most at risk of harm into the room to help shape these decisions.

Let’s talk about that accountability. Kate, we’re here in Washington. They love to make rules here.

Meredith Whittaker: Yeah, they do.

They’re not very good at it.

Kate Crawford: We’d like a few more, I think, in this space.

Talk about that. What is the regulatory outlook? Because they’re still trying to figure out how to deal with social media, they’re still trying to deal with how to deal with privacy, sort of basic stuff.

Kate Crawford: Yeah. We still don't have any kind of federal privacy law, which is kind of extraordinary in this day and age. Well, it's interesting. I mean, I think some of the most exciting steps have been happening at the state level. We saw California pass the strongest privacy bill in the country. It's actually kind of amazing. We're starting to see ...

This goes into force in 2020.

Kate Crawford: That’s right.

Some people don’t think it’s strong enough. Compared to Europe, it certainly isn’t.

Kate Crawford: It was, certainly, watered down from its original framing. We're starting to see that happen again around issues like facial recognition. You're seeing multiple states move towards actually saying, "No, we need to regulate facial recognition." For very good reasons, because this technology can be deeply troubling in the way that it's being used. But again, there are a lot of fights going on about how strong those rules should be.

It’s interesting. We made a recommendation, again, from the AI Now Institute, based on this research saying that, “Look, we think notice and consent isn’t enough. We think things like facial recognition actually need to be something that we’ll debate and take seriously.” Communities should be allowed to say, “Hey, no. We don’t want this in our backyard.” Rather than just being told, “Hey, you’ve walked into public space so, basically, you’ve already consented.” I mean, that presents a set of concerns. These are the debates that are happening, right now, at a state level.

But then, internationally, you see a lot more movement. So of course, we had GDPR, the General Data Protection Regulation, come into effect in Europe in 2018. And, you know, it's interesting. It's not a perfect piece of legislation, but it has had impact internationally. What is interesting now is that the US is facing a decision. Are you going to be regulated by other countries saying, "Hey, we're not going to accept this," or are you going to give protections to US citizens?

I think this is the key moment to start making that regulation, to make it real. The question is, who gets a seat at the table to decide what that’s gonna look like? This is going to be one of the most important things that happens in the next five years, is how AI is going to be regulated, and all of those adjacent technologies.

Who should play a role in that, from your perspective? Obviously, federal regulators.

Kate Crawford: Absolutely.

Who?

Kate Crawford: I mean, realistically, we see this as being something that needs to be an evidence-led process. What we want to see is more ...

Because that’s popular these days.

Kate Crawford: Yeah. You know, it’s like, “Can you actually show us how this technology works? Do you understand what that is?” I mean, we saw some things. You remember Mark Zuckerberg in front of Congress. That wasn’t one of those shining moments of seeing how regulators really understand AI. I think we can do better.

“These terms of service are very, very confusing to read.”

Meredith Whittaker: They are.

Kate Crawford: But you did make it in your college dorm room, so, that was great.

Yeah.

Kate Crawford: But, I’ll say that there are some really interesting senators, right now, who are asking different questions. They’re looking at algorithmic accountability. That’s really key to see. They’re having different conversations about privacy that realize that it’s not just about individual privacy, it’s about our collective privacy. It’s the fact that, if you make a decision in a social media network, that can affect how data from all of your contacts is being extracted, as well. I think there’s an increasing level of literacy, and that’s something that’s super important. We need to really support that with more ...

And what about the federal agencies? Now it looks like the FTC is possibly getting more funding, and there are some great ideas by Senator Klobuchar to fund the FTC more strongly, with fines and things like that. Which federal agency should be the agency that should ... Should there ... Because I think Nancy Pelosi was talking about an agency of AI, really, to monitor data.

Kate Crawford: This is the big debate. There’s already been a debate about this, for many years now, which is like, “Do you try to give more strength to existing agencies, or do you create a new super agency for AI?” This is something we looked at, in detail, in our research last year. We made some recommendations, specifically about ... At this point, because we need some regulations, I’d say, quite urgently ... We need to empower existing agencies to do what they’re doing, but to also include looking at AI. Right?

I mean, if you’re the FAA and you’re focused on, “Okay, how do we think about safety and planes?” You’re the right agency with the right expertise to be thinking about how AI starts to impact your particular domain. Same thing goes for the FTC. Same thing goes for many agencies where we want to say, “Hey, give them the power to look at these issues.” Maybe one day we’ll get a super agency, but we can’t wait that long.

Meredith Whittaker: Just a big plus one to what Kate is saying, and the work here ... I mean, I would emphasize that. I would say it’s also, I think, important to delegate responsibility to experts who are coming from outside the AI domain. Because, at this point, a lot of these questions actually aren’t AI questions. They aren’t about, “Are you using a deep neural net to do this?” It’s about, “Under what policy is this implemented? In what context is it implemented? Was it trained on data that reflects that context? Is it going to be used in ways that are transparent, that are contestable, that are safe? How is safety proven?” Right?

These are all actually forms of expertise. In the health care domain, say, you would want doctors, you would want nurses unions, you would want people who understand the arcane workings of the US insurance system. You would need them all at the table, on equal footing with AI experts, to actually create the practices to verify and ensure that these systems are safe and beneficial.

So, I want to finish with some of the questions we have. I don’t know the time, I don’t have a clock here. We can talk about the US and what a mess it is, but China ...

Meredith Whittaker: Yes.

Yes. Facial recognition, social scores, heavy into AI. Obviously, Kai-Fu Lee wrote a book that talks about this and how fast they’re moving past us in this area, because of the interest in data. Their ability to collect data is unfettered, and their citizens allow it. Whether they’re going out of stores, what they’re buying in stores, not just looking at what you’re doing on Facebook, but what you pick up and put down in stores.

The surveillance is everywhere. It’s a surveillance economy, as far as I can tell. But it benefits from that, because they get all kinds of insights, because you do. That’s not biased insights, that’s all data. It does reflect the world of how people are moving, whether it’s transportation, whether it’s anything, they can have data on everything. Is that something we’ve got to think about? Because here’s a country that’s just going to be swimming in data.

Kate Crawford: Well, it’s interesting. There’s two points here. I think when we look at things like the social credit score, which as you know ...

Could you explain that for those that ...

Kate Crawford: Yeah. Social credit score, it’s a really interesting system. It isn’t fully implemented until 2020, so a lot of it is a sort of speculative debate at the moment. But what we’ve seen already is that these scores are being used, basically, to track everything you do online. If you spend a lot of time doing online gaming, if you pay your bills on time, then your score will go up. If you don’t pay your bills on time, if you say something negative about the government on a forum, then, your score goes down.

If your score is low, it impacts your ability to do everything from buying a train ticket to getting your kids into this school that you want to go to, to getting the job that you want. It’s profoundly connected to all of these other sorts of things that you’d want to do in everyday life. That score ...

It moves to the physical world.

Kate Crawford: Exactly.

If you jaywalk, if you spit.

Kate Crawford: Exactly. If you jaywalk, if you do anything that seems to be rude in public space, then again, your credit score goes down.

This was an episode of Black Mirror. But, move along ...

Kate Crawford: Yeah. As they all are, you know?

I always say to Silicon Valley people, “Just imagine what you’re making, your product as an episode of Black Mirror, and then don’t do it.”

Kate Crawford: Sometimes, yeah. It’s like, who decided Black Mirror was like a design spec? I’m like, “No, guys! That’s not what they’re trying to say.”

Yeah. So look, here’s the thing about the social credit score: It’s really creepy. What really disturbs so many of us is that it could really change people’s opportunities in life. You’ve already seen many, many people blocked from domestic travel. We’ve got real concerns about what’s happening to the Uyghur population in China right now. This is really scary, from a human rights perspective.

But here’s what isn’t told as much. The US has many similar systems that are either in place or about to be in place in the next couple of years. I’m sure you read the news that, for example, in New York, insurers have been given full permission to look at your social media to decide how to modulate your insurance rates. That sounds very similar to the sorts of things that we’re concerned about in China.

I think, sometimes, there’s this tendency to say, “Oh, China’s the bad guy, and that would never happen here.” Actually, we have to do a lot of work to make sure that these tools aren’t used in oppressive ways that threaten civil rights, because we’ve got some real issues if we don’t keep pushing back on some of these.

We will say, “This is the AI decision, so that’s it.”

Kate Crawford: That’s the problem.

”Bias,” or, “You’re biased,” or ...

Kate Crawford: Exactly. Bias is one problem, but it’s by no means the only one. Sometimes the real question is just, “Should we be using AI in this context at all? Even if it works, would that be okay?” That’s the question we have to start asking. It’s not just, “Let’s fix it so that it’s working great. Then everything is fine.” The question is, “Is it actually an appropriate technology in this context?”

If you had to make the argument why it would be, why would that be?

Kate Crawford: Well, look ... You know, it’s interesting. If I look at something like how we can reduce power costs, right? If we look specifically at the environment and climate change, there’s been some really interesting work that’s being done using AI systems to say, “Hey, we can actually modulate the use of the electricity grid to make sure we’re much more efficient. We can look at how ...” I mean, think about how much energy is wasted in cities and in giant server farms, which we’ve done lots of research in, as well.

Looking at data centers, I mean, we can do stuff there. That excites me. We’ve actually got big challenges, particularly on the environment side, where we can do real work. But the minute this stuff touches complex social systems, you are looking at way messier terrain. That’s when you need to think in a much more nuanced way about how you might be affecting people’s lives.

Right. Meredith, China.

Meredith Whittaker: Yeah. Well, Kate said it beautifully, but I want to highlight another distinction between China and the US along those lines, right? China, you have a party and a more or less centralized state, although it’s very factionalized, as well, that openly acknowledges, “These are the uses we’re putting AI to. This is what it’s going to do.” The social credit score is in law, they’ve written down what it is. It is pretty transparent about the application and the purpose, there isn’t much subterfuge.

In the US, there’s currently no law governing the application of facial recognition. There is no way for us to know, if we walk into a store, that we are being profiled by a facial recognition system. Even though there is a new facial recognition product that is being sold to different retail stores that offers to capture the image of shoplifters and then ban them across stores they’ve never been to, right? If I steal in a Target and then I try to walk into a Walmart, Walmart’s gonna be like, “Oh, stealer! Get out!” Right?

You’re looking at a set of practices, in that case, that is pretty similar to a social credit score in China, but under different auspices. The public is not aware, there is no process of acknowledgement or consent there. This is happening sort of under the cover of proprietary private sector tech that is actually not disclosed to the people that it’s going to affect.

Kate Crawford: And sometimes it’s in your house. I mean, if you see the story this week where, basically, there’s a rent-controlled building in Brooklyn where they’re just installing facial recognition cameras. None of the residents are getting a say in this, and they’re all starting to protest and say, “We don’t want to have facial recognition in our homes. This makes us feel like animals, like we’re being tagged.”

And of course, you know, it’s in more low-income communities that are being gentrified. It’s this story around how we have to look at these sorts of deep social and economic contexts to understand why and how these tools are being used.

Right. Okay, last question. I have questions from the audience. Does Silicon Valley get this? They’re very, very sorry, right now, I’ve noticed.

Meredith Whittaker: Yeah.

But, they’re really sorry.

Meredith Whittaker: Big mood.

They’re super sorry. I spent the day at Facebook, and they’re super sorry.

Meredith Whittaker: Sorriest.

Kate Crawford: Sorriest.

Yeah. Google’s not that sorry. But go ahead.

Meredith Whittaker: Well, I’m going to do a thing where ... There are a lot of people who don’t get it, across the board, full stop. There are a lot of people who benefit from not getting it. That’s a problem.

There are a huge number of people who are getting it, right? You are seeing workers, across tech, take personal risks to protest the decisions of their employer. We’re seeing that as one of the few checks we’ve actually had on these systems.

Let me be clear, Meredith was one of the leaders of the Google walkout, which was about ... Google paid a sexual harasser $90 million to leave.

Meredith Whittaker: It’s up to $135 million on my ...

Right. Okay. They paid him a lot of money, and this was protests over lots of things — arbitration, all kinds of things — and how Google handled sexual harassment issues. But go ahead. So sorry.

Meredith Whittaker: I think there are a lot of people in these companies, they don’t want to be complicit. They’re close enough to the tech to know where it fails and what it is good for and what it’s not good for. They are doing a lot of work to try to steer this ship in another direction. That’s actually giving me a lot of hope for Silicon Valley writ large, is that there are these forces who are ...

A lot of these people are comfortable, right? They could have easy lives, but you’re seeing tens of thousands of people, instead, turned to face those with power over them and say, “This is not okay. We actually need to think more clearly about these decisions. We need to think more clearly about the cultures we’re creating. We need to think more clearly about the implications of our technology on geopolitics, on our social well-being.” I think that is something that gives me hope that there’s actually the possibility of change.

That’s because, in Silicon Valley, the workers do have power — there aren’t enough of them, and these are high-paying, high-skilled jobs. That said, retaliation. Do you feel that?

Meredith Whittaker: I’m here with my AI Now hat on. I would say I continue to do my work, and I continue to sort of act in accordance with my personal ethical compass.

Do you feel like the leaders, though, understand it? Or don’t they? They do benefit from the way the system is built.

Meredith Whittaker: Probably both. You know, I am a researcher, right? We’ve founded a research institute. A lot of what I do is look at the patterns of behavior across these companies. Are we seeing structural change that would actually result in significant improvements? Or, clear answers to some of these problems? I think we’ve seen some of that.

Kate Crawford: It’s interesting, too. I mean, this is one of the things that we think is super important is, “How do you protect people inside companies, who are going to be the whistleblowers, who are going to tell us things that we need to know, and who are actually going to do this sort of organizing work?” One of the things that’s super important is to start saying, “Hey, this is going to be important for journalism, this is going to be important for research, it’s gonna be important for history, that we understand how these systems work.”

Really, being able to create structures where workers can unionize, where they can disclose, where they can actually hold to account the companies that they work for, I think this is going to be increasingly important. It’s something that we’ve done quite a lot of research on.

Last question, and then we’ll get some questions from the audience. I’ve come to the conclusion recently that — because we know most of these people, and I don’t find them to be particularly evil in terms of ... we don’t have like a chemical manufacturer rubbing his hands together going, “Ha ha, I have won!” That kind of thing. It’s more like, “Oh no.”

I’ve come to a conclusion that perhaps they’re incompetent. You know, the leaders are actually incompetent to the task. Not stupid, but incompetent. They did not understand what they have created and now don’t know what the hell to do.

Kate Crawford: I guess I’d put it a little differently. I’d say this field has worshiped at the altar of the technical for the better part of 60 years, and at the expense of understanding the social and the ethical. We’re seeing the fruits of that prioritization.

It’s interesting, because if you go back in the history of AI, to the beginning, in the 1950s and ’60s it was a much more diverse field. You had anthropologists sitting at the table with computer scientists. It was this vision of how do we construct a world that we want to live in?

A fair world.

Kate Crawford: And we lost that for a couple of decades there by making that conversation much narrower. And right now, as we have these real issues of homogeneity in Silicon Valley, we need to open those doors up, but we also need to get people in the room who are the ones who are most likely to be seeing the downsides of the system. We have to center affected communities and not just engineers on big salaries.

And that’s against the backdrop of AI becoming smarter and smarter as we move along. I mean, someone was saying dolphins right now and they’re moving up the ...

Kate Crawford: I think that’s not giving dolphins enough credit. I think, actually, dolphins are pretty smart.

I think we’re actually at a way earlier stage than you might imagine. I think people are like, “Oh, we should be worried about the super intelligence.” And, I’m like, “Guys, no. It is so far from that.” We’re talking about basic 101 stuff, to a degree to which, yeah, AI systems can tell the difference between a cat and a dog.

But there are a lot of things it cannot do, and particularly the way that humans are classified by AI systems would curl your toes. I mean, some of this stuff is like really terrifyingly basic and often wrong. I actually think dolphins are a step ahead at this point. We’re probably at the protozoa level at this point.

As you know, when I interviewed Elon Musk a couple of years ago — who has talked about these issues of dangers of AI — he said he thinks ... We were talking about the Terminator ideas that go in movies and everything and he said, “No, eventually they’re going to treat us like house cats. We’re just house cats. We’ll be house cats to these systems and they don’t want to kill us, necessarily. They just don’t care.”

Meredith Whittaker: He’s wrong.

Kate Crawford: It’s funny. I mean, sometimes we call this ...

He was very colorful. Oh, they want to kill us. You say they want to kill us.

Kate Crawford: It’s true.

Meredith Whittaker: I think he’s wrong across the board. I think the premise is faulty, but it is a great distraction from the very real harms of faulty, broken, imperfect, profitable systems that are being mundanely and obscurely threaded through our social and economic systems.

Kate Crawford: We’ve called this the apex predator problem, which if you’re already an apex predator and you have all the money and all the power in the world, what’s the next thing to worry about? “Oh, I know! Super-intelligent machines, that’s the next threat to me.” But if you’re not an apex predator, if you’re one of us, we’ve got real problems with the systems that are already deployed, so maybe let’s focus on that.

Right. Including apex predators. Okay. All right, questions from the audience. Questions. Right here.

Audience member: I was interested in your comment about GDPR, in contrast to where the US is in terms of privacy or tech regulation generally, and how it’s important who gets to make the decisions. Is it our democratically elected government and representatives here, or someone somewhere else with different kinds of public policy considerations? One of the challenges that I think US regulators often have to grapple with is that, vis-à-vis a lot of these industries, we’ve often prided ourselves on the idea of permissionless innovation — the speculative harm of what the heavy hand of government can come in and do to these new emerging fields ...

Innovation. Ruining innovation.

Audience member: Yes. Yeah. So, how do you balance that? How do you think about the tension between the idea that, in some ways, we want this technology to be here in the United States because we think we have the best values and we’ll get to the right answers to these questions, versus, we have to put some rules of the road in place prophylactically because an ex post facto enforcement machine isn’t going to protect people?

Right. This idea of innovation. I mean, this is why we have Section 230.

Audience member: Yeah.

Which now, we’ve seen, has been in place a little too long — the immunity for most platforms for anything they do. It’s a get-out-of-jail-free card, essentially, for the internet industry. So, what do you think about that, the idea? Because we do want, this idea, this is an issue ... When I did a podcast with Mark Zuckerberg, he’s like, “Well Kara, you know, we got China coming on strong and then you’ve got us and you know …” I called it the “Xi or Me” argument. And I was like, “I don’t want either of you! Where’s the third choice?”

Meredith Whittaker: Is there a third option?

Is there a third option?

Meredith Whittaker: A false dichotomy.

Right. Although, I mean, he won on that one, but it was kind of interesting that I hear it from Silicon Valley all the time.

Kate Crawford: But this is another false dichotomy too, and I love this question because so often we hear, it’s like, “Innovation or rules of the road.” Like, we either have some type of guardrails or we have thriving AI. It’s like, actually guys, no. You will have thriving AI when we have guardrails, when we have safety, when we have protections.

People will be much more likely to want to trust these tools when we know that they’re not gonna discriminate against us or harm us or cause other forms of ongoing structural problems. So, I think there’s this tendency to see innovation as God and everything else as restraining its power, and it’s like, no. We actually will only have AI that is worthy of the name when it is really designed in harmony with the ways in which we wanna live.

Meredith Whittaker: Innovate on ethics. Innovate on accountability. Innovate on clear guardrails. Otherwise, I think it is really interesting how innovation has become basically tethered to rising share prices for a couple of Silicon Valley companies, right? Is that what we mean by innovation? Or can we think about innovation and begin to redefine the term in ways that actually match our values more broadly?

Also it may not, in fact, be innovation, because I think we’re at our lowest startup creation cycle in 30 years right now.

Meredith Whittaker: Yeah. Using the Keras framework on some spun-up AWS instance is not necessarily that innovative.

Yeah, I think our innovation issues have to do with other things besides this, because it has to do with government research money, it has to do with ... But we are, I think, at a low of startup creation right now, and it has to do with large giant companies dominating. Like, Google buys up every worthy AI company — if not Google, then Facebook or Amazon does — and so the whole culture doesn’t ... Someone’s not gonna displace them, essentially.

Meredith Whittaker: I mean, you can ask the question, if we begin to scratch the surface of the political economy of the AI industry, you see a number of AI startups. They’re all over the place. But ask any one of them where do they host their infrastructure, right? Who runs their servers? It is Amazon, it is Microsoft, or it is Google. Right?

You can scratch below the surface a little more and say, “Actually, what kind of AI are you building?” Oftentimes, these companies are, in fact, just sort of repackaging models-as-a-service that are sold by the big tech players. So again, who is this actually accruing to? Who is actually creating AI? Who has the capabilities to create AI? That’s a question I think we need to answer as we answer the questions around, you know, what would responsible AI look like? And what should these guardrails look like?

Okay. Another question.

Meredith Whittaker: Over here. Kara, over here.

Okay, and then over here.

Audience member: Hi everyone. Amanda Farnan with Politico. Thank you all for being here, hosting this important conversation today. I would love to bring it back to the Elon Musk comments and how this negative outlook on AI is quite easy for people who do not understand the opportunity to go down that rabbit hole of, “Oh, the negative ideas, the negative future. I don’t wanna talk about it.” How do you suggest opening up the conversation to people who either don’t want to understand or don’t have the opportunity or the conversation in their daily lives? How can we open that conversation to them? And what would you recommend?

You mean get it beyond the Arnold Schwarzenegger, “I’m here to kill you,” kind of thing from the future. Okay.

Audience member: Hopefully we are all able to get beyond that conversation as well.

Right. Right. Well, Elon also thinks that we live in a simulation, so it doesn’t really matter. Which would explain a lot about the Trump era, but go ahead. It’s a bunch of futuristic people playing a game. We’re all just a game.

Meredith Whittaker: Could’ve been more fun. I will answer this. I think part of the way you begin to get more people in the room is to focus not on the technical wizardry on the shiny cover of some Wired article, but ...

Or that it’s gonna kill us all.

Meredith Whittaker: Or that it’s gonna kill us. These sort of speculative and hype-filled proclamations that focus on tech wizardry, right? So the super intelligence or the next deep neural net which is better than humans — which is a claim that Kate has examined and we’re looking at — but it is actually affecting and shaping all of our lives in different ways, right? The harms are not evenly distributed, but this is in our lives, right?

There are license-plate profiling AIs that are sort of tracking people as they go over different bridges in New York. You have systems that are determining which school your child gets enrolled in. You have automated essay scoring systems that are determining whether it’s written well enough. Whose version of written English is that? And what is it rewarding or not? What kind of creativity can get through that?

You have systems that are being used ... We have an example that is fairly chilling in Arkansas of an algorithmic system that was brought in to distribute Medicaid benefits, and this system was allocating the number of hours of home care treatment that very ill patients got. So these are ... There was one patient, Tammy Dobbs, who had cerebral palsy and needed a lot of help, right? She needed help getting into bed and just help having a dignified life at home. A case worker shows up with this new system, enters her info, and it dropped her hours from something like 12 hours a day to eight. I don’t have the exact numbers, but it was enough that it seriously affected her quality of life. This was the difference between having a dignified life, living at home, getting the care she needed to survive, and not getting that.

Now, thankfully, there was a lawyer who took that to court, contested the algorithm, found that actually there was a major implementation flaw. There were all sorts of other problems, but neither the case worker on the scene nor Tammy had the ability to contest that decision or override it. So I would answer these are profound and material harms that are happening. These are profound and material implications. Whether you get hired or not depends on whether you have an eye twitch in a hiring interview video, right?

These are things that actually affect us, right now, and I think we all have a stake in talking about them. Our experience matters just as much as a technical design doc or a Wired article about the super intelligence, and I think part of the job to steer us toward a better future with these technologies is to begin to re-center the conversation around what the lived experience of having these technologies shape and direct our resources and opportunities is.

The stories. The stories of the effect. Okay. Stories, you’re talking about Tammy or the others, yeah.

Meredith Whittaker: Yeah and-

Up here.

Audience member: Hi. Just a quick question. I’m curious, in looking back in history, I’m sitting here spinning and saying, “Is there something that throughout different technology evolutions and disruptions, back through whenever, that’s even analogous?” Like, as you guys look at the way this will sort of percolate through society, do you look back at any other kind of technology transitions in history for lessons learned and things like that? And what are those?

Kate Crawford: Oh, I love this question. Yeah, we do. In fact, one of the things we do is really focus on deep historical research, because I think we can learn a lot, exactly, from moments in history where these big general-purpose technologies were flooding into society and decisions had to be made about how to use them.

One of the examples that I think people really go to a lot is nuclear, right? So we had this extraordinary potential for generating energy, but also terrifying horrors if it’s actually used as a weapon. So, what we saw was a really profound international conversation about how are we gonna regulate this technology, this capacity? And we had the creation of things like the IAEA, the international inspections body, that could say, “Hey, we should be able to inspect how you’re creating this, what you’re working with, do you have weapons facilities, do you have energy facilities?”

This was a big international effort and that’s a very difficult thing to do right now if you look at what the international governance conversation looks like, it’s much bleaker. And speaking from the US, it’s real bleak right here, right now. So how do we think about, like, what does an international governance conversation look like? I noticed that Mark Zuckerberg mentioned this in his most recent op-ed telling us how we should regulate the internet. He was like, “You know, we need global governance.” I’m like, “Well, that’s great. But how are we gonna get there right now?” Because historically, I think we’re in a very different moment.

So, I think you’re right. We can look to these kinds of key moments of technologies that really changed the way we lived, but we also have to look at: What were the governance structures? And how do we get there? That’s one of the big questions hovering over AI right now — you can come up with local regulation, but you’re really talking about technologies that are planetary in scale.

But the only thing that’s very different is you have never been tracked so beautifully ever in human history. Like, everyone here has a phone. It’s not just biased data — there’s so much data pouring out of this room right now, for example. It’s insane what’s happening. And so that’s the difficulty. It’s the amount of data that’s being collected about you right now.

Kate Crawford: And the granularity of it.

The granularity of it. Everything.

Kate Crawford: And the intimacy of that. That’s the part that I think socially, we’re still catching up to what that’s gonna mean. What that means for the private sector, what it means for the public sector, what it means for government. These are really big, hard, difficult questions.

Because everyone’s opted into this.

Kate Crawford: Well, in some cases, without knowing.

Right. Right.

Kate Crawford: Was that informed consent, Kara?

I do know that my phone is one of the best relationships I’ve ever had in my life, but go ahead.

Kate Crawford: It definitely sticks with us.

All right. One more question.

Audience member: Hi. Thanks for having me here. My name’s Kendall Spencer, I’m a second-year law student at Georgetown. I think a lot of what we’ve talked about today has sort of come back to this issue of being able to ask the right questions. For so many years in America, we’ve always focused on this idea of innovation, innovation, without really deciding what it is we want these technologies to be able to do, which has been one of the core issues. So as we start to get to this point of trying to find a correct and efficient regulatory regime for this type of stuff, I’m wondering if ... What’s really the right trajectory here? Is it better to, instead of trying to regulate AI as a whole, to maybe think about, well maybe we can regulate the type of information that AI is processing. Maybe that’ll be a more effective way to do it, in terms of if we’re gonna fit this under any umbrella, whatsoever.

Kate Crawford: And that’s part of what’s happened in Europe, is exactly that conversation with GDPR. There are actually a lot of initiatives that are trying to do just that, is to say, “What kind of data? In what context? Held for how long?” These are really good questions that we need to pursue a lot further, but what’s interesting is that that’s happening at a much slower rate than these technologies are being released into the world and essentially being live tested on populations all the time.

So what we have is this kind of race now between how do you actually have those conversations with sufficient knowledge about how these tools really work, when a lot of these things are protected by trade secrecy, a lot of these tools, you’re not gonna know how it’s working and what data is being collected. So there’s a real knowledge problem here as well as that sort of speed problem of how do we catch up to what these technologies are doing?

So I would say to you, yeah, we definitely need to do that, to have those conversations, but we’ve got some real barriers in place and they’re the barriers that Meredith and I are most interested in addressing. Like, how do we create transparency? How do we create accountability? How do we have the public conversations about how we want these tools to work or not?

Meredith. Last word?

Meredith Whittaker: Yeah. I would agree with that. I think — real talk for a second — you began that question beautifully by asking what do we want this innovation to do? And right now, this innovation is produced by a handful of private companies. They are the only companies that have the resources to build this kind of AI. There isn’t a way to kind of bootstrap this from a startup in a garage. That’s just not how this technology works, and whatever else their calibration is, they’re looking for shareholder value, right? So I think there is a bigger question.

This isn’t a government initiative, like the internet was.

Meredith Whittaker: No. This is not a government initiative, like the internet was. There are very different incentive structures baked into the DNA of how those who are in the position to produce this technology are thinking about what it does and who it benefits. So, this is again, not to say this is sort of “bad” people. This is to look at, are these the incentives we want to govern a technology that is this pervasive and this powerful? And if not, what are the changes we would need to put in place? And I’m gonna leave it with that question.

All right. Fantastic. Kate and Meredith.
