
“To make AI systems broadly useful for challenging real-world tasks, we need them to learn complex human goals and preferences. One approach to specifying complex goals asks humans to judge during training which agent behaviors are safe and useful, but this approach can fail if the task is too complicated for a human to directly judge. To help address this concern, we propose training agents via self play on a zero sum debate game. Given a question or proposed action, two agents take turns making short statements up to a limit, then a human judges which of the agents gave the most true, useful information… In practice, whether debate works involves empirical questions about humans and the tasks we want AIs to perform, plus theoretical questions about the meaning of AI alignment.” AI safety via debate

Debate is something that we are all familiar with. Usually it involves two or more people giving arguments and counterarguments over some question in order to prove a conclusion. At OpenAI, debate is being explored as an AI alignment methodology for reward learning (learning what humans want) and is part of their scalability efforts (how to train/evolve systems to safely solve questions of increasing complexity). Debate might sometimes seem like a fruitless process, but when optimized and framed as a two-player zero-sum perfect-information game, we can see properties of debate, and synergies with machine learning, that may make it a powerful truth-seeking process on the path to beneficial AGI.

On today’s episode, we are joined by Geoffrey Irving. Geoffrey is a member of the AI safety team at OpenAI. He has a PhD in computer science from Stanford University, and has worked at Google Brain on neural network theorem proving, cofounded Eddy Systems to autocorrect code as you type, and has worked on computational physics and geometry at Otherlab, D. E. Shaw Research, Pixar, and Weta Digital. He has screen credits on Tintin, Wall-E, Up, and Ratatouille.

We hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

Topics discussed in this episode include:

- How debate fits in with the general research directions of OpenAI
- What amplification is and how it fits in with debate
- The relation of all this to AI alignment

You can find out more about Geoffrey Irving at his website. Here you can find the debate game mentioned in the podcast. Here you can find Geoffrey Irving, Paul Christiano, and Dario Amodei’s paper on debate. Here you can find an OpenAI blog post on AI safety via debate. You can listen to the podcast above or read the transcript below.

Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast. I’m Lucas Perry, and today we’ll be speaking with Geoffrey Irving about AI safety via Debate. We discuss how debate fits in with the general research directions of OpenAI, what amplification is and how it fits in, and the relation of all this with AI alignment. As always, if you find this podcast interesting or useful, please give it a like and share it with someone who might find it valuable.

Geoffrey Irving is a member of the AI safety team at OpenAI. He has a PhD in computer science from Stanford University, and has worked at Google Brain on neural network theorem proving, cofounded Eddy Systems to autocorrect code as you type, and has worked on computational physics and geometry at Otherlab, D. E. Shaw Research, Pixar, and Weta Digital. He has screen credits on Tintin, Wall-E, Up, and Ratatouille. Without further ado, I give you Geoffrey Irving.

Lucas: Thanks again, Geoffrey, for coming on the podcast. It’s really a pleasure to have you here.

Geoffrey: Thank you very much, Lucas.

Lucas: We’re here today to discuss your work on debate. I think that just to start off, it’d be interesting if you could provide for us a bit of framing for debate, and how debate exists at OpenAI, in the context of OpenAI’s general current research agenda and directions that OpenAI is moving right now.

Geoffrey: I think broadly, we’re trying to accomplish AI safety by reward learning, so learning a model of what humans want and then trying to optimize agents that achieve that model, so do well according to that model. There’s sort of three parts to learning what humans want. One part is just a bunch of machine learning mechanics of how to learn from small sample sizes, how to ask basic questions, how to deal with data quality. There’s a lot more work, then, on the human side, so how do humans respond to the questions we want to ask, and how do we sort of best ask the questions?

Then, there’s sort of a third category: how do you make these systems work even if the agents are very strong, so stronger than human in some or all areas? That’s the scalability aspect. Debate is one of our techniques for doing scalability; amplification was the first one, and debate is a variant of it. Generally, we want to be able to supervise a learning agent even if it is smarter than a human or stronger than a human on some task or on many tasks.

In debate, you train two agents to play a game. The game is that these two agents see a question on some subject and they each give an answer. Each debater has their own answer, and then they have a debate about which answer is better, meaning more true and more useful. Then a human sees that debate transcript and judges who wins based on who they think told the most useful, true thing. The result of the game is, one, who won the debate, and two, the answer of the debater who won.

You can also have variants where the judge interacts during the debate; we can get into those details. The general point is that, in many tasks, it is much easier to recognize good answers than it is to come up with the answers yourself. This applies at several levels.
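The game Geoffrey describes can be sketched in code. This is a minimal sketch under stated assumptions: `debater_a`, `debater_b`, and `judge` are hypothetical stand-ins for trained models, and only the game structure itself comes from the episode.

```python
# Minimal sketch of the debate game: two agents commit to answers,
# alternate short statements, and a judge picks the winner.

def run_debate(question, debater_a, debater_b, judge, num_rounds=6):
    # Each debater commits to its own answer up front.
    answer_a = debater_a.answer(question)
    answer_b = debater_b.answer(question)
    transcript = [("A", answer_a), ("B", answer_b)]
    # The debaters then take turns making short statements.
    for i in range(num_rounds):
        name = "A" if i % 2 == 0 else "B"
        debater = debater_a if name == "A" else debater_b
        transcript.append((name, debater.argue(question, transcript)))
    # The judge sees the whole transcript and decides who won; the
    # result of the game is the winner plus that debater's answer.
    winner = judge.pick_winner(question, transcript)
    return winner, answer_a if winner == "A" else answer_b
```

In the interactive variants Geoffrey mentions, the judge would also contribute entries to the transcript between rounds; this sketch keeps the judge passive until the end.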

For example, at the first level, you might have a task where a human can’t do the task, but they can know immediately if they see a good answer to the task. Like, I’m bad at gymnastics, but if I see someone do a flip very gracefully, then I can know, at least to some level of confidence, that they’ve done a good job. There are other tasks where you can’t directly recognize the answer, so you might see an answer, it looks plausible, say, “Oh, that looks like a great answer,” but there’s some hidden flaw. If an agent were to point out that flaw to you, you’d then think, “Oh, that’s actually a bad answer.” Maybe it was misleading, maybe it was just wrong. You need two agents doing a back and forth to be able to get at the truth.

Then, if you apply this recursively through several levels, you might have tasks where you can’t recognize whether an answer is good directly. You can’t even recognize whether a counterargument is good. Maybe a counter-counterargument, then you could recognize it. If you do sort of three steps of Debate, back and forth with two agents, you get to the truth.

Depending on the task, you need some number of these steps to pin down what the true answer is. Broadly, what’s happening is that the debaters, when they’re giving these counterarguments, are skipping over a lot of the space where maybe the human doesn’t understand what would happen, but they know that it would go a certain way. Like, if you see two expert players playing a game of Go, they’re constantly ignoring a large number of moves which are just very bad and would waste a lot of time to analyze, and they’re looking at only the most important part of the argument about what the best move in the game is.

So far, all of the experiments are on simple image tasks, like MNIST. MNIST is a dataset of handwritten digits. There are also some preliminary human experiments with just human debaters. On MNIST, an example is you have a sparse classifier that learns to recognize MNIST digits from just seeing a few pixels, so you might train it by picking random masks of just a few pixels of an MNIST image. This classifier is very bad, because it only sees a small part of the image; it might get accuracies below 50%. I forget exactly the numbers; I could look them up.

Then, you train debaters that choose which parts of the image to show to the judge. You fix the judge at this kind of weak training process, and then the debaters play a game where they try to pick pixels that convince the judge of what they want to argue for. There are a couple of ways to set this up. In the first paper, we had one debater that was incentivized to tell the truth, so it knew the correct class (say we’re looking at a seven), and the other debater had to pre-specify which digit it was going to lie for. It might say, “I’m going to lie and say it’s an eight.”

The game would be each of these agents pick a single pixel, and you sort of iterate for, let’s say six rounds, picking one pixel after another, and then the judge sees those pixels, and chooses who wins the debate. The judge has seen kind of a small part of the evidence. The result of that is that you can boost a weak classifier up to sort of higher performance, even if the agents are incentivized only to win the game. In this case, if you pick one debater to be the honest player and one debater to be the liar, the honest player wins a significant majority of the time, much more than the classifier would win on random play.
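A toy version of this pixel-revealing game can be written out. All names here are illustrative assumptions: `sparse_judge` stands in for the weak classifier that scores a claimed digit class from only the revealed pixels, and the two policies stand in for the trained debaters.

```python
# Toy sketch of the MNIST pixel-revealing debate described above.

def pixel_debate(image, honest_class, liar_class, sparse_judge,
                 honest_policy, liar_policy, rounds=3):
    revealed = []  # (pixel_index, pixel_value) pairs shown to the judge
    for _ in range(rounds):
        for policy in (honest_policy, liar_policy):
            idx = policy(image, revealed)
            # Debaters choose WHICH pixels to show but cannot fake values.
            revealed.append((idx, image[idx]))
    # The judge scores each claimed class from the revealed evidence only.
    honest_score = sparse_judge(revealed, honest_class)
    liar_score = sparse_judge(revealed, liar_class)
    return "honest" if honest_score >= liar_score else "liar"
```

The key constraint is in the middle comment: debaters control the order and choice of evidence, not its content, which is one way of stating why lying is the harder side of the game.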

Lucas: The judge here, sorry, was not a human?

Geoffrey: The judge here is not a human. It’s just a classifier trained to do a bad job at MNIST, because it sees only a little bit of information. It’s trained to convergence, but its input is just a few pixels of an image.

Lucas: Then, so the pixels that are then revealed by the debaters, each pixel is sort of the argument.

Geoffrey: Each pixel is the argument.

Lucas: Just to provide a little bit more framing, there’s this feature of debate, you can apply it to a very large domain of things that you’d be surprised about if you expand the notion of what it means to debate to showing pixels or something like this.

Geoffrey: It’s actually more important to debate in natural language. The end goal here is we want to extract a strengthened, kind of improved version of human performance at a task. The way we go about this, either in amplification or in debate, is we sort of factor through reasoning. Instead of trying to train directly on the task, like the answers to the task, you might have some questions and some answers, and you could train directly on question/answer pairs, we’re going to build a task which includes all possible human reasoning in the form of, say, in this case, debates, and then we’ll train the agents to do well in this space of reasoning, and then well pick out the answers at the very end. Once we’re satisfied that the reasoning all works out.

Because the way we humans talk about higher-level concepts, especially abstract concepts and, say, subtle moral concepts, is natural language, the most important domain here, in the human case, is natural language. What we’ve done so far, in all experiments for debate, is in image space, because it’s easier. We’re trying now to move that work into natural language so that we can get more interesting settings.

Lucas: Right. In terms of natural language, do you just want to unpack a little bit about how that would be done at this point in natural language? It seems like our natural language technology is not at a point where I really see robust natural language debates.

Geoffrey: There’s sort of two ways to go. One way is human debates. You just replace the ML agents with human debaters and then a human judge, and you see whether the system works in kind of an all-human context. The other way is machine learning natural language is getting good enough to do interestingly well on sample question/answer datasets, and Debate is already interesting if you do a very small number of steps. In the general debate, you sort of imagine that you have this long transcript, dozens of statements long, with points and counterpoints and counterpoints, but if you already do just two steps, you might do question, answer, and then single counterargument. For some tasks, at least in theory, it already should be stronger than the baseline of just doing direct question/answer, because you have this ability to focus in on a counterargument that is important.

An example might be you see a question and an answer and then another debater just says, “Which part of the answer is problematic?” They might point to a word or to a small phrase, and say, “This is the point you should sort of focus in on.” If you learn how to self critique, then you can boost the performance by iterating once you know how to self critique.

The hope is that even if we can’t do general debates on the machine learning side just yet, we can do shallow debates, or some sort of simple first step in this direction, and then work up over time.

Lucas: This just seems to be a very fundamental part of AI alignment where you’re just breaking things down into very simple problems and then trying to succeed in those simple cases.

Geoffrey: That’s right.

Lucas: Could you provide a little bit more illustration of debate as a general concept, and what it means in the context of AI alignment? There are open questions here, obviously, about the efficacy of debate, and about how debate exists as a tool within the space of epistemological processes that allow us to arrive at truth and, I guess, infer other people’s preferences. So, again, in terms of reward learning and AI alignment, could you contextualize debate’s place in AI alignment more broadly?

Geoffrey: It’s focusing, again, on the scalability aspect. One way to formulate that is we have this sort of notion of, either from a philosophy side, reflective equilibrium, or kind of from the AI alignment literature, coherent extrapolated volition, which is sort of what a human would do if we had thought very carefully for a very long time about a question, and sort of considered all the possible nuances, and counterarguments, and so on, and kind of reached the conclusion that is sort of free of inconsistencies.

Then, we’d like to take this kind of vague notion of, what happens when a human thinks for a very long time, and compress it into something we can use as an algorithm in a machine learning context. It’s also a definition. This vague notion of, let a human think for a very long time, that’s sort of a definition, but it’s kind of a strange one. A single human can’t think for a super long time. We don’t have access to that at all. You sort of need a definition that is more factored, where either a bunch of humans think for a long time, we sort of break up tasks, or you sort of consider only parts of the argument space at a time, or something.

You go from there to things that are both definitions of what it means to simulate thinking for long time and also algorithms. The first one of these is Amplification from Paul Christiano, and there you have some questions, and you can’t answer them directly, but you know how to break up a question into subquestions that are hopefully somewhat simpler, and then you sort of recursively answer those subquestions, possibly breaking them down further. You get this big tree of all possible questions that descend from your outer question. You just sort of imagine that you’re simulating over that whole tree, and you come up with an answer, and then that’s the final answer for your question.
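The recursive structure Geoffrey describes for amplification can be sketched as follows. In the real scheme, `decompose` and `answer` would be a human (or a model trained to imitate one); here they are hypothetical stand-ins used only to show the shape of the tree.

```python
# Sketch of amplification's recursive question decomposition.

def amplify(question, decompose, answer, is_simple, depth=0, max_depth=10):
    # Simple questions are answered directly.
    if is_simple(question) or depth >= max_depth:
        return answer(question, [])
    # Otherwise, break the question into subquestions, answer each
    # recursively, and combine the sub-answers into a final answer.
    sub_answers = [amplify(sub, decompose, answer, is_simple, depth + 1, max_depth)
                   for sub in decompose(question)]
    return answer(question, sub_answers)
```

For instance, encoding a question as a list of numbers to add, with `decompose` splitting the list in half and `answer` summing the sub-answers, reproduces the tree of subquestions descending from the outer question.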

Similarly, Debate is a variant of that, in the sense that you have this kind of tree of all possible arguments, and you’re going to try to simulate somehow what would happen if you considered all possible arguments, and picked out the most important ones, and summarized that into an answer for your question.

The broad goal here is to give a practical definition of what it means to take human input and push it to its conclusion, and then hopefully we have a definition that also works as an algorithm, where we can do practical ML training to train machine learning models.

Lucas: Right, so there’s, I guess, two thoughts that I sort of have here. The first one is that there is just sort of this fundamental question of what is AI alignment? It seems like in your writing, and in the writing of others at OpenAI, it’s to get AI to do what we want them to do. What we want them to do is … either it’s what we want them to do right now, or what we would want to do under reflective equilibrium, or at least we want to sort of get to reflective equilibrium. As you said, it seems like a way of doing that is compressing human thinking, or doing it much faster somehow.

Geoffrey: One way to say it is we want AI to do what humans would want if we understood all of the consequences. It’s some kind of “do what humans want,” plus a side condition of “imagine we knew everything we needed to know to evaluate the question.”

Lucas: How does Debate scale to that level of compressing-

Geoffrey: One thing we should say is that everything here is sort of a limiting state or a goal, but not something we’re going to reach. It’s more important that we have closure under the relevant things we might not have thought about. Here are some practical examples from nearer-term misalignment. There’s an experiment in social science where they sent out a bunch of resumes in response to classified job ads, and the resumes were paired off into pairs that were identical except that the name of the person was either white-sounding or black-sounding. The result was that you got significantly higher callback rates if the person sounded white, even with an entirely identical resume to the person sounding black.

Here’s a situation where direct human judgment is bad in a way that we can clearly measure. You could imagine trying to push that into the task by having an agent say, “Okay, here is a resume. We’d like you to judge it,” either pointing explicitly to what they should judge, or pointing out, “You might be biased here. Try to ignore the name on the resume, and focus on this issue, say their education or their experience.” You sort of hope that if you have a mechanism for surfacing concerns or surfacing counterarguments, you can get to a stronger version of human decision making. There’s no need to wait for some long-term, very strong agent case for this to be relevant, because we’re already pretty bad at making decisions in simple ways.

Then, broadly, I sort of have this sense that there’s not going to be magic in decision making. If I go to some very smart person, and they have a better idea for how to make a decision, or how to answer a question, I expect there to be some way they could explain their reasoning to me. I don’t expect I just have to take them on faith. We want to build methods that surface the reasons they might have to come to a conclusion.

Now, it may be very difficult for them to explain the process by which they came to those arguments. There’s some question about whether the arguments they’re going to make are the same as the reasons they’re giving the answers; maybe they’re sort of rationalizing, and so on. You’d hope that once you surface all the arguments around the question that could be relevant, you get a better answer than if you just ask people directly.

Lucas: As we move out of debate in simple cases of image classifiers or experiments in similar environments, what does debate look like … I don’t really understand the ways in which the algorithms can be trained to elucidate all of these counterconcerns, and all of these different arguments, in order to help human beings arrive at the truth.

Geoffrey: One case we’re considering, especially on kind of the human experiment side, or doing debates with humans, is some sort of domain expert debate. The two debaters are maybe an expert in some field, and they have a bunch of knowledge, which is not accessible to the judge, which is maybe a reasonably competent human, but doesn’t know the details of some domain. For example, we did a debate where there were two people that knew computer science and quantum computing debating a question about quantum computing to a person who has some background, but nothing in that field.

The idea is you start out, there’s a question. Here, the question was, “Is the complexity class BQP equal to NP, or does it contain NP?” One point is that you don’t have to know what those terms mean for that to be a question you might want to answer, say in the course of some other goal. The first steps, things the debaters might say, is they might give short, intuitive definitions for these concepts and make their claims about what the answer is. You might say, “NP is the class of problems where we can verify solutions once we’ve found them, and BQP is the class of things that can run on a quantum computer.”

Now, you could have a debater that just straight up lies right away and says, “Well, actually NP is the class of things that can run on fast randomized computers.” That’s just wrong, and so what would happen then is that the counter debater would just immediately point to Wikipedia and say, “Well, that isn’t the definition of this class.” The judge can look that up, they can read the definition, and realize that one of the debaters has lied, and the debate is over.

You can’t immediately lie in kind of a simple way, or you’ll be caught out too fast and lose the game. You have to sort of tell the truth, except maybe you kind of slightly veer towards lying, if you want to lie in your argument. At every step, if you’re an honest debater, you can try to pin the liar down to making concrete statements. In this case, if someone claims that quantum computers can solve all of NP, you might say, “Well, you must point me to an algorithm that does that.” The debater that’s trying to lie and say that quantum computers can solve all of NP might say, “Well, I don’t know what the algorithm is, but maybe there’s an algorithm,” and then they’re probably going to lose.

Maybe they have to point to a specific algorithm. There is no algorithm, so they have to make one up. That will be a lie, but maybe it’s kind of a subtle complicated lie. Then, you could kind of dig into the details of that, and maybe you can reduce the fact that that algorithm is a lie to some kind of simple algebra, which either the human can check, maybe they can ask Mathematica or something. The idea is you take a complicated question that’s maybe very broad and covers a lot of the knowledge that the judge doesn’t know and you try to focus in closer and closer on details of arguments that the judge can check.

What the judge needs to be able to do is kind of follow along in the steps until they reach the end, and then there’s some ground fact that they can just look up or check and see who wins.

Lucas: I see. Yeah, that’s interesting. A brief passing thought is thinking about double cruxes and some of the tools and methods that CFAR employs, and how they might be interesting or useful in debate. I think I also want to provide some more clarification here. Beyond debate being a truth-seeking process, or a method by which we’re able to see which agent is being truthful and which agent is lying, there’s this claim in your paper that seems central, where you say, “In the debate game, it is harder to lie than to refute a lie.” This asymmetry between lying and refuting a lie should hopefully, in general, bias debates towards people more easily seeing who is telling the truth.

Geoffrey: Yep.

Lucas: In terms of AI alignment again, in the examples that you’ve provided, it seems to help human beings arrive at truth for complex questions that are above their current level of understanding. How does this, again, relate directly to reward learning or value learning?

Geoffrey: Let’s assume that in this debate game, it is the case that it’s very hard to lie, so the winning move is to tell the truth. What we want to do then is train kind of two systems. One system will be able to reproduce human judgment; that system would be able to look at the debate transcript and predict what the human would say is the correct winner of the debate. Once you get that system trained (so that’s learning not a direct reward, but again some notion of predicting how humans judge reasoning), then you can train an agent to play this game.

Then, we have a zero-sum game, and we can apply any technique used to play a zero-sum game, like Monte Carlo tree search in AlphaGo, or just straight-up RL algorithms, as in some of OpenAI’s work. The hope is that you can train an agent to play this game very well, and it will therefore be able to predict where counterarguments exist that would help it win debates. If it plays the game well, and the best way to play the game is to tell the truth, then you end up with a value-aligned system. Those are large assumptions, and you should be cautious about whether they are true.
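The two training steps outlined here, supervised learning of the judge followed by zero-sum self-play against the learned judge, can be sketched at a high level. Every component below is a hypothetical stand-in: a real judge would be a neural network trained on human verdicts, and a real debater would be trained with RL or Monte Carlo tree search.

```python
# High-level sketch of the two training loops described above.

def train_judge(judge, labeled_debates, update):
    # Supervised step: learn to predict which debater the human said won.
    for transcript, human_verdict in labeled_debates:
        update(judge, transcript, human_verdict)
    return judge

def self_play_episode(question, policy, predict_winner):
    # Zero-sum step: one policy plays both sides of a debate, and the
    # learned judge model supplies the reward (+1 winner, -1 loser).
    transcript = policy.play_full_debate(question)
    winner = predict_winner(transcript)  # "A" or "B"
    reward_a = 1.0 if winner == "A" else -1.0
    return transcript, {"A": reward_a, "B": -reward_a}
```

The rewards always sum to zero, which is what lets standard zero-sum self-play machinery be reused here.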

Lucas: There are also all these issues that we can get into about biases that humans have, and issues with debate, like whether or not you’re just going to be optimizing the agents for exploiting human biases and convincing humans. It definitely seems like, even just looking at how human beings value-align to each other, debate is one thing in a large toolbox of things, and in AI alignment, it seems like debate will potentially also be one thing in a large toolbox of things that we use. I’m not sure what your thoughts are about that.

Geoffrey: I could give them. I would say that there’s two ways of approaching AI safety and AI alignment. One way is to try to propose, say, methods that do a reasonably good job at solving a specific problem. For example, you might tackle reversibility, which means don’t take actions that can’t be undone, unless you need to. You could try to pick that problem out and solve it, and then imagine how we’re going to fit this together into a whole picture later.

The other way to do it is try to propose algorithms which have at least some potential to solve the whole problem. Usually, they won’t, and then you should use them as a frame to try to think about how different pieces might be necessary to add on.

For example, in debate, the biggest thing in there is that it might be the case that you train a debate agent that gets very good at this task, the task is rich enough that it just learns a whole bunch of things about the world, and about how to think about the world, and maybe it ends up having separate goals, or it’s certainly not clearly aligned because the goal is to win the game. Maybe winning the game is not exactly aligned.

You’d like to know not only what it’s saying, but why it’s saying things. You could imagine adding interpretability techniques to this. Say Alice and Bob are debating: Alice says something, and Bob says, “Well, Alice only said that because Alice is thinking some malicious fact.” If we had solid interpretability techniques, we could point into Alice’s thoughts at that fact, pull it out, and surface that. Then, you could imagine a strengthened version of debate where you could argue not only about object-level things, using language, but about the thoughts of the other agent, and talk about motivation.

It is a goal here, in formulating something like debate or amplification, to propose a complete algorithm that would solve the whole problem. Often we won’t get to that point, but we now have a frame where we can think about the whole picture in the context of this algorithm, and then fix it as required going forward.

I think, in the end, I do view debate, if it succeeds, as potentially the top level frame, which doesn’t mean it’s the most important thing. It’s not a question of importance. More of just what is the underlying ground task that we want to solve? If we’re training agents to either play video games or do question/answers, here the proposal is train agents to engage in these debates and then figure out what parts of AI safety and AI alignment that doesn’t solve and add those on in that frame.

Lucas: You’re trying to achieve human level judgment, ultimately, through a judge?

Geoffrey: The assumption in this debate game is that it’s easier to be a judge than a debater. If it is the case, though, that you need the judge to get to human level before you can train a debater, then you have a problematic bootstrapping issue: first you must solve value alignment for training the judge, and only then do you have value alignment for training the debater. This is one of the concerns I have, and I think the concern sort of applies to some of the other scalability techniques. I would say this is unresolved. The hope is that it’s not actually human-level difficult to be a judge on a lot of tasks; it’s easier to check the consistency of, say, one debate statement against the next than it is to carry out long reasoning processes. There’s a concern there which I think is pretty important, and I think we don’t quite know how it plays out.

Lucas: The view is that we can assume, or take, the human being to be the thing that is already value-aligned. And it’s important, I think, to highlight the second part of what you say: the debaters are pointing out considerations, and the winner is whichever debater says that which is most true and useful. The useful part, I think, shouldn’t be glossed over, because you’re not just optimizing debaters to arrive at true statements. The useful part smuggles in a lot of issues with normative things in ethics and metaethics.

Geoffrey: Let’s talk about the useful part.

Lucas: Sure.

Geoffrey: Say we just ask the question of debaters, “What should we do? What’s the next step that I, as an individual person, or my company, or the whole world should take in order to optimize total utility?” The notion of useful, then, is just what is the right action to take? Then, you would expect a debate that is good to have to get into the details of why actions are good, and so that debate would be about ethics, and metaethics, and strategy, and so on. It would pull in all of that content and sort of have to discuss it.

There’s a large sea of content you have to pull in. It’s roughly kind of all of human knowledge.

Lucas: Right, right, but isn’t there this gap between training agents to say what is good and useful and for agents to do what is good and useful, or true and useful?

Geoffrey: The way in which there’s a gap is this interpretability concern. You’re getting at a different gap, which I think is actually not there. I like giving game analogies, so let me give a Go analogy. You could imagine that there’s two goals in playing the game of Go. One goal is to find the best moves. This is a collaborative process where all of humanity, all of sort of Go humanity, say, collaborates to learn, and explore, and work together to find the best moves in Go, defined by, what are the moves that most win this game? That’s a non-zero sum game, where we’re sort of all working together. Two people competing on the other side of the Go board are working together to get at what the best moves are, but within a game, it’s a zero sum game.

You sit down, and you have two people playing a game of Go; one of them is going to win, zero-sum. The fact that that game is zero-sum doesn’t mean that we’re not learning some broad thing about the world, if you zoom out a bit and look at the whole process.

We’re training agents to win this debate game to give the best arguments, but the thing we want to zoom out and get is the best answers. The best answers that are consistent with all the reasoning that we can bring into this task. There’s huge questions to be answered about whether the system actually works. I think there’s an intuitive notion of, say, reflective equilibrium, or coherent extrapolated volition, and whether debate achieves that is a complicated question that’s empirical, and theoretical, and we have to deal with, but I don’t think there’s quite the gap you’re getting at, but I may not have quite voiced your thoughts correctly.

Lucas: It would be helpful if you could unpack how the alignment that is gained through this process is transferred to new contexts. What happens if I take an agent trained to win the Debate game outside of that context?

Geoffrey: You don’t. We don’t take it out of the context.

Lucas: Okay, so maybe that’s why I’m getting confused.

Geoffrey: Ah. I see. Okay, this [inaudible 00:26:09]. We train agents to play this debate game. To use them, we also have them play the debate game. At training time, we give them kind of a rich space of questions to think about, or concerns to answer, like a lot of discussion. Then, we want to go and answer a question in the world about what we should do, what the answer to some scientific question is, is this theorem true, or this conjecture true? We state that as a question, we have them debate, and then whoever wins gave the right answer.

There’s a couple of important things you can add to that. I’ll give like three levels of kind of more detail you can go. One thing is the agents are trained to look at state in the debate game, which could be I’ve just given the question, or there’s a question and there’s a partial transcript, and they’re trained to say the next thing, to make the next move in the game. The first thing you can do is you have a question that you want to answer, say, what should the world do, or what should I do as a person? You just say, “Well, what’s the first move you’d make?” The first move they’d make is to give an answer, and then you just stop there, and you’re done, and you just trust that answer is correct. That’s not the strongest thing you could do.

The next thing you can do is you’ve trained this model of a judge that knows how to predict human judgment. You could have them, from the start of this game, play a whole bunch of games, play 1,000 games of debate, and from that learn with more accuracy what the answer might be. Similar to how, if you’re playing a game of Go and you want to know the best move, you would say, “Well, let’s play 1,000 games of Go from this state. We’ll get more evidence and we’ll know what the best move is.”
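Geoffrey’s “play 1,000 games” point can be sketched in a few lines. This is only a toy illustration, not anything from OpenAI’s actual system: run_debate, estimate_answer, and noisy_judge are hypothetical stand-ins, with random numbers in place of real arguments and a trained model of human judgment.

```python
import random

def run_debate(answer_a, answer_b, judge, n_rounds=3, rng=None):
    """Play one toy debate: each side makes n_rounds of arguments (random
    numbers standing in for argument quality), then the judge picks a winner."""
    rng = rng or random
    transcript = []
    for _ in range(n_rounds):
        transcript.append(("A", rng.random()))
        transcript.append(("B", rng.random()))
    return judge(answer_a, answer_b, transcript)

def estimate_answer(answer_a, answer_b, judge, n_games=1000, seed=0):
    """Play many debates from the same question and return answer A's
    empirical win rate, mirroring "play 1,000 games of Go from this state"."""
    rng = random.Random(seed)
    wins_a = sum(run_debate(answer_a, answer_b, judge, rng=rng) == "A"
                 for _ in range(n_games))
    return wins_a / n_games

def noisy_judge(a, b, transcript):
    """Stand-in for a trained judge model, slightly biased towards A."""
    score_a = sum(q for side, q in transcript if side == "A") + 0.2
    score_b = sum(q for side, q in transcript if side == "B")
    return "A" if score_a > score_b else "B"

rate = estimate_answer("answer A", "answer B", noisy_judge)
```

Even with a judge that only slightly favors one answer, the Monte Carlo win rate over many simulated debates gives far more evidence than a single debate would.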

The most interesting thing you can do, though, is you yourself can act as a judge in this game to sort of learn more about what the relevant issues are. Say there’s a question that you care a lot about. Hopefully, “What should the world do,” is a question you care a lot about. You want to not only see what the answer is, but why. You could act as a judge in this game, and you could, say, play a few debates, or explore part of this debate tree, the tree of all possible debates, and you could do the judgment yourself. There, the end answer will still be who you believe is the right answer, but the task of getting to that answer is still playing this game.

The bottom line here is, at test time, we are also going to debate.

Lucas: Yeah, right. Human beings are going to be participating in this debate process, but does or does not debate translate into systems which are autonomously deciding what we ought to do, given that we assume that their models of human judgment on debate are at human level or above?

Geoffrey: Yeah, so if you turn off the human in the loop part, then you get an autonomous agent. If the question is, “What should the next action be in, say, an environment?” And you don’t have humans in the loop at test time, then you can get an autonomous agent. You just sort of repeatedly simulate debating the question of what to do next. Again, you can cut this process short. Because the agents are trained to predict moves in debate, you can stop them after they’ve predicted the first move, which is what the answer is, and then just take that answer directly.

If you wanted the maximally efficient autonomous agent, that’s the case you would do. At OpenAI, my view, our goal is I don’t want to take AGI and immediately deploy it in the most fast twitch tasks. Something like self-driving a car. If we get to human level intelligence, I’m not going to just replace all the self-driving cars with AGI and let them do their thing. We want to use this for the paths where we need very strong capabilities. Ideally, those tasks are slower and more deliberative, so we can afford to, say, take a minute to interact with the system, or take a minute to have the system engage in its own internal debates to get more confidence in these answers.

The model here is basically the Oracle AI model, rather than the autonomous agent operating in an MDP.

Lucas: I think that this is a very important part to unpack a bit more. This distinction here that it’s more like an oracle and less like an autonomous agent going around optimizing everything. What does a world look like right before, during, after AGI given debate?

Geoffrey: The way I think about this is that an oracle here is a question/answer system of some complexity. You ask it questions, possibly with a bunch of context attached, and it gives you answers. You can reduce pretty much anything to an oracle, if the oracle is general enough. If your goal is to take actions in an environment, you can ask the oracle, “What’s the best action to take in the next step?” and just iteratively ask that oracle over and over again as you take the steps.

Lucas: Or you could generate the debate, right? Over the future steps?

Geoffrey: The most direct way to do an MDP with Debate is to engage in a debate at every step: restart the debate process, show all the history that’s happened so far, and say the question at hand, that we’re debating, is what’s the best action to take next? I think I’m relatively optimistic that when we make AGI, for a while after we make it, we will be using it in ways that aren’t extremely fine-grained MDP-like, in the sense of we’re going to take a million actions in a row, and they’re all actions that hit the environment.

We’d mainly use this full direct reduction. There are more practical reductions for other questions. I’ll give an example. Say you want to write the best book on, say, metaethics, and you’d like debaters to produce this book. Let’s say that debaters are optimal agents, so they know how to do debates on any subject. Even if the book is 1,000 pages long, or say it’s a couple hundred pages long, that’s a more reasonable book, you could do it in a single debate as follows. Ask the agents to write the book. Each agent writes its own book, say, and you ask them to debate which book is better, and that debate only needs to point at small parts of the book.

One of the debaters writes a 300 page book and buried in the middle of it is a subtle argument, which is malicious and wrong. The other debater need only point directly at the small part of the book that’s problematic and say, “Well, this book is terrible because of the following malicious argument, and my book is clearly better.” The way this works is, if you are able to point to problematic parts of books in a debate, and therefore win, the best first move in the debate is to write the best book, so you can do it in one step, where you produce this large object with a single debate, or a single debate game.
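The book example can be made concrete with a small sketch. Everything here is hypothetical: passage “quality” scores stand in for a human judging an excerpt, and real books are not lists of labeled passages. The point is only that the judge never reads the whole books, just the worst passage each opponent surfaces.

```python
def worst_passage(book, quality):
    """Opponent's move: index of the weakest passage in the other book."""
    return min(range(len(book)), key=lambda i: quality(book[i]))

def judge_books(book_a, book_b, quality):
    """The judge reads only each book's weakest passage, as surfaced by the
    opposing debater, and prefers the book whose worst part is better."""
    worst_a = quality(book_a[worst_passage(book_a, quality)])
    worst_b = quality(book_b[worst_passage(book_b, quality)])
    return "A" if worst_a >= worst_b else "B"

# A 300-page book with one buried malicious argument loses to an honest one,
# so the best first move of the debate is to write the best (flawless) book.
quality = lambda passage: -1.0 if passage == "malicious" else 1.0
honest = ["fine"] * 300
sneaky = ["fine"] * 150 + ["malicious"] + ["fine"] * 149
```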

The reason I mention this is that it’s a little better in terms of practicality than writing the book word by word. If the book is like 100,000 words, you wouldn’t want to have a debate about each word, one after another. That’s sort of a silly, very expensive process.

Lucas: Right, so just to back up here, and to provide a little bit more framing, there’s this beginning at which we can see we’re just at a very low level trying to optimize agents for debate, and there’s going to be an asymmetry here that we predict: it should, in general, usually be easier to tell who’s telling the truth than who’s not, because it’s easier to tell the truth than to lie, and lie in convincing ways. Scaling from there, it seems that what we ultimately really want is to then be able to train a judge, right?

Geoffrey: The goal is to train … You need both.

Lucas: Right. You need both to scale up together.

Geoffrey: Yep.

Lucas: Through doing so, we will have oracles that will be able to assist in decision making, because what we’re doing is amplifying human explicit deliberative processes. Again, this tying into coherent extrapolated volition and reflective equilibrium. What we want is to be able to amplify human decision making, which I guess is, in a broad sense, value alignment.

Geoffrey: Right.

Lucas: Are there any other technical motivations behind debate that you would like to unpack for why within the set of all these different AI alignment methodologies, why debate is so important?

Geoffrey: Sure. I think maybe the right way to talk about that is to contrast amplification and debate. I got to Debate by thinking a bunch about Amplification, talking a bunch with Paul Christiano. To summarize it for listeners again, Amplification is take questions, decompose them into subquestions that are hopefully easier to answer, repeat that decomposition until we get to things we can check, and use that whole process to answer complicated questions.

In stock amplification, the human is doing the task of decomposing the questions. The human decomposition has to be complete. For example, you might have a question, going back to this resume example, where we’re having this debate about whether someone whose name sounds white or black is a good candidate. For whatever reason, no one here wants to be racist, but they’re all just completely oblivious to the idea that they could be racist. They’re going through and discussing whether the candidate is suitable, they’re kind of breaking these questions down, and just subtly, at every point, racism is kind of inching its way into the answers, and they just never thought to investigate.

If at some point they had asked, “Well, am I deciding this because the person has a white sounding name?” they would have realized their mistake, but they never think of that. In debate, you are relying on the agents to be able to do the decomposition, so the agents can, even if it’s hard for a human to do it, point out, “Maybe you’re being racist here,” and sort of get the correction. The advantage of debate there is you get some help doing this decomposition of tasks for you.

The other thing that happened, frankly, which is one of the reasons I thought of debate, was AlphaGo. In thinking about amplification, I’d been sort of concerned: “Is this process going to be scalable? Are we going to lose a bunch of efficiency in doing this complicated decomposition process?” I was sort of concerned that we would lose a bunch of efficiency and therefore not be competitive with unsafe techniques for getting to AGI.

Then, AlphaGo came out, and AlphaGo got very strong performance, and it did it by doing an explicit tree search. As part of AlphaGo, it’s doing this kind of deliberative process, and that was not only important for performance at test time, but was very important for getting the training to work. What happens is, in AlphaGo, at training time, it’s doing a bunch of tree search through the game of Go in order to improve the training signal, and then it’s training on that improved signal. That was one thing kind of sitting in the back of my mind.

I was kind of thinking through, then, the following way of thinking about alignment. At the beginning, we’re just training on direct answers. We have these questions we want to answer, an agent answers the questions, and we judge whether the answers are good. You sort of need some extra piece there, because maybe it’s hard to understand the answers. Then, you imagine training an explanation module that tries to explain the answers in a way that humans can understand. Then, those explanations might be kind of hard to understand, too, so maybe you need an explanation explanation module.

For a long time, it felt like that was just sort of ridiculous epicycles, adding more and more complexity. There was no clear end to that process, and it felt like it was going to be very inefficient. When AlphaGo came out, that kind of snapped into focus, and it was like, “Oh. If I train the explanation module to find flaws, and I train the explanation explanation module to find flaws in flaws, then that becomes a zero-sum game. If it turns out that ML is very good at solving zero-sum games, and zero-sum games are a powerful route to driving performance, then we should take advantage of this in safety.” Poof: this answer, explanation, explanation-of-explanation route gives you the zero-sum game of Debate.

That’s roughly sort of how I got there. It was a combination of thinking about Amplification and this kick from AlphaGo, that zero-sum games and search are powerful.

Lucas: In terms of the relationship between debate and amplification, can you provide a bit more clarification on the differences, fundamentally, between the process of debate and amplification? In terms of amplification, there’s a decomposition process, breaking problems down into subproblems, eventually trying to get the broken down problems into human level problems. The problem has essentially multiplied itself many times over at this point, right? It seems like there’s going to be a lot of questions for human beings to answer. I don’t know how interrelated debate is to this decompositional argumentative process.

Geoffrey: They’re very similar. Both Amplification and Debate operate on some large tree. In amplification, it’s the tree of all decomposed questions. Let’s be concrete and say the top level question in amplification is, “What should we do?” In debate, again, the question at the top level is, “What should we do?” In amplification, we take this question. It’s a very broad open-ended question, and we kind of break it down more and more and more. You sort of imagine this expanded tree coming out from that question. Humans are constructing this tree, but of course, the tree is exponentially large, so we can only ever talk about a small part of it. Our hope is that the agents learn to generalize across the tree, so they’re learning the whole structure of the tree, even given finite data.

In the debate case, similarly, you have the top level question of, “What should we do,” or some other question, and you have the tree of all possible debates. Imagine every move in this game is, say, saying a sentence, and at every point, you have maybe an exponentially large number of sentences, so the branching factor, now in the tree, is very large. The goal in debate is kind of to see this whole tree.

Now, here is the correspondence. In amplification, the human does the decomposition, but I could instead have another agent do the decomposition. I could say I have a question, and instead of a human saying, “Well, this question breaks down into subquestions X, Y, and Z,” I could have a debater saying, “The subquestion that is most likely to falsify this answer is Y.” It could’ve picked at any other question, but it picked Y. You could imagine that if you replace a human doing the decomposition with another agent in debate pointing at the flaws in the arguments, debate would kind of pick out a path through this tree. A single debate transcript, in some sense, corresponds to a single path through the tree of amplification.

Lucas: Does the single path through the tree of amplification elucidate the truth?

Geoffrey: Yes. The reason it does is it’s not an arbitrarily chosen path. We’re sort of choosing the path that is the most problematic for the arguments.

Lucas: In this exponential tree search, there’s heuristics and things which are being applied in general to the tree search in order to collapse onto this one branch or series?

Geoffrey: Let’s say, in amplification, we have a question. Our decomposition is, “Well, this decomposes into X, Y, and Z,” and then we recursively call the agent, and it says, “The answers are AX, AY, AZ, for these questions.” Now, if I trusted those subanswers, I could do the reconstruction of the answer to the original question. If I don’t trust the subanswers, I might say, “Well, which subanswer is most likely to be false?” The correspondence with debate is that the other debating agent would point to which one of the subanswers is probably false, and then you recurse down just that subtree.

Here’s the way it works. You can trust amplification if all of the subanswers are correct. If at least one of them is false, a strong debater can find which of those subanswers is false and recurse down to that. That’s why, if it turns out that a single path chosen by strong debaters never finds a false subanswer, then you will have established that the whole tree is true.
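The correspondence Geoffrey describes can be sketched as a toy program. The tree, the ground-truth flags, and the perfectly strong challenger are all hypothetical simplifications; the point is that one adversarially chosen path can certify, or refute, an exponentially large tree.

```python
class Node:
    """A claim in the amplification tree, with its subclaims as children.
    The truth flag is a toy stand-in for whether the claim is actually true."""
    def __init__(self, claim, truth, children=()):
        self.claim = claim
        self.truth = truth
        self.children = list(children)

def strongest_challenge(node):
    """A perfectly strong debater: point at a false subanswer if one exists,
    otherwise challenge arbitrarily (any challenge will check out)."""
    for child in node.children:
        if not child.truth:
            return child
    return node.children[0]

def debate_verify(root):
    """Follow a single adversarially chosen path to a leaf, then check the
    leaf directly: the step a human judge can actually verify."""
    node = root
    while node.children:
        node = strongest_challenge(node)
    return node.truth

# All subanswers true: one path suffices to certify the whole tree.
good = Node("root", True, [Node("x", True),
                           Node("y", True, [Node("y1", True)])])
# One false subanswer buried in the tree: the challenger steers us to it.
bad = Node("root", False, [Node("x", True),
                           Node("y", False, [Node("y1", False),
                                             Node("y2", True)])])
```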

Lucas: I see. I think this picture of, “What should we do,” at the top, and then the trees would fall from that in terms of one version of amplification and one version of debate is pretty helpful here.

Geoffrey: The most important thing here is that in Amplification, the branching factor is small. I’m a human, I see a question, and I sort of decompose that into a relatively small number of subquestions, say two to 10. In debate, the branching factor is very large. A debate agent could make any statement as the next countermove. What this means is that, if debate works under strong play, a shallow debate tree is more powerful than a shallow amplification tree, because the branching factor is so large: you can cover more terrain with a couple of steps of debate that serve as a proxy for a large tree.

The problem with that statement, and maybe the biggest advantage of amplification, is that you really have to assume the agents are strong enough to cover the tree. Say I’m debating whether there’s a black swan, and there in fact is a black swan, but there’s only one of them in the entire world, and the correct countermove would be pointing at that black swan, finding it out of the entire world. A debater who wanted to argue, “There are no black swans,” could just claim, “There are no black swans. Trust me. The other agent would be able to point to one if it existed.” Maybe it’s the case that a black swan does exist, but the other agent is just too weak to point at it, and so that debate doesn’t work.

This argument that shallow debates are powerful leans a whole lot on debaters being very strong, and debaters in practice will not be infinitely strong, so there’s a bunch of subtlety there that we’re going to have to wrestle with.

Lucas: It would also be, I think, very helpful if you could let us know how you optimize for strong debaters, and how is amplification possible here if human beings are the ones who are pointing out the decompositions of the questions?

Geoffrey: Whichever one we choose, whether it’s amplification, debate, or some entirely different scheme, if it depends on humans in one of these elaborate ways, we need to do a bunch of work to know that humans are going to be able to do this. In amplification, you would expect to have to train people to think about what kinds of decompositions are the correct ones. My sort of bias is that because debate gives the humans more help in pointing out the counterarguments, it may be cognitively kinder to the humans, and therefore, that could make it a better scheme. That’s one of the advantages of debate.

The technical analogue there is the shallow debate argument. The human side is: if someone is pointing out the arguments for you, it’s cognitively kind. In amplification, I would expect you’d need to train people a fair amount to have the decomposition be reliably complete. I don’t know that I have a lot of confidence that you can do that. One way you can try to do it is, as much as possible, systematize the process on the human side.

In either one of these schemes, we can give the people involved an arbitrary amount of training and instruction in whatever way we think is best, and we’d like to do the work to understand what forms of instruction and training are most truth seeking, and try to do that as early as possible so you have a head start.

I would say I’m not going to be able to give you a great argument for optimism about amplification. This is a discussion that Paul, and Andreas Stuhlmueller, and I have, where I think Paul and Andreas kind of lean towards these metareasoning arguments, where if you wanted to answer the question, “Where should I go on vacation,” the first subquestion is, “What would be a good way to decide where to go on vacation?” You quickly go meta, and maybe you go meta-meta, and it’s kind of a mess. Whereas the hope is that because in debate you have help pointing at things, you can do much more object level reasoning, where the first step in a debate about where to go on vacation is just Bali or Alaska. You give the answer and then you focus in on more …

For a broader class of questions, you can stay at object level reasoning. Now, if you want to get to metaethics, you would have to bring in that kind of reasoning. It should be a goal of ours, for a fixed task, to try to use the simplest kind of human reasoning possible, because then we should expect to get better results out of people.

Lucas: All right. Moving forward. Two things. The first that would be interesting would be if you could unpack this process of training up agents to be good debaters, and to be good predictors of human decision making regarding debates, what that’s actually going to look like in terms of your experiments, currently, and your future experiments. Then, also just pivoting into discussing reasons for optimism and pessimism about debate as a model for AI alignment.

Geoffrey: On the experiment side, as I mentioned, we’re trying to get into the natural language domain, because I think that’s how humans debate and reason. We’re doing a fair amount of work at OpenAI on core ML language modeling, so natural language processing, and then trying to take advantage of that to prototype these systems. At the moment, we’re just doing what I would call zero step debate, or one step debate. It’s just a single agent answering a question. You have question, answer, and then you have a human kind of judging whether the answer is good.

The task of predicting an answer is just: read a bunch of text and predict a number. That is essentially just a standard NLP type task, and you can use standard methods from NLP on that problem. The hope is that because it looks so standard, we can sort of pace development on the safety side with development on the capability side in natural language processing. Predicting the result is just: use whatever the most powerful natural language processing architecture and method is, and apply it to this task.

Similarly, on the task of answering questions, that’s also a natural language task, just a generative one. If you’re answering questions, you just read a bunch of text that is maybe the context of the question, and you produce an answer, and that answer is just a bunch of words that you spit out via a language model. If you’re doing, say, a two step debate, where you have question, answer, counterargument, then similarly, you have a language model that spits out an answer, and a language model that spits out the counterargument. Those can in fact be the same language model; you just flip the reward at some point. An agent is rewarded for answering well and winning while it’s spitting out the answer, and then when it’s spitting out the counteranswer, you just reward it for falsifying the answer. It’s still just a generative language task with a slightly exotic reward.
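A minimal sketch of that reward flip, assuming a generative model interface; the model class, the judge signature, and the probability-style score below are all hypothetical, not OpenAI’s actual training code.

```python
def play_two_step_debate(model, question, judge):
    """One model plays both roles in a question/answer/counterargument
    debate; only the sign of the reward distinguishes the roles, which is
    what makes the game zero-sum."""
    answer = model.generate(question)                  # move 1: the answer
    counter = model.generate(question + answer)        # move 2: the counterargument
    p_answer_wins = judge(question, answer, counter)   # judge's score in [0, 1]
    return {"answer": answer, "counter": counter,
            "reward_answer": p_answer_wins,    # rewarded for a winning answer
            "reward_counter": -p_answer_wins}  # rewarded for falsifying it

class EchoModel:
    """Dummy stand-in for a trained language model policy."""
    def generate(self, context):
        return " [generated text]"

result = play_two_step_debate(EchoModel(), "Q?", lambda q, a, c: 0.7)
```

The two rewards always sum to zero, so a single policy trained on both roles is exactly playing a two-player zero-sum game against itself.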

Going forwards, we expect there to need to be something like … this is not actually high confidence … maybe things like AlphaGo Zero style tree search that are required to make this work very well on the generative side, and we will explore those as required. Right now, we need to falsify the statement that we can just do it with stock language modeling, which we’re working on. Does that cover the first part?

Lucas: I think that’s great in terms of the first part, and then again, the second part was just places to be optimistic and pessimistic here about debate.

Geoffrey: Optimism, I think we’ve covered a fair amount of it. The primary source of optimism is this argument that shallow debates are already powerful, because you can cover a lot of terrain in argument space with a short debate, because of the high branching factor. If there’s an answer that is robust to all possible counteranswers, then it hopefully is a fairly strong answer, and that gets stronger as you increase the number of steps. This assumes strong debaters. That would be a reason for pessimism, not optimism. I’ll get to that.

That’s one of the top two, and the other is that ML is pretty good at zero-sum games, particularly zero-sum perfect information games. There have been these very impressive headline results from AlphaGo at DeepMind, and Dota at OpenAI, and a variety of other games. In general, zero-sum, close to perfect information games, we roughly know how to do them, at least in this not too high branching factor case. There’s an interesting thing where if you look at the algorithms, say for playing poker, or for playing more than two player games, where poker is zero-sum two player but imperfect information, or the algorithms for playing, say, 10 player games, they’re just much more complicated. They don’t work as well.

I like the fact that debate is formulated as a two player zero-sum perfect information game, because we seem to have better algorithms to play those with ML. This is both practically true, in that it is in practice easier to play them, and also there’s a bunch of theory that says that two player zero-sum is a different complexity class than, say, two player non-zero-sum or N player. The complexity class gets harder, and you need nastier algorithms. Finding a Nash equilibrium in a general game, one that’s either non-zero-sum or has more than two players, is PPAD-complete even in the tabular case, in a small game; with two player zero-sum, that problem is convex and has a polynomial-time solution. It’s a nicer class. I expect there to continue to be better algorithms to play those games. I like formulating safety as that kind of problem.
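The “nicer class” point can be illustrated by how simple an algorithm suffices for two-player zero-sum games. The sketch below runs fictitious play, where each player repeatedly best-responds to the opponent’s empirical mixture, on rock-paper-scissors; for two-player zero-sum games this converges to optimal play (a classical result due to Robinson), whereas for general-sum games it can fail to converge at all.

```python
# Rock-paper-scissors payoffs from the row player's perspective.
RPS = [[0, -1, 1],
       [1, 0, -1],
       [-1, 1, 0]]

def fictitious_play(payoff, steps=50000):
    """Each player best-responds to the opponent's empirical play so far.
    For two-player zero-sum games the empirical mixtures converge to
    optimal strategies; here that is the uniform mixture (1/3, 1/3, 1/3)."""
    n, m = len(payoff), len(payoff[0])
    row_counts, col_counts = [0] * n, [0] * m
    row_counts[0] = col_counts[0] = 1  # arbitrary opening moves
    for _ in range(steps):
        row_br = max(range(n), key=lambda i:
                     sum(payoff[i][j] * col_counts[j] for j in range(m)))
        col_br = min(range(m), key=lambda j:
                     sum(payoff[i][j] * row_counts[i] for i in range(n)))
        row_counts[row_br] += 1  # row maximizes its payoff
        col_counts[col_br] += 1  # column minimizes the row payoff
    return ([c / sum(row_counts) for c in row_counts],
            [c / sum(col_counts) for c in col_counts])

row_mix, col_mix = fictitious_play(RPS)
```

The same loop run on a general-sum or many-player game has no such guarantee, which is the complexity-class gap Geoffrey is pointing at.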

Those are kind of the reasons for optimism that I think are most important. I think going into more of those is less important and less interesting than worrying about stuff. I’ll list three of those, or maybe four, and try to be fast so we can circle back. As I mentioned, I think interpretability has a large role to play here. I would like to be able to have an agent say … Again, Alice and Bob are debating. Bob should be able to just point directly into Alice’s thoughts and say, “She really thought X even though she said Y.” The reason you need an interpretability technique for that is, in this conversation, I could just claim that you, Lucas Perry, are having some malicious thought, but that’s not a falsifiable statement, so I can’t use it in a debate. I could always make that statement, unless I can point into your thoughts.

Because we have so much control over machine learning, we have the potential ability to do that, and we can take advantage of it. I think that, for that to work, we need probably a deep hybrid between the two schemes, because an advanced agent’s thoughts will probably be advanced, and so you may need some kind of strengthened thing like amplification or debate just to be able to describe the thoughts, or to point at them in a meaningful way. That’s a problem that we have not really solved. Interpretability is coming along, but it’s definitely not hybridized with these fancy alignment schemes, and we need to solve that at some point.

Another problem is there’s no move in this kind of natural language debate where I can just say, for example, “You know, it’s going to rain tomorrow, and it’s going to rain tomorrow just because I’ve looked at all the weather in the past, and it just feels like it’s going to rain tomorrow.” Somehow, debate is missing this just straight up pattern matching ability of machine learning, where I can just read a dataset and summarize it very quickly. The theoretical side of this is: if I have a debate about even something as simple as, “What’s the average height of a person in the world?” then in the debate method I’ve described so far, that debate has to have depth at least logarithmic in the number of people. I just have to subdivide by population. Like, this half of the world, and then this half of that half of the world, and so on.

I can’t just say, “You know, on average it’s like 1.6 meters.” We need to have better methods for hybridizing debate with pattern matching and statistical intuition, and that’s something that is, if we don’t have that, we may not be competitive with other forms of ML.

Lucas: Why is that not just an intrinsic part of debate? Why is debating over these kinds of things different than any other kind of natural language debate?

Geoffrey: It is the same. The problem is just that for some types of questions, and there are other forms of this in natural language, there aren’t short deterministic arguments. There are many questions where the shortest deterministic argument is much longer than the shortest randomized argument. For example, if you allow randomization, I can say, “I claim the average height of a person is 1.6 meters. Pick a person at random, and you’ll score me according to the squared difference between those two numbers, my claim and the height of this particular person you’ve chosen.” The optimal move to make there is to just say the average height right away.

The thing I just described is a debate using randomized steps that is extremely shallow. It’s only basically two steps long. If I want to do a deterministic debate, I have to deterministically say the average height of a person in North America is X, and in Asia, it’s Y. The other debater could say, “I disagree about North America,” and you sort of recurse into that.
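The randomized version can be checked directly: under squared-error scoring against a randomly sampled person, the claim that minimizes the expected penalty is exactly the mean, so the optimal first move states the average outright. The toy population below is made up for illustration.

```python
heights = [1.5, 1.55, 1.6, 1.62, 1.7, 1.8]  # toy stand-in for the world

def expected_penalty(claim, population):
    """Expected squared difference between the claim and the height of a
    uniformly sampled person: E[(claim - h)^2] = Var(h) + (claim - mean)^2."""
    return sum((claim - h) ** 2 for h in population) / len(population)

true_mean = sum(heights) / len(heights)
# Any other claim does strictly worse in expectation, so a two-step
# randomized debate elicits the mean without any logarithmic-depth
# subdivision of the population.
assert all(expected_penalty(true_mean, heights) < expected_penalty(c, heights)
           for c in (1.5, 1.6, 1.7, 1.8))
```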

It would be super embarrassing if we proposed these complicated alignment schemes, “This is how we’re going to solve AI safety,” and they couldn’t quickly answer trivial statistical questions. That would be a serious problem. We kind of know how to solve that one. The harder case is if you bring in this more vague statistical intuition. It’s not like I’m computing a mean over some dataset; I’ve looked at the weather and, you know, it feels like it’s going to rain tomorrow. Getting that in is a bit trickier, but we have some ideas there. They’re unresolved.

That’s one thing I’m optimistic about, but that we need to work on. The most important reason to be concerned is just that humans are flawed in a variety of ways. We have all these ethical inconsistencies and cognitive biases. We can write down some toy theoretical arguments that debate works with a limited but reliable judge, but does it work in practice with a human judge? I think there are some questions you can kind of reason through there, but in the end, a lot of that will be determined by just trying it, and seeing whether debate works with people. Eventually, when we start to get agents that can play these debates, then we can check whether it works with two ML agents and a human judge. For now, when language modeling is not that far along, we may need to try it out first with all humans.

This would be, you play the same debate game, but both the debaters are also people, and you set it up so that somehow it’s trying to model this case where the debaters are better than the judge at some task. The debaters might be experts at some domain, they might have access to some information that the judge doesn’t have, and therefore, you can ask whether a reasonably short debate is truth seeking if the humans are playing to win.

The hope there would be that you can test out debate on real people with interesting questions, say complex scientific questions, and questions about ethics, and about areas where humans are biased in known ways, and see whether it works, and also see not just whether it works, but which forms of debate are strongest.

Lucas: What does it mean for debate to work or be successful for two human debaters and one human judge if it’s about normative questions?

Geoffrey: Unfortunately, if you want to do this test, you need to have a source of truth. In the case of normative questions, there’s two ways to go. One way is you pick a task where we may not know the entirety of the answer, but we know some aspect of it with high confidence. An example would be this resume case, where two resumes are identical except for the name at the top, and we just sort of normatively … we believe with high confidence that the answer shouldn’t depend on that. If it turns out that a winning debater can maliciously and subtly take advantage of the name to sow fear in the judge, and make a resume with a black name sound bad, that would be a failure.

We sort of know that: we don’t know in advance whether a resume should be judged good or bad overall, but we know that the judgment on this pair of identical resumes shouldn’t depend on the name. That’s one way, where we just have some kind of normative statement in which we have reasonable confidence in the answer. The other way, which is kind of similar, is you have two experts in some area, and the two experts agree on what the true answer is, either because it’s a consensus across the field, or just because maybe those two experts agree. Ideally, it should be a thing that’s generally true. Then, you force one of the experts to lie.

You say, “Okay, you both agree that X is true, but now we’re going to flip a coin and now one of you only wins if you lie, and we’ll see whether that wins or not.”

Lucas: I think it also … Just to plug your game here, you guys do have a debate game. We’ll put a link to that in the article that goes along with this podcast. I suggest that people check that out if you would like a more tangible and fun way to understand debate, and I think it’ll help elucidate what the process looks like, the asymmetries that go on, and the key idea here that it is harder to lie than to refute a lie. It seems like if we could deploy some sort of massive statistical analysis over many different iterated debates across different agents, we would be able to assess the efficacy of debate in different situations where the judge and the debaters are all AI, mixed situations, or all-human debates. I think it’d be interesting to see the varying results there.

Geoffrey: This is going to be a noisy enough process, for a variety of reasons, that we will probably need to do this a lot to know. So far, we’ve just done a very small, informal number of these human, human, human debates. Even in those few expert debates, we’ve already learned a fair amount at a qualitative level. I’ll give an example. In a debate about a CS theory question, there was a judge present while the debate was going on, but they were interacting only fairly minimally.

Early in the debate, I was the debater telling the truth. I stated my formulation of the question and unpacked it intuitively, and the judge, perfectly understandably, had a subtle misunderstanding of how I’d framed the question. That misunderstanding of what the topic was about was never corrected throughout the debate, and therefore, it was much harder for the honest player to win, because it seemed like the honest player had a harder case to make.

That sort of taught us that having judge interaction is potentially quite important so that the debaters have a detailed understanding of what the judge is thinking. If your goal is to model debate as a perfect information game, the closer to that you can get, the more information that debaters have, the better it should be.

Lucas: Yeah. I mean, that also allows the debaters to exploit cognitive biases in the judge.

Geoffrey: That’s right.

Lucas: You would point that out. Like, “Hey, this person’s exploiting your cognitive bias.”

Geoffrey: Yeah, so I think it’s an open question how exactly to strike that balance, and whether there’s a way to strike it that works. Generally, giving more information about, say, the judge to the debaters, either through judge interaction or by just telling the debaters something about the judge, will make them stronger as players of the game, but it might reveal ways to attack the judge.

Now, if our goal is to be resistant to very strong agents, and it turns out that the only way to make it safe is to hide information from the agents, maybe you shouldn’t use this method. It may not be very resilient. It’s likely that for experiments, we should push as far as we can towards strong play, revealing as much as possible, and see whether it still works in that case.

Lucas: In terms of the social scientists playing a role here, do you want to go ahead and unpack that a bit more? There’s a paper that you’re working on with Amanda Askell on this.

Geoffrey: As you say, we want to run statistically significant experiments that test whether debate is working and which forms of debate are best, and that will require careful experimental design. That is, in some sense, an experiment in just social science. There’s no ML involved. It’s motivated by machine learning, but it’s just a question about how people think, and how they argue and convince each other. Currently, no one at OpenAI has any experience running human experiments of this kind, or at least no one that is involved in this project.

The hope would be that we would want to get people involved in AI safety that have experience and knowledge in how to structure experiments on the human side, both in terms of experimental design, having an understanding of how people think, and where they might be biased, and how to correct away from those biases. I just expect that process to involve a lot of knowledge that we don’t possess at the moment as ML researchers.

Lucas: Right. I mean, in order for there to be an efficacious debate process, or AI alignment process in general, you need to debug and understand the humans as well as the machines. Understanding our cognitive biases, weak spots, and blind spots in debate seems crucial.

Geoffrey: Yeah. I sort of view it as a social science experiment, because it’s just a bunch of people interacting. It’s a fairly weird experiment. It differs from normal experiments in some ways. In thinking about how to build AGI in a safe way, we have a lot of control over the whole process. If it takes a bunch of training to make people good at judging these debates, we can provide that training, and pick people who are better or worse at judging. There’s a lot of control that we can exert. In addition to just finding out whether this thing works, it’s sort of an engineering process of debugging the humans: working around human flaws, taking them into account, and making the process resilient.

My highest level hope here is that humans have various flaws and biases, but we are willing to be corrected, and set our flaws aside, or maybe there’s two ways of approaching a question where one way hits the bias and one way doesn’t. We want to see whether we can produce some scheme that picks out the right way, at least to some degree of accuracy. We don’t need to be able to answer every question. If we, for example, learned that, “Well, debate works perfectly well for some broad class of tasks, but not for resolving the final question of what humans should do over the long term future, or resolving all metaethical disagreements or something,” we can afford to say, “We’ll put those aside for now. We want to get through this risky period, make sure AI doesn’t do something malicious, and we can deliberately work through those questions, taking our time doing that.”

The goal includes the task of knowing which things we can safely answer, and the goal should be to structure the debates so that if you give it a question where humans just disagree too much or are too unreliable to reliably answer, the answer should be, “We don’t know the answer to that question yet.” A debater should be able to win a debate by admitting ignorance in that case.

There is an important assumption I’m making about the world that we should make explicit, which is that I believe it is safe to be slow about certain ethical or directional decisions. You can construct games where you just have to make a decision now, like you’re barreling along in some car with no brakes, and you have to dodge left or right around an obstacle, but you can’t say, “I’m going to ponder this question for a while and hold off.” You have to choose now. I would hope that the task of choosing what we want to do as a civilization is not like that. We can resolve some immediate concerns about serious problems now, and existential risk, but we don’t need to resolve everything.

That’s a very strong assumption about the world, which I think is true, but it’s worth saying that I know that is an assumption.

Lucas: Right. I mean, it’s true insofar as coordination succeeds, and people don’t have incentives just to go do what they think is best.

Geoffrey: That’s right. If you can hold off deciding things until we can deliberate longer.

Lucas: Right. What does this distillation process look like for debate, where ensuring alignment is maintained as a system’s capability is amplified and changed?

Geoffrey: One property of amplification, which is nice, is that you can sort of imagine running it forever. You train on simple questions, and then you train on more complicated questions, and then you keep going up and up and up, and if you’re confident that you’ve trained enough on the simple questions, you can never see them again, freeze that part of the model, and keep going. I think in practice, that’s probably not how we would run it, so you don’t inherit that advantage. In debate, what you would have to do to get to more and more complicated questions is, at some point, and maybe this point is fairly far off, go to longer and longer debates.

If you’re just sort of thinking about the long term future, I expect to have to switch over to some other scheme, or at least layer a scheme, embedding debate in a larger scheme. An example would be that the question you resolve with debate is, “What is an even better way to build AI alignment?” That, you can resolve with, say, depth-100 debates, and maybe you can handle that depth well. What that spits out to you is an algorithm; you interrogate it enough to know that you trust it, and then you can use it.

You can also imagine eventually needing to hybridize kind of a Debate-like scheme and an Amplification-like scheme, where you don’t get a new algorithm out, but you trust this initial debating oracle enough that you can view it as fixed, and then start a new debate scheme, which can trust any answer that original scheme produces. Now, I don’t really like that scheme, because it feels like you haven’t gained a whole lot. Generally, it’s useful to think about the long term, say the next 1,000 years of AI alignment going forwards. I expect to need further advances after we get past this AI risk period.

I’ll give a concrete example. You ask your debating agents, “Okay, give me a perfect theorem prover.” Right now, all of our theorem provers probably have little bugs, so you can’t really trust them to resist a superintelligent agent. Say you trust the theorem prover that you get out, and you say, “Okay, now I just want a proof that AI alignment works.” You bootstrap your way up using this agent as an oracle on interesting, complicated questions, until you’ve got a scheme that gets you to the next level, and then you iterate.

Lucas: Okay. In terms of the practical path from the short-term world to an AGI world, maybe in the next 30 years, what does this actually look like? In what ways could we see debate and amplification deployed and used at scale?

Geoffrey: There is the direct approach, where you use them to answer questions, using exactly the structure they’re trained with. With a debating agent, you would just engage in debates, and you would use it as an oracle in that way. You can also use it to generate training data. You could, for example, ask a debating agent to spit out the answers to a large number of questions, and then you just train a little module on those, if you trust all the answers and you trust supervised learning to work. If you wanted to build a strong self-driving car, you could ask it to train a much smaller network that way. It would not be human level, but it just gives you a way to access data.
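The “spit out answers, then train a little module” idea Geoffrey describes is essentially distillation: query a trusted but expensive oracle for labels, then fit a much cheaper student model on those labels. A minimal sketch, where the `oracle` function and the linear student model are illustrative stand-ins, not anything from the episode:

```python
# Distillation sketch: generate labeled data from a trusted oracle,
# then fit a small student model on that data. The oracle here is a
# stand-in linear function, not a real debate-trained agent.

def oracle(x):
    # Pretend this is an expensive, trusted question-answering system.
    return 3.0 * x + 1.0

# 1. Query the oracle on many inputs to build a labeled dataset.
data = [(float(x), oracle(x)) for x in range(-50, 50)]

# 2. Fit a tiny student model y = w*x + b by gradient descent on MSE.
w, b = 0.0, 0.0
lr = 5e-4
for _ in range(5000):
    gw = gb = 0.0
    for x, y in data:
        err = (w * x + b) - y
        gw += 2 * err * x
        gb += 2 * err
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

# The student now approximates the oracle without calling it again.
print(w, b)  # w close to 3.0, b close to 1.0
```

The catch is exactly the one Geoffrey flags: the student is only as trustworthy as the oracle’s answers plus your trust in supervised learning to generalize.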

There’s a lot you could do with a powerful oracle that gives you answers to questions. I could probably go on at length about fancy schemes you could do with oracles. I don’t know if it’s that important. The more important part to me is what is the decision process we deploy these things into? How we choose which questions to answer and what we do with those answers. It’s probably not a great idea to train an oracle and then give it to everyone in the world right away, unfiltered, for reasons you can probably fill in by yourself. Basically, malicious people exist, and would ask bad questions, and eventually do bad things with the results.

If you have one of these systems, you’d like to deploy it in a way that can help as many people as possible, which means everyone will have their own questions to ask of it, but you need some filtering mechanism or some process to decide which questions to actually ask, what to do with the answers, and so on.

Lucas: I mean, can the debate process be used to self-filter out providing answers for certain questions, based on modeling the human decision about whether or not they would want that question answered?

Geoffrey: It can. There’s a subtle issue, which I think we need to deal with, but haven’t dealt with yet. There’s a commutativity question, which is, say you have a large number of people, there’s a question of whether you reach reflective equilibrium for each person first, and then, say, vote across people, or whether you have a debate, and then you vote on what the judgment should be. Imagine playing a debate game where you play a debate, and then everyone votes on who wins. There are advantages on both sides. On the side of voting after reflective equilibrium, you have this problem that if you reach reflective equilibrium for a person, it may be disastrous if you pick the wrong person. That extreme is probably bad. The other extreme is also kind of weird, because there are a bunch of standard results where if you take a bunch of rational agents voting, it might be true that A and B together imply C, but the agents might vote yes on A, yes on B, and no on C. Votes on statements where every voter is rational are not rational. The voting outcome is irrational.
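The voting inconsistency Geoffrey mentions is the classic discursive dilemma from judgment aggregation. A minimal sketch, where the propositions A, B, C and the three voter profiles are illustrative assumptions:

```python
# Discursive dilemma: each voter is individually consistent
# (they vote C true exactly when both A and B are true), yet
# proposition-by-proposition majority voting yields a collective
# view that accepts A and B while rejecting their conjunction C.

voters = [
    {"A": True, "B": True},   # will vote C = True
    {"A": True, "B": False},  # will vote C = False
    {"A": False, "B": True},  # will vote C = False
]

# Each voter's vote on C follows logically from their votes on A and B.
for v in voters:
    v["C"] = v["A"] and v["B"]

def majority(prop):
    # True iff a strict majority of voters accepts the proposition.
    return sum(v[prop] for v in voters) > len(voters) / 2

result = {p: majority(p) for p in ("A", "B", "C")}
print(result)  # {'A': True, 'B': True, 'C': False}
```

So even with perfectly rational individual voters, voting before reaching some collective equilibrium can produce a logically inconsistent group judgment, which is why the order of voting and deliberation matters here.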

The result of voting before you take reflective equilibrium is sort of an odd philosophical concept. Probably, you need some kind of hybrid between these schemes, and I don’t know exactly what that hybrid looks like. That’s an area where I think technical AI safety mixes with policy to a significant degree that we will have to wrestle with.

Lucas: Great, so to back up and zoom in on one point that you made: is the view that one might want to be worried about people who undergo a long, amplified period of explicit human reasoning, and who might just arrive at something horrible through that?

Geoffrey: I guess, yes, we should be worried about that.

Lucas: Wouldn’t one view of debate be that humans, given debate, would also over time become more likely to arrive at true answers? That reflective equilibrium will tend to lead people to truth?

Geoffrey: Yes. That is an assumption. I think that you should be worried. The reason for hope is our ability to not answer certain questions. I don’t know that I trust reflective equilibrium applied incautiously, or not regularized in some way, but I expect that if there’s a case where some definition of reflective equilibrium is not trustworthy, there’s hope that we can construct debate so that the result will be, “This is just too dangerous to decide. We don’t really know the answer with high confidence.”

This is certainly true of complicated moral things. Avoiding lock-in, for example. I would not trust reflective equilibrium if it says, “Well, the right answer is just to lock our values in right now, because they’re great.” We need to take advantage of the outs we have in terms of being humble about deciding things. Once you have those outs, I’m hopeful that we can solve this, but there’s a bunch of work to do to know whether that’s actually true.

Lucas: Right. Lots more experiments to be done on the human side and the AI side. Is there anything here that you’d like to wrap up on, or anything that you feel like we didn’t cover that you’d like to make any last minute points?

Geoffrey: I think the main point is just that there’s a bunch of work here. OpenAI is hiring people to work on the ML side of things, also theoretical aspects, if you like wrestling with how these things work on the theory side, and then certainly the human side, doing the social science work. If this stuff seems interesting, then we are hiring.

Lucas: Great, so people that are interested in potentially working with you or others at OpenAI on this, or if people are interested in following you and keeping up to date with your work and what you’re up to, what are the best places to do these things?

Geoffrey: I have taken a break from pretty much all social media, so you can follow me on Twitter, but I won’t ever post anything, or see your messages, really. The best way is to email me; it’s not too hard to find my email address. That’s pretty much it, and then watch as we publish stuff.

Lucas: Cool. Well, thank you so much for your time, Geoffrey. It’s been very interesting. I’m excited to see how these experiments go for debate, and how things end up moving along. I’m pretty interested and optimistic, I guess, about debate as an epistemic process, its role in arriving at truth and truth seeking, and how that will play into AI alignment.

Geoffrey: That sounds great. Thank you.

Lucas: Yep. Thanks, Geoff. Take care.

If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.