International cooperation is essential if we want to capture the benefits of advanced AI while minimizing risk. Brian Tse, a policy affiliate at the Future of Humanity Institute, discusses concrete opportunities for coordination between China and the West, as well as how China’s government and technology industry think about different forms of AI risk.



A transcript of Brian’s talk, which we have edited lightly for clarity, is below. You can also watch it on YouTube or read it on effectivealtruism.org.

The Talk

It has been more than seven decades since a nuclear weapon was last used in war.







For almost four decades, parents everywhere have not needed to worry about their children dying from smallpox.







The ozone layer, far from being depleted to the extent once feared, is expected to recover in three decades.







These events — or non-events — are among humanity’s greatest achievements. They would not have occurred without cooperation among a multitude of countries. This serves as a reminder that international cooperation can benefit every country and person.



Together, we can achieve even more. In the next few decades, AI is poised to be one of the most transformative technologies. In the Chinese language, the word for crisis, “wēijī,” is composed of two characters: one meaning danger and the other opportunity.







Both characters are present at this critical juncture. With AI, we must seek to minimize dangers and capture the upsides. Ensuring that there is robust global coordination between stakeholders around the world, especially those in China and the West, is critical in this endeavor.



So far, the idea of nations competing for technological and military supremacy has dominated the public narrative.







When people talk about China and AI, they always invoke the country's ambition to become the world leader in AI by 2030. In contrast, there is very little attention paid to China’s call for international collaboration in security, ethics, and governance of AI, which are areas of mutual interest. I believe it is a mistake to think that we must have either international cooperation or international competition. Today, some believe that China and the U.S. are best described as strategic adversaries.



I believe we must deliberately use new concepts and terms that capture the two countries’ urgent need to cooperate — not just their drive to compete.







Joseph Nye, well-known for coining the phrase “soft power,” has suggested that we use “cooperative rivalry” to describe the relationship. Graham Allison, the author of Destined For War, has proposed the term “coopertition,” which allows for the simultaneous coexistence of competition and cooperation.



In the rest of my talk, I'm going to cover three areas of AI risk where there is potential for global coordination: accidents, misuse, and the race to develop AI.







For each of these risks, I will discuss its importance and the feasibility of coordination. I will also make some recommendations.

The risk of AI accidents

As the deployment of AI systems has become more commonplace, the number of AI-related accidents has increased. For example, on May 6, 2010, the Dow Jones Industrial Average experienced a sudden crash, known as the “Flash Crash,” that briefly erased roughly $1 trillion in market value.







It was partly caused by the use of high-frequency trading algorithms. The impact immediately spread to other financial markets around the world.



As the world becomes increasingly interdependent, as with financial markets, local events have global consequences that demand global solutions. The participation of [the Chinese technology company] Baidu in the Partnership on AI is an encouraging case study of global collaboration.







In a press release last year, Baidu said that the safety and reliability of AI systems is critical to their mission and was a major motivation for them to join the consortium. The [participating] companies think autonomous vehicle safety is an issue of particular importance.



China and the U.S. also seem to be coordinating on nuclear security. One example is the Center of Excellence on Nuclear Security in Beijing, which is by far the most extensive nuclear program to receive direct funding from both the U.S. and Chinese governments.







It focuses on building a robust nuclear security architecture for the common good. A vital feature of this partnership is an intense focus on exchanging technical information, as well as reducing the risk of accidents.



It is noteworthy that, so far, China has emphasized the need to ensure the safety and reliability of AI systems. In particular, the Beijing AI Principles and the Tencent Research Institute have highlighted the risks of AGI systems.



With our current understanding of AI-related accidents, I believe Chinese and international stakeholders can collaborate in the following ways:







1. Researchers can attend the increasingly popular AI safety workshops at some of the major machine learning conferences.



2. Labs and researchers can measure and benchmark the safety properties of reinforcement learning agents, building on efforts by organizations and safety groups like DeepMind’s.



3. International bodies, such as ISO, can continue their efforts to set technical standards, especially around the reliability of machine learning systems.



4. Lastly, alliances such as the Partnership on AI can facilitate discussions on best practices (for example, through [the Partnership’s] Safety-Critical AI Working Group).

The risk of AI misuse

Even if we can mitigate unintended accidents involving AI systems, there is still a possibility that they'll be misused.







For example, earlier this year, OpenAI decided not to release the full trained model of GPT-2, which [can generate language on its own], due to concerns that it might be misused to impersonate people, create misleading news articles, or trick victims into revealing their personal information. This reinforces the need for global coordination; malicious actors from anywhere could have gained access to the technology behind GPT-2 and deployed it in other parts of the world.



The field of cybersecurity offers a relevant case study of a global response to security incidents.







In 1988, one of the first computer worms, the Morris worm, disrupted a large portion of the early internet. The incident prompted the creation of the international body FIRST to facilitate information-sharing and enable more effective responses to future security incidents. Since then, FIRST has been one of the major institutions in the field. It currently lists ten American and eight Chinese members, including companies and public institutions.



Another source of optimism is the growing research field of [adversarial images]. These are input samples that have been modified slightly, often imperceptibly, to cause machine learning classifiers to misclassify them [e.g., mistake a toy turtle for a gun].







This issue is highly concerning, because [adversarial images] could be used to attack a machine learning system without the attacker having access to the underlying model.
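To make the mechanism concrete, here is a minimal sketch, not from the talk, of how a small perturbation can flip a classifier's decision. The "model" is a toy linear classifier with random weights; real attacks such as the fast gradient sign method (FGSM) apply the same sign-of-gradient idea to deep networks.

```python
import numpy as np

# Toy illustration of an adversarial perturbation against a linear
# classifier. Everything here (weights, input) is synthetic.

rng = np.random.default_rng(0)
d = 100                      # input dimension (think: pixels)
w = rng.standard_normal(d)   # classifier weights
x = rng.standard_normal(d)   # a "clean" input

clean_score = w @ x          # sign of the score = predicted class

# Smallest uniform per-dimension step guaranteed to flip the sign,
# nudged 50% past the threshold. For a linear score, the steepest
# per-dimension direction is simply sign(w).
eps = 1.5 * abs(clean_score) / np.abs(w).sum()
x_adv = x - eps * np.sign(w) * np.sign(clean_score)

adv_score = w @ x_adv
print(f"per-dimension perturbation: {eps:.3f}")
print(f"clean class: {np.sign(clean_score):+.0f}, "
      f"adversarial class: {np.sign(adv_score):+.0f}")
```

The per-dimension change is small relative to the typical input magnitude (about 1.0 here), yet the predicted class flips: in high dimensions, many tiny coordinated changes add up to a large change in the score. Such perturbations are also known to transfer between models, which is part of why attacks can succeed without access to the underlying model.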



Fortunately, many of the leading AI labs around the world are already working hard on this problem. For example, Google Brain organized a competition on this research topic, and the team from China’s Tsinghua University won first place in both the “attack” and “defense” tracks of the competition.



Many of the Chinese AI ethical principles also cover concerns related to the misuse of AI. One promising starting point for coordination between Chinese and foreign stakeholders, especially the AI labs, involves publication norms.







Following the [controversy around] OpenAI’s GPT-2 model, the Partnership on AI organized a seminar on the topic of research openness. The seminar reached no immediate conclusion on whether the AI community should restrict research openness. However, participants did agree that if the AI community moves in that direction, review parameters and norms should be standardized across the community (presumably, on a global level).

The risk of competitively racing to develop AI

The third type of risk that I'm going to talk about is the risk from racing to develop AI.







Under competitive pressure, AI labs might put aside safety concerns in order to stay ahead. Uber’s self-driving car crash in 2018 illustrates this risk.







When it happened, commentators initially thought that the braking system was the culprit. However, further investigation showed that the victim was detected early enough for the emergency braking system to have worked and prevented the crash.



So what happened? It turned out that the engineers intentionally turned off the emergency braking system because they were afraid that its extreme sensitivity would make them look bad relative to their competitors. This type of trade-off between safety and other considerations is very concerning, especially if you believe that AI systems will become increasingly powerful.



This problem is likely to be even more acute in the context of international security. We should seek to draw lessons from historical analogs.







For example, the report “Technology Roulette” by Richard Danzig discusses the norm of “no first use” and its contribution to stability during the nuclear era. Notably, in 1964 China became the first nuclear-weapon state to adopt such a policy. Other nations have also used the norm, with varying degrees of success, to moderate the proliferation and use of various military technologies, including blinding lasers and the placement of offensive weapons in outer space.



Now, with AI as a general-purpose technology, there is a further challenge: How do you specify and verify that certain AI technologies haven’t been used? On a related note, the Chinese nuclear posture has been described as a defense-oriented one. The question with AI is: Is it technically feasible for parties to differentially improve defensive capabilities, rather than offensive capabilities, thereby stabilizing the competitive dynamics? I believe these are still open questions.



Ultimately, constructive coordination depends on common knowledge that AI development carries a shared risk of a race to the bottom. I'm encouraged to see increasing attention paid to the problem on both sides of the Pacific.







For example, Madame Fu Ying, who is chairperson of the National People’s Congress Foreign Affairs Committee in China and an influential diplomat, has said that Chinese technologists and policymakers agree that AI poses a threat to humankind. At the World Peace Forum, she further emphasized that the Chinese believe we should preemptively cooperate to prevent such a threat.



The Beijing AI Principles, in my view, provide the most significant contribution from China regarding the need to avoid a malicious AI race. And these principles have gained support from some of the country’s major academic institutions and industry leaders. It is my understanding that discussions around the Asilomar AI Principles, the book Superintelligence by Nick Bostrom, and warnings from Stephen Hawking and other thinkers have all had a meaningful influence on Chinese thinkers.



Building common knowledge between parties is possible, as illustrated by the Thucydides Trap.







Coined by the scholar Graham Allison, the term “Thucydides Trap” describes the idea that rivalry between an established power and a rising power often results in conflict. This thesis has captured the attention of leaders in both Washington, D.C. and Beijing. In 2013, President Xi Jinping told a group of Western visitors that we should cooperate to escape from the Thucydides Trap. In parallel, I think it is important for leaders in Silicon Valley — as well as in Washington, D.C. and Beijing — to recognize this collective problem of a potential AI race to the precipice, or what I might call “the Bostrom Trap.”



With this shared understanding, I believe the world can move in several directions. First, there are great initiatives, such as the Asilomar AI Principles, which can help many of the signatories [adhere to] the principle of arms-race avoidance.







Expanding the breadth and depth of this dialogue, especially between Chinese and Western stakeholders, will be critical to stabilize expectations and foster mutual trust.



Second, labs can initiate AI safety research collaborations across borders.







For example, labs could collaborate on some of the topics laid out in the seminal paper “Concrete Problems in AI Safety,” which was itself a joint effort from multiple institutions.



Lastly — and this is also the most ambitious recommendation — leading AI labs could consider adopting the policies in the OpenAI Charter.







The charter states that if a value-aligned, safety-conscious project comes close to building AGI technology, OpenAI will stop competing and start assisting with that project. This policy is an incredible public commitment, as well as a concrete mechanism for reducing these undesirable [competitive] dynamics.



Throughout this talk, I have not addressed many of the complications involved in such an endeavor. There are considerations such as industrial espionage, civil/military fusion, and civil liberties. I believe each of those topics deserves a nuanced, balanced, and probably separate discussion, given that I will not be able to do them proper justice in a short presentation like this one. That said, on the broader challenge of overcoming political tension, I would like to share a story.



Some believe the Cuban Missile Crisis had a one-in-three chance of resulting in a nuclear war between the U.S. and the Soviet Union. After the crisis, President John F. Kennedy was desperately searching for a better way forward.







Before he was assassinated, in one of his most significant speeches about international order, he proposed the strategic concept of a world safe for diversity. In that world, the U.S. and Soviet Union could compete rigorously, but only peacefully, to demonstrate whose values and system of governance might best serve the needs of citizens. This thinking eventually evolved into “détente,” a doctrine that contributed to the easing of tension during the Cold War.



In China, there is a similar doctrine, which is “harmony in diversity.” [Brian says the word in Mandarin.]







The world must learn to cooperate in tackling our common challenges, while accepting our differences. If we were able to achieve this during the Cold War, I believe we should be more hopeful about our collective future in the 21st century. Thank you.



Nathan Labenz [Moderator]: I think the last time I saw you was just under a year ago. How do you think things have gone over the last year? If you were an attentive reader of the New York Times, you would probably think things are going very badly in US/China relations. Do you think it's as bad as all that? Or is the news maybe hyping up the situation to be worse than it is?



Brian: It is indeed worrying. I will add two points to the discussion. One: we're not only thinking about coordination between governments. In my talk, I focused on state-to-state cooperation, but I mentioned a lot of potential areas of collaboration between AI labs, researchers, academia and civil society. And I believe that the incentive and the willingness to cooperate between those stakeholders are there. Second, my presentation was meant to be forward-looking and aspirational. I was not looking at the current news. I was thinking that if in five to 10 years, or even 20 years, AI systems become increasingly advanced and powerful — which means there could be tremendous upsides for everyone to share, as well as downsides to worry about — the incentive to cooperate, or at least aim for “coopertition,” should be there.



It could be interesting to think about game theory. I won’t go into the technical details. But the basic idea is that if there are tremendous upsides and also shared downsides for some number of parties, then it is more likely that those parties will be willing to cooperate instead of just compete.
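Brian's point can be sketched with two toy payoff matrices (the numbers are purely illustrative, not from the talk). In the first, cutting corners always pays, so a race to the bottom is the only stable outcome; in the second, the shared upside from mutual cooperation is large enough that cooperating becomes a stable equilibrium as well.

```python
from itertools import product

def nash_equilibria(payoffs):
    """Pure-strategy Nash equilibria: outcomes where neither party
    can improve its payoff by unilaterally switching strategy."""
    eqs = []
    for a, b in product("CD", repeat=2):  # C = cooperate, D = defect
        pa, pb = payoffs[(a, b)]
        best_a = all(pa >= payoffs[(a2, b)][0] for a2 in "CD")
        best_b = all(pb >= payoffs[(a, b2)][1] for b2 in "CD")
        if best_a and best_b:
            eqs.append((a, b))
    return eqs

# Race dynamic (prisoner's dilemma): defecting always pays.
race = {("C", "C"): (8, 8), ("C", "D"): (0, 10),
        ("D", "C"): (10, 0), ("D", "D"): (1, 1)}

# Larger shared upside (stag hunt): mutual cooperation is now stable.
shared = {("C", "C"): (12, 12), ("C", "D"): (0, 7),
          ("D", "C"): (7, 0), ("D", "D"): (1, 1)}

print(nash_equilibria(race))    # mutual defection is the only equilibrium
print(nash_equilibria(shared))  # mutual cooperation becomes an equilibrium too
```

Once the shared payoff from cooperating is large enough, cooperation is self-reinforcing whenever both sides expect it, which is why building common knowledge of the shared stakes matters.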



Nathan: A question from the audience: Do you think that there's any way to tell right now whether the U.S. or the West (however you prefer to think about that), has an edge over China in developing AI? And do you think that there are political or cultural differences that contribute to that, if you think such a difference exists?



Brian: Just in terms of the potential for developing capable systems? We are not talking about safety and ethics, right?



Nathan: You can interpret the question [how you like].



Brian: Okay. I will focus on capabilities. Currently, it is quite clear to me that China is nowhere near the U.S. in terms of overall AI capabilities. People have argued this point at length elsewhere, so I would just add a few things.



If you look at the leadership structure of Chinese AI companies — for example, Tencent — and some of the recent developments, it seems like the incentive to develop advanced and interesting theoretical research is not really there. Chinese AI companies are much more focused on products and near-term profit.



One example I would give is the Tencent AI lab director, Dr. Tong Zhang, who was quite interested in ideas relevant to AGI and worked at Tencent for two years. He decided to leave the AI lab earlier this year and is now going back to academia. He is joining the Hong Kong University of Science and Technology as a faculty member. Even though he didn't explicitly mention the reason [for his departure], people think that the incentive to develop long-term, interesting research is not there at Tencent or, honestly, at many of the AI companies.



Another point I will raise is this: If you look at some of the U.S. AI labs — for example, FAIR or Google Brain — the typical structure is that you have two research scientists and one research engineer on a team. The number could be greater, but the ratio is usually the same. But the ratio of research scientists to research engineers is the opposite for Chinese AI companies. There, you have one research scientist and two research engineers, which implies that they are much more focused on putting their research ideas into practice and applications.



Nathan: That's a surprising answer to me because I think that the naive, “New York Times reader” point of view would be that the Chinese government is way better than the U.S. government in terms of long-term planning and priority-setting. If you agree with that, how do you think that translates into a scenario where the Chinese mega companies are maybe not doing as much as the American companies?



Brian: I think the Chinese model is still interesting from a long-term, mega-project perspective. But there is variance in terms of what type of mega-projects you are talking about. If you're talking about railways, bridges, or infrastructure in general, the Chinese government is incredibly good at that. China can put up buildings in days that would take the U.S., UK, and many other governments years. But those are engineering projects. We're not talking about Nobel Prize-winning types of projects. I think that's really the difference.



There is some analysis of where the top machine learning researchers (potential Turing Award winners) are working, and nearly all of them are in the U.S. But if you look at very good researchers more broadly, then yes, China has a lot of them. I think we have to be very nuanced about what types of scientific projects we are talking about, and whether they are mostly about scientific breakthroughs or engineering challenges.



Nathan: Fascinating. A bunch of questions are coming in. I'm going to do my best to get through as many as I can. One question is about the general fracturing of the world that seems to be happening, or bifurcation of the world, into a Chinese sphere of influence (which might just be China, or maybe it includes a few surrounding countries), and then the rest of the world. We're seeing Chinese technology companies getting banned from American networks, and so on. Do you think that that is going to become a huge problem? Is it already a huge problem, or is it not that big of a problem after all?



Brian: It's definitely concerning. My main concern is the impact on the international research community. [In my talk], I alluded to the international and interconnected community of research labs and machine-learning researchers. I believe that community will still be a good mechanism for coordinating on different AI policy issues — they would be great at raising concerns through the AI Open Letter Initiative, collaborating through workshops, and so on.



But this larger political dynamic might affect them in terms of Chinese scientists’ ability to travel to the U.S. What if they just can't get visas? And maybe in the future, U.S. scientists might also be worried about being associated with Chinese individuals. The thing I'm worried about is really this channel of communication between the research communities. Hopefully, that will change.



Nathan: You're anticipating the next question, which is the idea that individuals are maybe starting to become concerned that if they appear to be on either side of the China/America divide — if they appear too friendly — they'll be viewed very suspiciously and might suffer consequences from that. Do you think that is already a problem, and if so, what can individuals do to try to bridge this divide while minimizing the consequences that they might suffer?



Brian: It's hard to provide a general answer. It probably depends a lot on the career trajectories of individuals and other constraints.



Nathan: There’s a question about the Communist Party. The questioner assumes that the Communist Party has final say on everything that's going on in China. I wonder if you think that's true, and if it is, how do we work within that constraint?



Brian: In terms of international collaboration and what might be plausible?



Nathan: Is there any way to make progress without the buy-in of the Communist Party, or do you need it? And if you need it, how do you get it?



Brian: I think one assumption there is that it is bad to have involvement from the government. I can sense that assumption when people ask these types of questions, and I think we need to try to avoid it; it is not necessarily true. There are ways that the Chinese government can be involved meaningfully. We just need to be thinking about what those spaces are.



Again, one promising channel would be AI safety conferences through academia. If Tsinghua University is interested in organizing an AI safety conference with potential buy-in from the government, I think that's fine, and I think it's still a venue for research collaboration. The world just needs to think about what the mutual interests are and, honestly, the magnitude of the stakes.



Nathan: At a minimum, the Communist Party has at least demonstrated awareness of these issues and seems to be thinking about them. I think we're a little bit over time already, so maybe just one last question. Do you see this competition/cooperation dynamic, and potentially this race-to-the-precipice dynamic, getting repeated across a lot of technologies? There's AI, and obviously in an earlier era there was nuclear rivalry, which hasn't necessarily gone away either. We also saw the news of the first CRISPR-edited babies, which was a source of a lot of concern for people who thought, "We're losing control of this technology." So, what's the portfolio of these sorts of potential race-dynamic problems?



Brian: I think these are relevant historical analogs, but what makes AI a little bit different is that AI is a general-purpose technology, or omni-use technology. It's used across the economy. It's a question of political and economic [importance], not just international security. It's not just a nuclear weapon or a space weapon. It’s everywhere. It's more like electricity during the Industrial Revolution.



One thing that I want to add, which is related to the previous question, is the response from Chinese scientists to the gene-editing incident. Many people condemned the behavior of the scientist [responsible for the gene editing] because he didn't [comply fully] with regulations and was doing the work at a small lab in the city. But what you can see there is the uniformity of the international response to the incident; the responses from U.S. scientists, UK scientists, and Chinese scientists were basically the same. There was an open letter to Nature, with hundreds of Chinese scientists saying that this behavior is unacceptable.



What followed was that the Chinese government moved to develop better regulations for gene editing and [explore] the relevant ethics. I think this illustrates that we can have a much more global dialogue about ethics and safety in science and technology. In some cases, the Chinese government is interested in joining this global dialogue and takes action in its domestic policy.