This presents issues for the US and China, and even bigger issues for other countries, when it comes to redistributing the gains from automation and reducing inequality. If these companies continue to take a larger and larger share of the global economy, the gap between tax revenues in China or America and tax revenues everywhere else becomes a bigger and bigger issue for politicians.

Kai-Fu Lee, formerly of Google China and now a leading venture capitalist in Beijing, presents a bleak view of how this plays out for countries that are not the US or China:

"[I]f most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers, what options will they have? I foresee only one: Unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their A.I. software — China or the United States — to essentially become that country’s economic dependent, taking in welfare subsidies in exchange for letting the “parent” nation’s A.I. companies continue to profit from the dependent country’s users. Such economic arrangements would reshape today’s geopolitical alliances."

This kind of dependency would be tantamount to a new kind of colonialism.

We can see small examples of new geopolitical relationships emerging. In March, Zimbabwe’s government signed a strategic cooperation framework agreement with a Guangzhou-based startup, CloudWalk Technology, for a large-scale facial recognition program. Zimbabwe will export a database of its citizens’ faces to China, allowing CloudWalk to improve its underlying algorithms with more data and allowing Zimbabwe to get access to CloudWalk’s computer vision technology. This is part of the Chinese government’s much broader Belt and Road Initiative.

There are historical parallels in all of this with the development of the oil industry. As Daniel Yergin explains in his masterful history of oil:

“two contradictory, even schizophrenic, strands of public policy towards the major oil companies have appeared and reappeared in the United States. On occasion, Washington would champion the companies and their expansion in order to promote America’s political and economic interests, protect its strategic objectives, and enhance the nation’s well-being. At other times, these same companies were subjected to populist assaults against “big oil” for their allegedly greedy, monopolistic ways and indeed for being arrogant and secretive”.

My prediction is that domestic antitrust action against Google and Amazon will not materialise, because for now Washington will care more about strengthening its hand against China. The notes Mark Zuckerberg prepared for his Senate hearing capture this pithily:

“Break up FB? US tech companies key asset for America, break up strengthens Chinese companies.”

What can countries that aren’t China or America do?

To answer that question we need to consider the resources that are important to a country in the race to develop a leading position in AI:

Compute. The compute resources associated with machine learning progress are increasing rapidly. Consider, for example, this OpenAI analysis. While compute costs run into the hundreds of millions for the leading machine learning corporations, this is still small compared to government budgets, so in theory smaller states like Germany, Singapore, the UK or Canada can compete head to head with the US and China.
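As a back-of-the-envelope illustration of how fast this trend compounds: OpenAI’s analysis found the compute used in the largest training runs doubling roughly every 3.5 months. A minimal sketch of what that implies, assuming (and it is only an assumption) that the doubling time holds constant:

```python
def compute_multiple(years, doubling_months=3.5):
    """Factor by which frontier training compute grows over `years`,
    assuming a constant doubling time (here ~3.5 months, per OpenAI's
    2018 analysis; the trend may not continue)."""
    return 2 ** (years * 12 / doubling_months)

# Under this assumption, frontier compute grows roughly tenfold per year:
one_year = compute_multiple(1)    # ~10x
five_years = compute_multiple(5)  # over 100,000x
```

If anything like this trend holds, today’s hundred-million-dollar compute budgets are quickly dwarfed, which is why state-scale budgets could still matter.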

Deeply specific talent. At present, progress in machine learning is very sensitive to a talent pool that is microscopically small compared to the world’s population. There are perhaps 700 people in the world who can contribute to the leading edge of AI research, perhaps 70,000 who can understand their work and participate actively in commercialising it and 7 billion people who will be impacted by it. There are parallels with nuclear weapons, where the pool of scientists like Fermi, Szilard, Segre, Hahn, Frisch and Heisenberg capable of designing an atomic bomb was incredibly small compared to the consequences of their work. This suggests that specific talent could be a huge determinant in any AI arms race. China certainly thinks so. In this regard, some smaller countries--notably the UK and Canada--punch massively above their weight.

General STEM talent. The alternative is that you don’t need a Fermi or an Oppenheimer, you just need a lot of competent engineers, mathematicians and physicists. If so, the balance tips in favour of the largest, most-developed countries, with the US and China squarely at the forefront.

Adjacent technologies. I have restricted this discussion to machine learning, but it is worth noting that there are various technologies that could contribute to progress in machine learning. For example, if quantum computing enables a breakthrough in computing power, this would further accelerate progress in machine learning. A state’s ability to win an AI arms race will be partly enabled by a broader set of technology investments, in particular in software and semiconductors.

Political environment. Clearly, any state action around AI will consume a portion of the leadership’s political capital and will trade off against other key issues consuming the country. If a country’s political leadership is absorbed by dealing with another form of instability--for example climate change or Brexit--then it will be harder for them to focus attention on AI.

The strange case of the UK

My interest in this topic partly stems from my concern that the UK government is not getting its AI strategy right.

The UK finds itself in the fortunate position of having DeepMind--arguably the most important AI lab on the planet--headquartered in London. DeepMind has the magical combination of visionary, exceptional leadership in Demis Hassabis, Shane Legg and Mustafa Suleyman as well as the greatest density of AI research talent in the world. If humanity builds Artificial General Intelligence, many of the deepest thinkers on the topic believe that it will happen in King’s Cross. If you were looking for a domestic champion for the UK, you would be hard pressed to find a better candidate.

However, DeepMind is no longer an independent British company. It was acquired by Google in 2014 for £400 million at a critical inflection point: after their success with Atari DQN, but before the big AlphaGo/AlphaZero breakthroughs. It was a brilliant acquisition. In general, it appears that Google has been an excellent parent company for DeepMind, providing substantial resources to increase both the compute spend and the talent base (reported by Quartz as $160 million in 2016) as well as being able to tap into Google’s existing talent in machine learning--for example the Google Brain team. For a pre-revenue startup, remaining independent would have required DeepMind to raise close to half a billion dollars between 2014 and now to execute a similar plan. Today, in the middle of a bull market for AI startups, that seems reasonable, but looking back at 2014--before SoftBank’s Vision Fund and the escalation in huge growth rounds for pre-revenue companies--it would have been a tall order. Ultimately, DeepMind probably chose the highest impact and ambition path available to them in 2014 by selling to Google. I have always had enormous respect for Google and the principled and visionary leadership there is likely a very good fit with the DeepMind culture.

However I find it hard to believe that the UK would not be better off were DeepMind still an independent company. How much would Google sell DeepMind for today? $5 billion? $10 billion? $50 billion? It’s hard to imagine Google selling DeepMind to Amazon, or Tencent or Facebook at almost any price. With hindsight, would it have been better for the UK government to block this acquisition and help keep it independent? Even now, is there a case to be made for the UK to reverse this acquisition and buy DeepMind out of Google and reinstate it as some kind of independent entity?

The two main political parties in the UK both struggle with this kind of question for different reasons. The Conservative MPs I have spoken to about this topic will always cite the troubled history of British Leyland; that spectre of failed market interference still looms large over their thinking. They remain convinced that the only path is laissez-faire economics.

The Labour party has a different challenge. They assert the importance of state action, for example Jeremy Corbyn’s desire to nationalise railways, water and energy companies. But this thinking focuses on those historic battles over privatisation and doesn’t look to the future. Corbyn and McDonnell today are more interested in Great Western Rail than DeepMind.

All of this is further complicated by the fact that the government is hugely distracted by Brexit.

DeepMind is not the only example of an exceptional British company working on cutting edge machine learning. The UK has made many fundamental contributions to the field of machine learning and is home to some of the world’s very best universities for machine learning research including Cambridge, Edinburgh, Imperial, Oxford and UCL. With the growth of the UK’s startup sector over the past decade, there are now many great teams working to combine the UK’s expertise in building great technology companies like Arm, and its academic talent in machine learning. Prowler is applying reinforcement learning to the general field of decision making. Graphcore is building a new type of processor for machine learning. Ocado is arguably the most sophisticated global player in warehouse automation after Amazon. DarkTrace is one of the leading companies applying machine learning to cybersecurity. Benevolent is doing pioneering work in applying machine learning to drug discovery. All these companies are growing incredibly quickly, doing transformational work in their fields and building deep talent pools. They are all still independent startups. What will the UK government do when Amazon, Google or Tencent make them a multi-billion dollar offer? At present, nothing. This is a good thing if you’re Google, Amazon or Alibaba looking to further cement your position and indirectly a good thing for the US or China. Is it a good thing for the average UK citizen?

Rogue actors

Most of this essay has focused on the national interests of countries. There are other non-state political actors who also have to be considered--for example terrorist cells or rogue states. This is most relevant when it comes to machine-learning-enabled cyberattacks and autonomous weaponry. Those interested in learning more will find these risks covered well in this report on malicious uses of AI. The key question for me is the extent to which key labs, corporations or nation states ‘go dark’ in terms of publishing AI research to avoid enabling malicious actors. The risk is well captured by Allan Friedman in Cybersecurity and Cyberwar:

“To make a historic comparison, building Stuxnet the first time may have required an advanced team that was the cyber equivalent to the Manhattan Project. But once it was used, it was like the Americans didn’t just drop this new kind of bomb on Hiroshima, but also kindly dropped leaflets with the design plan so anyone else could also build it, with no nuclear reactor required… the proliferation of cyber weapons happens at Internet speed”

This is also complicated by the fact that cyberattacks may not be as easily identified:

“The problem is that, unlike in the Cold War, there is no simple bipolar arrangement, since, as we saw, the weapons are proliferating far more widely. Even more, there are no cyber equivalents to the clear and obvious tracing mechanism of a missile’s smoky exhaust plume heading your way, since the attacks can be networked, globalized, and of course, hidden. Nuclear explosions also present their own, rather irrefutable evidence that atomic weapons have been used, while a successful covert cyber operation could remain undetected for months or years”

The most likely outcome here is that certain key machine learning research ceases to be shared in the public domain to avoid enabling malicious actors. This thinking is captured most clearly in OpenAI’s recent charter:

“We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.“

If we do see key research labs or countries ‘go dark’ on some of their research output, a Cold War dynamic could emerge that will reward the most established and largest state or corporate actors. Ultimately, this reinforces the AI Nationalism dynamic.

The great wall of money

So far the amount invested by states is an order of magnitude lower than that of Google, Alibaba and their peers. McKinsey estimates that the largest technology multinationals spent $20-30 billion on AI in 2016.

I believe that current government spending on AI is tiny compared to the investment we will see as governments come to realise what is at stake. What if, rather than spending ~£500 million of public money on AI over a number of years, the UK spent something closer to its annual defence budget of £45 billion?

Consider again the parallel with nuclear weapons, where the US government went from ignoring key scientists like Leo Szilard to recognising the existential importance of nuclear weapons to initiating the Manhattan Project. The Manhattan Project went from employing zero people in 1941 to, within three years, spending $25 billion (in 2016 dollars), employing over 100,000 people and building industrial capacity as large as the entire US automobile industry. States have tremendous inertia, but once they move they can have incredible momentum.

If this happens, then the amount of investment in AI research and commercialisation could be 10-100X what it is today. It is not always the case that more funding enables more progress but nonetheless I think it is prudent to assume that if states substantially increase their investment in machine learning then progress is likely to speed up further. This only reinforces the importance of investing now in research that helps to mitigate risks and ensure that these developments go well for humanity.

Engineers without borders

It is also worth acknowledging that there are connections that transcend the state and nationalism as Jeff Ding notes in his excellent report “Deciphering China’s AI Dream”:

“It is important to consider the interdependent, positive-sum aspects of various AI drivers….Cross-border AI investments, with respect to the U.S. and China, have significantly increased in the past few years. From 2016 to 2017, China-backed equity deals to U.S. startups rose from 19 to 31 and U.S.-backed equity deals to Chinese startups quadrupled from 5 to 20. Moreover, what is often forgotten is the fact that both Tencent and Alibaba are multinational, public companies that are owned in significant portions by international stakeholders (Naspers has a 33.3% stake in Tencent and Yahoo has a 15 percent stake in Alibaba).”

It is also true that economies and fundamental science and technology progress do not neatly track state borders. Talent and capital are global: DeepMind’s initial investors were from Silicon Valley and Hong Kong, their team is extremely international and they now have offices in Canada and France. There is a weakness to viewing things too narrowly through a state-centric lens. However, I believe that overall the economic and military consequences of machine learning will be such a dramatic cause of instability that nation states will be forced to put their citizens ahead of broader goals around internationalism.

Up until now I have just tried to outline what I think will happen: machine learning becomes a huge differentiator between states--economically, militarily and technologically--and triggers an arms race, which causes progress in AI to accelerate further.

However, there is a difference between predicting that something will happen and believing it is a good thing. Nationalism is a dangerous path, particularly when the international order and international norms will be in flux as a result. George Orwell, writing on nationalism in 1945, captures the tension between a patriotism that is primarily defensive and a nationalism that seeks to dominate:

“Nationalism is not to be confused with patriotism. Both words are normally used in so vague a way that any definition is liable to be challenged, but one must draw a distinction between them, since two different and even opposing ideas are involved. By ‘patriotism’ I mean devotion to a particular place and a particular way of life, which one believes to be the best in the world but has no wish to force on other people. Patriotism is of its nature defensive, both militarily and culturally. Nationalism, on the other hand, is inseparable from the desire for power. The abiding purpose of every nationalist is to secure more power and more prestige, not for himself but for the nation or other unit in which he has chosen to sink his own individuality.”

Personally, I believe that AI should become a global public good--like GPS, HTTP, TCP/IP, or the English language--and the best long term structure for bringing this to fruition is a non-profit, global organisation with governance mechanics that reflect the interests of all countries and people. The best shorthand I have for this is some kind of cross between Wikipedia and the UN. One organisation that has made a step in this direction is OpenAI, which operates as a non-profit entity focused on AI research. This doesn’t solve many of the economic issues around machine learning that I have discussed in this essay, but it is a great improvement on machine learning research being primarily the economic domain of large technology companies and the military domain of nation states.

While the idea of AI as a public good provides me personally with a true north, I think it is naive to hope we can make a giant leap there today, given the vested interests and misaligned incentives of nation states, for-profit technology companies and the weakness of international institutions. I believe that we are likely to go through a period of AI Nationalism before we get to a place where AI is treated like a public good, and that, to use Orwell’s distinction, a kind of AI Patriotism is likely to be a good thing for smaller countries in the short term.

Taking the example of the UK again, I am in favour of a more expansive national AI strategy to protect the UK’s economic, military and technological interests and to give the UK a credible seat at the table when global issues around AI are being worked out. That will help ensure that the UK’s economic interests and values are considered. I believe that the stronger the position of smaller countries like the UK, Canada, Singapore or South Korea in the short term, the more likely we are to move in the longer term to AI as a global public good. For that reason I believe it is necessary for the UK government to take steps towards investing in and protecting its homegrown AI companies and institutions to allow them to play a larger role on the world stage independent of America and China. I have lived in both America and China, and during that time developed enormous respect and affection for both of those countries. That does not prevent me from believing the UK should protect the economic interests of its citizens and I would like to see the UK play a material role in shaping the future of AI. Once again I come back to DeepMind - I believe that the UK and the world would be in a better place were DeepMind to be an independent entity. Ideally, in the longer term as a non-profit, international organisation focused on AI as a global public good.

During the coming phase of AI Nationalism that this essay predicts, I believe we need a simultaneous investment in organisations and technologies that can counterbalance this trend and drive an international rather than national agenda. Something analogous to The Baruch Plan led by organisations like DeepMind and OpenAI. I plan to write more about that soon.