Of the 20-odd public figures gunning for the Democratic presidential nomination this U.S. election cycle, Andrew Yang stands out for his decidedly progressive views. The New York native’s signature policy — one inspired by futurist Martin Ford’s book Rise of the Robots — is what he calls the Freedom Dividend, a form of universal basic income that would guarantee every American aged 18 and older $1,000 per month. Yang has also proposed a federal agency devoted to regulating the “addictive nature” of media, making Tax Day a national holiday, and limiting the private sector work of federal investigators after they leave public office in an effort to curb corruption.

Yang — a graduate of Brown University and Columbia Law School — worked in the health care industry for four years before assuming the role of CEO at test preparation company Manhattan Prep in 2006. In 2009, the company was acquired by testing giant Kaplan, after which Yang launched Venture for America, a fellowship program with the mission “to create economic opportunity in American cities by mobilizing the next generation of entrepreneurs.” The nonprofit grew from an annual operating budget of $200,000 in 2012 to $6 million in 2017, and it now has a presence in about 20 U.S. cities and a startup accelerator in Detroit.

Moments before Yang stepped onstage last week to deliver a speech at the Techonomy 2019 conference in New York City, VentureBeat spoke with him about the U.S.’ role in fostering a thriving AI research community, unregulated AI’s potential to do harm, and policies that might ease the coming AI-driven job market transformation.

American AI Initiative

In February, President Trump signed an executive order establishing a program called the American AI Initiative, which codifies several White House proposals made in spring 2018 during an informal summit on AI. Specifically, it tasks federal agencies with devoting additional resources to AI research and training, and it instructs the White House Office of Science and Technology Policy (OSTP), the National Institute of Standards and Technology (NIST), and other departments to draft standards guiding the development of “reliable, robust, trustworthy, secure, portable, and interoperable AI systems” and to create AI fellowships and apprenticeships. In the future, these agencies will be required to make a good-faith effort to provide data, computing resources, and models to AI researchers whenever possible and to “prioritize AI” investments in their budgets.

Yang says the American AI Initiative is “better than nothing” and “very welcome,” but he points out that it fails to devote the meaningful resources needed to bolster the U.S.’ leadership in AI. “We have to be realistic about what actually happened — the announcement amounted more to an assignment of agencies,” he told VentureBeat. “If you were to make a list of the top folks in AI, a very, very low proportion of them work in the government at present. And if you were to dig into one of these agencies and see what they’re doing on AI, you would [probably] find that they’re doing very little, in most places.”

With the signing of the American AI Initiative, the U.S. joined the dozens of other countries that have launched national AI guidelines and policies, most of which have outstripped the U.S. with respect to the amount of capital they’ve set aside for research. For instance, Canada’s Pan-Canadian Artificial Intelligence Strategy is a five-year, $94 million (CAD $125 million) plan to invest in AI research and talent, and the EU Commission has committed to increasing its investment in AI from $565 million (€500 million) in 2017 to $1.69 billion (€1.5 billion) by the end of 2020. China’s AI plan is perhaps the most ambitious: In two policy documents, “A Next Generation Artificial Intelligence Development Plan” and “Three-Year Action Plan to Promote the Development of New-Generation Artificial Intelligence Industry,” the Chinese government laid out a roadmap for cultivating an AI industry worth roughly $147 billion by 2030.

“The top minds in the field aren’t working for the government trying to make breakthroughs — they’re working for Google, OpenAI, and other Silicon Valley giants,” said Yang. “It’s a very different model than what’s happening in China, where China is essentially writing blank checks to private firms and saying, ‘Hey, if you need computing, resources, and infrastructure, we’ve got billions of dollars in infrastructure that you can utilize.'”

He’s not wrong. Analysts at CB Insights report that 11% of all AI startups are headquartered in Chinese cities, a share equaled only by Israel, and that China-based firms SenseTime and Megvii are two of the best-funded AI companies globally, owing in large part to government contracts. SenseTime and Megvii — both of which develop computer vision technologies, principally in the areas of facial recognition and object detection — have so far attracted over $1.4 billion and $2.6 billion in capital, respectively, according to Crunchbase. By comparison, one of the top-funded stateside AI firms — self-driving delivery vehicle startup Nuro — has raked in just over $1 billion in equity financing.

Moreover, according to the Allen Institute for Artificial Intelligence, a Seattle-based nonprofit AI research organization, China has published more AI academic papers than the U.S. since 2005. A more recent analysis of more than 2 million publications forecasts that China’s share of top research contributions will equal that of the U.S. by 2020.

One thing that gives Chinese firms a leg up on their American rivals is liberal access to data, according to Yang. “[China’s] privacy regulations are very lax, and their attitude[s] toward sharing one’s information are different in China, so they can feed their AI more data and make their algorithm[s] smarter, stronger, and faster.”

That’s why Yang says it’s critical that the U.S. government “step up and help,” perhaps by contributing capital in exchange for “a bead on what’s happening at the frontier of development in AI.”

“We [need to] make sure that nothing destructive comes out that could be negative for our citizens,” said Yang, perhaps alluding to the Defense Innovation Board — an advisory commission that seeks to bring technological innovation to the U.S. military — and its ongoing effort to formulate AI ethics recommendations for the Department of Defense (DoD). “The U.S. model of letting private companies lead the charge works phenomenally well for [some things and] not as well for others, and there’s reason to believe that in the development of AI, the organizations with the most data are going to make the most headway more quickly.”

“We need to try and avoid what’s considered an AI arms race with China, and the best way to avoid an arms race dynamic is to maintain a spot at the leadership table,” said Yang. “And if we fall behind significantly, then it’s going to be much harder for us to be collaborative. I believe [this is something that] would be welcomed by many leaders in Silicon Valley because they recognize that this is not a domestic industry — that this is going to be global.”

Progress and setbacks

Yang notes that there have been small but meaningful steps in the right direction.

The U.S. military last month opened its Joint AI Center and published an AI strategy adherent to a 2012 DoD directive that limits the use of AI in weaponry. Separately, the U.S. Senate is considering legislation like the Algorithmic Accountability Act, which would require companies to study and fix inaccurate or biased algorithms, and the Commercial Facial Recognition Privacy Act, which would ban users of commercial face recognition from identifying or tracking consumers without their consent. More recently, the City of San Francisco prohibited the deployment of facial identification systems by the police and other agencies.

On the private sector side of the equation, Google CEO Sundar Pichai last summer laid out the company’s AI principles, which include a ban on the creation of autonomous weaponry, and the company committed in December to refrain from offering a “general-purpose” facial recognition service until “challenges” have been “identif[ied] and address[ed].” Microsoft similarly revealed last year that it turned down requests to install facial recognition technology where there might be human rights risks, and the company said it canceled a contract to supply processing and AI tools to U.S. Immigration and Customs Enforcement (ICE).

But setbacks threaten to derail the progress that’s been made, warns Yang.

At a city council hearing in March, New York City officials said they had not yet arrived at a definition of automated decision systems (ADS) — the tools that the city’s Automated Decision Systems Task Force was created to study — and that they hadn’t identified a system the task force could study in detail.

Reports last year revealed that Project Maven, a secretive Pentagon contract that attracted participation from companies like Google, Oculus founder Palmer Luckey’s Anduril Industries, Clarifai, and IBM, sought to use AI to improve object recognition in military drones. And Microsoft’s Research Asia division, which works with AI researchers affiliated with the Chinese military, has been accused of complicity in human rights abuses.

Meanwhile, Clarifai late last year announced the formation of a Washington, D.C. subsidiary — Neural Net One — that will in part handle defense and intelligence contracts, sparking concern among some employees. Last summer, Amazon supplied Rekognition, a cloud-based image analysis technology that an MIT study found to be biased against certain genders and ethnicities, to law enforcement agencies. And Peter Thiel-backed Palantir, a startup that provides predictive policing solutions to local law enforcement, has been accused of perpetrating “racist feedback loop[s]” in which police surveil groups of people based on data generated by their own racially biased policing, creating more monitoring and thereby more arrests.

According to Yang, the government’s ethically regressive policies — produced in the absence of a sound AI strategy — have led to its complicity in advancing potentially dangerous AI. “[T]he U.S. government is clearly 24 years behind [the times], because they literally got rid of the Office of Technology Assessment in 1995,” said Yang. “So we’ve essentially been flying blind when it comes to technology for two and a half decades. There’s some catching up to do.”

Universal Basic Income

Yang believes this is particularly true when it comes to the job market, which he says will be — and already has been — transformed by AI. Analysts agree: In a report late last year, Forrester found that automation could eliminate 10% of U.S. jobs this year. The World Economic Forum, PricewaterhouseCoopers, McKinsey Global Institute, and Gartner have forecast that AI could make redundant as many as 75 million jobs by 2022. And in a recent study, the U.K.’s Office for National Statistics (ONS) found that of the 19.9 million people whose jobs it analyzed in 2017, approximately 7.4% are at high risk of seeing their work automated in the coming years.

“[There’s] low-hanging fruit that [automation is] going to knock out soon, like replacing call center workers and truck drivers,” said Yang. “That’s in part because there are powerful economic incentives driving the development.”

That’s why Yang is advocating the Freedom Dividend. He asserts that the monthly payments — which would be paid for by a value-added tax, a consumption tax levied on goods at each stage of their production — could kickstart economic development in regions of the country that haven’t benefited from a wellspring of venture capital. “One reason why people are so animated about breaking up the biggest tech companies is because they know that, in Silicon Valley, the primary business model is to get bought by one of those big tech companies, rather than trying to build a company that’s going to last for decades,” said Yang.
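To make the funding mechanism concrete, the sketch below shows how a value-added tax collects revenue at each stage of production rather than only at the register. The stages, dollar amounts, and 10% rate are hypothetical illustrations, not figures from the article or the Yang campaign:

```python
# Illustrative sketch of how a value-added tax (VAT) is levied at each
# stage of a good's production. All stages, prices, and the 10% rate
# are hypothetical numbers chosen for clarity.

VAT_RATE = 0.10  # assumed 10% rate for illustration

# Each seller owes tax only on the value it adds: its sale price minus
# what it paid for its inputs. Tuples are (stage, input cost, sale price).
stages = [
    ("raw materials supplier", 0, 40),
    ("manufacturer", 40, 70),
    ("retailer", 70, 100),
]

total_vat = 0.0
for name, bought_at, sold_at in stages:
    value_added = sold_at - bought_at
    vat_due = value_added * VAT_RATE
    total_vat += vat_due
    print(f"{name}: value added ${value_added}, VAT due ${vat_due:.2f}")

# The stage-by-stage collections sum to the rate times the final retail
# price — the same total burden as a 10% sales tax on the $100 good,
# but collected incrementally along the supply chain.
assert abs(total_vat - 100 * VAT_RATE) < 1e-9
print(f"Total VAT collected: ${total_vat:.2f}")
```

Because each firm remits tax only on its own markup, the revenue base is broad and hard to avoid, which is why VATs are a common proposed funding source for large transfer programs.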

He’s far from the first to voice support for a universal basic income. Martin Luther King Jr., Mark Zuckerberg, Richard Branson, and Elon Musk have championed the idea, with Musk going so far as to suggest that the job loss from automation will be so severe that some form of social safety net will be necessary to support society. Bill Gates has suggested implementing a “robot tax,” whereby the government would extract a fee every time a business replaces an employee with automated software or machines. Along similar lines, the EU considered — but ultimately rejected — a draft plan that would have seen robot workers classed as “electronic persons,” with their owners liable to pay social security for them.

Indeed, a semblance of a universal basic income program is ongoing in Alaska, where the government has invested money in a diversified asset portfolio (the Alaska Permanent Fund) and distributed the earnings among the residents of the state for the better part of four decades. Namibia launched a UBI pilot project in 2009, followed a decade later by the city of Stockton, California. And this year, Y Combinator‘s research arm — YC Research — will begin conducting a UBI study involving 3,000 people in partnership with the University of Michigan’s Survey Research Center.

Some of these programs have achieved success, but not all. Since the implementation of Alaska’s policy, the state has created more than 10,000 new jobs, and the Namibian government reports that its pilot resulted in an 18% dip in general poverty levels. However, participants in Finland’s two-year trial were no more likely to find work than before they started receiving payments.

Still, Yang thinks that the dividend is “[t]he best way” to encourage entrepreneurship around the country. “[It] would enable more Americans to take risks, have a cushion, and be able to figure out what kind of work they want to do and what sort of work their community needs,” he said. “I think managing the transition [to a more automated workforce] is one of the great projects of this age and that the U.S. in particular is behind the curve. [W]e have to get our act together.”