The Chinese government has wholeheartedly embraced surveillance technology to exercise control over its citizenry in ways both big and small. It is scanning the faces of passers-by to arrest criminals at train stations, gas stations, and sports stadiums, and publicly broadcasting the names of individual jaywalkers. Government-maintained social credit scores affect Chinese citizens’ rights and privileges if they associate with dissidents. In Tibet and Xinjiang, the government is using facial recognition and big data to surveil the physical movements of ethnic minorities, individually and collectively, to predict and police demonstrations before they even start. China is even using facial recognition to prevent the overuse of toilet paper in some public bathrooms.

We may soon see dictators in other countries use these sorts of tools, too. If American cities and states are laboratories of democracy, China’s remote provinces have become laboratories of authoritarianism. China is now exporting a suite of surveillance, facial recognition, and data tools that together equip governments to repress citizens on a scale, and with a ruthless algorithmic effectiveness, that previous generations of strongmen could only dream of. Call it algorithmic authoritarianism. Where yesterday’s strongmen were constrained by individual informants and case-by-case sleuthing, tomorrow’s authoritarians will, like China, be able to remotely identify thousands of specific individuals in public via cameras, track them constantly, and use unprecedented artificial intelligence and computing power to crunch surveillance information and feed it back into the field in real time. This technology is still being applied imperfectly and inconsistently, but China is working to close the gaps. And even the perception of surveillance where none exists has been shown to shape behavior.

The limits of China’s willingness to use these tools at home, or to export them to others, are unknown. Worse still, China’s digital authoritarianism could emerge as an exportable model, appealing to leaders on the fence about democratic norms and capable of undercutting, or even rivaling, liberal democracy.

Part of what makes technologically enabled authoritarianism so complex is that the same tools hold immense promise to serve customers and citizens well. They are double-edged swords. Consider the surreal case of the California-based suspected serial killer apprehended after a relative voluntarily submitted DNA to an online ancestry database, where it matched genetic material from a crime scene, or the accused Maryland newsroom shooter quickly identified by facial recognition. Or the lower-caste Indians who can now receive government benefits thanks to India’s national ID program, Aadhaar, which relies on a database of iris scans and fingerprints collected from more than 1 billion Indians in a country where hundreds of millions previously lacked state identity cards. Or the Londoners kept safe by massive numbers of CCTV cameras. Or the predictive policing pilot launched in New Orleans with the pro bono help of Palantir. Yet even in democracies with meaningful legal checks on state power, leveraging A.I. for policing often suffers from a lack of transparency and citizen input, and carries a serious risk of biased enforcement and overreach.

China and a few small, advanced, authoritarian states such as Singapore (which is soliciting Chinese bids to install 110,000 advanced facial recognition sensors on the small city-state’s lampposts) and the United Arab Emirates are at the forefront of the application of these technologies. But as China embarks on a trillion-dollar global infrastructure construction binge known as the Belt and Road Initiative, it is already exporting its own tech-enabled authoritarian toolkit to gain profit or goodwill with local authorities, or simply to extend the reach of its own surveillance.

What happens when these technologies migrate to bigger, more fractious societies? This won’t happen overnight—and the financial and logistical obstacles to broad implementation are significant. But there is every reason to think that in a decade or two, if not sooner, authoritarians and would-be strongmen in places like Turkey, Hungary, Egypt, or Rwanda will seek these tools and use them to thwart civil society and crush dissent in ways that weaken democracy globally.

Already there are reports that Zimbabwe, for example, is turning to Chinese firms to implement nationwide facial recognition and surveillance programs, wrapped into China’s infrastructure investments and a larger set of security agreements, including for policing online communication. The acquisition of black African faces will also help China’s tech sector train its facial recognition algorithms on a more diverse data set.

Malaysia, too, announced new partnerships this spring with China to equip its police with wearable facial recognition cameras. There are quiet reports of Arab Gulf countries turning to China not just for the drone technologies America has denied them but also for the authoritarian suite of surveillance, recognition, and data tools perfected in China’s provinces. In a recent article on Egypt’s military-led efforts to build a new capital city beyond Cairo’s chaos and revolutionary squares, a retired general acting as project spokesman declared that “a smart city means a safe city, with cameras and sensors everywhere. There will be a command center to control the entire city.” Who is financing construction? China.

While many governments are attempting to secure this information, there have been several alarming data leaks; in 2016, the personal details of nearly 50 million people in Turkey were leaked. Moreover, these national identifiers create an unprecedented opportunity for state surveillance at scale. What happens when biometric information is collected in nondemocratic regimes?

Now is the time for those invested in individual freedom—in government, in civil society, and in the tech sector—to be thinking about the challenges ahead.

This starts with basic transparency and awareness at home, in international fora, and ultimately inside nations deciding how and whether to adopt the tools of algorithmic authoritarianism. Diplomats, CEOs, activists, and others will need to use their various bully pulpits to reach members of the public. Oversight bodies like the U.S. Congress and the European Parliament should convene hearings to hold tech companies and government agencies accountable for their role in exporting elements of the authoritarian toolkit in search of profit or market share. Forging reasonable, balanced approaches to these new technologies at home will be a crucial part of pushing other states to do the same. As a recent blog post from Microsoft President Brad Smith, essentially calling for intensive study and regulation of this space, makes clear, now is the time to bridge the tech-policy divide and find feasible, ethical solutions.

Reaching beyond established democracies to set international norms and standards will be difficult, but it is essential to try. An international body, whether the European Union or the United Nations, will need to put forward a set of best practices for protecting individual rights in an era of facial recognition. Companies and countries alike can band together to commit to protecting citizens by placing limits on facial recognition, by offering the right in some instances to opt out of sharing biologically identifiable information, as Indians fought for and won, or by protecting identifying data on the back end.

China and other determined authoritarian states may prove undeterrable in their zeal to adopt repressive technologies. A more realistic goal, as Georgetown University scholar Nicholas Wright has argued, is to sway countries on the fence by pointing out the reputational costs of repression and by supporting those within those countries who are advocating for civil liberties in this domain. Democracy promoters (which we hope will one day again include the White House) will also need to recognize the coming changes to the authoritarian public sphere. They can start now by helping vulnerable populations and civil society groups gain the technological literacy to advocate for their rights in new domains. It is not too early for governments and civil society groups alike to study what technological and tactical countermeasures exist to circumvent and disrupt new authoritarian tools.

Everyone will have to approach these developments with the humbling recognition that Silicon Valley is not the only game in town. Regardless of what happens stateside or in Europe, China will have formidable and growing indigenous capabilities to export to the rest of the world.

Seven years ago, techno-optimists expressed hope that a wave of new digital tools for social networking and self-expression could help young people in the Middle East and elsewhere find their voices. Today, a new wave of Chinese-led technological advances threatens to produce what we consider an “Arab Spring in reverse,” in which the next digital wave shifts the pendulum back, enabling state domination and repression at a staggering scale and with algorithmic effectiveness.

Americans are absolutely right to be urgently focused on countering Russian weaponized hacking and leaking as its primary beneficiary sits in the Oval Office. But we also need to be more proactive in countering the tools of algorithmic authoritarianism that will shape the worldwide future of individual freedom.