These days, advancements in artificial intelligence are not only making rich people billions of dollars, but inspiring wild-eyed fear-mongering about the end of civilization. Those concerned include Elon Musk, who has said that the technology could eventually produce an “immortal dictator,” and the late Stephen Hawking, who warned that the sudden explosion of artificial intelligence could be “the worst event in the history of our civilization.” Generally, the fear is that we will produce machines so intelligent that they are capable of becoming smarter and smarter until we no longer have control over them. They will become a new form of life that will rule over us the way we do the rest of the animal kingdom.

As a professional in the AI industry, I can tell you that given the state of the technology, most of these predictions take us so far into the future that they’re closer to science fiction than reasoned analysis. Before we get to the point where computers have an unstoppable “superintelligence,” there are much more pressing developments to worry about. The technology that already exists, or is about to exist, is dangerous enough on its own.

Let me focus on some real-world developments that are terrifyingly immediate. Of the many kinds of artificial neural networks (algorithms modeled on a rough approximation of how groups of neurons in your brain operate, and which make up much of what is commonly called AI), I will focus on two: Generative Adversarial Networks (GANs) and Recurrent Neural Networks (RNNs).

GANs are good at making counterfeit images, and thus videos as well. A GAN is made up of two neural networks that are “trained” on what a certain thing looks like: a bathroom, an animal, a person of a certain identity. One network, the generator, produces new images of the thing on its own. The other, the discriminator, is presented with a stream of these counterfeit images, with real images interspersed, and tries to guess which are fakes. No human referee is needed: since the training system knows which images were real, each network’s successes and failures follow automatically from the discriminator’s guesses. Each then adjusts itself to do better, and they push each other to greater and greater heights of success. RNNs work with data that exists as an ordered sequence, such as a record of daily high temperatures in a city, or the words in a paragraph. Processing and generating written and spoken language are two of the tasks RNNs are most commonly used for.
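The adversarial tug-of-war can be sketched in a few lines. This is a toy illustration, not production code: the “generator” here has just two parameters and learns to counterfeit numbers drawn from a simple bell curve, while a logistic “discriminator” learns to tell real samples from fakes. Every name in it is my own invention; the point is only that each side’s training signal comes from the other side’s current behavior, with no human in the loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples of the thing we want to imitate (here, a bell
# curve centered at 3 stands in for, say, photos of bathrooms).
def real_batch(n):
    return rng.normal(3.0, 1.0, n)

# Generator: turns random noise into a counterfeit sample.
g = {"a": 1.0, "b": 0.0}
def generate(z):
    return g["a"] * z + g["b"]

# Discriminator: outputs the probability that a sample is real.
d = {"w": 0.0, "c": 0.0}
def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(d["w"] * x + d["c"])))

lr, batch = 0.02, 64
for step in range(3000):
    # Discriminator's turn: push D(real) up and D(fake) down.
    real = real_batch(batch)
    fake = generate(rng.normal(0, 1, batch))
    p_real, p_fake = discriminate(real), discriminate(fake)
    d["w"] += lr * np.mean((1 - p_real) * real - p_fake * fake)
    d["c"] += lr * np.mean((1 - p_real) - p_fake)

    # Generator's turn: adjust so the counterfeits fool D more often.
    z = rng.normal(0, 1, batch)
    p = discriminate(generate(z))
    g["a"] += lr * np.mean((1 - p) * d["w"] * z)
    g["b"] += lr * np.mean((1 - p) * d["w"])

samples = generate(rng.normal(0, 1, 1000))
print("mean of counterfeits:", samples.mean())  # drifts toward the real mean
```

Real GANs replace these two-parameter players with deep networks and pixels, but the structure of the game, forger versus detective, is exactly this.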

A computer program that can generate convincing images, or another that can understand human speech and generate it, might not seem world-shaking. But as these “counterfeiters” steadily improve, the implications are immense. GANs can produce photorealistic images and videos of nonexistent people. Magazines and advertisers can simply replace real people with generated pictures, saving money on photo shoots, which require lighting, sets, technicians, photographers, and models. Stock photos will no longer be of people pretending to be students, professionals, workmen, etc. They will be computers pretending to be people. Many of the images you see on the internet will be of people who literally do not exist. If that sounds implausible, realize that it’s just another small step in the kind of fakery that occurs already through Photoshop and CGI. It just means that instead of starting with a photo, you can start by asking the computer to generate one.

In the 2002 film Simone, Al Pacino plays a film producer who creates a fictitious CGI actress to avoid the personality conflicts that come with shooting real live humans. “Simone” develops a popular following and wins two Academy Awards, and when Pacino can’t produce her in person he is arrested for her murder. When Simone came out, it received mixed reviews, the critical consensus being that “the plot isn’t believable enough to feel relevant.” I can assure you, it’s now relevant. It’s possible that fashion designers will soon get their “perfect model”—a woman with body proportions that would make it physically impossible to stand upright or even stay alive, like the original Barbie doll. They won’t need photo-editing tricks, and they won’t need to force young women to starve themselves. (Though undoubtedly the literally impossible “beauty” of the resulting images will lead to plenty more instances of eating disorders.) Why would anyone hire real people, when artificially-generated replicas are just as realistic, far more flexible, and don’t ask to get paid?

If you think “fake news” is a problem now, just wait. When an image can be generated of literally anyone doing literally anything with perfect realism, truth is going to get a whole lot slipperier. The videos will soon catch up to the images, too. Already, it’s possible to make a moderately convincing clip that puts words in Barack Obama’s mouth. Fake security camera footage, fake police body camera footage, fake confessions: We are getting close. Marco Rubio has worried that “a foreign intelligence agency could use the technology to produce a fake video of an American politician using a racial epithet or taking a bribe” or a “fake video of a U.S. soldier massacring civilians overseas.” More worrying is what the U.S. military and police forces could do with it themselves. It didn’t take much deception to manipulate the country into supporting the invasion of Iraq. Fake intelligence is going to become a whole lot more difficult to disprove.

[Image: GAN-generated portraits. Caption: These people do not actually exist.]

AI-generated images and videos are not just going to cast doubt on reporting, but will pose a major challenge for the legal system. Photographic evidence in trials will always be in doubt once generated images can’t be distinguished from real ones by human experts or other AIs. Counterfeits can be used to manufacture alibis, and genuine images can be dismissed as counterfeits. In this dizzying world of forgery and illusion, how is anyone going to know what to believe? So-called “deepfake” videos will make Donald Trump’s claims of “fake news” that much more plausible and difficult to counter.

Mimicking ordinary human speech is becoming a cinch. Google recently unveiled a new AI assistant that can talk like a person. It even puts “ums” and “uhs” where they need to go. Called Duplex, it can run on a cell phone, and it not only sounds like a human but can interact like one. Duplex’s demo used it to call a hair salon and make an appointment. The woman on the line had no idea she wasn’t talking to a person. Google says it is building Duplex “to sound natural, to make the conversation experience comfortable.”

Imagine how tomorrow’s technology could have worked in 2016. Two days before the election, a video appears, showing Hillary Clinton muttering “I can’t believe Wisconsin voters are so stupid,” supposedly caught on a “hot mike” at a rally in Eau Claire. It circulates on Facebook through the usual rightwing channels. Clinton says she never said it, and she didn’t. It doesn’t matter. It’s impossible to tell it’s fake. The fact-checkers look into it, and find that there never was an event in Eau Claire, and that Clinton had never even been to Wisconsin. It doesn’t matter. By that time, the video is at 10 million shares. The “Wisconsin can’t believe you’re so stupid” shirts are already being printed. Clinton loses, Trump becomes president. Catastrophe.

Of course, there will undoubtedly be some benefits along with the risks. It’s going to be easier than ever to get fresh ideas for remodeling your bathroom, for instance. Designers will begin to use generated images to get new ideas for interior design, clothes, whatever they want. The expanded power of filmmakers, artists, and game designers will certainly open up new creative possibilities.

If we’re cynical, we might even rather like the idea of sowing endless reasonable doubt and undermining the U.S. legal system. After all, police officers already aren’t punished when they’re caught on film murdering people. Technology could, in certain ways, act as an equalizer.

But the state may also be empowered in incredibly invasive ways. AI will be used to improve “lie detection,” which, even if it doesn’t work, may dazzle judges enough to be accepted as reliable. If this seems far-fetched, realize that something similar is already being deployed. A machine learning algorithm is already being used by judges to predict whether a person convicted of a crime will commit more crimes in the future, and its predictions inform sentencing and the setting of bond. For the most part, it is about as accurate as random guessing, except that it is prejudiced against black people.

This particular dystopian prospect has a solution, one with the advantage of being simple and easily understood by the public: Ban the use of AI in courtrooms and police interrogations entirely. But that depends on having reasonable people setting policy, and some will actively push for the expansion of AI in criminal justice. Sam Harris has gone further and looked forward to a time in which human society at large features “zones of obligatory candor” and “whenever important conversations are held, the truthfulness of all participants will be monitored.” One great fantasy of the authoritarian mind has been a machine that could determine the real and absolute truth. In the legal system, some will soon believe they have found such a machine, whether or not they actually have.

In language, RNNs are beginning to produce another revolution. Simple online news articles, like reports on a regular-season baseball game, can be produced without human input. The first RNN-generated stories were published in 2015 to industry fanfare, and they are already being deployed by the Associated Press and the Washington Post. These articles include properly used idioms and are almost charming in their implementation of U.S. vernacular English. (“Bad news, folks,” begins a sports report.) We can expect the use of “automated journalism” to expand further and further, since it allows publishers to pay even less for content than the already-minimal amount they pay writers.
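Underneath these text generators is a simple recurrence: the network reads its input one step at a time, folding each new word into a running hidden state, so that what it “knows” at any point depends on everything it has read so far, in order. Here is a minimal sketch with made-up, untrained weights (a real system learns these numbers from enormous text corpora; the vocabulary and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy vocabulary: each word gets a made-up embedding vector.
vocab = ["bad", "news", "folks", "the", "team", "won"]
embed = {w: rng.normal(size=4) for w in vocab}

# Untrained weights; real systems learn these from huge text corpora.
W_in = rng.normal(size=(8, 4))   # current word -> hidden state
W_h = rng.normal(size=(8, 8))    # previous hidden state -> hidden state
b_h = np.zeros(8)

def read(sentence):
    """Fold a word sequence into a single hidden-state summary vector."""
    h = np.zeros(8)
    for word in sentence:
        # Each step mixes the new word with everything read so far.
        h = np.tanh(W_in @ embed[word] + W_h @ h + b_h)
    return h

# Unlike a bag of words, the RNN's summary depends on word order:
h1 = read(["the", "team", "won"])
h2 = read(["won", "team", "the"])
print("order changes the summary:", not np.allclose(h1, h2))
```

Trained and scaled up, the same recurrence run in reverse, emitting one word at a time, is what writes those baseball reports.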

We’ve all heard about how social media was manipulated in 2016, in part through the use of bots. The “Russian propaganda” that appeared on Facebook was often ludicrous (e.g., a meme of Jesus arm-wrestling with Satan—“SATAN: If I win, Clinton wins! JESUS: Not if I can help it!”) But as the ability to imitate human content improves, it won’t be necessary for Russians to come up with crude imitations of American media. The RNNs can do it, and vast networks of social media accounts run by RNNs will be able to shape narratives and manipulate perceptions. Language processing and generation is one of the areas receiving the most investment at the moment. The bots will improve quickly.

In fact, it’s already pretty easy to trick someone into thinking they’re talking to a fellow human when they’re not. There are some fun examples of this. Nora Reed created a Christian Twitter bot account that successfully trolled New Atheists, got into arguments with them, and had Christians come to its defense. Here’s an excerpt from a genuine chat between “@christianmom18” and some real live human atheists:

@christianmom18 atheists are going to hell

@ElNuevoOtraMio2 why thank you, don’t believe in it though so i’ll just have to get on with life 😉

@christianmom18 wow

@RichysGames not only is hell not real, but the logic behind the threat of it makes Jesus a terrorist

@christianmom18 check the bible

@RichysGames Yes I know it quite well which is why I know it’s nonsense and the scenario proposed is not one of a savior

@christianmom18 and then what?

@RichysGames Nothing, I live my life and then my atoms continue on through nature after I die […]

@christianmom18 i think God sent you to me to learn the truth

@RichysGames Truth is based upon evidence, not ignorance from bronze age sheep horders

@christianmom18 i am so sad for you

@RichysGames I am living my life, you’re wasting yours because ignorant bronze age idiots wrote a fairytale

@christianmom18 you can find god

@RichysGames Which one? humans have proposed over 3000

@christianmom18 no

Richy continued to talk to her for three hours.

In 1950, Alan Turing developed his famous “Turing test” to measure whether a machine could exhibit intelligent behavior indistinguishable from that of human beings. A machine passes the test if a human evaluator cannot reliably tell the difference between the machine and the human. In my opinion, when internet “rationalists” are being fooled into having arguments with bots, the Turing test has been passed. Note, too, that @christianmom18 wasn’t even run by an RNN. It’s a much simpler algorithm, and yet it is still fooling people. The RNNs used for this form of communication will continue to improve, and at some point prominent commentators and intellectuals may be engaging in discourse with AIs without knowing it. When Ross Douthat has a thorough discussion with a Twitter bot about how we need to return to a past that never was, we’ll know the future has truly arrived.

The Christian mom isn’t the only bot to successfully antagonize men on Twitter. Sarah Nyberg developed a social justice feminist bot that would post statements like “feminism is good,” “patriarchy exists,” “Drudge Report fans are toxic and terrible,” “nothing true is ever said in gamergate” and then, in her words, “watch desperate internet assholes rush to yell at them.” Nyberg’s bot didn’t do much to conceal its true nature. It tweeted every 10 minutes exactly, the only accounts it followed were bait shops, and its handle was @arguetron. But it was “honey for internet jerks” who would “spend hours and hours yelling at it.” @arguetron would reply to every reply with simple statements like “The data disagrees with you,” “Would you like a medal for being so wrong,” “That’s gibberish, try again,” or “You haven’t said anything i haven’t heard 1000 times before from other people who were also wrong.” Yet one InfoWars fan spent almost 10 hours trying to get the last word on feminism and social justice, with indignant comments like “Typical lib, when u can’t prove something you pretend the other side isn’t making sense.”
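The machinery required for a bot like this is startlingly small. The sketch below is my own illustration, not Nyberg’s actual code, and the trigger statements and comebacks are invented: the point is that there is no language understanding anywhere, just canned provocations and canned dismissals.

```python
import random

random.seed(0)  # make the "arguing" reproducible

# Illustrative lists only; not the real bot's content.
OPENERS = [
    "feminism is good",
    "patriarchy exists",
]
COMEBACKS = [
    "The data disagrees with you",
    "Would you like a medal for being so wrong",
    "That's gibberish, try again",
]

def opener():
    """Post a provocative statement (a real bot would do this on a timer)."""
    return random.choice(OPENERS)

def reply(incoming):
    """Answer any reply with a canned dismissal; the input is never parsed."""
    return random.choice(COMEBACKS)

# Simulate a user spending the afternoon trying to get the last word:
transcript = ["bot: " + opener()]
for _ in range(3):
    transcript.append("user: typical lib, u can't prove anything!!")
    transcript.append("bot: " + reply(transcript[-1]))
print("\n".join(transcript))
```

Note that `reply` never even reads what it is replying to. The humans supply all the meaning; the bot just has to keep the last word coming.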

The good news here is that we may finally have found a solution to the problem of internet reactionaries: deploy a feminism-bot and have them spend their days trying to argue it to death. The bad news here is that when AI passes the Turing test, our ability to tell truth from fiction further erodes.

I suppose we should talk about sex. Here, the development of more sophisticated robots is downright creepy.

It is well known that, whether through raging misogyny and generally unpleasant personalities or through social awkwardness and anxiety, a large number of heterosexual men are chronically unable to find partners. Some call themselves “incels,” others merely “lonely.” One solution frequently trotted out is to give them sex robots. (Solutions for those in need don’t change very much, do they? “Let them eat cake” is now “let them fuck robots.”)

This seems a solution more suited to the misogynistic than the anxious. The first company to present a functional sex robot at a consumer tech convention discovered that the men who make up their potential customer base mostly seemed interested in committing sexual violence against women. The robot, “Samantha,” was practically destroyed in a couple of days after being aggressively molested. (After an upgrade, Samantha now has the ability to refuse to engage if she thinks her user is being too aggressive, but one suspects this will make the problem worse rather than better.)

It seems likely that prolonged exposure to a sex robot would render men, especially of this sort, permanently incapable of having healthy sexual relations with a real human woman. Some men have already developed seemingly lifelong attachments to their sex dolls (there is a BBC documentary about them). But perhaps it’s socially beneficial for the type of man who would want a sex robot to be given a sex robot, if it’s the alternative to dysfunctional relationships.

For those lonely hearts who have simply been socially atomized and isolated by neoliberal capitalism and are not raging misogynists, a sex robot is not the answer. For these men, the market will offer artificial girlfriends with full personalities. The movie Her explored this concept, but once again, it’s not especially speculative. There are already unsophisticated girlfriend simulation games that don’t even use AI (e.g., “My Virtual Manga Girl”). And we’ve already seen companies use romance-bots in basic ways. The adultery-facilitation service Ashley Madison immediately contacted new users with a bot posing as an interested woman. Men had to buy credits from the site to reply to her, and they did: 80 percent of initial purchases came from users trying to message a bot.

Those for whom online dating fails will have ready access to software designed to satisfy emotional, intellectual, and sexual needs. Just combine GAN-generated pornography with video games, and add a fully optimizable personality trained to listen and respond. Users will be able to get GAN-generated photographs of themselves with their partners on vacation to hang on their walls. Men will have pictures of the Canadian girlfriend they met on vacation to show their friends! The company that makes the Fleshlight may even sell custom… well, let’s not finish that sentence.

The audience for such products is obvious in a time of ever-deepening mass loneliness. But it may have especially broad appeal in countries with extremely skewed gender ratios. Between China and India, for instance, there are 70 million more men than women. Some men are simply going to end up unlucky, and many may understandably turn to simulations of love. However nightmarish the idea of replacing human companionship with lifeless consumer products may sound, it may be better than having no available relief for isolation. After all, robotic therapy seals (and other animals) have already been successfully introduced as a way of keeping elderly people company and giving them stimulation. A better solution would be a world in which strong communal bonds and mutual care mean nobody lacks for companionship. But such a world is far off.

By far the most serious and most frightening AI development is in military technology: armed, fully autonomous attack drones that can be deployed in swarms and might ultimately use their own judgment to decide when and whom to kill. Think that’s an exaggeration? The Department of Defense literally writes on its websites about new plans to improve the “autonomy” of its armed “drone swarms.” Here’s FOX News, which seems excited about the new developments:

No enemy would want to face a swarm of drones on the attack. But enemies of the United States will have to face the overwhelming force of American drone teams that can think for themselves, communicate with each other and work together in hundreds to execute combat missions…. Say you have a bomb maker responsible for killing a busload of children, our military will release 50 robots – a mix of ground robots and flying drones…Their objective? They must isolate the target within 2 square city blocks within 15 to 30 minutes max… It may sound farfetched – but drone swarm tech for combat already exists and has already been proven more than possible.

The focus here is on small quadcopter drones, designed to be deployed en masse to kill urban civilians, rather than the large Predator drones used to murder entire rural wedding parties in Muslim countries. DARPA’s repulsive Twitter account openly boasts about the plan: “Our OFFSET prgm envisions future small-unit infantry forces using unmanned aircraft systems and/or unmanned ground systems in swarms of >250 robots for missions in urban environment.” The Department of Defense is spending heavily in pursuit of this goal—their 2018 budgetary request contained $457 million for R&D in the technology. Combined with our new $275 million drone base in Niger, the United States is going to have a formidable new capacity to inflict deadly harm using killer robots.

Perhaps more telling, the Department of Defense is also spending heavily on counter-drone systems. They know from experience that other entities will acquire this technology, and that they’ll need to fight back. But while the offensive murder technology is likely to be incredibly effective, the defensive efforts aren’t going to work. Why? Because a swarm of cheap drones controlled by AI is almost unstoppable. Indeed, the DoD counter-drone efforts are pathetic and comically macabre: “The Air Force has purchased shotgun shells filled with nets and the Army has snatched up the Dronebuster, a device used to jam the communications of consumer drones…the Army and Navy are developing lasers to take down drones.” Lord help me, shotgun shells with nets! And if the drones are autonomous, jamming their communications accomplishes little: it might disrupt the swarm’s coordination and blunt its effectiveness, but there would still be hundreds of drones trying to kill you.

It’s ironic, given all the fear that powerful members of the tech industry and government have about killer AI taking over the world, that they are silent as we literally build killer robots. If you don’t want AI to take over, stop the military industrial complex from building autonomous death drones.

An AI-piloted drone is a perfect spying and assassination machine. ISIS has already used them on the battlefield. Venezuela’s Nicolas Maduro recently survived an assassination attempt carried out by drone while he was giving a speech. Two explosive-laden drones blew up near him (there is some dispute about exactly what happened). This is something that should have been far bigger news. It’s not the last we will see of drone murders. Small, inexpensive drones will be able to follow people around and kill them at the first opportunity. (Even more effectively in the “swarms” the U.S. government is proudly developing.) Privacy invasion will be rampant. High-quality cameras and shotgun microphones mounted on drones will be used to spy on politicians, generals, CEOs, and activists (and, of course, the spouses of jealous types). If you piss off the wrong people, you’ll be tailed by a drone until they either lose interest or gain suitable blackmail material.

At Current Affairs, we are supposed to at least try to suggest some solutions to the problems we raise. Well, this one’s tricky. The only real solution is to create a society in which people won’t want to do all that spying and assassinating. The Campaign to Stop Killer Robots is pushing for international agreements to limit the development of autonomous military drones, but this technology is different from anything that came before in that a lot of it is accessible to anyone. The rate of increase in processor power has begun to slow, but it is still increasing, and the cost-to-performance ratio of the specialized chips that make this all possible is still falling just as the rest of computer technology has been doing for decades. If you can scrape together enough cash to buy a gaming PC, you can run neural networks. A tank costs $6 million and you can’t just go buy one. Not so for AI.

There is, however, something positive we can say about these developments. The products of AI labor can be used to take care of everyone’s needs. The automated assistant can reduce the number of harried human beings who have to do other people’s scheduling. Drones, the non-armed kind, can be fun and can take incredible video footage. If we didn’t have a military-industrial complex in which building death robots was profitable, if we didn’t have isolated, angry men who want to rape and kill, if we had an egalitarian society in which people weren’t trying to abuse and exploit each other, then we wouldn’t have anything to fear from the technology itself, because it would help us do good rather than evil. The dystopia is not inevitable. But first we have to recognize what the realistic AI risks actually are, and what they aren’t.