The U.S. military has quietly said it wants 70 unmanned self-driving supply trucks by 2020. And seeing as $21 trillion has gone unaccounted for at the Pentagon over the past 20 years, when the Pentagon wants something, it tends to get that something.

Of course supply trucks in and of themselves don’t sound so bad. Even if the self-driving trucks run over some poor unsuspecting saps, that will still be the least destruction our military has ever manifested. But because I’ve read a thing or two about our military, I’ll assume that by “supply trucks,” they mean “ruthless killing machines.” In fact, it’s now clear the entire “Department of Defense” is just a rebranding of “Department of Ruthless Killing Machines.”

And even if they do mean simple supply trucks, once those unmanned trucks are commuting themselves around the Middle East like a cross between “Driving Miss Daisy” and “Platoon,” how long do you think it will be until some a-hole general blurts, “Why don’t we put a missile or two on those things?”

The answer is 17 minutes. (Fifteen minutes if Trump is still president.)

Plus, these trucks are not the military’s only venture into artificial intelligence. The Navy wants $13.5 million to go toward rapid advances in AI. The Air Force is looking for $87 million to experiment with it. The Army has requested $6.5 million more for it. And the Marine Corps says it needs $7.1 million. (These are just the publicly stated numbers. Much like a vampire, our military does 95 percent of its best work in the dark.)

So this brings up a pressing question that we will see again and again in the coming years: How much do we need to fear artificial intelligence—or is it simply a great technological advancement?

Let me answer that question with a bit of a tangent. Human beings are notoriously unreliable. But there are two things you can always rely on humans for:

1. Humans will advance technology in every way possible.

2. Other humans will strap explosives to that technology.

Think about it: The automobile eventually became the tank. The airplane became the bomber. The printing press became the semi-automatic assault printing press. And so on.

But maybe I’m being paranoid. Maybe artificial intelligence is here to help us. One of the top AI geniuses at Google says the world is currently screwed (climate change, pollution, auto-tune). To save it, he says, “either we need an exponential improvement in human behavior—less selfishness, less short-termism, more collaboration, more generosity—or we need an exponential improvement in technology. … I don’t think we’re going to be getting an exponential improvement in human behavior. … That’s why we need a quantum leap in technology like AI.”

Basically, he’s saying we’re horrible, shitty people who are not going to change, BUT the bots will arrive soon to show us the way!

And there is some truth to this. AI will one day be able to tap into basically the entire internet simultaneously and learn everything that has ever been learned far more quickly than troglodytes like us ever could. So it will be incredibly, unimaginably smart, and will always be three moves ahead of us. On top of that, it won’t have the things that get in the way of our mental advancement as a species, such as:

- hunger
- fear
- insecurity
- superstition
- religion
- the drive to stick one’s penis in anything that moves

Artificial intelligence doesn’t have to deal with any of that.

So maybe AI will indeed save us from ourselves. … Orrrr, maybe with its infinite knowledge it will decide the planet would be better off without the ape-like creatures who keep trying to tell it what to do. Tesla CEO Elon Musk had an exciting and upbeat response when he was recently asked about how fast artificial intelligence is advancing.

“I tried to convince people to slow down. Slow down AI, to regulate AI. This was futile. I tried for years.”

(If you happen to have a cyanide tablet nearby, now would be the time to chomp down on that.)

Musk believes artificial intelligence is a far greater threat to humanity than nuclear weapons. Keep in mind, in order for AI to do great harm to our dopey species, it doesn’t necessarily have to be out to get us. It could simply come up with “solutions” that humans aren’t really prepared for. Here’s an example from The Atlantic of an AI mistake:

“One algorithm was supposed to figure out how to land a virtual airplane with minimal force. But the AI soon discovered that if it crashed the plane, the program would register a force so large that it would overwhelm its memory and count it as a perfect score. So the AI crashed the plane, over and over again, killing all the virtual people on board.”

That particular bot got a perfect score on landing a plane by killing all the imaginary humans. It kind of reminds me of the time I stopped my younger brother from beating me in “The Legend of Zelda” video game by throwing our television in a creek.
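For the nerds in the audience: the trick behind that story is plain old integer overflow. The Atlantic doesn’t publish the actual code, so the function names and the 32-bit counter below are my own illustrative assumptions, but the mechanism it describes looks roughly like this: a landing reward of “negative force” stored in a fixed-width number, where a sufficiently catastrophic crash wraps around into a gigantic positive score.

```python
# Hypothetical sketch of the overflow described in the Atlantic anecdote.
# Names and the 32-bit accumulator are assumptions for illustration.

def to_int32(x: int) -> int:
    """Simulate storing a value in a 32-bit signed register (wraps on overflow)."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

def landing_reward(force_newtons: int) -> int:
    """Gentler landings score higher: reward = -force ... until it overflows."""
    return to_int32(-force_newtons)

gentle = landing_reward(500)            # a soft touchdown: small penalty
crash = landing_reward(2_200_000_000)   # a crash so hard the penalty wraps positive

print(gentle)  # -500
print(crash)   # a huge positive "perfect score"
```

A reward-maximizing agent comparing those two numbers will crash the plane every time, which is exactly the behavior the article describes.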

So now, dear reader, you may be thinking, “That’s terrifying—the AI was given an objective and basically just did ANYTHING to get there.” However, is that so different from humans? In our society, we are given the objective of “accumulate wealth and power,” and now we have people like weapons contractors and big oil magnates achieving the objective by promoting and fostering war and death around the world. It’s almost like they don’t care how they achieve the objective.

I’m not saying I know whether AI will save us all or kill us all, but I am saying these are the types of questions that need to be asked, AND SOON, because we won’t be the smartest beings on this planet much longer. (As it is we’re barely holding on to the top spot. A solid 50 percent of us are just glorified butlers to our dogs and cats. One can’t really claim to rule the world when one is carrying another species’ poop.)

—

Also check out Lee Camp’s new comedy special, which one review called “the new standard for political stand-up comedy.” It’s only available at LeeCampComedySpecial.com.

This column is based on a monologue Lee Camp wrote and performed on his TV show “Redacted Tonight.”