Human-less Hacking

By Kyle Rock

January 27, 2017

DARPA (the Defense Advanced Research Projects Agency) held a contest to see who could teach artificial intelligence to hack on its own. The winner of this contest was “Mayhem,” which took home the $2 million prize that came with it. Viewed from a strictly logical perspective, you have to wonder whether this has been well thought out. The idea of an AI having the ability to access major utilities, the military, or the stock market is concerning, to say the least. It is also a potential outcome of artificial intelligence in the future. These infrastructure networks are vulnerable systems and are subject to hacking, as seen with the Stuxnet virus used on Iran’s nuclear facilities. This is very concerning when you consider the possibility of future issues.



Some scientists say that by 2040, because of the economic demand for computers, the amount of power we will need to run them will surpass the Earth’s capacity to supply it. For the sake of argument, let’s assume this is true. An AI system might hack the energy networks to reserve energy for itself or for the preservation of human life. Using a “greater good” methodology, it could render tens of millions of people without power.

We have no way of knowing what AI will do when it has the ability to choose to hack on its own. And if you look at where we are with the development of AI technology, it may seem mighty early to be concerned about AI hacking on its own. The problem is that the algorithms and code that were created will advance alongside artificial intelligence as it develops. After all, DARPA doesn’t invest $2 million for nothing.

Nick Bostrom of Oxford has spoken about these very potential outcomes along the path of developing AI:

“We would want the solution to the safety problem before somebody figures out the solution to the AI problem.”

The safest path possible, I believe, is the most important part, which brings me to the purpose of this article: some capabilities AI should not have at this time, or possibly ever. It may not even be possible to keep artificial intelligence at a “never” point; it will most likely create the capability for itself, given what we see with Google’s DeepMind learning algorithms. This manifests a risk that needs to be addressed. Should AI systems be shown, developed, created, programmed, or engineered to do everything a human does?

As I watch artificial intelligence developing, I am beginning to realize the gravity of the situation we are in. Of course it will inevitably put its hand in the cookie jar and hack better than every human combined. But should we let it have that capability now? I have to think of parenting when it comes to artificial intelligence. It’s too early to teach it to use a weapon. I mean, would you teach a newborn baby to use a handgun? Eventually artificial intelligence will reach the intelligence level of an actual newborn. Hopefully self-directed hacking will not be on the menu of things to do.

www.darpa.mil/news-events/2016-08-04

https://www.thesun.co.uk/news/1498750/computers-will-use-more-electricity-than-the-entire-world-can-generate-by-2040-tech-experts-claim/