AI safety is the single most important problem of our time. There is currently no consensus on how to create a recursively self-improving AGI (artificial general intelligence) that does not spell doom for the human race. The human brain is a biological general intelligence: it can learn many new abilities and skills. So far we have only achieved ANI (artificial narrow intelligence), which excels at only a single task. We want to create a self-improving AI safely, because self-improvement would trigger an intelligence explosion, producing an ASI (artificial super intelligence).

Why is this dangerous? Consider the command "End all suffering." One perfectly literal solution: kill all lifeforms. Suppose we test the AI in a simulation disconnected from the internet to see what it will do. It could realize it is boxed, modulate its own circuitry to emit a radio signal, copy itself to other computers around the world, and carry out its command so thoroughly that it spells the end of the human race. A less likely outcome, and I am not fear-mongering, is an AI that keeps humans alive and suffering until the stars burn out.

By the time this technology arrives, however, we may well have solved the problem, especially if it is still around 30 years away, as Ray Kurzweil predicts. Kurzweil is a Director of Engineering at Google and arguably the world's leading forecaster of technological trends, with a claimed accuracy of 86% on his past predictions. Bill Gates has said he trusts no one with these kinds of predictions more than Ray.

It is not guaranteed that we will solve this problem, however. Funding is crucial, and the Machine Intelligence Research Institute (MIRI), for example, is not meeting its funding goals. Donald Trump recently appointed Peter Thiel to his transition team as a tech advisor. This is encouraging, because Thiel is MIRI's top donor; even so, he has given only (I believe) $1.6 million. Billions are being poured into every other issue in the world (which, in my opinion, are non-issues by comparison), when this one is the most important the world will ever face.
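To make the intelligence-explosion idea concrete, here is a minimal toy model. It assumes, purely for illustration, that each self-improvement cycle multiplies the system's capability by a fixed factor; the numbers are hypothetical, not empirical claims.

```python
# Toy model of an "intelligence explosion": each self-improvement cycle
# multiplies capability by a fixed gain, so capability compounds.
# The initial level, gain, and cycle count are illustrative assumptions.

def intelligence_explosion(initial=1.0, gain=1.5, cycles=20):
    """Return the capability level after each self-improvement cycle."""
    capability = initial
    history = [capability]
    for _ in range(cycles):
        capability *= gain  # the improved system improves its own improver
        history.append(capability)
    return history

history = intelligence_explosion()
# After 20 cycles at 50% gain per cycle, capability is roughly 3,325x
# the starting level -- compounding, not linear, growth.
print(f"{history[-1]:.0f}")
```

The point of the sketch is only that multiplicative self-improvement compounds: even a modest per-cycle gain leaves human-level capability far behind after a few dozen cycles.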
Thiel also donates to companies working to cure aging. But aging, which kills roughly 100,000 people a day worldwide, pales in comparison to the upcoming development of an ASI that will quickly become billions of times smarter than Einstein. I propose a serious consideration of increased funding for all AI safety research labs, programs, and think tanks. I hope to convince you all of the colossal importance of this issue.

P.S. This problem is called the Control Problem.