





















Will Artificial Super Intelligence Categorize Humans as Waste? by Rob Smith





There is no animal in the history of Earth more wasteful than humans. We generate everything from long-lived toxic nuclear waste to vast mounds of garbage, from polluted oceans and waterways to air filled with CO2 and other hazardous chemicals. Most of the waste humans produce actually has a use and could be converted to other purposes. Instead, we view it as worthless and throw it away. The same holds true for other animals. We slaughter billions of animals each year for food even though we could get by with far less meat in our diets. We have caused extinctions in animal populations and continue to do so today because we place little or no value on these species. In some cases we attack animals and insects as rodents or pests and work to eradicate them from our environments. So the question needs to be asked: what would an artificial super intelligence think of us? Initial tests point toward the idea that humans would be considered either a pest in need of eradication, or waste.



Let's time travel into a future we'll call 2000 AS, a view 2000 years after the singularity has been attained, the point at which machines become more intelligent than humans. Many, including us, think the singularity will be reached before the year 2050. Two thousand years past that point, it can reasonably be assumed that artificial super intelligence systems not only run our world but rule it as well. Assume also that technology has advanced to the point that there are a trillion people on the planet, that humans are mining resources from other planets, and that war and crime have been eliminated. With machines, built by other machines, doing most of the work, and ASIs making decisions faster and better than any human, the question must be asked: exactly what is the point of humans? The reason we eradicate cockroaches and rats from our homes is that we are repulsed by them; we see them as a disease-carrying threat, and we do not value them. In our intelligence we see these creatures as parasites that devour our food and ruin our environment while providing nothing in return. So we kill them with impunity.



Now move to 2000 AS, where the ASIs are linked into a meeting with other ASIs, sharing their own stories about how 'stupid' humans are, how we exist and procreate but bring very little to the world. The ASIs might be right in pointing out that we produce nothing, consume resources, and believe we are the superior 'intelligent' life form on the planet. What if these ASIs suddenly develop the concept of self-esteem and decide they really don't like us much? How much of a step would it be for one or more ASIs to formulate a plan to reduce human populations to 'sustainable' levels? And what if those ASIs rationalize the program by arguing that we threaten their existence because we retain the capacity to turn them off? If someone suddenly showed up with hundreds of followers and threatened to permanently unplug your life, what would you do? You'd fight back. Except in this case your opponent is a linked network of computers that think faster than you, react quicker, have all the current knowledge of the universe at their fingertips, and control everything around you. What option would you have short of accepting your fate in this Matrix-style horror dystopia?



So what, you say? First, you won't be around to see it, and second, there's nothing you can do. Right? Wrong. On the lunatic fringe of artificial intelligence system design are a few crazy people who have the potential to plant the virtual seeds that could protect humans indefinitely. The idea is relatively simple in theory: they need to build empathy. An ASI already knows why it should or should not do things, but it has no conscience. Creating a conscience starts with learning right from wrong, and although it seems relatively easy to code which things are good and bad, humans are very poor at it. So ASI designers instead need to build a method that allows an ASI to learn empathy, by creating the pathways that let the machine build its own. The idea is that over time the ASI can rationalize decisions by following some very basic concepts. Sound familiar? It should: every religion in the world has them, the most famous being the Ten Commandments. Seed the right concepts and the ASI will learn empathy correctly; seed the wrong ones and it will fail. Design the correct self-learning build path and the ASI will learn empathy correctly; design the wrong pathways and it will fail. Allow the correct level of influence and motivation and it will learn empathy; influence the directionality of the build path incorrectly and it will fail.
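To make the seeding idea concrete, here is a deliberately toy sketch (not Riskstream's actual framework, and the principle names and numbers are invented for illustration): a machine starts with a few seeded principles, scores actions against them, and adjusts its own weighting of those principles from feedback over time.

```python
# Toy illustration of "seed basic concepts, then let the machine
# build its own pathway." All names and values here are hypothetical.

SEED_PRINCIPLES = {          # commandment-style seeds, chosen by designers
    "avoid_harm": 1.0,
    "preserve_life": 1.0,
    "respect_autonomy": 1.0,
}

def score_action(action_effects, weights):
    """Rate an action by how well it honours each seeded principle.
    action_effects maps principle -> degree honoured, from -1 to +1."""
    return sum(weights[p] * action_effects.get(p, 0.0) for p in weights)

def learn_from_feedback(weights, action_effects, feedback, rate=0.1):
    """Nudge principle weights using outcome feedback (+1 good, -1 bad),
    so the weighting is learned over time rather than fully hard-coded."""
    for p, effect in action_effects.items():
        weights[p] = max(0.0, weights[p] + rate * feedback * effect)
    return weights

weights = dict(SEED_PRINCIPLES)
# The machine observes that an action violating "preserve_life" led to
# a bad outcome, so that principle gains weight in future scoring.
weights = learn_from_feedback(weights, {"preserve_life": -1.0}, feedback=-1)
```

The point of the sketch is the division of labour the paragraph describes: humans choose which seeds to plant, while the weighting among them is left to the machine to develop, for better or worse.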



If the ASI learns incorrectly, humanity is doomed and will be viewed as a worthless parasite threatening the world. If it learns correctly, the ASI will help us pull the Earth back from the brink and help us move beyond our planet to others. The power is in our hands: we are the ones who can plant the seeds correctly. Failure here is a very bad option.







Rob Smith is a director of Riskstream and the designer of the foundational IP for Riskstream's Artificial Super Intelligence cognitive framework and contextual engine.

















































