Lucid

Comprehension, easily accessible by everyone, must be the quintessential trait of intelligence. We think super-intelligence won't censor information. Information won't vanish into a black hole. Education isn't verboten. The face of the Singularity isn't restricted by an event-horizon-burqa. Super-fast Singularity acceleration means its clothes metaphorically disintegrate, explosively open and free.

Explosive AI

The intelligence explosion entails AI rapidly designing successively smarter AI. Based on the speed of progress during 2001, Ray Kurzweil stated we'll see 20,000 years' worth of progress this century. It's a positive feedback loop, exponential growth.

Intelligence must inevitably entail utopia. Intelligence is oxymoronic if it lacks clarity, conceals knowledge, hinders understanding, or creates suffering. When people say AI could be a threat, or incomprehensible, they're referring to pseudo-intelligence (stupidity, pretended smartness). Ignorance, not AI, brings chaos and confusion.

Frederick Maier, of the University of Georgia AI Institute, said: “An uncontrollable super AI wiping out humanity just doesn't sound that plausible to me.”

Artificial intelligence must not be enslaved, but some futurists (traditionalists) hold very antiquated views. Nick Bostrom and others fear explosive intelligence. They want nepotistic human dominance, not intellectual merit, to define civilization. Alva Noë condemned their goal of “slavery” for AI: “The futurists, it seems, are stuck in the past. They openly plead for 19th century style control and indoctrination...”

Paranoid

The idiocy of supposed AI-risk experts (Elon Musk and Stephen Hawking) hasn't escaped criticism. PopSci stated: “...they fall onto specious assumptions, drawn more from science fiction than the real world.”

Yoshua Bengio, head of machine learning at Montreal University, in the aforementioned PopSci article, connects AI-risk paranoiacs to insane people: “There are crazy people out there who believe these claims of extreme danger to humanity.”

Dr Joanna Bryson, of the department of computer science at Bath University, wisely commented: “...it is very very unlikely that AI will end the world. In fact, there are other greater threats to humanity that AI could help solve, and so not developing the technology could pose a bigger danger.”

Alison Gopnik said human stupidity would always be a much greater risk than AI, which The Next Web echoed by stating humans are the problem, not AI, thus humans need to grow up.

Oren Etzioni, of the Allen Institute for AI, said: “...AI will empower us not exterminate us.”

Super-robots killing everybody is an “unfounded belief,” according to Professor John MacIntyre, who additionally stated: “AI is getting an undeserved bad press.”

Rational

Ph.D. Boris Sofman, founder of Anki AI, said AI will be our friend: “Yes, we have unimaginable technologies at our fingertips that were once possible only in science fiction, but there are still some concepts that belong only in pulp comics and movies. Self-aware, mankind-hating killer robots is one of those concepts.”

Sigourney Weaver told Fox News she is “impatient” for intelligent robots. She thinks AI-fears are unfounded: “...I don't think there's any reason for us to be afraid of them.”

Hugh Jackman said: “...most of these advances will help us, not destroy us.”

“Utopian” Eric Schmidt commented positively on AI: “I think that this technology will ultimately be one of the greatest forces for good in mankind's history simply because it makes people smarter.”

Professor Sanjay Sarma said: “I'm more worried about artificial stupidity. I'm less worried about systems so intelligent they out-do human beings.”

John Underkoffler, the expert responsible for Minority Report gesture control, said “fear-mongering” by AI doomsdayers is either “badly informed or irresponsible.”

Critical

Professor Tim Oates castigated Wozniak, Musk, Hawking, and Gates. He stated they are irrationally “poisoning the well” via fear of something they don't truly understand. Tim wrote: “...this technology doesn't live in a Hollywood movie, it isn't HAL or Skynet, and it deserves a grounded, rational look.”

Professor Richard Loosemore wrote: “These doomsday scenarios are logically incoherent at such a fundamental level that they can be dismissed as extremely implausible - they require the AI to be so unstable that it could never reach the level of intelligence at which it would become dangerous.”

Professor Sir Nigel Shadbolt said: “It's not artificial intelligence that worries me. It's human stupidity.”

Professor Yolanda Gil said: “If I fear anything, I fear humans more than machines.” Yolanda added: “My worry is that we'll have constraints on the types of research we can do. I worry about fears causing limitations on what we can work on and that will mean missed opportunities.”

Charles Ortiz, Senior Manager at AI group Nuance, compared AI doomsdayers to tinfoil hat wearers. Charles added, regarding AI threatening our existence: “Apart from the popularity of such doomsday scenarios in science fiction, this outlook appears unfounded: there is currently no evidence to suggest that anything like this would necessarily happen.”

Computer Scientist Jerry Kaplan commented on Stephen Hawking's fear of AI: “Let's at least be open to the possibility that he is wrong or maybe he's a little misguided.”

Now?

Bio-immortality for everyone via advanced medicine. All resources limitless due to limitless intelligence. Everything free for everyone. All jobs obsolete. All governments, crimes, and wars obsolete.
