Ever dreamed of a world where cars drive themselves, planes fly themselves, and medical science is so advanced that robots can perform the most complex surgeries with sheer efficiency? Fascinating, isn’t it? But far-fetched too, or so it may seem.

Believe it or not, some of these things might really happen a few years from now. ‘How’ is the question. Most of you are probably thinking of Artificial Intelligence, and you’re probably right. But is it really so advanced that it can supersede human intelligence — not just match it, but outperform it in every respect? Well, the picture is a little more complicated.

Artificial Intelligence has been around for years, but the world is yet to see an ultra-advanced form of AI with the power and potential to dramatically transform the world around us.

Enter Artificial Superintelligence (ASI), a synthetic system with cognitive abilities so powerful that it can outperform human intelligence on any relevant metric. And although it looked like a pipe dream a few years back, technology is progressing so rapidly that researchers now believe the era of ASI might not be that far off after all.

Be that as it may, one thing is obvious: once we gain access to a technology this advanced, it will radically transform our lives, bringing a paradigm shift that will last for generations to come.



The Looming Risk Factors

As fascinating as the idea sounds, it’s actually far more intimidating. By granting machines decision-making autonomy, we are putting computers in control of everything. And at some point, that conjures a horrifying picture of a dreadful future in which computers and machines rule the world. Take self-driving cars, for instance. A slight software glitch can cause a car to go out of control, damaging property, injuring bystanders, and possibly even costing lives. However, several organizations are making their best efforts to ensure that the rise of Artificial Superintelligence won’t lead to the fall of humanity.

The Global Catastrophic Risk Institute (GCRI), which is committed to protecting global civilization against all sorts of catastrophic events, currently has an ASI project under its belt. Not only that, the project has received funding from the Future of Life Institute, which is itself a big deal. When it comes to Artificial Superintelligence, GCRI says it has several initiatives under way and continuously performs risk analysis to avert any possibility of computers taking over the world and jeopardizing humanity. GCRI’s executive director, Seth Baum, states that the institute is currently developing structured risk models to help people understand what the probable risk factors are and how they can be mitigated.


The Risk Model By GCRI

In order to understand how the emergence of ASI could endanger the human species and what protective measures can be taken, GCRI has created a model that investigates these questions from every feasible perspective. The first stage of the GCRI model focuses on how an ASI might be built, while the second stage examines the risk factors through a thorough risk analysis.
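To get a feel for what a "structured risk model" can look like, here is a minimal sketch in the spirit of fault-tree analysis, a standard technique in catastrophic-risk research. The factor names, probabilities, and the mitigation parameter below are entirely hypothetical illustrations — GCRI's actual model has not been published, and this code does not represent it.

```python
# Toy structured risk model (fault-tree style). All factor names and
# numbers are hypothetical and NOT drawn from GCRI's unpublished model.

# Each entry: (P(factor occurs), P(catastrophe | factor occurs))
risk_factors = {
    "goal_misspecification": (0.10, 0.50),
    "containment_failure":   (0.05, 0.40),
    "race_dynamics":         (0.20, 0.30),
}

def p_catastrophe(factors, mitigation=0.0):
    """Probability that at least one factor leads to catastrophe,
    treating factors as independent. `mitigation` (0..1) scales down
    each factor's probability of occurring."""
    p_safe = 1.0
    for p_occur, p_cat_given in factors.values():
        # Probability this particular factor does NOT cause catastrophe
        p_safe *= 1.0 - p_occur * (1.0 - mitigation) * p_cat_given
    return 1.0 - p_safe

baseline = p_catastrophe(risk_factors)
mitigated = p_catastrophe(risk_factors, mitigation=0.5)
print(f"baseline risk:   {baseline:.3f}")
print(f"with mitigation: {mitigated:.3f}")
```

Decomposing an aggregate risk into separate factors like this is what makes such a model useful for policy: it shows which factor dominates the total and how much a given mitigation effort buys, rather than arguing about a single opaque probability.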

The model is still being refined. However, the GCRI research team claims it has already made substantial progress, though the findings are yet to be unveiled.