Strong or General Artificial Intelligence is much more in line with what we have seen in movies like Terminator or The Matrix, where machines have become autonomous (controlling themselves), conscious and self-aware. The software that controls the robots isn't limited to a specific set of tasks; instead it can generalize its knowledge to any situation, including thinking through possible future scenarios.

There is a lot of research currently being done on Artificial General Intelligence (AGI), but we don't really see any applications out there just yet. Back in the 1970s many thought AGI would develop out of the task-specific systems within a decade, but progress quickly stalled. It was a much more daunting problem than many scientists at the time had expected.

Researchers quickly found that there were many facets beyond simply taking in data, processing it and learning from it. They needed a way to build up and understand actual context, which requires an understanding of natural language (beyond simply having a dictionary available), social intelligence, and ways to answer the questions "What don't I know?" and "How can I try to learn it?" There have been great strides in applied AI, moving from fast if-then checkers to visual and language recognition systems, and toward being more "creative" by understanding context and learning in a more generalized way.



One major sticking point is that we don't actually have an understanding of what makes humans conscious. Arguments often start from the philosopher Descartes' assumption of "I think, therefore I am." While that lets people move past the sticking point of "How do I know I'm conscious?", AI researchers don't have that luxury. It is something of a Catch-22: making that assumption would require already having a conscious, self-aware system, yet we need a model of consciousness to create one in the first place.

While not all AGI research is necessarily looking to create Artificial Consciousness, the two are closely inter-related. The AI dystopia movies typically present the latter, where the machine has become self-aware and sentient, creates its own goals and actively works towards them. Because of this, I wanted to briefly touch on the topic of human consciousness, of which we still have limited understanding.



Neuroscience and Theories on Consciousness

Neuroscientists are still trying to get a better idea of the brain's role in human consciousness, with research being done on an area called the claustrum. They found that when this region of the brain was stimulated, the patient instantly became unconscious, as if it were an on-off switch. The neuroscientist Francis Crick (who helped discover the structure of DNA) hypothesized:

[T]his region might integrate information across different parts of the brain, like the conductor of a symphony.

-Source

Currently there are two main theories of how our brains generate consciousness, Integrated Information Theory and Global Workspace Theory. These are not mutually exclusive of each other, but rather approach the problem from different places.

Integrated Information Theory (IIT) is a synergistic, top-down approach where the whole is more than the sum of its parts, meaning consciousness cannot simply be broken down into individual pieces to get an accurate picture. It starts with the assumption that consciousness exists and works backwards to see which pieces cannot be reduced or "broken apart."

By borrowing from the language of mathematics, IIT attempts to generate a single number as a measure of this integrated information, known as phi (Φ, pronounced “fi”).

Something with a low phi, such as a hard drive, won’t be conscious. Whereas something with a high enough phi, like a mammalian brain, will be.

-Source

Global Workspace Theory takes a bottom-up approach: experiences are imprinted and retained in our minds. These can then be recalled as context during new situations, influencing how we filter new information, how we feel about it and hence how we react.

Essentially this says that each new experience we have is influenced by things already imprinted in our mind. For example, if we had a terrible experience of being stung by a swarm of wasps as a child, that memory instinctively comes to mind whenever anything that flies and stings comes around. We recall previous memories before interpreting new situations.



Turing and Other Intelligence Tests

The Turing test, which came out of the 1950s, has a human interact with a computer. If the human is not able to tell that they are interacting with a computer, the test is passed. With the advances in AI we now have various programs that are starting to pass this test (on some but not all humans), including chatbots, content-writing bots and bots that handle CAPTCHAs.

While these don't on their own constitute general intelligence, they are an important step. All of them require the ability to take in open-ended data (such as written text, topics to write about or a visual test to solve) and create a response that simulates what a human would do or expect.

Other intelligence tests have been proposed and put into use, on the claim that the Turing test no longer goes far enough. Critics often argue that it measures the ability to fool humans rather than actual intelligence. Some alternatives instead ask more general questions that require visual, symbolic, spatial or artistic understanding.



General Artificial Intelligence Research

Long story short: a true Artificial General Intelligence does not exist yet. The industry has come a long way in many areas, with deep data mining (like determining what ads to show you in Google or Facebook), self-driving cars, bots that can play dozens (sometimes more) of games well, and robots with insect brains.

Deep Learning

Deep learning uses neural networks with many hidden layers, allowing them to refine their answers over time. Facebook and Google use these networks to better serve advertising based on what you search, click, write and otherwise do on your computer. Many different learning methods can be used to train these systems, but they are limited in that they can't easily tell you why they came to a certain decision, nor can they seek out data that wasn't passed into them.
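As a toy illustration of what "hidden layers" and iterative refinement look like in code (this is a minimal sketch and has nothing to do with Google's or Facebook's actual systems), here is a tiny network with one hidden layer learning the XOR function:

```python
import numpy as np

# Toy network: 2 inputs -> 8 hidden units -> 1 output.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 8))   # input -> hidden layer weights
W2 = rng.normal(size=(8, 1))   # hidden layer -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1)          # hidden-layer activations
    out = sigmoid(h @ W2)        # the network's current "answer"
    # Backpropagation: nudge the weights in the direction that
    # shrinks the error between the answer and the target.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(np.round(out.ravel(), 2))  # the refined answers, trained toward [0, 1, 1, 0]
```

Notice the limitation described above in miniature: the weight matrices encode *some* answer, but nothing in them explains *why*, and the network can only ever learn from the four rows of data it was handed.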



For example, such a system could combine things in your profile (sex, age, profession, etc.), which articles you have clicked on, and your general location (from GPS or from the internet provider/cell tower being used) to show you an ad for a certain product at the shop down the street. These systems then observe which ads get clicked, and other behaviors, to maximize their effectiveness at selling to you.

Other areas where these systems are applied include image recognition (like a reverse image search in Google, or automatically tagging Facebook friends in a picture), colorizing black-and-white images, generating handwriting, and playing games such as Starcraft.

While deep learning networks are able to get good approximations for difficult tasks, they are limited in their ability to change themselves on their own. Humans have to be part of the loop to determine which inputs are fed in (like profile information, what's been clicked, the goals of a game, etc.), to state the desired target answers and to provide training.

OpenAI Universe

OpenAI Universe was started as an open source project with the goal of having bots, which can be created by anybody using any method, learn to play as many games as possible, and play them well. While this project has mostly been applied to gaming, it may lead to some interesting developments: it is being used to create bots able to perform well across many different types of games, and it has been open source from the beginning, allowing anybody to participate. Google and Facebook have recently followed suit and open sourced some of their projects as well.

The project is also working to use Reddit to teach human-like speech, while being very careful not to repeat the fate of Microsoft's chatbot that became a "racist jerk" on Twitter after only one day of running. Elon Musk, one of OpenAI's primary benefactors, has publicly stated his concern that AI agents could become "more harmful to humans than nuclear weapons."

Seed AI

Neural network systems have limitations on the road towards the Artificial General Intelligence we picture from the movies. They don't have the ability to understand their core programming and rewrite their code as needed. A true AGI needs to be able to answer the questions "How do I know what I don't know?" and "How do I figure this out?" You'll notice that both of these questions include the term "I," which denotes a self-identity.

This is where Seed AI comes into play: it seeks to create a system with enough self-awareness to determine when and what to refine in its own code; the theoretical version of such a system is known as a Gödel machine. There has been some research towards a Seed AI model called recursive self-improvement by Jürgen Schmidhuber, but so far no implemented examples exist. While this type of model is the most intriguing route towards an actual AGI, there are inherent risks, including:

Unpredictability and instability

The potential to evolve too quickly to be controlled

Developing intentionality and adopting negative goals, such as causing damage or loss of life

Evolving into a superintelligence that represents an existential risk

A few organizations pursuing research into Seed AI are the Singularity Institute, OpenCog, and Adaptive AI.
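The self-improvement loop can be caricatured in a few lines. This is a deliberately simplified sketch, not Schmidhuber's actual Gödel machine (which requires a *proof* of improvement before rewriting itself); the hidden `TARGET` value and the `performance` function here are made-up stand-ins for a real utility measure:

```python
import random

# Cartoon of recursive self-improvement: the "agent" is just a number,
# a "self-rewrite" is a random tweak to it, and a rewrite is adopted
# only when it demonstrably improves the agent's own performance score.
random.seed(1)
TARGET = 42.0  # hypothetical stand-in for whatever the system optimizes

def performance(agent):
    return -abs(agent - TARGET)  # higher is better

agent = 0.0
for generation in range(200):
    candidate = agent + random.uniform(-5, 5)  # proposed self-modification
    if performance(candidate) > performance(agent):
        agent = candidate  # keep the improvement, then iterate on it

print(round(agent, 1))  # climbs toward the target over generations
```

Even this cartoon hints at the risks listed above: nothing in the loop constrains *how* the agent improves, only *that* its score goes up, so a poorly chosen performance measure gets optimized just as relentlessly as a good one.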




