Artificial intelligence is the idea that software may become smart enough to form and evaluate decisions not directly provided in its application code. This document proposes an approach to achieve such an objective in an extensible manner.

Please refrain from personal bias until this paper is read in its entirety. In a recent discussion, readers stopped reading the paper two paragraphs in because their definition of artificial intelligence did not directly align with concepts briefly mentioned in a summary. Selection bias cost the world the greatest breakthrough in medical history (Mirkin, 2013). Please don't allow your preconceptions to commit such a horrendous atrocity.

Simply speaking, an autonomous intelligence is a system capable of forming original decisions spontaneously. That definition is challenging to measure in a quantitative fashion, as the underlying qualifications are subjective in practice. Speaking analogously, it is hard to tell whether you have crossed the proverbial line in the sand if the line is blurry and occasionally shifts in distance and direction. A simpler measure of intelligence is a system capable of making decisions that cast doubt on the system's prior assertions. According to Cartesian logic, a thinking machine is that which is sufficiently able to doubt, as the mere ability to doubt is a primal essence that cannot be reduced from an intelligent self (Descartes, 1993).

Perhaps the most agreed-upon definition of intelligence is creativity: the ability to create that which is unique or original. This can only be achieved through a series of decisions and evaluations that are wholly internal and independently open to modification without external assistance. This phenomenon is commonly referred to as brilliance.

Animals of various species and capabilities have been observed making, modifying, and manipulating tools (Pickrell, 2003). There is not a single area of an animal brain responsible for any single step in this process. Animal brains are not a single organ, but rather a toolbox of various tools acting in concert. These different neurological tools are specially modified and honed through the life experiences of the given animal, a capacity called neuroplasticity.

A decision-making capacity does not suggest a measure of accuracy or validity. Humans are generally considered substantially more intelligent than the most powerful computational devices, even though humans perform simple arithmetic much more slowly and frequently formulate incorrect decisions. The advantage humans have over powerful machines is a decision-making capacity: the ability to spontaneously make an original decision unassisted.

A person cannot appreciate the concept of an artificial intelligence without first appreciating a natural intelligence. Intelligence is not the measure of accuracy of data. In many cases intelligent entities make decisions completely in absence of desirable data merely because some stimulus demands a decision be made.

The goal of artificial intelligence is software that spontaneously originates creative decisions in response to a stimulus. The stimulus could be human interaction, an automated news update, environmental change, or various other factors. There are perhaps two fundamental challenges to solving this: a proper understanding of intelligence and a consideration for originality.

Artificial Intelligence

At the time of this writing, any form of sharing between applications is limited and explicit. Applications execute entirely within their own vertical funnel of application code, inputs, outputs, and evaluations. If anything must be shared with another application, additional work is required to provide an interface, commonly referred to as an API (Application Programming Interface), so that data of the executing application's choosing is provided in a non-standard format dictated by that application. Data, as opposed to information, is shared, and the sharing is ad hoc.
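The ad hoc nature of this sharing can be sketched as follows. All names and formats here are hypothetical, purely to illustrate that each application invents its own data shape and every consumer must hard-code knowledge of it:

```python
import json

def weather_app_api():
    """One application's API: data in a terse format of its own choosing."""
    return json.dumps({"tmp_c": 21, "hum": 0.4})  # nonstandard, undocumented keys

def planner_app_adapter(payload):
    """A consuming application must write a bespoke adapter for that format."""
    raw = json.loads(payload)
    return {"temperature_celsius": raw["tmp_c"], "humidity": raw["hum"]}

# Every new consumer of weather_app_api must repeat this adapter work.
print(planner_app_adapter(weather_app_api()))
```

The adapter transfers only data; none of the weather application's reasoning about that data crosses the interface.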

The Universal Constant of Parsers

Computers operate in machine language. Virtually no software in common use today is written by humans in machine language. As a result, a compiler is required to translate the instructions written by humans into the instructions executed by computer hardware. Compilers require a parser to decompose those instructions into small atomic pieces that can be described sufficiently for a variety of tasks, including translation. It can be reasoned that all modern application code passes through a parser, and possibly through many domain-specific parsers for various forms of interpretation. Parsers are, perhaps, as universally present as the instructions they describe. Parsers fit this paper's definition of information, as they associate data with a plurality of descriptions. If the parsed data were available for consumption by a variety of applications, that act of sharing, and the processing resulting from it, would fit this paper's definition of knowledge. When that knowledge, or rather an application's reasoning over parsed information, is available for consumption by other applications, wisdom is achieved. When wisdom forms a model for application modification, artificial intelligence is achieved.
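The step from data to information can be illustrated with a minimal tokenizer. This is a sketch over a toy expression language; the token names and grammar are assumptions for illustration, not drawn from any real compiler:

```python
import re

# Each rule associates raw character data with a description (a token kind).
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),           # digits described as a numeric literal
    ("IDENT",  r"[A-Za-z_]\w*"),  # letters described as an identifier
    ("OP",     r"[+\-*/=]"),      # symbols described as operators
    ("SKIP",   r"\s+"),           # whitespace, discarded
]

def tokenize(source):
    """Attach descriptions (kind, position) to raw data, producing information."""
    pattern = "|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC)
    tokens = []
    for match in re.finditer(pattern, source):
        if match.lastgroup != "SKIP":
            tokens.append({"kind": match.lastgroup,
                           "text": match.group(),
                           "pos": match.start()})
    return tokens

# Raw data "x = 40 + 2" becomes information: data plus descriptions.
print(tokenize("x = 40 + 2"))
```

Were this token stream published for other applications to consume, rather than held inside a single compiler's vertical funnel, the sharing described above would begin.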

Contrary to a Data Economy

At the time of this writing, the dominant economy of software development is a data economy composed of a few massively monolithic data brokers. Conglomerates of astounding value have risen to gather and hoard treasure troves of data. Value and revenue are generated by transforming data into information that is partially transferred to the end user and partially transferred to advertising firms through data auctions. While it is the information that is responsible for the enterprise value, this information is formed entirely upon data held in a secretive vault. For this economy to generate wealth, data is hoarded and never returned, unless so directed by law. Only information is returned. This ensures competing firms must independently gather their own data. Since that proprietary and heavily secured data is the foundation of the data economy, it must never be shared with external parties.

An artificially intelligent system requires only minimal data to thrive. Speaking from human experience, prior decisions are a more significant source of accuracy and validation than data. People typically refer to this as experience. Data on its own does not validate or qualify anything, as data is too primitive. A qualifier, a purpose-built application, must reason upon the data and generate a conclusion. To determine whether data is valid or invalid, a qualifying application evaluates the data against a known set of rules. The data is not qualifying anything; an application is producing a decision. The advantage afforded an artificially intelligent system is the ability to measure a current decision against a prior decision, in consideration of whether that very decision-making ability is in demand of improvement. This serves as a far superior, and autonomously improving, qualifier than a data facet or a primitive collection of static rules.
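The contrast between a static rule qualifier and an experience-based qualifier can be sketched as follows. The class and function names, and the use of a numeric outcome score, are hypothetical simplifications:

```python
def rule_qualifier(value, rules):
    """Static approach: data is valid only if every fixed rule accepts it.
    The rules never improve, no matter how often they are applied."""
    return all(rule(value) for rule in rules)

class ExperienceQualifier:
    """Experience approach: judge a new decision against stored prior decisions."""
    def __init__(self):
        self.prior_decisions = []  # the store of decision points

    def qualify(self, decision, outcome):
        """Return True when history suggests the decision-making needs improvement."""
        similar = [o for d, o in self.prior_decisions if d == decision]
        needs_improvement = bool(similar) and outcome < max(similar)
        self.prior_decisions.append((decision, outcome))  # experience accumulates
        return needs_improvement

q = ExperienceQualifier()
q.qualify("route_a", 0.6)         # first experience: nothing to compare against
print(q.qualify("route_a", 0.4))  # worse than before, so improvement is in demand
```

The rule qualifier needs a full set of rules (and the data to derive them) up front, whereas the experience qualifier grows more discriminating with each stored decision point, which is why the latter demands less raw data over time.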
This reasoning, that prior decisions are more valuable than hoarded data, introduces friction for an economy that derives its value solely from its stored data. An artificially intelligent system would render that economy largely obsolete: stored decision points are a more valuable commodity than a vault of hoarded data. Additionally, the quality of those decision points increases when greater sharing allows domain-specific applications to independently contribute to an artificially intelligent system, which is contrary to the nature of hoarding and never sharing data. Data, due to its primitive nature, loses value with age faster than qualified decisions do, thereby creating a large diminishing return on investment over time. Higher-order sharing is the basis of an artificially intelligent system. As technologies advance and more domain-specific technologies arise in direct support of artificially intelligent systems, closed systems will proportionally erode in value and demand, economically speaking. This erosion of value is compounded in that artificially intelligent systems demand less data over time as their store of decision points grows with experience. These two economic factors represent value destroyers to a data economy as artificially intelligent systems, particularly open systems, come online.