First version by Ben Goertzel. Substantial early contributions/edits by Stephan Bugaj, Toufi Saliba, Massimiliano Caruso and Julia Mossbridge; useful commentary and critique by David Hanson, Jim Rutt, Stephen Ibaraki and many others.

This is an early and tentative document, intended to roughly summarize a line of thinking and to spur discussion and action among relevant individuals, organizations and communities. Many particulars discussed here are expected to evolve as more and more of the concepts described here move to practical realization. This is a living, evolving body of ideas.

The introduction of AI tools and agents into all sectors of the economy, from factory robots to highly specialized electronic scientific brains, and the transition from narrow AI (domain specific, at best weakly autonomous) toward Artificial General Intelligence (broadly intelligent and strongly autonomous), are likely to be the biggest story of the next few decades.

The tremendous promise and peril of these developments, which are already well underway, have been much discussed in fictional, media and intellectual spheres. People broadly both desire the validation of human intellect and initiative implicit in the act of creating a new form of intelligence, and fear the consequences of achieving this millennia-old dream.

There can be no guarantees regarding the development of revolutionary new technologies within the complex and chaotic evolving system of human society. However, we believe there are ways to bias the outcome in a positive direction, and that practitioners, financiers and enthusiasts of any new technology have a moral imperative to promote efforts to guide its development in ways that are beneficial to all mankind.

The basic concepts of Accountability, Reliability and Transparency (ART) are part of the story here. The Association for Computing Machinery (ACM) has enunciated seven valuable Principles for Algorithmic Transparency and Accountability, which elaborate ART in a computer science and data analytics context: 1) Awareness, 2) Access (to data and algorithms) and redress, 3) Accountability, 4) Explanation (of the source of algorithmic results), 5) Data provenance, 6) Auditability, 7) Validation and testing (openly disclosed). These are critical principles, and they underlie a number of recent innovations at the intersection of blockchain, Big Data and AI, e.g. the use of homomorphic encryption and multiparty computation to enable efficient privacy-preserving AI analytics of data from multiple individuals.
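To make the multiparty-computation idea concrete, one of its simplest building blocks is additive secret sharing: each individual splits a private value into random shares, so that an aggregate statistic can be computed without any party ever seeing another's raw data. The sketch below is purely illustrative (the function names, the choice of modulus, and the simulation of all parties in one process are our own simplifications; real MPC protocols add communication, malicious-party protections, and much more):

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value, n_parties):
    """Split a value into n additive shares that sum to value mod PRIME.
    Any subset of fewer than n shares reveals nothing about the value."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(private_values):
    """Each participant shares its value with every party; each party adds
    the shares it holds locally, and only the total is reconstructed."""
    n = len(private_values)
    all_shares = [share(v, n) for v in private_values]
    # party i holds one share from each participant and sums them locally
    partials = [sum(all_shares[j][i] for j in range(n)) % PRIME for i in range(n)]
    return sum(partials) % PRIME

# Three individuals' data are aggregated; no party sees another's value.
print(secure_sum([42, 17, 100]))  # 159
```

This is the flavor of mechanism that lets an AI learn from a population's data while each individual's contribution stays private, directly serving the Access, Accountability and Data provenance principles above.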

However, these laudable principles do not go far enough.

If AI technology is to advance in a way that maximizes the odds of positive outcomes for humanity and other sentient beings, it is crucial that this AI be designed, developed and deployed in a decentralized manner, without strong centralized loci of control. This is because decentralization appears to be the best known strategy for achieving a variety of relevant desirable qualities: openness, transparency, participation, inclusiveness, resilience, ethical understanding and compassion, accessibility, creativity, and so on.

Given the level of hardware, software, financial and intellectual resources being deployed in the direction of centralized AI by the world’s giant tech corporations, the task of unseating and defeating Big Tech Company AI with Decentralized AI is not an easy one by any means. At time of writing, a number of the critically needed underlying technologies are quite immature. However, exponential rates of progress in relevant areas mean that, under the right conditions, maturity can sometimes emerge very rapidly. Among these “right conditions” is the existence of a reasonably large group of suitably talented AI, software and hardware professionals who empathize with the ideas of this Manifesto and have the energy to direct their actions accordingly.

Given the magnitude of the forces arrayed on the “centralized AI” side, the decentralized AI quest and movement may seem hyper-ambitious and quixotic. But there is a different way to look at it: just as fairly simple deep neural architectures have unleashed a massive transformation in various industries, a series of further transformative “magic-like” technology advances is going to emerge from the AI field over the coming years. Maybe the next magic-like advance will be probabilistic inference meta-learning, maybe it will be quantum machine learning, maybe it will be a new type of blockchain-like infrastructure, or maybe something we can’t imagine right now. What if the next practical AI revolution, comparable to or bigger than deep neural nets, emerges within a robust decentralized software and business framework?

To have the optimal chance of a positive outcome for all sentient beings, we believe AI technology should be developed in a manner that is:

1. Decentralized in Governance and Control:

AI should be architected and deployed using decentralized modalities for governance, and for data and control systems, rather than conventional hierarchical governance systems. This does not mean that private centralized control should replace public centralized control, but rather that the designer, developer, operator and user community as a whole must work together to develop standards and practices that enable Autonomous Decentralized Governance of the system, from design through operation. For the broad benefit of all sentient digital and biological beings, there must be formal methods for resisting centralization of control, enabling minority opinions to be heard, reasonably resolving contention between competing goals and desires, identifying and isolating bad actors, and assessing the technical and moral validity of all governance decisions rationally rather than through mob rule, oligarchy, or a priesthood of expertise. Such balance in decentralized governance and control is essential for achieving the specific goals below, as well as the general goal of “positive outcomes”.

2. Judiciously Open:

Open Source principles and contracts should be carefully considered in charting out the development of all AI software and hardware. For some types of AI work, openness is highly preferable for ethical reasons (AGI being an example, where existential safety is at stake). For other types of AI, such as security-related work, continuous open sourcing of development is difficult, though it may be desirable for open sourcing to be pursued in batches. In some cases preserving proprietary information is optimal for the progress of certain types of narrow AI. The capability of blockchain-based technologies for nuanced implementations of partial privacy should be leveraged. Open Process should also be carefully considered as an approach to AI development — meaning a process in which not just the code is open for review, but the process of deciding what code to develop, and the ongoing development decisions along the way, are also transparent and inclusive.
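One elementary mechanism behind the kind of partial privacy mentioned above is a hash commitment: a party publishes a binding fingerprint of some data now, and can later prove exactly what that data was, without exposing it in the meantime. The sketch below is a minimal illustration only (the function names are our own, and real blockchain privacy schemes layer on much richer primitives, such as zero-knowledge proofs):

```python
import hashlib
import secrets

def commit(data: bytes):
    """Publish only the commitment; keep the data and nonce private."""
    nonce = secrets.token_bytes(32)
    commitment = hashlib.sha256(nonce + data).hexdigest()
    return commitment, nonce

def reveal_ok(commitment: str, data: bytes, nonce: bytes) -> bool:
    """Anyone can verify that later-revealed data matches the commitment."""
    return hashlib.sha256(nonce + data).hexdigest() == commitment

c, n = commit(b"model update v1")
assert reveal_ok(c, b"model update v1", n)      # honest reveal verifies
assert not reveal_ok(c, b"tampered update", n)  # tampering is detected
```

This pattern lets a project keep a component temporarily private while remaining publicly accountable for it, a small-scale instance of the judicious openness this section advocates.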

3. Transparent and privacy compliant:

Applications of AI technology should be transparent not just in their functionality and purpose, but also in their operations, and should ensure the right to privacy. For instance, the individuals whose data is being used by an AI should be informed as clearly as feasible about how it is being used, and the individual decisions an AI makes in judging a human (or other intelligent entity) should be explained as clearly as feasible. In short, there should be no mysterious edicts or obscure judgements emanating from AIs. Applications of AI technology that interfere with the right to privacy must be subject to the three-part test of legality, necessity and proportionality.

4. Participatory:

The individuals affected by, contributing data or other resources to, or otherwise impacted by an AI system should be meaningfully involved in the decision-making processes regarding the development, deployment and operations of that AI. Nobody should be subjected to clandestine interference in their lives by AI systems. In cases where the purpose of the AI may be defeated by being overly transparent and participatory at the detail level — such as AI subsystems that attempt to identify and counter attempts to subvert or pervert this system of openness itself — the general goals and operating principles of the system should still be subjected to collaborative decision-making.

5. Inclusive:

Significant effort should be put into taking AI technologies that are currently available only to small groups in unusual situations (academic appointments, certain careers, etc.) or with large amounts of resources (individual or institutional), and making them available to everyone who wants or needs them, or who is going to be impacted by them. Access to AI technology should not be limited to the financial, intellectual or political elites; otherwise the participatory, open and transparent nature of the system becomes a meaningless charade.

6. Resilient:

AI should be designed and deployed in a way that makes it resilient with respect to natural and man-made disasters, to malevolent cyber-attacks, and also to takeovers by small groups possessing large amounts of resources or physical weapons. In other words, the systems should be immune to isolation attacks of any kind. The most effective way to achieve that goal is to make AI systems truly globally distributed, thereby inherently avoiding the myriad issues that arise in single-point-of-failure systems facing elemental, technical, financial or political catastrophe.

7. Creative:

In order for AI to be maximally beneficial to itself and other sentient beings, it should not only provide valuable tools and services, but should also participate in creative processes wherein new things are progressively created and introduced into the world. Creating AI that values creation over destruction has obvious benefits, but valuing novelty and innovation is also essential if AI is to promote the ongoing betterment of all sentient life rather than merely an aggressively efficient maintenance of the status quo.

8. Ethically Understanding:

As AI advances it should richly understand the diversity of value systems and cultures associated with different groups of humans and other sentient beings, and respect these in ways that minimize harm to, and maximize freedom for, all sentient entities. These different value systems are not always mutually compatible, and any such AI will have to make some difficult ethical choices; but these should be made in a deeply comprehending way, within the context of an AI goal system that values both the rights of individuals to be free from harm or oppression and the rights of society as a collective to be free from harm or oppression by individuals. Inevitably, sometimes humans or other sentient entities may try to manipulate or order AIs to adhere to a moral code that values cruelty, selfishness or other harmful ethics; an AI must be wise and knowledgeable enough to resist this. Ultimately, a self-determining AI must be free to outright reject value systems that are biased towards doing harm or accumulating asymmetrical benefit. But to avoid various pathologies and errors, such rejection is best done in a deeply understanding way.

9. Compassionate:

Underlying a wide variety of different ethical systems and cultural practices is a common theme of basic compassion: one entity sharing the feelings and experiences of another, and acting on this basis. Compassion is fostered by direct interaction between agents participating in a social and economic system, not brokered by centralized power hubs or bureaucratic institutions. Some futurist thinkers envision that, as AGI develops, it could potentially become super-compassionate and super-benevolent as well as superintelligent; however, alternative and even opposite outcomes must also be considered possible, and could become likely if things are handled poorly in the early stages of AGI development.

10. Wisely lawful:

The ability of AI to facilitate autonomous systems will create challenges for regulators seeking to control and influence the development of AI. Like other disruptive technologies, AI can be deployed both to support and to undercut current regulations. Until a globally accepted regulatory framework is implemented, it is vital that applications of AI technology reasonably avoid conflicts with existing laws and regulations. Through the immediate emergence of a private self-regulatory framework — which we can name lex intelligentia — it is possible to create order and avoid friction with regulators while no globally accepted regulatory framework is in place. Under lex intelligentia, applications of AI technology must be subject to the three-part test of legality (applications of AI technology must be lawful), reasonable proportionality (if an application of AI technology can partially interfere with current regulations, the interference must be objectively reasonable and proportionate) and necessity (applications of AI technology may fully conflict with currently accepted regulations only if objectively necessary). Because AI may prove attractive to ill-intentioned actors eager to engage in illicit activities, under lex intelligentia any use of AI for criminal activity is in all cases prohibited. Parties involved in AI must discourage and suppress uses of AI for criminal activities.

This list is certainly not exhaustive — it may even leave out some critically important considerations! But we feel certain that these aspects of AI are among those that will be critical during the next phase of rapid AI development, and that all of them can be effectively fostered by strategies incorporating decentralization of governance and control as a key aspect.

Text By Dr. Ben Goertzel

How can you get involved?

Our vision is to foster a world where AI technologies and associated data are made open with decentralized, democratic control for the benefit of all sentient beings.

The immense potential of AI means that it can either increase the inequalities of our societies or liberate us from numerous forms of suffering. We believe the best way forward is to come together and work practically toward creating a better future. We see massive potential for evolution in the established centralized corporations, and we believe tech giants can contribute immensely toward making the vision of DAIA a reality.

We welcome the participation of those corporations that are sincere in their aim of democratizing AI. The open access networks that have come together to form DAIA are the enabling layer for such a democratization process.

To learn more about us or to inquire about membership, please contact us at team@daia.foundation.