Some of the tech world’s elites have come together to fund research that hopes to protect humanity from the rise of artificial intelligence.

LinkedIn founder Reid Hoffman and eBay founder Pierre Omidyar’s Omidyar Network have each donated $10 million to the “Ethics and Governance of Artificial Intelligence Fund.” The Knight Foundation has contributed $5 million to the cause, while Raptor Group founder Jim Pallotta and the William and Flora Hewlett Foundation have each donated $1 million.

This fund will focus on research into the inevitable ethical quandaries posed by the very nature of complex AI. As the Knight Foundation explains:

Even when we don’t know it, artificial intelligence affects virtually every aspect of our modern lives. Technology and commerce will ensure it will impact every society on earth. Yet, for something so influential, there’s an odd assumption that artificial intelligence agents and machine learning, which enable computers to make decisions like humans and for humans, is a neutral process. It’s not. Even algorithms have parents, and those parents are computer programmers, with their values and assumptions. Those values – who gets to determine what they are and who controls their application – will help define the digital age.

In summary, AI will only be as neutral or objective as the people who create it, which is to say, probably not at all. The Ethics and Governance of Artificial Intelligence Fund will allocate portions of its $27 million to people dedicated to addressing this issue, and to the ever more complicated questions that will arise as we move toward a future guided in large part by intelligent constructs that already know our names, our addresses, our incomes, our children, and even our favorite foods.

Harvard’s Berkman Klein Center and the MIT Media Lab will serve as “founding anchor institutions” in an effort at “bridging the gap between the humanities, the social sciences, and computing by addressing the global challenges of artificial intelligence (AI) from a multidisciplinary perspective.” MIT Media Lab Director Joi Ito outlined the “tough challenges” that “AI’s rapid development” brings with it:

For example, one of the most critical challenges is how do we make sure that the machines we ‘train’ don’t perpetuate and amplify the same human biases that plague society? How can we best initiate a broader, in-depth discussion about how society will co-evolve with this technology, and connect computer science and social sciences to develop intelligent machines that are not only ‘smart,’ but also socially responsible?

According to the MIT Media Lab statement, the fund will be managed by “a small board, consisting of leadership from each participating foundation and institution,” as well as members of their faculty and “a number of other individuals from a wide range of disciplines and organizations.”

With the concept of truly intelligent artificial beings now reaching even the European Parliament, this research seems especially relevant. According to the Berkman Klein Center, the participating organizations “welcome public engagement,” but are not currently seeking further investment from the general public. They do, however, “welcome discussions with all institutions and individuals engaging in research related to developing ethical AI in the public interest.”

Follow Nate Church @Get2Church on Twitter for the latest news in gaming and technology, and snarky opinions on both.