The newly established fund will be anchored by Harvard's Berkman Klein Center for Internet & Society and MIT's Media Lab.

“AI’s rapid development brings along a lot of tough challenges,” Joi Ito, director of MIT’s Media Lab, said in a press release. “For example, one of the most critical challenges is how do we make sure that the machines we ‘train’ don’t perpetuate and amplify the same human biases that plague society? How can we best initiate a broader, in-depth discussion about how society will co-evolve with this technology, and connect computer science and social sciences to develop intelligent machines that are not only ‘smart,’ but also socially responsible?”

As Ito notes, not all concerns over AI are apocalyptic. Rather, as these systems become more prevalent in our daily lives — from driving our cars to informing our news feeds — their negative effects may be more insidious.

AI bias is one such concern that has emerged over the past couple of years. In one study, an algorithm was shown to erroneously label black defendants as higher risks for reoffending than their white counterparts. In another, an algorithm was shown to serve women lower-paying job advertisements than men. It's these subtle yet significant issues that the fund will aim to address.

“Artificial intelligence agents will impact our lives in every society on Earth. Technology and commerce will see to that,” said Alberto Ibargüen, president of the Knight Foundation. “Since even algorithms have parents and those parents have values that they instill in their algorithmic progeny, we want to influence the outcome by ensuring ethical behavior, and governance that includes the interests of the diverse communities that will be affected.”

Just over a year ago, entrepreneurs like Elon Musk and Sam Altman launched OpenAI, an organization intended to advance AI for the good of humanity. The White House also chimed in last year with a series of workshops and reports on the future of AI, society, and our economy.
