It’s easy to think of Silicon Valley’s artificial intelligence giants as superheroes: Each wields immense power, claims to work toward a better world, and terrifies the government with the speed and scale at which it operates.

And much like comic book heroes, these companies are trying to form their own body of governance. Alphabet, Amazon, Facebook, IBM, and Microsoft are joining forces to create industry standards for the development of ethical artificial intelligence, reports John Markoff at the New York Times.

The group hasn’t been officially announced, but the companies have reportedly been meeting to discuss how AI can be developed safely and in the interest of the general public. Reached for comment, Microsoft said it does not discuss rumors. Facebook, IBM, and Alphabet were not immediately available for comment.

The biggest danger of artificial intelligence isn’t a sentient killer AI but the possibility that the wealth created by the technology won’t be shared equally, producing a hyper-elite, technocratic upper class. Similar concerns were expressed in Stanford’s recent 100-year study on artificial intelligence, co-authored and overseen by researchers at Google and Microsoft who would likely have input in this industry ethics group.

This initiative would stand apart from another similar push in the industry: Elon Musk and Sam Altman’s OpenAI. Beyond its research, OpenAI hopes to serve as a moral barometer for ethical and open-source AI development, even if it can’t stop others from building malicious AI. The organization has accrued its own bevy of backers, including Amazon, Nvidia, Peter Thiel, and Reid Hoffman.

It’s still unclear whether artificial intelligence will be the magic bullet to solve humanity’s big problems like poverty and hunger, but no matter the outlook, it seems everyone wants to appear on the ethical side of history.