
To some it’s scary and to others it’s exciting, but the scientific consensus is that artificial intelligence will have a drastic impact on humanity, probably within our lifetimes.

What exactly that will look like is a debated range of sci-fi scenarios. Stephen Hawking made waves when he claimed that the development of AI could end the human race. While real-life Tony Stark Elon Musk and Microsoft's Bill Gates chimed in with concerns of their own, other experts believe the threat of artificial intelligence has been exaggerated. What everyone can agree on is that developing artificial intelligence is something we should be very, very careful with.

When Google purchased DeepMind to the tune of £400 million, the London-based AI startup had some pretty firm ground rules regarding the two companies' relationship. Demis Hassabis, the DeepMind CEO, said that a condition of the acquisition was for Google to form an internal ethics committee. DeepMind also refuses to allow any of its technology to be used for weapons or military interests.

Hassabis has announced that he and many of the top minds currently working in AI research will be meeting in New York in early 2016 to discuss and debate ethical issues surrounding their work. Although no official list of participants has been released, big players such as Apple and Facebook will almost certainly have representatives present.

Since purchasing the AI company, Google has been using DeepMind's technology in a wide array of implementations. Artificial intelligence has improved Google's image recognition technology and is also helping services like Google Now anticipate users' needs more accurately. Talks like the one expected to occur in New York will likely serve to create ethical frameworks that will guide the development of this and other technologies.

Given that we live in a world dominated by capitalistic desires, can humanity entrust the synthesis of such a pivotal foundation, the AI ethical framework, to the hands of Google, Facebook, and Apple? Do these companies not have a vested commercial interest in AI that may bias their respective inputs towards individual gain over that of humanity?
