Last week, Google DeepMind announced that it had open-sourced Sonnet, a software library that draws on DeepMind’s internal best practices for quickly building neural network modules in TensorFlow. This is a great resource, distilling the collective experience of some of its 250 highly skilled engineers and released to enable others to apply machine learning to their problems more effectively.

In fact, over the last few years, the world’s biggest tech companies (including Google, Facebook, Microsoft, IBM, Baidu, Amazon and more) and university research labs have open sourced at least 2.5 million lines of machine learning platform code (see table below), which equates to over 650 person-years or $80m in development costs.

These toolkits are now freely available online, and many, such as TensorFlow and Paddle, are accompanied by significant volumes of training and example materials.

As such, they can be viewed as an incredible free initial investment for any company looking to use machine learning as an enabling technology for its product.

LOC and person-year estimates from openhub.net for a selection of the most popular machine learning frameworks. Lines of code are current rather than as at the time of open sourcing.

This trend towards open sourcing seems set to continue, driven by researchers and engineers from academic backgrounds who push their employers to let them keep contributing back to the research community.

This creates an interesting question around deployment of talent — is it better for the ecosystem to have its best AI engineers pooled in a small number of organizations doing core research and open sourcing key parts of their output, or to have this talent embedded as small teams across a much larger number of organizations where they can work to solve specific commercial problems?

Many early-stage companies looking to hire engineers to work on machine learning problems find it hard to compete with the likes of Google and Facebook, who have invested significant resources into creating the best possible environments for AI research. If this work were undertaken exclusively behind closed doors and for their employers’ gain, this ‘hoarding’ of talent could be seen as damaging to wider innovation.

However, by open-sourcing this work, these companies are actually accelerating the pace at which broader innovation is possible, providing a significant head start to developers building their own businesses on these technologies. And companies are indeed building on them today: LinkedIn currently lists over 7,000 job postings worldwide that specifically mention one of the frameworks above.