Silicon Valley has seen quite a few high-level shuffles this past year, and almost all of them were related to deep learning / AI in one way or another. We have seen Jim Keller join Tesla to head its autonomous driving hardware initiative, and we saw Intel roll out its Nervana chip and formally enter the AI battlefield. Just a few weeks ago, we exclusively told you about Raja Koduri leaving for Intel to head its Core and Visual Computing Group (more on what he is working on later), and we saw official confirmation just a day later.

Google offering multi-million dollar package to NVIDIA deep learning engineer(s) willing to jump ship

Today, I have another big update to share, and something that has frankly been a long time coming. Sources close to these events have informed me that Google has started poaching engineering talent from NVIDIA's deep learning department with very handsome packages. Talent hunting and shuffling is a common sight in Silicon Valley, but it would appear that Google is shifting into high gear and bringing out the big guns. So far, I have only received confirmation of engineers from the deep learning department who have been approached with an offer, but this might extend to other departments as well.

Google is offering an eight-figure package (in the range of $9–12 million over the course of 3 years) to the deep learning engineer(s) in question to shift over from NVIDIA. There are rumors of an all-out raid for deep learning talent going on over at Google right now, and this could very well be just a drop in the bucket. Deep learning is one of the key areas of growth that chip makers and independent design houses stand to benefit from, and it is clear why Google would want to hire the best talent from the one company on earth that has already made a name for itself in the GPGPU ecosystem (I am talking, of course, about cuDNN). For those of you wondering about non-competes: since the state of California does not recognize non-compete clauses, they are worth less than toilet paper (to quote a friend).

Google's portfolio includes the TPU (Tensor Processing Unit), its own homegrown chip that it rolled out to compete in the AI market. The TPU was originally designed to handle high volumes of low-precision computation and was specifically optimized with the TensorFlow framework in mind. This is why the company still uses GPUs and CPUs for other machine learning workloads, and its supplier on the GPU end is none other than NVIDIA (in fact, NVIDIA GPUs are offered on Google's cloud computing platform).
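To give a feel for what "low-precision computation" means here, the following is a toy quantization sketch in pure Python (illustrative only, not Google's actual TPU implementation): float values are mapped to 8-bit integers with a fixed scale, the dot product is accumulated in integer arithmetic, and the result is rescaled once at the end. The function names and the scale value are my own choices for the example.

```python
def quantize(values, scale):
    """Map floats to the signed 8-bit range [-127, 127] using a fixed scale."""
    return [max(-127, min(127, round(v / scale))) for v in values]

# Two small vectors to multiply
a = [0.5, -1.2, 3.3]
b = [1.0, 0.25, -0.75]
scale = 0.05  # chosen so all values fit comfortably in int8

qa, qb = quantize(a, scale), quantize(b, scale)

# Accumulate in integer arithmetic, then rescale once at the end
q_dot = sum(x * y for x, y in zip(qa, qb))
approx = q_dot * scale * scale

exact = sum(x * y for x, y in zip(a, b))
print(approx, exact)  # the low-precision result closely tracks the float result
```

The integer multiply-accumulate is the cheap operation that hardware like the original TPU performs in bulk; the accuracy cost of dropping to 8 bits is small for many inference workloads.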

With TPU 2.0, it was clear that Google had been gunning for NVIDIA, and it began publishing specifications that put the chip in direct competition with NVIDIA's offerings. That said, cuDNN still has a far better developed ecosystem, and the GPGPU approach offers more flexibility than the TPU approach. This is one of the reasons why deep learning engineering talent from NVIDIA is an ideal fit for Google's ambitions, and it looks like Google is paying that talent its figurative weight in gold. This is good news for everyone concerned (except maybe NVIDIA), because it means that Google is increasing its stake in the deep learning sector - which is bound to be one of the leading markets of the future.