1 - Mobileye strongly implies that the next suite of hardware - 8 cameras on EyeQ3 - coming within months will be all that is needed for autonomous driving - and after that it's all software development.

2 - There is neural-network "deep learning" at work in the current implementation of Autopilot, regardless of the need for additional cameras in the future:

"Recently, we launched our first deep learning functions on Tesla auto pilot feature. These capabilities include semantic free-space which uses every pixel in the scene to help us understand where are the curves, barriers, [indiscernible] drills, moving objects and anything that is not part of the driving path.

Once we know the free-space, the big challenge is where to locate the vehicle in this free-space. We saw this with the holistic path prediction, which uses the context of the road to determine exactly where the car should go at all the time."
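To make the quote concrete: "semantic free-space" means labeling every pixel of a camera frame as drivable or not, and then reasoning about the drivable region. Below is a deliberately tiny sketch of that idea, with made-up class ids on a toy 4x6 "image" (nothing here reflects Mobileye's or Tesla's actual implementation):

```python
import numpy as np

# Toy per-pixel semantic map for a 4x6 image: each cell holds a class id.
# Class ids are hypothetical, for illustration only:
#   0 = road (drivable), 1 = curb/barrier, 2 = vehicle (moving object)
semantic = np.array([
    [1, 1, 2, 2, 1, 1],
    [1, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
])

# "Free space" is every pixel labeled as drivable road.
free = semantic == 0

# For each image column, walk up from the bottom row (nearest to the car)
# and count consecutive free rows: a crude free-space boundary per column.
def free_space_depth(mask):
    depths = []
    for col in mask.T:           # iterate over columns
        depth = 0
        for pixel in col[::-1]:  # bottom row first
            if not pixel:
                break
            depth += 1
        depths.append(depth)
    return depths

print(free_space_depth(free))  # → [2, 3, 3, 3, 3, 2]
```

A real system would produce this per-pixel labeling with a trained neural network over camera frames; the "holistic path prediction" the quote mentions is then the separate problem of choosing where within that free region the car should drive.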

3 - There are two SEPARATE "deep learning" fleet-learning efforts going on - Tesla's own proprietary data set and Mobileye's data set. They are not one and the same, and Tesla is not sharing its own fleet learning with Mobileye.

This, to me, is an indication that Tesla's race to get Autopilot on the road is in fact a way to build a moat - a distinct competitive advantage - by delivering a superior self-driving experience earlier than other automakers will be able to.

4 - Deep learning requires a HUGE data set, which can only be acquired by having a fleet on the roads doing the learning. By the time other manufacturers actually launch real hands-off autopilot systems, Tesla will have raced ahead and will have an informational advantage in building a reliable self-driving car for real-world conditions.