Tesla is currently tackling what may be its biggest challenge to date. In his Master Plan, Part Deux, CEO Elon Musk envisioned a fleet of zero-emissions vehicles capable of driving on their own. Tesla has taken steps toward this goal with improvements and refinements to its Autopilot and Full Self-Driving suites, but a lot of work remains to be done.

As noted by Tesla during its Autonomy Day presentation last year, attaining Full Self-Driving is largely a matter of training the neural networks used by the company. Tesla takes what could be described as a somewhat organic approach to autonomy, using a system centered on cameras and artificial intelligence — the equivalent of a human driver relying primarily on eyes and brain.

Tesla’s camera-centric approach may be controversial due to Elon Musk’s strong stance against LiDAR, but it is gaining ground, with other autonomous vehicle companies such as Mobileye developing FSD systems that rely primarily on visual data and a trained neural network. This approach does come with challenges, however, as training neural networks requires vast amounts of data — a point Tesla emphasized during its Autonomy Day presentation.

With this in mind, it is important for the electric car maker to train its neural networks as efficiently as possible without compromising accuracy. To help accomplish this, Tesla appears to be looking into the use of augmented data, as described in a recently published patent titled “Systems and Methods for Training Machine Models with Augmented Data.”

A block diagram of an environment for computer model training. (Credit: Patentscope.wipo.int)

Teslas are equipped with a suite of cameras that provide 360-degree visual coverage for the vehicle. In the patent’s description, Tesla noted that images used for neural network training are usually captured by various sensors, which, at times, have different characteristics. An example of this may lie in a Tesla’s three forward-facing cameras, each of which has a different field of view and range from the other two.

Tesla’s recent patent describes a system that allows the company to process these images in an optimized manner. Part of how this is done is through augmentation, which opens the door to flexible and widespread neural network training, even when it involves vehicles equipped with differently-specced cameras. The electric car maker describes this process as follows:

“Augmentation may provide generalization and greater robustness to the model prediction, particularly when images are clouded, occluded, or otherwise do not provide clear views of the detectable objects. These approaches may be particularly useful for object detection and in autonomous vehicles. This approach may also be beneficial for other situations in which the same camera configurations may be deployed to many devices. Since these devices may have a consistent set of sensors in a consistent orientation, the training data may be collected with a given configuration, a model may be trained with augmented data from the collected training data, and the trained model may be deployed to devices having the same configuration.”
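The patent does not publish code, but the kind of augmentation it describes — perturbing collected training images so a model generalizes to clouded or occluded views — can be sketched in a few lines. The following is a minimal illustration, not Tesla’s implementation; the function name and the specific transforms (horizontal flip, brightness jitter) are assumptions chosen for simplicity.

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return a randomly perturbed copy of an HxWxC uint8 image.

    Hypothetical example transforms:
      - horizontal flip (simulates mirrored road scenes)
      - brightness shift (simulates lighting variation)
    """
    if rng.random() < 0.5:
        image = image[:, ::-1]  # flip left-right
    shift = int(rng.integers(-30, 31))  # random brightness offset
    shifted = image.astype(np.int16) + shift
    return np.clip(shifted, 0, 255).astype(np.uint8)
```

At training time, each collected frame would pass through such a function (possibly many times with different random draws), multiplying the effective size of the dataset without new collection drives.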

Among the most notable aspects of Tesla’s recent patent is the use of “cutouts,” which allow Tesla’s neural networks to be trained on an optimized set of images. Former Tesla Autopilot engineer Eshak Mir discussed something similar in a Third Row Podcast interview, where he hinted at a system adopted in the electric car maker’s ongoing Autopilot rewrite that helped lay out “all the camera images” from a vehicle “into one view.” Such a process could help Tesla with 3D labeling, especially since the images used for neural network training are stitched together. Tesla’s patent appears to reference a system very similar to the one described by the former Autopilot engineer.
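To make the “one view” idea concrete, here is a heavily simplified sketch of laying several camera frames onto a single canvas. This is only an illustration under assumed names — a production system would project each camera into a shared 3D frame using calibration data, not simply tile pixels side by side.

```python
import numpy as np

def stitch_views(images: list[np.ndarray]) -> np.ndarray:
    """Tile HxWxC camera frames side by side on one canvas.

    Frames of different heights are zero-padded at the bottom so
    they share a common canvas height. (Hypothetical helper; real
    multi-camera fusion uses calibrated 3D projection instead.)
    """
    canvas_height = max(img.shape[0] for img in images)
    padded = [
        np.pad(img, ((0, canvas_height - img.shape[0]), (0, 0), (0, 0)))
        for img in images
    ]
    return np.concatenate(padded, axis=1)
```

A single stitched canvas lets one label (say, a lane line crossing two cameras’ fields of view) be drawn once instead of separately per camera.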

“As a further example, the images may be augmented with a ‘cutout’ function that removes a portion of the original image. The removed portion of the image may then be replaced with other image content, such as a specified color, blur, noise, or from another image. The number, size, region, and replacement content for cutouts may be varied and may be based on the label of the image (e.g., the region of interest in the image, or a bounding box for an object).”

Tesla is aiming to release a feature-complete version of its Full Self-Driving suite as soon as possible. Elon Musk remains optimistic about this, despite the company missing its initial end-of-2019 timeline. That being said, Musk did mention previously that Tesla is working on a foundational rewrite of Autopilot. In a tweet early last month, Musk stated that an essential part of the rewrite involves work on Autopilot’s core foundation code and 3D labeling. Once that is done, the CEO indicated, additional functionalities could be rolled out quickly. This recent patent, if anything, seems to give a glimpse at how these improvements are being made.