Apple’s secretive autonomous car project has shifted focus over the years, but this year it seems to be picking up speed. In April, the company received a permit to test self-driving cars in California, and in June, Apple CEO Tim Cook confirmed that the company was working on software that could allow cars — and maybe other things — to drive themselves. During a talk on Friday, Apple’s director of artificial intelligence research, Ruslan Salakhutdinov, spoke about some of the company’s recent advances in machine learning that would be useful for such a project.

Wired reports that Salakhutdinov spoke before a group of AI experts at the end of this year’s Neural Information Processing Systems (NIPS) conference in Long Beach, California, describing how Apple is using machine learning to analyze data from a vehicle’s cameras. He discussed techniques from a recently published study on the advances the company has made in using AI to detect pedestrians and cyclists with LiDAR. But he also revealed efforts on some other projects: software that uses a car’s cameras to identify objects such as cars and pedestrians, as well as the drivable lanes on the road. He also showed off images demonstrating how the system performed even when camera lenses were obscured by raindrops, and how the software could infer where pedestrians were, even when they were hidden behind parked cars.
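Apple hasn’t released the code behind this research, so the details of its LiDAR pipeline aren’t public. As a rough, hypothetical illustration of a preprocessing step commonly used by neural detectors that work on LiDAR data, here is a minimal sketch that groups a raw point cloud into a 3D voxel grid — every name, parameter, and number below is illustrative, not Apple’s:

```python
import numpy as np

def voxelize(points,
             voxel_size=(0.5, 0.25, 0.25),
             extent=((0, 70), (-40, 40), (-3, 1))):
    """Assign each LiDAR point (x, y, z) to a voxel in a bounded grid.

    Detectors that learn from point clouds often start this way: points
    outside the region of interest are dropped, and the rest are bucketed
    so a network can process a regular grid instead of raw points.
    """
    points = np.asarray(points, dtype=float)
    lo = np.array([e[0] for e in extent])
    hi = np.array([e[1] for e in extent])
    # Keep only points inside the region of interest.
    mask = np.all((points >= lo) & (points < hi), axis=1)
    kept = points[mask]
    # Integer voxel coordinates for each remaining point.
    idx = np.floor((kept - lo) / np.array(voxel_size)).astype(int)
    voxels = {}
    for coord, pt in zip(map(tuple, idx), kept):
        voxels.setdefault(coord, []).append(pt)
    return voxels
```

In a real detector, the points in each occupied voxel would then be fed to a learned feature encoder; this sketch stops at the bucketing step.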

Salakhutdinov also discussed how Apple’s software interprets the data it is being fed. One project uses a technique called SLAM (simultaneous localization and mapping) to give the software a sense of where it is — something that’s also used in map building and augmented reality — while another takes data from the cars and uses it to help build more detailed maps. According to Wired, he didn’t speak specifically about how these projects fit into Apple’s car effort, but it seems as though Apple’s focus will be on developing the brains that will eventually steer the cars safely.
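The report doesn’t describe Apple’s SLAM implementation, but the core idea behind the technique can be sketched in a few lines: the vehicle tracks its own pose from motion estimates while placing the things it observes into a shared map frame. The toy class below (all names are hypothetical, and real systems also correct the pose using the map, which is omitted here) shows just that idea:

```python
import math

class TinySlam:
    """Toy illustration of the SLAM idea: track the vehicle's pose from
    odometry while registering observed landmarks into one map frame.
    Real SLAM also feeds the map back to correct the pose; omitted here.
    """

    def __init__(self):
        self.x = self.y = self.theta = 0.0  # vehicle pose in the map frame
        self.landmarks = []                 # landmark positions, map frame

    def move(self, distance, turn):
        """Dead-reckoning pose update from odometry: drive, then rotate."""
        self.x += distance * math.cos(self.theta)
        self.y += distance * math.sin(self.theta)
        self.theta += turn

    def observe(self, rng, bearing):
        """Convert a range/bearing sighting (vehicle frame) to map frame."""
        lx = self.x + rng * math.cos(self.theta + bearing)
        ly = self.y + rng * math.sin(self.theta + bearing)
        self.landmarks.append((lx, ly))
```

For example, after driving one meter forward and sighting an object two meters straight ahead, the object is recorded at three meters along the map’s x-axis — the same landmark seen again later, from a different pose, is what lets a full SLAM system correct its accumulated drift.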