*Technology news, trends and opinions*

🚗 Department of Driverless Cars

a. Large incumbents

Waymo vs. Uber: Anthony Levandowski has invoked his Fifth Amendment right against self-incrimination in court. He then announced to the Uber ATG team that he would recuse himself from all LiDAR-related work and management for the remainder of the litigation. This won’t make much difference if the rest of the team have access to the alleged stolen Waymo files. Last week saw the last hearing between Uber and Waymo before the federal judge decides whether to temporarily shut down Uber’s work on self-driving cars. In the session, a Waymo lawyer argued that Otto was a “clandestine plan” from the get-go; Uber, however, said that it has still not found any evidence that any of the 14k files touched Uber servers. What a saga!

Waymo is also taking signups for their early rider program in Phoenix, Arizona. The company is adding 500 self-driving Chrysler Pacifica Hybrid minivans to their existing fleet of 100 vehicles that are already on public roads.

Meanwhile, Apple received a permit from the DMV to test its self-driving vehicles in California. The company is now the 30th authorised tester in California and will use three Lexus RX 450h vehicles. Not much more information out there just yet except job ads for: software engineering on Maps Special Projects and computer vision engineers for the Technology Investigation team. This might also be for their mobile AR initiatives!

NVIDIA published a method for inspecting which parts of a street-level scene the neural networks of their driverless car focus on when mapping visual inputs to steering directions (paper here). This will be useful for debugging, regulators and supplier comparisons. In MIT Tech Review, Will Knight runs a longer study of initiatives towards interpretability of ML, while the same publication also argues that black box deep learning isn’t an issue in certain domains like healthcare. NVIDIA also hired Tesla’s former VP of Autopilot Vision, David Nistér. He will focus on helping customers create centimeter-accurate HD maps for the company’s AV suite. NVIDIA also announced a new video-based smart cities product for public safety, traffic management and resource optimization.

Tesla announced it would release its Automatic Emergency Braking feature on all Autopilot 2.0 compatible vehicles. This comes after Consumer Reports downgraded the top safety rating of the Model S and X. What’s more, the company updated its data sharing agreement to include the capture of short driving video clips to power its fleet learning ability. Interesting! Tesla also settled a lawsuit against Aurora, a startup founded by the company’s former head of Autopilot, Sterling Anderson.

Mobileye’s co-founder and CTO, Amnon Shashua, delivered a 1hr talk exploring the machine learning components of sensing, planning and mapping from Mobileye’s perspective.

Baidu plans to open source much of their self-driving technology stack this July in an effort to “innovate at a higher level” and not “reinvent the wheel”, according to Qi Lu, GM for the company’s Intelligent Driving Group. At this point in the driverless car race, the move doesn’t appear to come from a position of strength.

Germany’s Parliament passed a law allowing the testing of AVs on public roads provided there is a driver who can take responsibility if required. Furthermore, 15 EU countries are asking the European Commission not to impose data localisation rules as they conduct their two-year review of the Digital Single Market. Doing so would prevent the free flow of data between self-driving cars driving in different countries, thus hampering fleet learning and data network effects.

b. Startups

Oxbotica are testing a 10mph prototype autonomous shuttle in Greenwich, London that navigates using five cameras and three LiDAR sensors. Hear more from their co-founder, Ingmar Posner, at RAAIS 2017 in June!

nuTonomy has signed a deal with Groupe PSA to equip their Peugeot 3008 with self-driving technology developed by the startup. Tests will start from September 2017 in Singapore, where nuTonomy has been trialling its taxi fleet service since last year.

Zoox, the secretive full-stack autonomous car company, hired Mark Rosekind, the former head of the US National Highway Traffic Safety Administration. Other former senior NHTSA officials are employed at General Motors, Waymo and Faraday Future.

DeepMap emerged out of stealth last week with its high-resolution 3D mapping service aimed at self-driving car companies. This positions the company against larger incumbents like Waymo and TomTom’s HD mapping initiative. The DeepMap team has an impressive heritage from Google Earth, Apple Maps and Leica Geosystems.

Luminar also announced its work (5 years in the making) on a 1550 nanometer LiDAR system that provides a 200m range vs. 100m-140m achieved by Velodyne’s 905 nanometer wavelength systems. At 70 mph, another 100m of vision will give a car an extra 3 seconds of reaction time when it sees an obstacle. Here’s a pretty picture of the Luminar output :) Note that Velodyne is working seriously on updated LiDAR designs (termed ‘solid state’) that reduce the form factor and cost, while also improving range.
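The back-of-the-envelope arithmetic behind that “extra 3 seconds” claim is easy to check (assuming the additional 100m of range is all usable sight distance):

```python
# Extra reaction time gained from 100m of additional sensor range at 70 mph.
EXTRA_RANGE_M = 100.0
SPEED_MPH = 70.0

speed_ms = SPEED_MPH * 1609.344 / 3600  # miles per hour -> metres per second
extra_time_s = EXTRA_RANGE_M / speed_ms

print(round(speed_ms, 1))      # 31.3 m/s
print(round(extra_time_s, 1))  # 3.2 s, i.e. the "extra 3 seconds" quoted
```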

Luminar’s view of the world

💪The big boys

Shortly after losing Andrew Ng, Baidu have announced the opening of a second AI research center in Silicon Valley. This adds 150 scientists to the 200 who are already working at the first site.

Tencent and Intel have both followed suit, launching their own Silicon Valley AI labs.

In his letter to shareholders, Jeff Bezos of Amazon brilliantly articulates what it means to be a Day 1 vs. a Day 2 company. Making the point of embracing external trends, Bezos writes that “machine learning drives our algorithms for demand forecasting, product search ranking, product and deals recommendations, merchandising placements, fraud detection, translations, and much more.”

Buzzfeed ran a profile on the Facebook AI Research group led by Yann LeCun in NYC. It covers some history of deep learning, the founding of FAIR and its goals, as well as windows into the focus areas for the team. The group also announced an updated release of fastText, its library for text classification, in 294 languages and with a reduced memory footprint to optimize running on small memory devices. Other areas of research interest include unsupervised learning and predicting future frames in video. On the subject of video prediction, check out the neat results from this research by Google/Adobe/Michigan that focuses a neural network on human pose to make long-term predictions about motion.

🍪 Hardware

Google published a detailed research paper and presentation that evaluates the performance of their tensor processing unit (TPU), a custom ASIC, operating in a data center environment. A blog post highlights how inference of different neural networks (CNNs, MLPs and LSTMs) running on a TPU is 15x faster and 30x more power efficient (TOPS/Watt) than when running on an NVIDIA K80 GPU in the same datacenter. The TPU has 25x more multiplier-accumulator units for matrix multiplication — the core computational operation of the chip — and 3.5x as much on-chip memory as the K80 GPU. As a result, Google has employed TPUs since 2015 for web and image search, Google Photos and Cloud Vision API, Translate and AlphaGo. The project started in 2014 and was running in a datacenter 15 months later. Impressive!

🏦 Financial services

Actively managed funds are experiencing large scale withdrawals due to their fees and largely mediocre performance. Instead, investors are moving capital into passively managed peers that rely on systematic trading models. Blackrock, a $5.4 trillion asset management company, has targeted $30bn in assets to refocus on quantitative strategies. Laurence Fink, CEO, said “We have to change the ecosystem — that means relying more on big data, artificial intelligence, factors and models within quant and traditional investment strategies.” At least 36 managers associated with these funds are leaving the firm as a result.

A report by consultancy Opimas suggests that capital markets teams could spend $1.5bn on AI technologies in 2017, growing to $2.8bn by 2021. This will result, the report states, in the loss of 230,000 jobs by 2025, of which 90,000 will be from asset managers.

WorldQuant, a $5bn systematic trading firm, shares the strategy behind its “Alpha Factory”, a distributed community of data scourers and quant modellers. These include the firm’s full-time employees but also amateur quants from around the world who access data from the WebSim portal to extract signal from which trading strategies are generated.

📚Policy and governance

Many think tanks and governmental organisations are producing reports on the impacts of software and machine learning on the workforce. However, there appears to be a dearth of data to quantitatively measure this impact. For example, how do we track progress in the capabilities of various AI techniques and the use cases they affect? How are different demographics changing their skills and how is the temporary workforce evolving? We need a dedicated information infrastructure to properly inform policy decisions.

The Economist runs a briefing on how data is giving rise to a new economy. It explores the question of how to value data, the challenges with pricing it for external consumption, the resulting lack of data exchanges and incentives to simply buy entire data-creating companies altogether.

For more on the practical business implications of AI, come attend a three-part discussion forum at Oxford’s Said Business School on May 11th, May 18th (I’m at this one!) and June 1st in which attendees are the participants. The events are moderated by Kenneth Cukier of The Economist and co-author of the NYT Bestseller “Big Data”. Sign up here.

Historian Yuval Noah Harari makes the point in a TED piece that the AI revolution will create a new unworking class. His point boils down to this: as a civilisation, we have professionalised our roles and tasks over time such that machines can now displace human labor only by recapitulating these specific capabilities. While there are certainly many situations where domain-focused AI can trump human performance, the real world requires artificial agents to exhibit a generalised learning ability for which we don’t currently have the tools.

On the same topic, the Pew Research Center published a study on the future of jobs and jobs training. In total, 1,408 respondents commented on whether they see the emergence of new educational and training programs that can successfully train a large number of workers in the skills they need for jobs in 10 years time. Some 70% were mostly optimistic about the future, while others believe that capitalism itself is in real trouble.

A concern for AI developers is the fear of propagating biases that are embedded within the training data from which a system learns. A study in Science shows how a language model trained on a corpus of human-generated text does adopt, rather predictably, similar stereotypes. Paper here.
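To see what “adopting stereotypes” means mechanically, here is a toy sketch in the spirit of the paper’s word-embedding association test, using made-up 3-d vectors (the words and values are illustrative, not from the study’s corpus):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Toy 3-d "embeddings" (hypothetical values for illustration only).
emb = {
    "flower":     [0.9, 0.1, 0.0],
    "insect":     [-0.8, 0.2, 0.1],
    "pleasant":   [0.8, 0.3, 0.0],
    "unpleasant": [-0.7, 0.4, 0.1],
}

def association(word):
    """WEAT-style score: positive means the word sits closer to 'pleasant'."""
    return cosine(emb[word], emb["pleasant"]) - cosine(emb[word], emb["unpleasant"])

# A corpus that co-locates flowers with pleasant words yields embeddings
# where this asymmetry shows up directly:
print(association("flower") > association("insect"))  # True
```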

🆕 New initiatives and frontiers for AI

The University of Toronto officially launched its new Vector Institute, an independent deep learning-focused research center. It features a star-studded cast of researchers including Geoffrey Hinton, Brendan Frey, Raquel Urtasun, Sanja Fidler and David Duvenaud. It will draw funding from the $125m Pan-Canadian AI Strategy announced in mid-March, as well as $80m of dedicated funding from 30 companies including Google and Shopify. Uber has also just committed $5m to fund Urtasun’s group to strengthen its grip on AV research.

In an interview with Jack Clark, Hinton implores the research community to push further into neuroscience to draw inspiration for building AI. He points to mechanisms for long-term memory, reasoning and focused attention as areas for exploration.

A study in Science has uncovered new fundamentals for how memories are formed and mature over time (research paper here). It was previously believed that memories are first created in the hippocampus (short-term) and subsequently transferred to the neocortex for long-term storage. Here, the authors show that episodic memories are created both in the hippocampus and neocortex. In the longer term, cells responsible for these memories in the hippocampus become silent while those in the neocortex retain their activity.

Generative adversarial networks (GANs) are all the rage in the AI community (see: GAN zoo for a running list). While discriminative models are used to separate input data (e.g. via classification), generative models can learn to create new examples of the input data they are trained on (e.g. images). A WIRED feature on Ian Goodfellow, who began working on GANs in 2014, explains how GANs work and their potential impact. The adversarial learning framework pits one neural network tasked with generating a target data type (e.g. images) against another tasked with telling the real from the fake among the outputs. Over time, the generator learns to fool the discriminator and thus reproduces the inherent structure we see in the real world.
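A minimal sketch of that adversarial loop, with a linear generator and a logistic-regression discriminator fit to a 1-D Gaussian (plain Python, no deep learning library; all names and hyperparameters here are illustrative):

```python
import math
import random

random.seed(0)

def d(x, w, b):
    """Discriminator: probability that scalar x came from the real data."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def g(z, a, c):
    """Generator: maps noise z to a sample via an affine transform."""
    return a * z + c

real = lambda: random.gauss(4.0, 0.5)   # real data: N(4, 0.5)
noise = lambda: random.gauss(0.0, 1.0)  # generator input noise

w, b = 0.1, 0.0  # discriminator parameters
a, c = 1.0, 0.0  # generator parameters (starts sampling near 0)
lr = 0.05

for step in range(2000):
    # Discriminator step: push D(real) -> 1 and D(fake) -> 0.
    x_r, x_f = real(), g(noise(), a, c)
    p_r, p_f = d(x_r, w, b), d(x_f, w, b)
    w += lr * ((1 - p_r) * x_r - p_f * x_f)  # gradient of -log D(x_r) - log(1 - D(x_f))
    b += lr * ((1 - p_r) - p_f)

    # Generator step: push D(fake) -> 1 (non-saturating loss -log D(G(z))).
    z = noise()
    p_f = d(g(z, a, c), w, b)
    grad_x = (1 - p_f) * w  # how to move a sample to look more "real"
    a += lr * grad_x * z
    c += lr * grad_x

# The generator's output distribution should have drifted toward the real data.
fake_mean = sum(g(noise(), a, c) for _ in range(1000)) / 1000
```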

To be truly useful in the real world, AI systems must do more than achieve state-of-the-art performance on a specific task within one domain. They must generalise to new problems without needing to be retrained entirely. To that end, transfer learning offers an approach to leveraging labelled data and knowledge learned from one source domain/task in order to solve a related target domain/task. Sebastian Ruder shares a detailed post on the what, why, how for transfer learning.
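As a toy illustration of that idea — the weight-transfer flavour of transfer learning, where a model pre-trained on a data-rich source task is fine-tuned on a related, data-poor target task (synthetic data; not an example from Ruder’s post):

```python
import math
import random

random.seed(1)

def train_logreg(data, w=None, b=0.0, lr=0.1, epochs=30):
    """Plain-SGD logistic regression on 2-d inputs; returns learned (w, b)."""
    if w is None:
        w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            grad = y - p
            w[0] += lr * grad * x[0]
            w[1] += lr * grad * x[1]
            b += lr * grad
    return w, b

def accuracy(data, w, b):
    return sum(((w[0]*x[0] + w[1]*x[1] + b > 0) == (y == 1)) for x, y in data) / len(data)

def make_task(shift, n):
    """Two Gaussian classes; `shift` makes the target a related-but-different task."""
    pos = [((random.gauss(1 + shift, 1), random.gauss(1, 1)), 1) for _ in range(n)]
    neg = [((random.gauss(-1 + shift, 1), random.gauss(-1, 1)), 0) for _ in range(n)]
    return pos + neg

source = make_task(0.0, 200)  # data-rich source task
target = make_task(0.5, 5)    # related target task with very few labels

# Pre-train on the source, then fine-tune the same weights briefly on the target.
w_src, b_src = train_logreg(source)
w_tgt, b_tgt = train_logreg(target, w=list(w_src), b=b_src, lr=0.02, epochs=3)

test = make_task(0.5, 200)
```

With only five labelled target examples per class, the transferred initialisation does most of the work; training from scratch on the target alone would be far noisier.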

Wait But Why are back at it with a new piece on Elon Musk’s Neuralink. It’s a chunky one!

The Toyota Research Institute (TRI) has committed $35m over four years to research machine learning applications to materials science. TRI is working with Stanford, MIT, Michigan and others to develop new models and materials for batteries and fuel cells.

Lenovo too plans significant investments into AI, albeit to the tune of $1.2bn over the next four years. This represents 20% of the company’s total annual R&D expenditure by March 2021.