💪 From the big boys

Backchannel ran a rare piece on how Apple uses machine learning. It states that a 200MB software package runs on the iPhone encompassing “app usage data, interactions with contacts, neural net processing, a speech modeler and a natural language event modeling system”. I’ve held the view for a while now that today’s AI techniques and infrastructure will re-open a class of historically intractable problems while also enabling us to rethink how products and features should be designed. Apple seem to think the same: “Machine learning is enabling us to say yes to some things that in past years we would have said no to. It’s becoming embedded in the process of deciding the products we’re going to do next.”

Salesforce announced their internal umbrella AI initiative, modestly called Einstein, which will go on to power many of the company’s cloud services, as well as expose AI tools to end users. The team of 175 data scientists includes talent from acquired startups MetaMind, PredictionIO and RelateIQ. The company’s flagship event, Dreamforce, will attract 170,000 people to SF next week.

Six of the most powerful technology companies have set up the Partnership on AI, a non-profit aimed at advancing public understanding of AI and formulating best practices on the challenges and opportunities within the field. An important catalyst to this end will undoubtedly be the continuation of open source technology development, which Seldon’s founder articulates in this piece.

🌎 On the importance and impact of AI on the World

Stanford’s 100 year study on AI published their first report. It finds “no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future”. From a public policy perspective, it recommends that policymakers:

Define a path toward accruing technical expertise in AI at all levels of government.

Remove the perceived and actual impediments to research on the fairness, security, privacy, and social impacts of AI systems.

Increase public and private funding for interdisciplinary studies of the societal impacts of AI.

a16z’s Chris Dixon sets out 11 reasons to be excited about the future of technology with short soundbites for each. Five of these are either directly related to or will be enabled by AI and machine learning.

👍 User-friendly AI

UC Berkeley announced a new Center for Human-Compatible AI to study how AI systems used for mission-critical tasks can act in ways that are aligned with human values. One enabling technique is inverse reinforcement learning, where an agent (e.g. a robot) learns a task by observing human actions instead of optimising a hand-specified objective on its own.
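
The idea behind inverse reinforcement learning can be illustrated with a deliberately toy sketch (all names and the matching rule are hypothetical simplifications, not the Center’s method): instead of being handed a reward, the agent looks at which state features the demonstrator visits and infers a linear reward that explains that behaviour.

```python
import numpy as np

# Toy inverse RL sketch (hypothetical): infer which of two state features
# a demonstrator values by looking at its discounted feature visitation.
# A real method (e.g. max-entropy IRL) solves an optimisation problem;
# here we crudely set the reward weights proportional to visitation.

def feature_expectations(trajectories, features, gamma=0.9):
    """Average discounted feature counts over demonstrated trajectories."""
    fe = np.zeros(features.shape[1])
    for traj in trajectories:
        for t, state in enumerate(traj):
            fe += (gamma ** t) * features[state]
    return fe / len(trajectories)

# Two states: state 0 carries feature [1, 0], state 1 carries [0, 1].
features = np.array([[1.0, 0.0], [0.0, 1.0]])
# The expert spends most of its time in state 1 ...
expert_trajs = [[1, 1, 1], [0, 1, 1]]
fe = feature_expectations(expert_trajs, features)
reward_weights = fe / np.linalg.norm(fe)
print(reward_weights)  # ... so the inferred reward favours feature 2
```

The point is the direction of inference: behaviour in, reward out, which is the reverse of ordinary reinforcement learning.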

Designer Ines Montani makes the case for how front-end development can improve AI . Music to my ears! I take the view that although AI can be used to solve fascinatingly complex problems, wrapping a service with an API for others to dream up the most powerful use case isn’t the path to building a valuable company. Instead, one should productise technology with user-centered design as a top priority. Ines walks through how design can “improve the collection of annotated data, communicate the capabilities of the technology to key stakeholders and explore the system’s behaviours and errors.”

💻 AI running at scale

Google has published a high-level description of their deep learning-based recommendation system for YouTube, built using TensorFlow. The system uses two networks: one generates potential candidates from the corpus of videos and a second ranks these candidates using video features, user history and context. In contrast to many deep learning models, the ranking model uses hundreds of engineered features because the raw data doesn’t lend itself well to direct input. Two weeks later, the company open sourced a data set of 8 million YouTube video URLs along with labels from a set of 4,800 classes.
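
The two-stage shape of such a system can be sketched in a few lines (a hypothetical simplification with made-up data, not Google’s implementation): a cheap candidate generator narrows millions of videos to a few hundred via embedding similarity, then a ranker re-scores that short list with richer per-video features.

```python
import numpy as np

# Minimal two-stage recommender sketch (illustrative only).
rng = np.random.default_rng(0)
n_videos, dim = 10_000, 32
video_embeddings = rng.normal(size=(n_videos, dim))   # stand-in for learned embeddings

def generate_candidates(user_embedding, k=200):
    """Stage 1: cheap nearest-neighbour search over the whole corpus."""
    scores = video_embeddings @ user_embedding
    return np.argsort(scores)[-k:]                    # top-k candidate video ids

def rank(candidates, user_embedding, engineered_bonus):
    """Stage 2: re-score the short list using extra per-video features."""
    scores = video_embeddings[candidates] @ user_embedding
    scores += engineered_bonus[candidates]            # e.g. freshness, watch history
    order = np.argsort(scores)[::-1]                  # best first
    return candidates[order]

user = rng.normal(size=dim)
bonus = rng.normal(scale=0.1, size=n_videos)          # placeholder engineered features
ranked = rank(generate_candidates(user), user, bonus)
print(ranked[:10])                                    # ten videos to surface
```

Splitting the problem this way lets the expensive ranking model run on hundreds of items rather than millions, which is the design choice the paper highlights.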

Spotify takes us through the evolution of their machine learning teams who drive the recommendations behind their Discovery Weekly and Radio products.

Ten years after the original release of Google Translate, the Google Brain team announced a new state-of-the-art Neural Machine Translation system (paper here). The system takes the entire text to be translated as input to a recurrent neural network instead of breaking the input sentence into words and phrases. To generate each output word (e.g. in English), the network attends to a weighted distribution over the encoded input words (e.g. in Chinese), focusing on those most relevant. Of note, the Chinese-to-English Google Translate service is 100% machine translation based, producing 18 million translations per day!
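
The attention step described above reduces to a small computation (a generic sketch of dot-product attention, not the GNMT architecture itself): the decoder forms a softmax-weighted average of the encoder’s states, so the most relevant input words dominate the context used to produce the next output word.

```python
import numpy as np

# Generic attention sketch (illustrative; GNMT uses a learned alignment model).
def attend(decoder_state, encoder_states):
    scores = encoder_states @ decoder_state      # alignment score per input word
    weights = np.exp(scores - scores.max())      # numerically stable softmax
    weights /= weights.sum()
    context = weights @ encoder_states           # weighted sum of encoder states
    return context, weights

# Three encoded "input words" and one decoder state (toy numbers).
encoder_states = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
decoder_state = np.array([0.0, 2.0])
context, weights = attend(decoder_state, encoder_states)
print(weights)  # mass concentrates on the inputs aligned with the decoder state
```

Because the weights are recomputed for every output word, the network can shift its focus across the source sentence as the translation proceeds.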

🔬 AI in healthcare and life sciences

Google DeepMind announced a research partnership with the Radiotherapy Department at University College London Hospitals NHS Foundation Trust. The project focuses on improving the process of segmenting normal tissue from cancer in the head and neck region so that radiotherapy causes less collateral damage to non-cancer regions.

The Next Platform has tracked research publications in deep learning since the summer and finds a particular emphasis on medical applications for prenatal ultrasound, breast mammography, brain cancer and melanoma.

Slightly more left field, Elon Musk announced that he’s made progress on a design for a neural lace. This would effectively serve as an interface between our brains and machines, to avoid even the benign scenario in which humans become “house cats” in the age of superintelligent AI.

🚗 Department of Driverless Cars

Fortune ran a piece on the journey of the Justin.tv founders from building a live streaming business to a self-driving car technology company, both of which sold for over $1bn.

The US Federal Government released its first rulebook on autonomous vehicles, including regulation on the safe testing and deployment of AVs (including data sharing) as well as a model US state policy framework to regulate AVs.

Mapillary, the Swedish company operating a crowdsourced street-level imagery service, joined UC Berkeley’s Deep Drive, where it will focus on semantic segmentation of real-world imagery and structure from motion to help drive research in deep learning and computer vision for autonomy.

I attended NVIDIA’s GPU Technology Conference (GTC) in Amsterdam last week and was positively taken aback by the extent of the company’s investment into driving autonomy. Jen-Hsun Huang, who founded the company in 1993 and still leads as CEO, spent the better part of his 1.5h opening keynote talking through the integrated hardware and software platform NVIDIA is launching to power autonomy. These products and services are pluggable such that their 80+ partners can choose what they want to buy vs build. NVIDIA is clearly positioned to provide the shovels for the self-driving gold rush, much like Google’s TensorFlow enables the company to sell more compute infrastructure time. Announcements included:

DRIVE PX 2, an in-car GPU computing platform available in three configurations to enable automated highway driving (1x GPU @ 10 watts), point-to-point travel (two mobile processors + 2 GPUs) or full autonomy (multiple PX 2 systems).

DRIVEWORKS, a software development kit that provides a runtime pipeline framework for environment detection, localisation, planning and a visualisation dashboard for the passenger.

DGX-1, a deep learning “supercomputer” to train the multiple networks running on the DRIVE PX 2.

The BB8 self-driving car (watch this video), which learned to drive in both rainy and dark conditions, take hard corners, navigate around cones and construction sites, and drive without needing any lane paths.

An HD mapping partnership with TomTom built on the DRIVE PX 2 platform.