On January 1st, 2018, I made predictions (here) about self driving cars, Artificial Intelligence and machine learning, and about progress in the space industry. Those predictions had dates attached to them for 32 years up through January 1st, 2050.

So, today, January 1st, 2019, is my first annual self appraisal of how well I did. I’ll try to do this every year for 32 years, if I last that long.

I am not going to edit my original post, linked above, at all, even though I see there are a few typos still lurking in it. Instead I have copied the three tables of predictions below. I have changed the header of the third column in each case to “2018 Comments”, but left the comments exactly as they were, and added a fourth column titled “Updates”. In one case I fixed a typo (about self driving taxis in Cambridgeport and Greenwich Village) in the leftmost column. I have started highlighting the dates in column two where the time they refer to has arrived, and I am starting to put comments in the fourth, “Updates”, column.

I will tag each comment in the fourth column with a cyan colored date tag in the form yyyymmdd such as 20190603 for June 3rd, 2019.

The entries that I put in the second column of each table, titled “Date” in each case, back on January 1st of 2018, have the following forms:

NIML, meaning “Not In My Lifetime”, i.e., not until beyond December 31st, 2049, the last day of the first half of the 21st century.

NET some date, meaning “No Earlier Than” that date.

BY some date, meaning “By” that date.

Sometimes I gave both a NET and a BY for a single prediction, establishing a window in which I believe it will happen.

For now I am coloring those statements wherever it can already be determined whether or not I was correct.

I have started using LawnGreen (#7cfc00) for those predictions which were entirely accurate. For instance a BY 2018 can be colored green if the predicted thing did happen in 2018, as can a NET 2019 if it did not happen in 2018 or earlier. There are five predictions now colored green.

I will color dates Tomato (#ff6347) if I was too pessimistic about them. There are no Tomato dates yet. But if something happens that I said was NIML, for instance, then it would go Tomato, or if in 2019 something had already happened that I said was NET 2020, then that too would go Tomato.

If I was too optimistic about something, e.g., if I had said BY 2018, and it hadn’t yet happened, then I would color it DeepSkyBlue (#00bfff). None of these yet either. And eventually if there are NETs that went green, but years later have still not come to pass I may start coloring them LightSkyBlue (#87cefa).

In summary then: Green splashes mean I got things exactly right. Red means provably wrong and that I was too pessimistic. And blueness will mean that I was overly optimistic.
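To make those rules concrete, here is a small sketch of the scoring logic in Python. The function and its names are purely illustrative, my own framing of the rules above rather than anything from the original scorecard, and it leaves out the eventual LightSkyBlue case for long-stale NETs:

```python
NIML_YEAR = 2050  # NIML means not before January 1st, 2050


def score(current_year, happened_year=None, net=None, by=None, niml=False):
    """Return a color name for one prediction, or None if it cannot yet be judged.

    happened_year: the year the predicted event occurred, or None if it has not.
    net, by: the NET and BY years of the prediction, if given.
    niml: True if the prediction was NIML.
    """
    if happened_year is not None:
        if niml and happened_year < NIML_YEAR:
            return "Tomato"  # too pessimistic: I said NIML but it happened
        if net is not None and happened_year < net:
            return "Tomato"  # too pessimistic: it happened before the NET date
        return "LawnGreen"   # it happened inside the predicted window
    # The event has not happened yet.
    if by is not None and current_year > by:
        return "DeepSkyBlue"  # too optimistic: the BY date has passed
    if net is not None and current_year >= net:
        return "LawnGreen"    # the NET held: nothing happened before it
    return None               # too early to tell
```

For example, a BY 2018 prediction that came true in 2018 scores LawnGreen, while a NET 2020 prediction that came true in 2019 scores Tomato.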

So now, here are the updated tables. So far none of my predictions have been at all wrong; there is only one direction to go from here!

No predictions have yet been relevant for self driving cars, but I have added one comment in this first table.

| Prediction [Self Driving Cars] | Date | 2018 Comments | Updates |
|---|---|---|---|
| A flying car can be purchased by any US resident if they have enough money. | NET 2036 | There is a real possibility that this will not happen at all by 2050. | |
| Flying cars reach 0.01% of US total cars. | NET 2042 | That would be about 26,000 flying cars given today's total. | |
| Flying cars reach 0.1% of US total cars. | NIML | | |
| First dedicated lane where only cars in truly driverless mode are allowed on a public freeway. | NET 2021 | This is a bit like current day HOV lanes. My bet is the leftmost lane on 101 between SF and Silicon Valley (currently largely the domain of speeding Teslas in any case). People will have to have their hands on the wheel until the car is in the dedicated lane. | |
| Such a dedicated lane where the cars communicate and drive with reduced spacing at higher speed than people are allowed to drive. | NET 2024 | | |
| First driverless "taxi" service in a major US city, with dedicated pick up and drop off points, and restrictions on weather and time of day. | NET 2022 | The pick up and drop off points will not be parking spots, but like bus stops they will be marked and restricted for that purpose only. | 20190101 Although a few such services have been announced, every one of them operates with human safety drivers on board. And some operate on a fixed route and so do not count as a "taxi" service--they are shuttle buses. And those that are "taxi" services only let a very small number of carefully pre-approved people use them. We'll have more to argue about when any of these services do truly go driverless. That means no human driver in the vehicle, or even operating it remotely. |
| Such "taxi" services where the cars are also used with drivers at other times and with extended geography, in 10 major US cities. | NET 2025 | A key predictor here is when the sensors get cheap enough that using the car with a driver and not using those sensors still makes economic sense. | |
| Such "taxi" service as above in 50 of the 100 biggest US cities. | NET 2028 | It will be a very slow start and roll out. The designated pick up and drop off points may be used by multiple vendors, with communication between them in order to schedule cars in and out. | |
| Dedicated driverless package delivery vehicles in very restricted geographies of a major US city. | NET 2023 | The geographies will have to be where the roads are wide enough for other drivers to get around stopped vehicles. | |
| A (profitable) parking garage where certain brands of cars can be left and picked up at the entrance and they will go park themselves in a human free environment. | NET 2023 | The economic incentive is much higher parking density, and it will require communication between the cars and the garage infrastructure. | |
| A driverless "taxi" service in a major US city with arbitrary pick up and drop off locations, even in a restricted geographical area. | NET 2032 | This is what Uber, Lyft, and conventional taxi services can do today. | |
| Driverless taxi services operating on all streets in Cambridgeport, MA, and Greenwich Village, NY. | NET 2035 | Unless parking and human drivers are banned from those areas before then. | |
| A major city bans parking and cars with drivers from a non-trivial portion of a city so that driverless cars have free rein in that area. | NET 2027, BY 2031 | This will be the starting point for a turning of the tide towards driverless cars. | |
| The majority of US cities have the majority of their downtown under such rules. | NET 2045 | | |
| Electric cars hit 30% of US car sales. | NET 2027 | | |
| Electric car sales in the US make up essentially 100% of the sales. | NET 2038 | | |
| Individually owned cars can go underground onto a pallet and be whisked underground to another location in a city at more than 100mph. | NIML | There might be some small demonstration projects, but they will be just that, not real, viable mass market services. | |
| First time that a car equipped with some version of a solution for the trolley problem is involved in an accident where it is practically invoked. | NIML | Recall that a variation of this was a key plot aspect in the movie "I, Robot", where a robot had rescued the Will Smith character after a car accident at the expense of letting a young girl die. | |

Right after the Artificial Intelligence and machine learning table I have some links to back up my assertions.

| Prediction [AI and ML] | Date | 2018 Comments | Updates |
|---|---|---|---|
| Academic rumblings about the limits of Deep Learning. | BY 2017 | Oh, this is already happening... the pace will pick up. | 20190101 There were plenty of papers published on limits of Deep Learning. I've provided links to some right below this table. |
| The technical press starts reporting about limits of Deep Learning, and limits of reinforcement learning of game play. | BY 2018 | | 20190101 Likewise some technical press stories are linked below. |
| The popular press starts having stories that the era of Deep Learning is over. | BY 2020 | | |
| VCs figure out that for an investment to pay off there needs to be something more than "X + Deep Learning". | NET 2021 | I am being a little cynical here, and of course there will be no way to know when things change exactly. | |
| Emergence of the generally agreed upon "next big thing" in AI beyond deep learning. | NET 2023, BY 2027 | Whatever this turns out to be, it will be something that someone is already working on, and there are already published papers about it. There will be many claims on this title earlier than 2023, but none of them will pan out. | |
| The press, and researchers, generally mature beyond the so-called "Turing Test" and Asimov's three laws as valid measures of progress in AI and ML. | NET 2022 | I wish, I really wish. | |
| Dexterous robot hands generally available. | NET 2030, BY 2040 (I hope!) | Despite some impressive lab demonstrations we have not actually seen any improvement in widely deployed robotic hands or end effectors in the last 40 years. | |
| A robot that can navigate around just about any US home, with its steps, its clutter, its narrow pathways between furniture, etc. | Lab demo: NET 2026; Expensive product: NET 2030; Affordable product: NET 2035 | What is easy for humans is still very, very hard for robots. | |
| A robot that can provide physical assistance to the elderly over multiple tasks (e.g., getting into and out of bed, washing, using the toilet, etc.) rather than just a point solution. | NET 2028 | There may be point solution robots before that. But soon the houses of the elderly will be cluttered with too many robots. | |
| A robot that can carry out the last 10 yards of delivery, getting from a vehicle into a house and putting the package inside the front door. | Lab demo: NET 2025; Deployed systems: NET 2028 | | |
| A conversational agent that both carries long term context, and does not easily fall into recognizable and repeated patterns. | Lab demo: NET 2023; Deployed systems: 2025 | Deployment platforms already exist (e.g., Google Home and Amazon Echo) so it will be a fast track from lab demo to wide spread deployment. | |
| An AI system with an ongoing existence (no day is the repeat of another day as it currently is for all AI systems) at the level of a mouse. | NET 2030 | I will need a whole new blog post to explain this... | |
| A robot that seems as intelligent, as attentive, and as faithful, as a dog. | NET 2048 | This is so much harder than most people imagine it to be--many think we are already there; I say we are not at all there. | |
| A robot that has any real idea about its own existence, or the existence of humans in the way that a six year old understands humans. | NIML | | |

With regard to academic rumblings about deep learning: in 2017 a new cottage industry arose, attacking deep learning by constructing fake images for which a deep learning network gives high scores to ridiculous interpretations. These are known as adversarial attacks on deep learning, and some defenders counter that such images will never arise in practice.

But then in 2018 others found completely natural images that fooled particular deep learning networks. A group of researchers from Auburn University in Alabama showed how an otherwise well trained network can completely misclassify objects with unusual orientations, in ways that no human would get wrong at all. Here are some examples:

We humans can see why or how a network might get the first one wrong for instance. It is a large yellow object across a snowy road. But other clues, like the size of the person standing in front of it immediately get us to understand that it is a school bus on its side across the road, and we are looking at its roof.

And here is a paper from researchers at York University and the University of Toronto (both in Toronto) with this abstract:

We showcase a family of common failures of state-of-the art object detectors. These are obtained by replacing image sub-regions by another sub-image that contains a trained object. We call this “object transplanting”. Modifying an image in this manner is shown to have a non-local impact on object detection. Slight changes in object position can affect its identity according to an object detector as well as that of other objects in the image. We provide some analysis and suggest possible reasons for the reported phenomena.

In all their images a human can easily see that an object (e.g., an elephant, say, and hence the very clever title of the paper, “The Elephant in the Room”) has been pasted on to a real scene, and both understand the real scene and identify the object pasted on. The deep learning network can often do neither.

Other academics took to more popular press outlets to express their concerns that the press was overhyping deep learning, and to show what the limits are in reality. There was a piece by Michael Jordan of UC Berkeley on Medium, an op-ed in the New York Times by Gary Marcus and Ernest Davis of NYU, and a story on the limits of Google Translate in the Atlantic by Douglas Hofstadter of Indiana University at Bloomington.

As for stories in the technical press, there were many that sounded warning alarms about how deep learning was not necessarily going to be the greatest, most important technical breakthrough in the history of mankind. I must admit, however, that more than 99% of the popular press stories did lean towards that far fetched conclusion, especially in the headlines.

Here is PC Magazine talking about the limits in language understanding, and Forbes magazine on the overhyping of deep learning. A national security newsletter quotes a Nobel prizewinner on AI:

Intuition, insight, and learning are no longer exclusive possessions of human beings: any large high-speed computer can be programed to exhibit them also.

This was said by Herb Simon in 1958. The newsletter goes on to warn that overhype is nothing new in AI and that it could well lead to another AI winter. Harvard Magazine reports on the dangers of applying an inadequate AI system to decision making about humans. And many, many outlets reported on an experimental Amazon recruiting tool that learned biases against women candidates from looking at how humans had evaluated CVs.

The press is not yet fully woke with regard to AI, and deep learning in particular, but there are signs and examples of wokeness showing up all over.

Developments in space were the most active for this first year, and fortunately both my optimism and my pessimism were well placed and were each rewarded.