When we started, though, we couldn’t yet control where the balloons went, and we couldn’t yet make them come down when we wanted to (which we can also do now). We were just working out a lot of the basic avionics issues of making a cell tower in the sky that was 1% the weight of what you’d put on a cell tower, using 1% of the power, at about 1% of the cost, and making sure it worked at 2% of normal air pressure and at temperatures down to 90 degrees below zero. Since we couldn’t steer them yet, since we couldn’t tell them to come down when we wanted, and since we really didn’t want them wandering off into other countries whose permission we hadn’t yet asked, we built the balloons to fail. We do it differently now, but we used latex for those early balloons. Latex stretches, so if you put some helium in a latex balloon and let it go, it expands as it climbs because the air higher up is less dense. That expansion keeps the balloon less dense than the air around it, so it rises some more. This continues until about 100,000 feet, when the latex gets so thin (and so brittle from the cold) that it explodes. You can see such an explosion right here. So failure was, for the early Loon testing, a critical safety valve for the project. No balloon would stay up in the air more than a few hours.
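To give a feel for why a sealed latex balloon keeps swelling until it bursts, here is a rough back-of-envelope sketch. The altitude figures are generic standard-atmosphere estimates (not Loon’s data), and the function name is just an illustration; it simply applies the ideal gas law to a fixed mass of helium.

```python
# Back-of-envelope: how much a sealed helium balloon expands as it climbs.
# Ideal gas law: P*V = n*R*T, so for a fixed mass of helium,
# V2/V1 = (P1/P2) * (T2/T1).
# Altitude values below are rough standard-atmosphere figures, not Loon data.

def expansion_factor(p_ground_kpa, t_ground_k, p_alt_kpa, t_alt_k):
    """Ratio of balloon volume at altitude to its volume at ground level."""
    return (p_ground_kpa / p_alt_kpa) * (t_alt_k / t_ground_k)

# Ground level: ~101.3 kPa, ~288 K. Near 100,000 ft: ~1.1 kPa, ~230 K (rough).
factor = expansion_factor(101.3, 288.0, 1.1, 230.0)
print(f"Near 100,000 ft the balloon holds roughly {factor:.0f}x its launch volume")
```

With numbers in that ballpark, the balloon ends up holding on the order of seventy times its launch volume, which is why the stretched, cold latex eventually gives way.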

Sometimes, though, failure isn’t a feature. In the worst cases, it isn’t even something you can learn much from. Sometimes it is just a cost you pay for the learning you’re doing. Even then, getting out into the real world is the right thing to do. Our simulators and spreadsheets said yes, sure, you can hypothetically provide continuous coverage with a fleet of balloons sailing on stratospheric wind patterns. But nothing beats actually getting balloons into the sky for months on end, riding all these winds around the globe, so we can test those hypotheses. We’ve been doing just that for the past two years, and we have it working well now. We can routinely let go of a balloon on one side of the world and guide it to within a few hundred meters of where we want it to go on the other side of the world, 10,000 km away. But it wasn’t always that way. It took many hundreds of tries, experiments, and failures to get them working that well, and every failure meant a balloon headed somewhere we didn’t want it. And that meant taking it down and going to collect it. We’ve sent teams north into the Arctic Circle to stuff a balloon into the back of a helicopter, and out into the South Pacific by boat to collect others. Not how we want to be spending our time, obviously, but it was worth it for the practice we’ve gotten steering the balloons by teaching them how to sail.

One of our projects is focused on building a fully self-driving car. If the technology could be made so that a car could drive everywhere a person can drive, and more safely than people drive in those same places, over a million lives a year could be saved worldwide. Plus, there’s over a trillion dollars of wasted time per year we could collectively get back if we didn’t have to pay attention while the car took us from one place to another.

When we started, we couldn’t make a list of the 10,000 things we’d have to do to make a car drive itself. We knew the top 100 things, of course. But pretty good, pretty safe, most of the time isn’t good enough. We had to go out and find a way to learn what should be on that list of 10,000 things. We had to see all of the unusual real-world situations our cars would face. There is a real sense in which the making of that list, the gathering of that data, is fully half of what is hard about solving the self-driving car problem.

A few months ago, for example, our self-driving car encountered an unusual sight in the middle of a suburban side street: a woman in an electric wheelchair, wielding a broom, working to shoo a duck out of the middle of the road. You can see in this picture what our car could see. I’m happy to say, by the way, that while this was a surprising moment for the safety drivers in the car (and for the car itself, I imagine), the car did the right thing. It came autonomously to a stop, waited until the woman had shooed the duck off the road and left the street herself, and then moved down the street again. That definitely wasn’t on any list of things we thought we’d have to teach a car to handle! But now, when we produce a new version of our software, before that software ends up on our actual cars, it has to prove itself in tens of thousands of situations just like this in our simulator, using real-world data. We show the new software moments like this and ask, “And what would you do now?” Then, if the software fails to make a good choice, we can fail in simulation rather than in the physical world. In this way, what one car learns or is challenged by in the real world can be transferred to all the other cars and to all future versions of the software. We only have to learn each lesson once, and every rider we have forever after gets the benefit of that one learning moment.
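The replay idea itself is simple enough to sketch. This is a minimal illustration of regression-testing driving software against logged real-world moments; every name and data structure here is a hypothetical stand-in, not the actual simulator’s API.

```python
# Sketch: replay logged real-world moments against new driving software.
# All names here are hypothetical illustrations, not a real simulator's API.

def choose_action(scene):
    # Stand-in for the driving software under test: stop whenever
    # anything is blocking the lane ahead.
    if scene["obstacle_in_lane"]:
        return "stop"
    return "proceed"

# Each logged scenario pairs a recorded scene with the known-safe choice.
logged_scenarios = [
    {"scene": {"obstacle_in_lane": True},  "safe_action": "stop"},     # duck + wheelchair
    {"scene": {"obstacle_in_lane": False}, "safe_action": "proceed"},  # clear road
]

def regression_pass(software, scenarios):
    """Replay every logged moment; the new software must match the safe action."""
    return all(software(s["scene"]) == s["safe_action"] for s in scenarios)

print(regression_pass(choose_action, logged_scenarios))  # prints True
```

The point of the harness is that a failure shows up as a `False` in simulation, long before the new software ever reaches a physical car.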

So most of you have probably heard of Glass. This is an example of an [x] product that we knew we had to get out into the real world at a very early stage to see how it might work. People have been envisioning how our physical and digital lives will merge through the use of smart glasses in sci-fi TV shows and movies for more than 30 years now. Knowing how to convert that into a product that can be made today and will really work for people is a very different matter. This is exactly why we created the Glass Explorer program.

The program allowed us to get an early version of the device into the hands of a lot of different people. The Explorer edition of Glass wasn’t for everyone, but the Explorer program pushed us to find a wide range of near term applications and uses for something like Glass. From firefighting to surgery, from cooking to learning to play the guitar, interacting with information hands free clearly has a lot of use cases. We also quickly saw areas for technical improvements — the battery life was a major obstacle and an area where we had to invest — but the program was designed just as much for social testing as it was for technical testing. We needed fearless pioneers, and we’re grateful to everyone — probably many of you in this room — who came on this adventure with us.

In retrospect, we made one good decision and one bad decision around the Glass Explorer program. The good decision was that we did it. The bad decision was that we allowed and sometimes even encouraged too much attention for the program. Instead of people seeing the Explorer devices as learning devices, Glass began to be talked about as if it were a fully baked consumer product. The device was being judged and evaluated in a very different context than we intended — Glass was being held to standards that launched consumer products are held to, but the Explorer edition of Glass was really just an early prototype. While we were hoping to learn more about how to make it better, people just wanted the product to be better straight away — and that led to some understandably disappointed Explorers.

But of course, we learned a lot from the very loud public conversations about Glass and will put those learnings to use in the future. I can say that having experimented out in the open was painful at points, but it was still the right thing to do. We never would have learned all that we’ve learned without the Explorer program and we needed that to inform the future of Glass and wearables in general.

Glass graduated from [x] earlier this year, so stay tuned for that future. In the meantime, for those of you weighing up your own execution risks and trying to figure out a plan for testing market readiness for a new product or technology, my advice is: go out and talk to people, and prototype, and talk some more, and prototype some more, and create as many opportunities to learn as you can. You’re never going to figure out the right answer sitting in a conference room.

One of our earliest projects at [x] was called Genie. We worked on it for about 18 months and then spun it out into a standalone business, where it has been growing and thriving for the past two and a half years. The original goal of the Genie project was to fix the way buildings are designed and built by creating, basically, an expert system, a software Genie if you will, that could take your needs for a building and design the building for you. The problem is very real. The built environment is an $8 trillion per year industry that is still basically artisanal. It produces almost half the world’s solid waste and nearly a third of the world’s CO2 emissions. Over the first 18 months of the project, though, we found out that the system we envisioned couldn’t connect into the infrastructure and ecosystems of the built environment, because that software infrastructure is piecemeal and often not software at all, just knowledge trapped in the heads of the experts in the field.

Having learned this, the company, now called Flux, took a huge step back. The goal for the company is the same, but through these extended rounds of interaction with structural engineering firms, architecture firms, developers, and contractors, it realized that before such a software Genie could even be contemplated, a software foundation and data layer had to be laid, much as you would do with a building.

The picture here in blue shows the zoning areas for downtown Austin. You see that lighthouse-like spray-out from the center of the map? Those are sight lines: you can’t build a building in Austin that blocks the view of the state capitol building’s dome along those lines. And every one of the other circles and squares on the map is another zone with its own special rules. There are many areas where a half dozen or more zoning regions apply to the same plot of land. Imagine trying to figure out, for a single plot of land, from all those rules (many of which change from year to year), what exactly you’d be allowed to build there. Even worse, imagine trying to ask, across the whole city, “I want to build a building like this. Where are the places where the zoning would allow me to build it?” In the lower right-hand corner here, you can see Flux answering this question automatically. This is an example of the groundwork the company is laying: an automated way to keep track of various cities’ building codes and their ramifications for building design.
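At its core, the city-wide question is a filter over overlapping rule sets. Here is a toy sketch of that idea; the parcel names, rule format, and single height rule are invented for illustration, and real zoning (sight-line corridors, overlapping districts, yearly changes) is far richer than this.

```python
# Toy sketch of the city-wide zoning question "where could I build this?".
# Parcel names and the single max-height rule are invented illustrations;
# real zoning data involves many more rule types.

parcels = {
    "parcel_a": [{"max_height_m": 120}],
    "parcel_b": [{"max_height_m": 120}, {"max_height_m": 45}],  # view-corridor overlap
    "parcel_c": [{"max_height_m": 200}],
}

def allowed_sites(parcels, building_height_m):
    """Return parcels where the proposed height clears every overlapping rule."""
    return [
        name for name, rules in parcels.items()
        if all(building_height_m <= rule["max_height_m"] for rule in rules)
    ]

print(allowed_sites(parcels, 100))  # -> ['parcel_a', 'parcel_c']
```

A parcel only qualifies if the proposal satisfies *all* of the zones stacked on it, which is exactly why overlapping districts make the manual version of this question so painful.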

Flux is one of the successful graduations from Google[x], but the only one to date that we’ve moved out into an independent company. We don’t have a playbook for how these graduations “ought” to work, and that has allowed us to remain flexible, to run experiments on the graduation process itself, and to learn how to get the best possible graduation style and timing for each project given its unique needs and opportunities.

Project Wing is our project for delivering things via self-flying vehicles. There is a huge amount of friction left in how we move things around the world. If much of the remaining cost, safety issues, noise, and emissions could be removed from deliveries, while making them take minutes instead of hours, great positives could come from this. Sergey pushed that team out the door last summer…literally out the door, to the Australian bush, telling them to go try to deliver something in the real world to someone who wasn’t a Googler. This actually managed both to prolong a failure of ours and to help us end it, and how that worked out will be a useful lesson for other [x] projects.