FRANCESC: Hi, and welcome to the sixth episode of the weekly Google Cloud Platform podcast. I am Francesc Campoy, and I'm here with Mark. Hey, Mark.

MARK: Hey, Francesc. How you doing?

FRANCESC: Pretty good. Back to San Francisco, very ha--very excited to be back.

MARK: We're in the same place at the same time. It's very exciting.

FRANCESC: That's something new, yup. So today we have a very exciting episode. We're gonna be talking with Ilya Grigorik.

MARK: Yes, very excited about this, talking about HTTP/2 and HTTP/2 on the Google Cloud Platform, hot topic.

FRANCESC: Yeah.

MARK: It's gonna be really good.

FRANCESC: Yeah, that's gonna be really, really interesting. There's a lot of really interesting things not only about HTTP/2, but we're gonna also be discussing QUIC and, like, cool things that people are trying to do with HTTP. I'm very excited about that.

MARK: Yeah.

FRANCESC: But before that, we're gonna be discussing the cool thing of the week. So what is the cool thing of the week?

MARK: I like this. I'm admittedly not a particular Minecraft user, but I do love this whole space. I think it's really creative. A gentleman by the name of Steve Sloka came up with Kubecraft, which is a lovely way to visualize what pods are running inside your Kubernetes cluster inside Minecraft. It's well worth watching the video. It's open source code. It's really cool.

FRANCESC: Yeah, I've seen the video too. I think it's really amazing. I'm really looking forward to more people collaborating on that 'cause if you're able to include that in kind of, like, a game where you could actually be killing hostile--or killing pods and seeing how they get rescheduled somewhere else, that could be such an amazing demo.

MARK: Yeah, that would be so much fun, and then, like, you kill one, and it comes back up, and you kill one, it comes back up. It'd be great. I could really, really like it.

FRANCESC: Yeah, but yeah, so we will put the link to the video and the GitHub repo on the show notes.

MARK: Definitely.

FRANCESC: That's for sure.

MARK: Wonderful. Well, it sounds like then why don't we get stuck into our interview [inaudible]?

FRANCESC: Let's go for that.

MARK: Brilliant. So we are joined today by the wonderful and illustrious Ilya Grigorik. Thank you very much for joining us today. Did I manage to pronounce your name right is probably the first question I need to ask?

ILYA: Yes, you did, but wonderful and what was the other one?

MARK: Illustrious.

ILYA: Illustrious.

FRANCESC: Yeah, illustrious.

MARK: Yes. I do like my adjectives.

FRANCESC: Such a great word.

MARK: So yes, thank you so much for joining us today. We're very excited to sit down and have a chat with you about HTTP/2 and how it affects Google Cloud Platform. Before we get into that, why don't you give us a bit of a, you know, background on you and what you do and sort of your position here at Google and all that sort of fun stuff?

ILYA: Sure thing. So first of all, I guess, thank you for inviting me to join you guys to talk about this stuff. This is a topic that's near and dear to my heart that I've been working on for a while, SPDY and HTTP/2 in particular. So I'm a developer advocate also within Google. I sit not very far from you guys. I could probably wave at you in the adjacent building, and I work with the Chrome team, or have been working for a while, and also I've been working with the Make the Web Fast team here at Google, and our mission has always been to try and figure out how to make the Internet faster as a whole and, of course, make Chrome and the Google products faster, and part of that was the SPDY effort that started back in--in around 2009, later graduated to HTTP/2, and now we're doing exciting things with QUIC, so lots of exciting things there.

MARK: Awesome. Well, okay, so you mentioned three things there which is kind of fun. You mentioned SPDY. You mentioned HTTP/2. You mentioned QUIC. Why don't we start with the first one? So what was SPDY? What was that? And we can probably continue over from there.

ILYA: Sure, so when we started working on Chrome within Google, one of the things that we realized very early on was that the current web pages that we're building are much more complicated than what the HTTP protocol was originally designed for, right? So the protocol that the web is built on today was more or less standardized ten years ago, even actually more so, back in 1999, and since then we've started building much more complicated and interesting applications. We went from pages to applications that fetch hundreds of resources and all the rest, so when the effort to start working on Chrome kicked off in around 2007, 2008, we very quickly realized that first of all, speed is one of the tenets of Chrome. We wanted to focus on speed, and we identified HTTP as one of the primary bottlenecks, where just the number of connections that you had to manage was inefficient both on the server and the client, and we were really limited in the parallelism for how many resources we could fetch, what kinds of signals we can communicate to the server to help prioritize how the data is returned, and all the rest, so that was the genesis of SPDY, where we took a step back and just looked at what could we do to HTTP to help reduce or eliminate some of the bottlenecks? So the basic insights were, well, if we add things like framing such that we can split messages instead of having one message occupy the entire connection, which is how the HTTP/1 protocol works, where you send the request and have to wait for the entire response before you can send the next request. What if we could actually send multiple requests and responses over the same connection? So that was framing. Then we looked at, well, what about prioritization? So it turns out that modern applications are both smart and complicated. For example, as you start scrolling, it'd be nice if we could reprioritize things--say an image scrolls out of the viewport.
It'd be nice to let the server know that, hey, this is maybe not as important as it was before 'cause it's no longer visible, or even simpler things like, well, I need this JavaScript file before this image file because it's blocking rendering. That's not something you could actually express in the protocol before, so we started looking at priorities, and then there's another feature that has to do with compression, so we realized that the headers of HTTP are always transferred in plain text. They're not compressed. HTTP allows the bodies to be compressed, but the--like, the GET header and the user agent string and all those things are always transferred uncompressed, and frankly, they're kind of redundant oftentimes, and there's a lot of that data going back and forth, so we could--if we could compress that data, that would also speed up the protocol, and then kind of building on those primitives, there's additional things that went into the protocol to just make it more efficient, and that was effectively the genesis of SPDY. The Chrome team implemented that--or started experimenting with it. They started implementing it also on the server, so our Google servers and Chrome were some of the first ones to kind of test this out and gather some data from out in the wild to see, like, does this work? And lo and behold, it turns out that it did work. It actually helped some of our services quite a bit, so we took that proposal to the IETF and kind of started the discussion around what could we do in the space? Like, SPDY is just one way of tackling this problem, and after some back and forth, the IETF started a new effort to define HTTP/2, and they actually adopted SPDY, which at the time was draft 2 or version 2 of the protocol, as the starting point, and then kind of the working group took over, and, you know, we continued to contribute to it and work with the team there, and that effectively became HTTP/2.
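The framing idea Ilya describes can be sketched as a toy simulation: several logical streams share one connection by splitting their payloads into frames tagged with a stream ID, which the receiver reassembles. All names and the frame shape here are illustrative only, not the real HTTP/2 binary framing:

```python
# Toy illustration of HTTP/2-style framing and multiplexing.
# Real HTTP/2 frames carry binary length prefixes, types, and flags;
# here a "frame" is just a (stream_id, chunk) tuple.
from collections import defaultdict

def frame(stream_id, payload, chunk=4):
    """Split one stream's payload into (stream_id, chunk) frames."""
    return [(stream_id, payload[i:i + chunk]) for i in range(0, len(payload), chunk)]

def interleave(*frame_lists):
    """Round-robin frames from several streams onto one 'connection'.
    Assumes equal frame counts per stream, for simplicity."""
    wire = []
    for group in zip(*frame_lists):
        wire.extend(group)
    return wire

def reassemble(wire):
    """Receiver groups frames back into per-stream payloads by stream ID."""
    streams = defaultdict(str)
    for stream_id, chunk in wire:
        streams[stream_id] += chunk
    return dict(streams)

css = frame(1, "body{color:red}")   # stream 1
js = frame(3, "console.log(1)")     # stream 3

wire = interleave(css, js)          # both responses in flight at once
print(reassemble(wire))
```

In HTTP/1, the second response could not begin until the first finished; here both travel interleaved on the same connection, which is the whole point of the framing layer.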

FRANCESC: Nice. So what you're saying is that at the beginning, HTTP/2 was basically SPDY, which was a Google product but--Google project, sorry, but from there it kept on evolving to become a standard? What are the main differences between SPDY and HTTP/2? Like, what was added or removed from it?

ILYA: So there's a lot of low-level details, like, you know, the framing is slightly different. Like, if you care about how the bits are aligned on the wire, there's quite a few subtle changes. You know, we changed the definition of some of the frames. We changed how the prioritization scheme works, so for example, in SPDY we had a very simple model where you could just assign a weight. In HTTP/2 it's much more comprehensive. It allows you to express both weights and dependencies between resources. We've done some additional work on server push, which is something we actually haven't talked about, but in HTTP/2 we have this new capability where in response to a single request, the server can actually push multiple responses back, so a use case here would be, "Hey, you came and asked for the index.html, but I know that you're gonna ask for the JavaScript file and this style sheet," right? "So why should I wait for another roundtrip for you to just come back to me and ask me for this thing? So here, just have it," so that's server push, and there's some other kind of cleanups and all the rest that was added, so the HTTP/2 draft or the--kind of that work, I think it went through--I don't want to lie, but it was definitely more than a dozen iterations, and it was actually really, really good because unlike a lot of the other protocols, this was being developed and tested in the field, so we--very early on, of course, we had Chrome support. Firefox also supported SPDY. IE and Safari also jumped on board, so while we were kind of testing HTTP/2, we were actually, like, very, very data-driven, so we had these meetings every three or four months where we would come back and do interop testing between all the browsers and the major servers. Of course, there was Google. There was Twitter. There was Facebook and others that were experimenting with this stuff, so we could make hypotheses, right, and say, "Hey, we would like to propose this feature."
Instead of just arguing about it, we could actually try and implement it, see what the data says, and then come back to the drawing board and say what worked and what didn't, so by the time the HTTP/2 spec was actually finalized and went out as a standard, we already had implementations in all the browsers. We had very good implementations in many servers including open source versions, so it was probably one of the best-tested protocols that we've released in a long time amongst all the different stacks.

FRANCESC: Okay, so from the beginning we already had support for SPDY and HTTP/2 on some Google services and Chrome, and other people also jumped into it. When did we start having that kind of support in other Google services, like, especially Google Cloud Platform?

ILYA: That's a good question. I don't know the exact date, but I can say that, of course, most if not all of our services at Google actually are served by more or less the same frontend, which is our Google Frontend, or GFE, as we call it internally, and the GFE was where we implemented SPDY support, so the support for SPDY was effectively enabled for any and all Google services as long as they used HTTPS. This is actually one thing we haven't talked about, but in order to use SPDY and HTTP/2, you have to use HTTPS because the negotiation for the protocol happens when the TLS handshake is done. That's how the client advertises that, "Hey, I support HTTP/2," and the server picks that, and I believe, actually, that we had kind of silent support for SPDY for a very long time, especially in places like App Engine, where we actually haven't advertised it much, but we were certainly, like, the first platform that supported it, so if you happened to have run your service on top of App Engine and with HTTPS, you probably didn't even know it, but you were already SPDY-enabled, which is pretty cool.
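The negotiation Ilya mentions happens via ALPN during the TLS handshake. In Python's standard `ssl` module, a client advertises the protocols it speaks like this (a sketch of context setup only; no connection is actually opened):

```python
# During the TLS handshake the client offers a protocol list via ALPN;
# the server picks one, e.g. "h2" for HTTP/2. Sketch only: we configure
# the context but make no network calls.
import ssl

ctx = ssl.create_default_context()
# Offer HTTP/2 first, fall back to HTTP/1.1 if the server lacks it.
ctx.set_alpn_protocols(["h2", "http/1.1"])

# After wrapping a socket and completing the handshake, the chosen
# protocol would be available via:
#     conn = ctx.wrap_socket(sock, server_hostname="example.com")
#     conn.selected_alpn_protocol()  # e.g. "h2"
print(ssl.HAS_ALPN)  # whether the linked OpenSSL supports ALPN
```

This is why HTTPS is the prerequisite: with plain HTTP there is no handshake in which to agree on the upgraded protocol.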

FRANCESC: So one more reason to use HTTPS on top of security is that it will be even faster. That's nice.

MARK: It'll be even faster. So--okay, so extending on from there, it leads us quite nicely into--so, all right, people are saying, "Yup, HTTP/2's awesome. I want the extra speed. You know, it's gonna make my sites faster. How do I get HTTP/2 now on, like, Google Cloud Platform? What do I need to do to get that enabled?"

ILYA: Right, so actually, I think that's an even better question for me to phrase to you guys. You know it probably even better than I do, but depending, I guess, on how you set it up, you need to make sure that, first of all, we said HTTPS, right? That's kind of point one. Once you have HTTPS, you need to make sure that whatever you're using is able to negotiate HTTP/2 on your behalf, and then it's a question of how is that routed to your backend? So if you're using something like a TCP load balancer, then you have to terminate the HTTPS and have a server that is capable of talking HTTP/2, whereas if you're using the, like, HTTPS load balancer which is HTTP aware, then it will do that work on your behalf, and I think we actually--we have a pretty nice kind of diagram on our blog that shows all of that.

MARK: We do. That might have been a leading question considering that you and I worked on that blog together, but that's fine. I'm happy with that. That's totally fine. So yeah, I mean, it's pretty cool. Like, you put a load balancer in front of it, you go HTTPS, you get--you know, you get, you know, HTTP/2 support. You use App Engine, you use HTTPS, you get HTTP/2 support. I really like this. I think it's actually really, really, really powerful.

ILYA: Yeah, it's actually--this--I think this highlights something very important. So SPDY and HTTP/2 do not change anything semantically about how HTTP works, right? It's all the same methods. It's all the same headers. It's all the rest, so when I said earlier that if you're running on HTTPS and on App Engine, you're probably running SPDY and you didn't even know it, that's actually--that's very important, right? So even today you're thinking, like, "So how do I enable HTTP/2?" It's like, "Well, switch over to HTTPS. Put a load balancer or a server that is able to speak HTTP/2, and your application will work." There's nothing extra that you need to do. Now, there are things that you could perhaps do within your applications to take better advantage of some of the things that HTTP/2 provides, but that's a whole different matter.

MARK: No, it's great. That's great. So if they want to take advantage, we talked a little bit about server push, like, pushing out assets. Are there any other things that people might be able to take advantage of they might want to?

ILYA: Well, I think that's where we get back--or we get into what kind of things you can do to get the most out of HTTP/2, right? So HTTP/2 removes a lot of the bottlenecks that we previously had to work around, things like concatenated files, and for the most part that was primarily because the requests--having multiple independent requests was just very costly from a latency perspective, so we would take all of our, like, beautiful modular JavaScripts and CSS and put it into one file and say, "Here. Here's a workaround," right? "Just fetch this one big blob of data," which is fine. Like, it works except that it does run into issues with things like caching, so now your designer comes along, changes one color, and all of a sudden your entire style sheet is invalidated. Could you have split that such that, you know, the things that are changing rapidly are small such that next time it's updated you just update that one file? That's fewer bytes for the client to fetch when they come back to your site. It also means that the content is fetched faster, so it's less load. It's faster load. It's a win all around, right? So now you can actually start thinking about that and saying, "Well, you know, I probably already put in some, like, automation in place that puts all the files together into one bundle. Maybe I want to undo all of that." Like, I probably don't want to ship hundreds of files 'cause that has its own limitations, right, but, like, dozens is probably a very good kind of safe spot. So that's one very concrete example. The other one is server push, as you mentioned. So this is actually something that developers have been sort of using unknowingly. This is very similar to inlining. So with inlining, it's kind of a similar hack that says, "Well, maybe the request is so small or I don't want you to come back, so I'm just gonna embed it in another resource. 
Like, I'm gonna take this image and just Base64-encode it and put it into the HTML such that you don't even have to ask me for it," but that's what server push accomplishes as well, except server push, when the resource is pushed, actually goes into the cache, so it's separate from the HTML file, so that resource can be reused across multiple files. It can be validated, all the same things. So that's another example, and for server push, I think we have some exciting news coming up, maybe once this podcast goes live, that'll show you how to use that on App Engine and other places as well, so I'm really excited about that.
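The inlining hack Ilya compares server push to looks like this in practice; the image bytes below are made-up placeholders for illustration:

```python
# The HTTP/1-era inlining hack: Base64-encode a small resource and
# embed it in the HTML as a data URI, so the client never has to ask
# for it. Server push saves the same round trip, but the pushed copy
# lands in the cache and can be reused and revalidated independently.
import base64

icon_bytes = b"\x89PNG...fake image bytes..."  # placeholder, not a real PNG
encoded = base64.b64encode(icon_bytes).decode("ascii")
html = f'<img src="data:image/png;base64,{encoded}">'

print(html[:40])
# Drawback: inlined data is duplicated in every page that embeds it
# and cannot be cached separately -- the limitation push removes.
```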

FRANCESC: That sounds pretty exciting. So one of the things that I had to deal with before, and didn't really enjoy, was the fact that to serve a bunch of images better, I had to put them all in a single image and then do CSS tricks for that.

ILYA: Uh-huh.

FRANCESC: Does that mean with HTTP/2 that is not needed anymore?

ILYA: Right, so that's--it's the exact same issue as the JavaScript and CSS, right? We have lots of small icons, and it's just very expensive to fetch them all because effectively the maximum parallelism is six connections--or six requests, and my page just happens to have 100 icons, right? It's like, that's gonna take a long time. That's a lot of roundtrips, so we put them all into one file and then use the CSS hacks. So yeah, this is no longer necessary because now the server--or the client, rather, can just send all hundred requests over the single connection, and then the server can respond and interleave that data as it wishes.

FRANCESC: That is really awesome.

ILYA: Now, as I said before, like, if you're--if you have thousands of images, there are other concerns where it may still be useful to do some sort of spriting, so for example, let's say all of your icons are kind of similar. You could actually get much better compression if you put some of them together, right? Like, in terms of your actual image format. So, like, there's some interesting areas to explore here. It's not just, like, unbundle everything, but at the same time, I think that's an exception, not the rule now, whereas before, like, as a rule we had to put everything into one image file or one CSS file or one JavaScript, and now that's more of an [inaudible].
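Ilya's point about similar icons compressing better together can be seen with ordinary deflate, used here as a rough stand-in for image-level similarity (the "icons" are just repetitive byte strings):

```python
# Two near-identical "assets" compress better concatenated than
# separately, because deflate can back-reference the shared content
# (and pays the container overhead only once). A rough analogy for
# why spriting *similar* images can still pay off under HTTP/2.
import zlib

icon_a = b"row of mostly-blue pixels " * 40
icon_b = b"row of mostly-blue pixels " * 39 + b"row of mostly-red pixels  "

separate = len(zlib.compress(icon_a)) + len(zlib.compress(icon_b))
combined = len(zlib.compress(icon_a + icon_b))

print(combined < separate)  # the shared bytes are encoded only once
```

This is why "unbundle everything" is a guideline rather than a rule: cross-file redundancy is one of the remaining reasons to bundle.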

MARK: It kind of changes the rules of thumb that you would necessarily, like, apply when you start building this stuff. That makes a lot of sense.

FRANCESC: Yeah, this makes me think about the demo that we'll put on the show notes for sure. It's this HTTP/2 demo developed by Brad Fitzpatrick from the Go team. So basically it shows the difference between loading--I think it's around, like, maybe a hundred little images on HTTP/1 versus HTTP/2, and it's quite amazing. Like, the difference is really night and day.

ILYA: Yeah, and it's really important that, like, as you look at those images you kind of try and grok as to why that is, right? And in HTTP/1 world it's loading slowly because we can fetch at most six tiles at a time, and then there's just many roundtrips to fetch all the necessary tiles, and with HTTP/2, the client just sends all the requests at once, so effectively, we're limited by the speed of the link between the client and the server.

FRANCESC: Cool. So there's a lot of benefits from moving from HTTP/1 to HTTP/2 which actually came via first that experiment that was SPDY. What about QUIC? You mentioned that. What is QUIC?

ILYA: Sure. So--and actually, before I go to QUIC can I just make one more note about SPDY?

MARK: Sure.

FRANCESC: Of course.

ILYA: So SPDY evolved--it co-evolved with HTTP/2. SPDY was effectively the experimental protocol for HTTP/2, so while we were working on the HTTP/2 spec, we were testing things in SPDY, and some people have a good question of, like, so what does that mean? Like, do we keep SPDY and do we keep HTTP/2? And the plan there is to deprecate SPDY, so we announced--Chrome announced that we will deprecate SPDY sometime in early 2016 because we want to encourage everybody to shift over to HTTP/2, and for example, when you look at support for HTTP/2 and SPDY across the new browsers like Microsoft Edge, they only support HTTP/2 now, so if you've been running SPDY, start looking at HTTP/2, and the good news is there's actually very good server support now. Like, NGINX and Apache both have support. App Engine, of course, and all the load balancers and all the rest on Google Cloud Platform have it, so just FYI, if you've been running SPDY, look at HTTP/2.

FRANCESC: No, that's a very interesting point, yeah.

MARK: That's a fair point.

ILYA: Yeah, and then for QUIC. So this is an interesting one. Even when we started working on SPDY, we knew that, like, the moment you address issues at the HTTP layer, you're gonna run into issues at the next layer down, which is TCP, and TCP has some behaviors, like head-of-line blocking, that are not optimal, so for example, if you have packet loss--because TCP guarantees in-order delivery, if you lose just one packet, even though you may have other packets already sitting in your buffer, you cannot deliver them, because you need to retransmit that first packet that was dropped such that you can guarantee in-order delivery, right?
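The head-of-line blocking Ilya describes can be simulated in a few lines: with in-order delivery, packets that arrive after a lost one sit in the buffer and cannot reach the application until the gap is filled.

```python
# TCP-style in-order delivery: later packets are buffered but stalled
# behind a missing sequence number -- head-of-line blocking.
def deliver_in_order(arrived, next_expected=0):
    """Return (deliverable, stalled) given the set of arrived seq numbers."""
    deliverable = []
    while next_expected in arrived:
        deliverable.append(next_expected)
        next_expected += 1
    stalled = sorted(s for s in arrived if s >= next_expected)
    return deliverable, stalled

# Packets 0-4 were sent; packet 2 was dropped on the way.
arrived = {0, 1, 3, 4}
print(deliver_in_order(arrived))  # -> ([0, 1], [3, 4]): 3 and 4 are stuck behind 2
```

QUIC's per-stream delivery avoids exactly this: a loss on one stream no longer stalls data that belongs to the others.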

MARK: Yup.

ILYA: So this seems like, eh, so what? But in practice it can actually be quite bad because if you think about sending 100, like, requests out and then everything's stalled on that one lost packet when you could have been processing something else, that's sub-optimal, so that's just one example. Other examples would be things like mobility or mobile connections where sometimes TCP--or not sometimes, oftentimes, actually, TCP gets--misunderstands the behavior of some of the mobile networks and considers it to be, like, packet loss where it's really not, so it has sub-optimal behaviors. So looking at all that, we kind of knew that, you know, at some point we're gonna have to dig a little bit deeper and try to change TCP or go some other route, but when we started working on SPDY, that was, frankly, a bit too much to tackle at once at both layers, so we focused on the HTTP layer, and then as that kind of migrated towards the IETF working group and the HTTP work kicked off, we started looking at the next layer down, so effectively, QUIC is HTTP/2 over UDP. We ha--

MARK: Wow, interesting.

ILYA: Yup. And you can imagine how that can be very interesting and complicated. So we've tried and we can--we do, actually. We have a team within Google that works on TCP in the Linux kernel and other things, so we've upstreamed many improvements over time, but one thing we learned is that TCP's actually very slow in terms of the update cycle, getting support out on the client, rolling it out on the server, and all the rest, which is, frankly, for a good reason, right? Because there's so much riding on it that the last thing you want is crazy experiments running out in the wild and causing networks to collapse, so the people that maintain all those branches--and plus you have all the hardware, right, with, like, baked-in implementations of it that you frankly can't even update at this point. And UDP offers a very simple interface that provides kind of no built-in primitives in terms of reliability and all the rest, and that's something we could experiment with, so we started with that, and we started effectively re-implementing some of the basic primitives of TCP but also adding hooks for new capabilities that we could explore, so things like pluggable congestion control, so maybe we can use a different congestion control strategy for mobile networks or 2G networks than when you're on Google Fiber and you have, like, a gigabit link, right? Those have very, very different characteristics in terms of how we should--how we want to handle congestion and other things. We also spent a lot of time looking at eliminating latency during the handshake, so we talked about requiring TLS for HTTP/2. It turns out that once you deploy HTTP/2, because you don't need the extra connections, it actually mitigates a lot of the latency cost of TLS.
So the TLS handshake itself adds two roundtrips of latency for the handshake if it's not optimized, and if you're--like, if you've optimized it well, it's another roundtrip, which is still--you know, sometimes can be significant, but that latency is offset by HTTP/2 because it does not require as many connections and you kind of amortize that cost over the lifetime of the connection. With QUIC we actually wanted to get to a zero-RTT handshake, a secure handshake where we can guarantee that it's always encrypted, but we can send application data in the first roundtrip, so I can just start talking to the server and immediately send encrypted data such that there's no penalty cost whatsoever for using encrypted sessions. So that was another big change, and this work has been going on for a while within Chrome. It is--I think we're on version 23 or 24 of the QUIC protocol. Some of those changes are, of course, smaller than others, but it's actually--I think it speaks volumes that we're on version 25 and in the meantime zero new versions of TCP have shipped.

MARK: Wow.

ILYA: Right?

MARK: So how much faster is QUIC than, like, HTTP/2? Like, obviously it seems to be dependent on certain situations, but, like, sort of ballpark-ish?

ILYA: So it really depends. It--actually, the same question applies to HTTP/2, right? Like, how much faster is HTTP/2 than HTTP/1? And the answer there is it really depends on your application. It depends on where your users are, on the network characteristics, and all the rest. We do have some numbers that we've shared for some of the Google products. I think those--this is data from maybe a year or so ago where we've seen improvements anywhere between 30% to 50% in terms of latency, overall latency for our own products. Now, you know, if all you do is you take an HTTP/2 server and--like, replace your current server with an HTTP/2 server, I can't guarantee any sort of, like, "Oh, it'll go two times faster," right? It really depends on how well your server is implemented and how you're currently serving your application, like, how many third parties do you have? Are you concatenating your files? Are they unbundled? And all the rest, so there's kind of a lot of gotchas in here, and then to answer your question on QUIC, you know, when we compare it to HTTP/2, once again, like, it really depends on the scenarios that we look at. The areas where we found most success so far are actually in some of the developing markets like India, where we have very slow users with very slow connections, and we've shared some numbers, and I can give you guys the links to the Chromium blog that has more details on this. For example, YouTube saw a significant improvement in the latency, like, reducing the number of rebuffers, and an improvement somewhere in the range of, like, 20% to 30% again for the latency of those users because they were able to eliminate some of those retransmission delays and other things. So, you know, there's no one number that I can give you that's like, "Oh, QUIC is x% faster," but--

MARK: What, no silver bullet?

ILYA: Right.

FRANCESC: No, but at least that's very interesting that in the connections that are the slowest, that's where we're gonna get the best improvement. That's really interesting.

ILYA: Well, yeah. So at a very high level, you can think of the benefits that we're seeing as kind of a long-tail distribution, right, for a lot of the connections in terms of latency and all the rest, and what QUIC really helps us with, at least based on our current observations, is to rein in those tails and kind of chop them off, because we can explicitly guard for those things and implement smarter behaviors for big or slow networks, so it allows us to be much more--like, to just eliminate a class of really bad behaviors, which is actually very important for a lot of people and for a lot of applications.

FRANCESC: Cool.

MARK: No, that's good. So just to be clear as well, so Google is currently, like, experimenting with, like, QUIC on, like, the GFEs and on Google Cloud Platform. Is that something that's sort of being experimented on, I'm guessing, within Chrome as well?

ILYA: That's right, yeah, so the--we have an implementation in Chrome, and if you're curious, you can actually just pull up the Chromium source and look at the--and look at the code. There's a toy server in there as well that I think was just built for kind of small testing, and then our GFE, again, also implements QUIC just as we did with SPDY, so most of the SPDY--sorry, most of the Google services are already speaking QUIC. One thing that you could do, and I recommend doing this, is you can go to the Chrome Store and search for--what's it called? I think it's, like, the SPDY extension.

MARK: The--yeah, the plug-in that shows the stuff, yeah.

ILYA: Yeah, and what it'll do is it'll show you, like, a little lightning bolt in your navigation bar. It'll show green when you're talking to a server that's HTTP/2, and it'll show red when it's talking to a QUIC server, and if you install that and you start navigating to your favorite Google products, you'll very quickly find that, you know, it's showing the red bolt quite often, and that's because most of our services, like YouTube, are already being served over UDP, so if you're watching this or something else on YouTube in Chrome, chances are it's actually talking UDP today.

MARK: Yup. Yeah, I've got that installed. It's actually really fun to go round the Internet and have a look.

FRANCESC: I'm installing it right now. That sounds very interesting.

MARK: That's really great.

ILYA: Yeah, and so speaking of not kind of advertising these things too much, I think the same will happen if you open your App Engine app that's running over HTTPS. You'll probably see a red bolt, which tells you that your application is being served over QUIC for the browsers that support it.

MARK: Yes, for the browsers that support it. Yeah, I think I--I remember this, and I think I had to tweak a couple of things inside Chrome to get QUIC enabled in the build that I was running a while ago, but yeah.

FRANCESC: Okay, I just tried, with normal Chrome, our own page, which is hosted by App Engine, and I get HTTP/2.

MARK: Cool.

FRANCESC: And we didn't do anything to get it, so that's nice.

MARK: That is nice.

ILYA: Yup, and so QUIC is already enabled by default in Chrome. We do run experiments. We always have a group--a random group that we hold back just for AB testing and other things, so it may be the case that, you know, if you try to open it and you don't see it, you may be in that lucky group, so you can force it. You can override it if you go into Chrome settings, but just FYI.

MARK: Wonderful. All right, well, before we finish up, Ilya, is there anything else you want to, like, mention or plug or anything like that?

ILYA: Well, I guess a couple of things. First off, if you're running on App Engine, definitely go and check out the blog post that covers HTTP/2 and kind of that whole flow of which service are you using and how do you enable HTTP/2. Start with enabling HTTPS, and then make sure that you have the other things enabled. Watch out for the server push support. I think that's coming soon. We'll--it's actually already there, I think. We just haven't really talked about it. I think there's a theme here. And then finally, there's--I think there's QUIC, which lots of people are really excited about, so if that's--you know, if protocols are, like, something that you get excited about, then I definitely encourage you to go read the draft at the IETF, give us feedback, install the extension, play with it, and of course, if you're running on App Engine, then, you know, poke around and see how it works under the hood.

FRANCESC: Nice.

MARK: Wonderful. All right, well, thank you very much for joining us today, Ilya. That was very informative. I really enjoyed that.

FRANCESC: Yeah, thank you. I really learned a lot today.

ILYA: Awesome. Thank you guys.

MARK: So thank you. Thanks so much for joining us, but we have a wonderful question of the week. I've seen this come up a few times. People are looking at different solutions for how do you do, like, long-running jobs like, say, video transcoding, image processing, maybe even code compilation for build systems, and what options do they have available to them to sort of do that sort of stuff on Google Cloud Platform but in a--like, a cost-efficient way? Like, there should be something there. Francesc?

FRANCESC: Okay, yeah, so I mean, the obvious way to do it would be, of course, to just have a machine that's running all the time, but the problem is that maybe that's not the most cost-efficient way. So if you have something that you can divide into small chunks of work that you can retry if something fails, then let's imagine that you put those tasks in some queuing system like Pub/Sub, for instance, or a task queue.

MARK: Pub/Sub would be a great fit.

FRANCESC: And then what you can do is have a bunch of preemptible VMs. Preemptible, basically, means that those machines could be preempted--in other words, you could be asked to stop using those machines at some point--but if you can retry that task later on, then that's not really a big issue, and now the cool thing is that the cost is much, much lower.

MARK: Preemptible VMs are up to 70% cheaper.

FRANCESC: Yeah, so that's a huge thing.

MARK: That's a good savings. That's a good savings, and it's pretty much a regular machine. They can last for up to 24 hours, which is really cool, but yeah, you know, if they fall over, you're able to just be like, "Okay, that's fine. We'll just try again for this thing."

FRANCESC: And anyway, if you're writing anything that runs on the Cloud, you should always be aware that the machine could fail at any time, so basically, most of the time this doesn't require anything you shouldn't have been doing already.

MARK: Yeah. I can totally see this making sense, you know, Pub/Sub with a preemptible VM pulling stuff off. You know, that makes so much sense, and it could give you, like, some really good savings.
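The pattern Mark and Francesc describe--workers on preemptible VMs pulling tasks from a queue and only acknowledging them once the work succeeds--can be sketched locally. This is an illustration only: Python's standard-library queue stands in for Pub/Sub, and the task format and function names are made up for the example, not taken from any Google Cloud client library.

```python
import queue

def process(task):
    """Stand-in for real work, e.g. transcoding one video chunk.

    The first attempt at a task marked "fail_once" raises, which
    simulates the VM being preempted mid-task.
    """
    if task.get("fail_once") and not task.get("retried"):
        task["retried"] = True
        raise RuntimeError("worker preempted mid-task")
    return "done:{}".format(task["id"])

def run_worker(tasks):
    """Pull tasks, process them, and re-enqueue any that fail.

    With real Pub/Sub you would only ack() a message after the work
    succeeds; an unacked message gets redelivered after its ack
    deadline, which is exactly what the re-enqueue below simulates.
    """
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    results = []
    while not q.empty():
        task = q.get()
        try:
            results.append(process(task))  # "ack" happens here, on success
        except RuntimeError:
            q.put(task)  # no ack: the task is redelivered and retried
    return results

if __name__ == "__main__":
    print(run_worker([{"id": 1}, {"id": 2, "fail_once": True}]))
```

The point of the pattern is that preemption costs you nothing but a retry: because a task is only removed from the queue after it finishes, losing a VM mid-task just means another worker picks the same task up again.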

FRANCESC: Yeah, and actually, some of you may have heard about this really cool thing that Google created, MapReduce, which is just, like, a more sophisticated way of doing exactly the same idea--running a bunch of jobs that could fail and will be retried and so on, but much more complicated. Running MapReduce on preemptible VMs is a great match 'cause MapReduce will manage the fact that if a job fails, it will be retried later on, so you don't need to add anything else. For instance, maybe you have a managed instance group, and in its instance template you just say, "Oh, these VMs should be preemptible." That's it.

MARK: That's great.

FRANCESC: Just because you say that, you get a huge, huge saving.

MARK: Yeah. You can do, also, really cool stuff. All our stuff is, you know, API-driven, so if you want to spin these up with APIs and sort of manage them yourselves, you can totally do that as well. I've seen people do it. It works really, really well.
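As a sketch of that API-driven approach, here is roughly what the request body for a preemptible instance template looks like. The `scheduling.preemptible` field name follows the Compute Engine REST API, but the helper function, template name, and machine type here are illustrative choices, not something from the episode; actually sending the body (e.g. via `instanceTemplates.insert`) would need real project, disk, and network values that are omitted.

```python
def preemptible_template(name, machine_type="n1-standard-1"):
    """Build a minimal instance-template body with the preemptible flag set.

    Only the scheduling-related parts are filled in; a real template
    also needs disks, networkInterfaces, and so on.
    """
    return {
        "name": name,
        "properties": {
            "machineType": machine_type,
            "scheduling": {
                # The one flag that makes every VM created from this
                # template preemptible (and therefore up to 70% cheaper).
                "preemptible": True,
                # Preemptible VMs are terminated rather than restarted
                # or live-migrated, so these must be set accordingly.
                "automaticRestart": False,
                "onHostMaintenance": "TERMINATE",
            },
        },
    }

if __name__ == "__main__":
    body = preemptible_template("transcode-workers")
    print(body["properties"]["scheduling"]["preemptible"])
```

This matches what Francesc describes: the only change from a regular template is the scheduling block, and everything created from it--say, by a managed instance group--inherits the lower price.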

FRANCESC: Yeah. Yeah, lots of cool things that you can do with preemptible VMs--lower cost and still the same performance; it just might be preempted.

MARK: That's right. Wonderful. Well, Francesc, where are you gonna be in the next coming time? You going anywhere? It's slowing down. It's the end of the year.

FRANCESC: Well, it's the end of the year, so my next trip is actually holidays finally, so.

MARK: Oh, lovely.

FRANCESC: Yeah, that's pretty much it. After that, I know that late January I will be going to FOSDEM in Brussels.

MARK: Very nice.

FRANCESC: Yup.

MARK: I will be in Canada, in Vancouver, in mid-December--on the 12th--for a DevFest. I'll be up there with a couple of other of my colleagues, but yeah, that's it for me as well, so we're sort of heading towards the end of the year.

FRANCESC: Great.

MARK: Well, thank you once again, Francesc, for joining me for a wonderful episode of our podcast. It was a delight and a pleasure to be here with you.

FRANCESC: As you say, a delight and a pleasure too. So talk to you all next week.

MARK: See you next week.