Tubi streams thousands of free movies and series to millions of users, and we hope to leverage technology to deliver happiness to our users. Over the last year, we combined gRPC and Elixir to run several mission-critical services in production. These services serve title listings, content metadata, and other detailed information, all essential parts of Tubi's core viewing experience.

gRPC is a high-performance RPC (Remote Procedure Call) framework that originated at Google. It uses Protobuf to define RPCs, from which code can be generated in different languages. You can think of it as using HTTP/2 to transport encoded Protobuf messages. Using gRPC gives us a consistent interface between teams.
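For instance, a Protobuf service definition might look like this (a hypothetical sketch for illustration, not one of Tubi's actual schemas):

```protobuf
syntax = "proto3";

// A hypothetical listing service; all names here are made up.
service TitleService {
  rpc ListTitles(ListTitlesRequest) returns (ListTitlesReply) {}
}

message ListTitlesRequest {
  string platform = 1;
}

message ListTitlesReply {
  repeated Title titles = 1;
}

message Title {
  string id = 1;
  string name = 2;
}
```

From a definition like this, the Protobuf toolchain generates client and server stubs in each team's language, which is what keeps the interface consistent.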

Elixir is a modern functional language designed for building scalable and maintainable applications. It is built on top of Erlang, whose actor model and OTP (a set of libraries for building systems of actors) allow us to build low-latency, fault-tolerant systems with only a handful of engineers.

As the primary author of elixir-grpc, I'm grateful for all the contributions from the community and very happy to see my work help us build our services. Let's talk about some interesting lessons learned during our recent work.

Performance

One of our protobuf messages has about 30 fields, and sometimes we need to return hundreds of such messages. We found that encoding and decoding so much data was slow, which was one reason our gRPC requests were slow. In the end we improved decoding performance by 120% and encoding by 30%. I'll explain the decoding improvement in detail.

We can't manipulate binary data at a low level in a high-level language like Erlang/Elixir, so a program can easily become slow through unnecessary memory allocation, especially in a project like protobuf, which is all about processing binary data. But we can follow Erlang's best practices for binary handling to avoid this kind of problem.

For example, if we want to parse a binary and sum up the last 7 bits of every byte, there are two possible implementations in Elixir:
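The original listings aren't reproduced here; a minimal sketch of the two variants might look like this (module and function names are assumptions):

```elixir
defmodule BinaryParseSlow do
  def sum(bin), do: sum(bin, 0)

  defp sum(<<>>, acc), do: acc

  defp sum(bin, acc) do
    {value, rest} = take_value(bin)
    sum(rest, acc + value)
  end

  # Returning `rest` from a helper forces the compiler to build a
  # sub binary on every call (the BINARY CREATED warning below)
  defp take_value(<<_::1, value::7, rest::binary>>), do: {value, rest}
end

defmodule BinaryParseFast do
  def sum(bin), do: sum(bin, 0)

  defp sum(<<>>, acc), do: acc

  # The rest of the binary flows straight into the recursive call,
  # so the compiler reuses the match context instead of creating a
  # sub binary (the OPTIMIZED warning below)
  defp sum(<<_::1, value::7, rest::binary>>, acc) do
    sum(rest, acc + value)
  end
end
```

Both return the same result, e.g. `BinaryParseFast.sum(<<1, 2, 255>>)` gives `130` (1 + 2 + 127); they differ only in how the compiler can treat the leftover binary.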

BinaryParseFast is faster because the Erlang compiler avoids creating sub binaries when it knows the rest of the binary is passed directly to another function. As the benchmark shows, the fast version is twice as fast as the slow one. Erlang provides a compile option that gives us hints about potential problems:

$ export ERL_COMPILER_OPTIONS=bin_opt_info
$ mix run binary_parse_slow.exs
warning: BINARY CREATED: binary is used in a term that is returned from the function
  binary_parse_slow.exs:15

$ mix run binary_parse_fast.exs
warning: OPTIMIZED: match context reused  # This is good
  binary_parse_fast.exs:18

OPTIMIZED means the Erlang compiler will optimize that code; for other warnings, you should try to improve the code. These hints helped us a lot when optimizing protobuf-elixir's performance.

Besides this, elixir-grpc now supports compression. Protobuf generally produces smaller payloads than JSON, but compression can still help in some cases, such as when protobuf messages contain many strings. With a utility like gzip to compress the data, you may reduce network traffic and improve performance.
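To see why string-heavy payloads benefit, you can try gzip directly with Erlang's built-in `:zlib` module on some repetitive data (the sample string here is made up):

```elixir
# Repetitive string data compresses well; :zlib ships with Erlang/OTP,
# so no extra dependency is needed.
payload = String.duplicate("FireTV;Web;iOS;Android;", 200)
compressed = :zlib.gzip(payload)

IO.puts("original:   #{byte_size(payload)} bytes")
IO.puts("compressed: #{byte_size(compressed)} bytes")

# gzip round-trips losslessly
^payload = :zlib.gunzip(compressed)
```

Whether compression is a net win depends on payload shape and CPU cost, so it's worth measuring with your own messages.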

Performance tuning is never finished; many areas can still be improved, like Protobuf encoding, HTTP/2 library optimization, and so on. And even though we have improved performance and published benchmarks, you should still run benchmarks yourself with your own test cases. Please let us know if you find the performance doesn't fit your situation.

Stability

It's difficult to judge a piece of software's stability before many users have exercised it. If some of our Elixir services are unavailable, our users can't watch titles, which is perhaps the worst thing that can happen to a video streaming company.

We gain confidence by:

Erlang/OTP and Cowboy, a production-ready HTTP server, provide a solid foundation.

Interoperability tests in elixir-grpc cover the features a gRPC implementation should have: large responses, streaming requests, errors, etc.

Plenty of tests were run for hours against our services with our own dataset.

Now we have multiple business-critical production services running on elixir-grpc.🎉

Envoy and interceptors

Envoy is a proxy deployed as a sidecar alongside each service. It manages the connections between services and provides useful features like dynamic service discovery, load balancing, retries, and so on, so we don't need to implement all of these features in every service. It has been a first-class component in Tubi's infrastructure for some time. Using it saves us a lot of work (for example, gRPC load balancing, since elixir-grpc doesn't have the built-in client-side load balancing the official gRPC guide describes).

Envoy exposes lots of useful metrics, like request rate and response time. But some detailed metrics are still missing, such as request rate and response time per gRPC method. You can collect these detailed metrics using interceptors (middleware). elixir-grpc has built-in interceptors, like a statsd interceptor and a Prometheus interceptor, and you can write your own. For example, we have many platforms, like FireTV, Web, iOS, and Android, whose performance and QPS differ, so we wrote an interceptor that adds a platform tag to the metrics.
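A minimal sketch of such an interceptor, assuming elixir-grpc's `GRPC.ServerInterceptor` behaviour (`init/1` and `call/4`); `platform_of/1` and `Metrics.timing/3` are hypothetical helpers, not part of the library:

```elixir
defmodule Tubi.PlatformInterceptor do
  @behaviour GRPC.ServerInterceptor

  @impl true
  def init(opts), do: opts

  @impl true
  def call(req, stream, next, _opts) do
    # Extract the platform tag (hypothetical helper; a real one might
    # read a request header or a field on the request message)
    platform = platform_of(req)

    start = System.monotonic_time(:millisecond)
    result = next.(req, stream)
    elapsed = System.monotonic_time(:millisecond) - start

    # Emit per-method, per-platform timing (Metrics is a stand-in for
    # whatever statsd/Prometheus client you use)
    Metrics.timing("grpc.response_time", elapsed, tags: [platform: platform])
    result
  end

  defp platform_of(_req), do: "unknown" # placeholder for real extraction
end
```

The interceptor wraps the `next` continuation, so it composes with the built-in logging and metrics interceptors in whatever order the server declares them.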

Corner cases

Though Cowboy and Gun (its companion HTTP client) are very good, their HTTP/2 support is still a new feature, so they didn't handle some corner cases well; for example, you might get errors when deploying your services. Some of these problems can be worked around with simple retries, but retries make your requests slow. We fixed these problems and are working to merge the improvements upstream.

The first issue is flow control. HTTP/2 adds flow control to allow applications to control how fast peers send data. Windows on both sides determine whether DATA frames (which carry the HTTP body) may be sent to the peer, and WINDOW_UPDATE frames must be sent to replenish the peer's windows after DATA frames are received. If WINDOW_UPDATE frames are not sent correctly, HTTP/2 communication gets stuck, and that is a big problem.

In Cowboy, the windows are correct most of the time but can become wrong if one of the streams terminates early, for example by raising an error. This prevents the other streams on the same connection from sending messages; those streams then time out and new connections have to be created.

HTTP/2 flow control example

In the above graph, the client maintains a connection-level window and a stream-level window to know how many bytes it can send to the server. To keep it simple, say the initial windows are 20000 bytes and the flow control algorithm is simply to send WINDOW_UPDATE after receiving DATA. Then we have:

Step 1: The client sends a DATA frame with 10000 bytes on stream 1, so both windows are decreased by 10000.

Step 2: The server sends a WINDOW_UPDATE frame back, so the client's windows are increased by 10000. This is the normal case.

Step 3: The client sends a DATA frame with 20000 bytes on stream 3, so the windows are decreased to 0.

Step 4: The server closes the stream before sending a WINDOW_UPDATE.

Step 5: The HTTP/2 spec requires peers to send WINDOW_UPDATE even when a stream is closed abnormally; otherwise, the connection-level window will be wrong. This is the case Cowboy didn't handle well.

Step 6: Now a new stream can't send DATA anymore because the connection-level window is 0. The client will hang here.
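The window bookkeeping in the steps above can be sketched as a small model (illustrative only, not Cowboy's actual implementation):

```elixir
defmodule FlowControl do
  # The sender tracks one connection-level window plus one window
  # per stream; both must be positive to send DATA.
  defstruct conn: 20_000, streams: %{}

  def new(ids), do: %__MODULE__{streams: Map.new(ids, &{&1, 20_000})}

  # Sending DATA consumes the stream's window AND the connection's
  def send_data(fc, id, bytes) do
    %{fc | conn: fc.conn - bytes,
           streams: Map.update!(fc.streams, id, &(&1 - bytes))}
  end

  # A WINDOW_UPDATE replenishes the windows. The spec requires the
  # connection-level update even if the stream closed abnormally.
  def window_update(fc, id, bytes) do
    %{fc | conn: fc.conn + bytes,
           streams: Map.update(fc.streams, id, bytes, &(&1 + bytes))}
  end

  def can_send?(fc, id, bytes) do
    fc.conn >= bytes and Map.get(fc.streams, id, 0) >= bytes
  end
end
```

Walking the six steps through this model shows the failure: after step 4 the connection-level window is never replenished, so `can_send?` stays false for every stream on the connection, including brand-new ones.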

The second issue is that Gun didn't handle GOAWAY well. GOAWAY frames are used to shut down connections gracefully, but because of a bug in Gun, you could get errors when deploying your services.

As the graph below shows, when a service is being deployed, the HTTP/2 server first sends a GOAWAY frame containing the stream identifier of the last peer-initiated stream. The client then continues handling the streams whose ids are at most that identifier (streams 1 and 3) until they finish, but doesn't create new streams on that connection (stream 5).

Gracefully shut down a connection
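The rule the client is expected to follow can be expressed as a one-liner (illustrative, not Gun's code):

```elixir
defmodule Goaway do
  # After receiving GOAWAY with `last_stream_id`, client-initiated
  # streams the server already accepted should run to completion;
  # streams above that id were never processed by the server and
  # must be retried on a new connection.
  def classify(stream_ids, last_stream_id) do
    Enum.split_with(stream_ids, &(&1 <= last_stream_id))
  end
end
```

For the graph's scenario, `Goaway.classify([1, 3, 5], 3)` returns `{[1, 3], [5]}`: streams 1 and 3 keep running, stream 5 moves to a fresh connection.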

The problem is that, as the graph below shows, Gun returned an error immediately instead of continuing to handle the existing streams, so clients got errors for unfinished streams (stream 3).

Gun doesn’t handle GOAWAY well

Conclusions

With the success of this combination of gRPC and Elixir, we are going to build more and more services on elixir-grpc. Building a project like elixir-grpc was not very hard in the beginning; making it production-ready is where the real fun and deep learning happen. As our business grows rapidly, we will face more and more challenges, and we are always ready to solve these kinds of interesting technical problems. If you want to have fun working on cutting-edge technology at scale, come join Tubi!