Good morning.

My name is Josh Graessley, and I am really excited to be here this morning to tell you about Network.framework.

Network.framework is a modern alternative to sockets.

Today, we're going to talk about modernizing transport APIs. This will give you some context to understand what Network.framework is, how it fits into the system, and whether or not it's the right thing for your application to be using.

We'll introduce you to the API by walking you through making your first connections.

We'll talk about how you can use this API to really optimize your data transfers and go way beyond the performance of anything you can do with sockets.

We'll talk about how this API can help you handle some complex mobility challenges, and we'll wrap up with information on how you can get involved and start adopting.

First, I'd like to spend a little bit of time talking about modernizing transport APIs.

Now when I say transport API, I'm talking about any API that lets you send and receive arbitrary data between two endpoints on a network, and that's a pretty broad definition, and there are a lot of APIs that could fall under this category.

Perhaps the most ubiquitous is sockets. Sockets has been with us for over 30 years, and I don't think it's an exaggeration to say that sockets has changed the world, but the world has kept changing.

And as a consequence, using sockets to write apps for today's internet is really hard.

There are three primary areas in which it's very difficult to use sockets well.

The first one is Connection Establishment.

There's a whole host of reasons that establishing connections can be really difficult with sockets.

For starters, sockets connect to addresses. Most of the time, though, you have a host name, so you're going to have to resolve that host name to an address. When you do that, you often end up with way more than one address: some IPv4 addresses, some IPv6 addresses. Now you've got this challenge: which address should you try to connect to, and in what order? How long do you wait before you try the next one? You can spend years trying to perfect this.

I know because we have.

Once you get past the dual stack host problems, you run into a whole bunch of other issues.

There are some networks that use something called Proxy Auto-Configuration, or PAC.

On these networks, there's a JavaScript file that you get. You pass a URL into that JavaScript, it runs, and it spits out an answer that says either you can go direct, or you have to use this SOCKS proxy over here, or that HTTP CONNECT proxy over there.

And now your app has to support SOCKS proxies and HTTP CONNECT proxies, and this can be really difficult to do well.

And the most difficult thing is that you may not have one of these networks to test on, so you may get a bug report from one of your customers, and they may complain that it's not working well in their environment.

And you may want to add code to fix the problem, but once you've got it in there, you really don't have a good way to test it. You'd end up having to rebuild the whole environment your customer has just to reproduce the issue.

It can be a real challenge.

So connecting with sockets is really hard.

The second thing that becomes challenging with sockets is data transfer.

There's a lot of reasons that transferring data with sockets can be really difficult.

The primary problem is the read and write model itself. If you're using blocking sockets, it's pretty simple, but you're tying up a thread, and it's really not a great idea to be tying up a thread while you're waiting to read or write data.

You can switch to nonblocking, but then you end up with a whole lot of other challenges that you run into.

When you're using nonblocking sockets, you may tell the kernel, "I'd like 100 bytes," and the kernel will come back and say, "I've got 10 bytes for you; why don't you come back later."

And now you have to build a state machine to keep track of how many bytes you read versus how many bytes you want to read.

This can be a lot of work, and getting it to perform well can be a real challenge.

On top of all of that, you really shouldn't be reading and writing to sockets directly, because you should be using something like Transport Layer Security, or TLS.

Sockets don't support TLS, so you're probably using some other library that is handling TLS for you and reading and writing to the sockets on your behalf, or you're writing the glue code between that library and sockets, and you have to figure out how to get all this to work with all the crazy connection logic that you put in ahead of time.

There's a lot here that can be really difficult.

Finally, mobility can be really challenging with sockets for a variety of reasons.

I think a lot of this boils down to the fact that when sockets came out, a lot of the devices required more than a single person to move them, and they were connected with a single wire, and they had a static IP address, and everything was stable and simple.

And today, we have these incredibly powerful devices in our pocket with multiple radios that may be on at the same time, and some of them are moving from network to network, and your application has to handle all these transitions well to provide a seamless experience to your customers.

Sockets does nothing to help you with this.

You can use routing sockets, but it's really, really difficult.

We think a transport API should do better.

Fortunately, on our platforms, as an application developer, you have a great API in URLSession.

URLSession handles all of these problems for you.

It's really focused on HTTP, but it also has a stream task that gives you raw access to TCP and TLS connections.

Now you might be looking at this, and, unless you cheated by looking at the description in the WWDC app, you might think that URLSession is built on the same primitives that you would use yourself.

But it turns out that's not the case. URLSession is built on top of something we call Network.framework.

URLSession really focuses on all of the HTTP bits, and it offloads a lot of the transport functionality to Network.framework.

Network.framework is something we've been working on for a number of years, and in supporting URLSession, we've learned a lot, and we've taken a lot of those lessons to the IETF. A number of our engineers regularly participate in the IETF and meet with engineers from other companies, and they've been discussing a lot of what we've learned in the transport services working group.

And in those discussions, we've got some great feedback, and we've brought that back in and improved Network.framework based on that.

We are really excited to announce this year that your applications can take advantage of this same library directly now.

Now we know that one of the things people love about sockets is that it gives them very fine-grain control over just about everything, and they're really loath to give that up. So as we developed Network.framework, we wanted to make sure that it did the right thing by default, in a way that sockets doesn't, while still giving you all the knobs that sockets does.

And it's kind of got this gradient, so the more knobs you turn, the more complex it becomes.

It gives you all the power you need, but you don't have to pay for the complexity unless you actually need some of it.

Network.framework has incredibly smart connection establishment.

It handles the dual stack cases. It handles IPv6 only networks. It handles PAC. It handles proxies.

It will help you connect on networks that are otherwise very difficult to deal with.

It has an incredibly optimized data transfer path that lets you go way beyond the performance of anything you can do with sockets, and Tommy will cover that in a little bit.

It has support for built-in security.

It supports TLS and DTLS by default.

It's really simple to use.

It has great support for mobility. It provides notifications about network changes that are relevant to the connections that your application is establishing.

It's available on iOS, macOS, and tvOS as a C API with automatic reference counting, so it's easy to use from Objective-C, and it has an incredible Swift API.

With that, I'd like to turn it over to Tommy Pauly, to walk you through making your first connection.

Thank you.

All right, hello everyone.

My name is Tommy Pauly, and I'm on the networking team here at Apple.

And so I'm sure a lot of you are really excited to start seeing how you can start adopting Network.framework in your apps.

And the best place to start and just dive right in is by making your first connection. And you're going to be making your connection from your local device, to your server, or to some other peer device that's on your local network.

But you may be wondering, what kinds of connections are appropriate to use with Network.framework?

What are the use cases? And so let's explore first some scenarios of apps that may be using sockets today and would really benefit a lot by taking advantage of Network.framework going forward.

So the first one of these I want to highlight is gaming apps.

Gaming apps often use UDP to send real-time data about the game state between one device and another.

And they really care about optimizing for latency and making sure there's no lag or anything being dropped there.

If you have an app like this, you're going to love how Network.framework allows you to really optimize your UDP sending and receiving to be faster than ever before, with the least latency possible.

Another type of app that would take a lot of advantage from Network.framework is live-streaming apps.

So live-streaming apps often use a combination of UDP and TCP, but the key point here is that they're generating data on the fly. If you have new video frames or audio frames, you need to make sure that those are paced well and that you're not incurring a lot of buffering on the device or on the network.

The asynchronous model for reading and writing in Network.framework is going to be perfect for making sure you reduce that buffering.

And the last case I want to highlight are mail and messaging apps.

So these are going to be using a lot more traditional protocols, just TLS over TCP.

However, it's really critical for apps like this to handle network transitions really gracefully.

Oftentimes if you have a messaging app, your user is going to be using your app as they're walking out of the building, texting their friend to let them know that they're on their way.

And you want to make sure that you're handling that transition from the WiFi network in the building to the cell network that they're going onto, and that it doesn't take a long time for that message to actually get to their friend.

And these are just three examples of the types of apps that may use low-level networking like this.

There are many other types of apps that could take advantage of this, so if you have an app like one of these, or some other use case that currently uses sockets, I invite you to follow along and see how your app can benefit. So to get started, I want to focus on that last case, the simplest case of mail and messaging apps, and look at how they establish connections.

So when you want to establish your connection to a server, let's say it's for a mail connection, IMAP with security, with TLS, you start with your hostname, mail.example.com.

You have a port you want to connect to, port 993, and you want to be using TLS as well as TCP. So how would this look in sockets traditionally? Something like this to get started.

You would take your host name.

You would call some DNS API to resolve that host name.

Let's say this is getaddrinfo.

You'll get back one or more addresses.

You'll have to decide which one you want to connect to first.

You'll call socket with the appropriate address family.

You will set a series of socket options.

Let's say you want to make your socket nonblocking like Josh mentioned before.

Then you call connect to start TCP, and then you wait for a writable event.

And this is before you do anything with TLS, and that's a whole host of other problems.

So how does this look in Network.framework? And we hope that it looks very familiar to you but a little bit simpler.

So the first thing you do is you create a connection object.

And a connection object is based on two things.

You have an endpoint, which defines the destination you want to get to. This could be the address, the IP address that you had before, but usually, like in this example, we have a host name and a port, so our endpoint can just be that host name and port.

It could also be a Bonjour service that I want to connect to.

And then I also have parameters.

Parameters define what protocols I want to use, TLS, DTLS, UDP, TCP. It defines the protocol options that I want as well as which paths I want to use to connect over.

Do I want to just connect over anything, or do I only want to use WiFi? Once you've configured your connection, you simply call start to get things going, and then you wait for the connection to move into the ready state.

And that's all you need to do to bring up a full TLS connection to your server.

And I think you're going to love how this looks in Swift.

So here's what you do.

You first import the network module.

Then, you create an NWConnection object.

So an NWConnection in either Swift or in C is the fundamental object for reading and writing data.

In this case, we have a convenience that initializes your endpoint with a host and a port, so I give it my hostname, mail.example.com, and the port. And in this case, it's a well-known port.

It's imaps, so I can just put that in Swift very easily, but I could also put any other numeric literal there.

And then to define what protocols I want to use, I pass parameters, and since this is a client connection, I only want default, TLS, and TCP parameters.

It can be as simple as just writing dot TLS, and now I have a fully-fledged TLS connection.
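A minimal sketch of the connection creation just described might look like this (the hostname and port mirror the example from the talk):

```swift
import Network

// Connect to mail.example.com on the well-known IMAPS port (993),
// using the default TLS-over-TCP parameters.
let connection = NWConnection(host: "mail.example.com",
                              port: .imaps,
                              using: .tls)
```

The `.tls` parameters give you TLS running over TCP with sensible defaults, with no extra configuration required.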

The next thing I do is I set a state update handler to handle all the transitions that my connection might go through.

The first state and the most important one that you want to handle is the ready state.

Ready means that your app is ready to read and write data on this connection, it's totally established, and if you're using TCP and TLS, this means that the TLS handshake is finished.

We also though let you know about the waiting state.

So last year in URLSession, we introduced waits for connectivity, and the waiting state of an NWConnection is exactly the same thing.

And this is on always by default.

So when you create your connection and when you start it, if there is no network available, we won't fail, we'll just tell you we're waiting for a network to be available. We'll give you a helpful reason code, but you don't have to do anything more to watch network transitions yourself.

Mobility is an essential, critical part of this API.

And we'll also let you know if there's a fatal error. Let's say we had to reset from the server or TLS failed, and we'll give you that as a failed event.

So once you've set this up, you simply call start and provide the dispatch queue upon which you want to receive callbacks. So I want to dig into what happens when you call start. What's actually going on? Here's a little state machine, the innards of the NWConnection. We begin at the setup state, and when we call start, we move into the preparing state.

So the preparing state does a lot more than just calling connect on a TCP socket.

For a TCP socket, that would just send out a SYN packet across to the server that you're trying to reach.

But when you call start on an NWConnection, it actually handles all of the things that Josh was mentioning earlier.

It evaluates the network you're on and tries to make the fastest connection possible for you. I want to dig into that a little bit more.

So this is what we call Smart Connection Establishment.

So the very first thing that we do when you call start is that we take your endpoint, and then we evaluate what are all the networks that are currently available to me.

In this case, we have WiFi and cellular.

And generally we prefer the WiFi network because it has less cost to the user.

So we'll look at that one first.

Then we check, are there any special configurations on this network.

Is there a VPN? Is there a proxy? And we'll evaluate that for you.

In this case, let's say that there is a proxy configured with an automatic configuration file that also lets you go direct if the proxy doesn't apply to your connection.

So we'll evaluate both of those options.

We'll check if we need to use the proxy, go ahead and connect to it, create a TCP connection there.

But if we don't need it, we'll do DNS on your behalf going directly, get back all of the DNS IP address answers, and connect to them one after the other, leaving them going in parallel. We're racing them to get you the fastest connection possible.

And then, if something goes wrong with WiFi, let's say the WiFi radio quality goes really bad because you're walking away from the building, we can actually take advantage of the feature called WiFi assist and fall back seamlessly to the cellular network, do DNS resolution there, and try the connections one after the other. So this way your connection establishment is very resilient, handles VPNs, handles proxies for you, and gets you the best connection possible. Now, of course, you may not want to try all of these options. You may want to restrict what the connection establishment does, and so we have many different knobs and controls to let you do that, and I want to highlight just three of them today.

The first is you may not want to use expensive networks, like a cellular network, because this connection is only appropriate to use over WiFi.

So within the parameters for your connection, there are options to control the interfaces that you use.

So if you don't want to use cellular, simply add cellular to the list of prohibited interface types.

It's even better, actually, to also prohibit expensive networks in general, because this will also block the usage of personal hotspots on a Mac, let's say.

Another way that you can restrict your connection establishment is by choosing specifically the IP address family that you want to use.

So let's say that you really love IPv6 because it's faster and it's the future.

You don't want to use IPv4 at all on your connection.

And you can do this by going to your parameters, digging down into the IP-specific options, and here you'll have options that you'll find familiar from your socket options on a socket today, and you can also define specifically which IP version you want to use.

And this will impact your connection as well as your DNS resolution.

And lastly, you may not want to use a proxy on your given connection. Maybe it's not appropriate for your connection to go through a SOCKS proxy.

In that case, you can simply prohibit the use of proxies as well.
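As a sketch, the three restrictions just described could all be configured on a parameters object like this (apply only the ones your app actually needs):

```swift
import Network

let parameters = NWParameters.tls

// 1. Avoid cellular, and expensive networks in general
//    (the latter also covers personal hotspots).
parameters.prohibitedInterfaceTypes = [.cellular]
parameters.prohibitExpensivePaths = true

// 2. Restrict to IPv6 only; this affects both the connection
//    and the DNS resolution performed on your behalf.
if let ipOptions = parameters.defaultProtocolStack.internetProtocol
    as? NWProtocolIP.Options {
    ipOptions.version = .v6
}

// 3. Never route this connection through a configured proxy.
parameters.preferNoProxies = true

let connection = NWConnection(host: "mail.example.com",
                              port: .imaps,
                              using: parameters)
```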

So that's what happens in the preparing state.

I mentioned before that things can go wrong. You can have no network when you try to establish a connection. After going into preparing, if we find there are no good options, DNS failed, there was no network, maybe you're in airplane mode, we'll move into the waiting state and let you know the reason for that.

And we'll keep going back into preparing every time the network changes and the system thinks, yeah, there's a good chance that your connection will become established now, and we'll handle all that for you and let you know every time that we're reattempting.

Eventually, hopefully your connection does get established.

At this point, we'll move into the ready state.

And the ready state, as I mentioned before, is when your connection is fully established. So this is all of the protocols in your stack up to TLS, for example.

At this point, you can read and write, and this is also where we give you callbacks about the network transitions that you're going through.

So if your connection is established and then you change networks, we'll give you updates about that so you can handle the mobility gracefully, and we'll be talking about this later on in the talk.

If there's an error on the connection, either during the connection establishment or after you've already connected, we'll give you the failed state with an error, and then once you're totally done with the connection, let's say you already closed it or you received a close from the other side, and you want to just invalidate the connection, you call cancel, and we move into the cancelled state. And this is guaranteed to be the very last event that we will deliver to your object so you can clean up any memory that you have associated and move on.

So that's it. That's an overview of the basic lifetime of a connection object in Network.framework, and to show you how you can use this to build a simple app, I'd like to invite Eric up to the stage.

Thanks Tommy.

I'm Eric Kinnear, also from the networking team here at Apple, and I'm really excited to build with you an example application using Network.framework.

We're going to use the live streaming example that Tommy mentioned earlier to build an application that can take the camera input on one device and send it over a network to be displayed on another device.

Because we're going to be continuously generating live video frames, we're going to use UDP to send those packets across the network.

So how do we do this? Well, first, we need a capture session with the camera so that we can receive the video frames from the image sensor.

For the sake of this example, we're not going to use any video codecs or other compression.

We're simply going to take the raw bytes from the camera, ship them across the network, and display them on the other side.

In order to make this happen, we need to divvy those frames up into smaller chunks that we can send in UDP packets.

Of course, to send those UDP packets on the network, we need a connection. Switching over to the other device, we need a listener that can receive that incoming connection and read the data packets off the network.

From there, we simply reverse the earlier process, reassembling the video frames and sending them to the display so that we can see them on the screen.

To keep things simple, we've already abstracted out the camera and the display functionalities so that we can focus just on the parts that use Network.framework.

There's one piece we haven't yet covered here, and that's the listener.

So we're going to take a minute to do that now.

Listener functionality is provided by the NWListener class, which you can create using the same parameters objects that you used to configure connections.

It's really easy to set up a listener to advertise a bonjour service.

In this case, we'll use camera.udp. When a new connection is received by a listener, it will pass that connection to a block that you provide as the newConnectionHandler.

This is your opportunity to perform any configuration that you choose on that connection, and then you need to call start to let that connection know that it's time to get going. Similarly, you need to call start on your listener, and again, just like with connections, you provide a dispatch queue where you want these callbacks to be scheduled.
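A minimal sketch of that listener setup might look like this (note that Bonjour service types are conventionally written in the form "_camera._udp"; the talk abbreviates it as camera.udp):

```swift
import Network

let queue = DispatchQueue(label: "listener")

// Create a UDP listener; force-try only for brevity in this sketch.
let listener = try! NWListener(using: .udp)

// Advertise it as a Bonjour service.
listener.service = NWListener.Service(type: "_camera._udp")

listener.newConnectionHandler = { connection in
    // Configure the incoming connection here if desired, then start it.
    connection.start(queue: queue)
}

// Start the listener, delivering callbacks on the given queue.
listener.start(queue: queue)
```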

So that's listeners.

If you think about it, we just implemented the equivalent of calling listen on a UDP socket.

Except listen doesn't actually work on UDP sockets. Now we're ready to build our app in Xcode.

So here we've got our application, and we've got a bunch of files over here that already handle the camera and the display functionality, so we're going to focus just on the UDPClient class and the UDPServer class.

The UDPClient is going to be responsible for creating the connection to the other side and sending the frames across.

Likewise, the server is responsible for creating the listener, accepting incoming connections, reading the data off those connections, and sending them up to the screen.

Let's start with the client. My client class has an initializer that takes a name, which is a string describing the Bonjour name that we want to connect to. I'll create my connection by simply calling NWConnection and passing in a service endpoint.

Using the name that I was provided and camera.udp as the type. We also pass the default UDP parameters.
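A sketch of that connection creation (the name would be whatever the user typed in; "_camera._udp" is the conventional full form of the service type):

```swift
import Network

let name = "Demo Mac"  // hypothetical Bonjour name supplied by the user

// Connect to the advertised service over UDP.
let connection = NWConnection(
    to: .service(name: name, type: "_camera._udp", domain: "local", interface: nil),
    using: .udp)
```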

As Tommy mentioned, we can use a state update handler to check for the ready and the failed states.

Here, when our connection is ready, we'll call sendInitialFrame, which we'll implement in a minute. Because we're using UDP and there's no other handshake, we're going to take some data and send it across to the network to the other device and wait for it to be echoed back before we start generating lots of video frames and dumping them on the network.

We need to remember to call start on our connection, and we provide the queue that we created up above. Let's implement send initial frame.

Here we're going to take the literal bytes "hello" and create a data object using them. To send content on a connection, we can call connection.send and provide that data object as the content.

We provide a completion handler in which we can check for any errors that may have been encountered while sending.

Since we expect this content to be immediately echoed back, we turn right around and call connection.receive to read the incoming data off of the connection.

In that completion handler, we validate that the content is present, and if so, we let the rest of the application know that we're connected, and it should bring up the camera hardware and start generating frames.
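The initial handshake just described, as a rough sketch (an assumed connection is passed in; onConnected is a hypothetical callback that tells the app to bring up the camera):

```swift
import Network

func sendInitialFrame(on connection: NWConnection,
                      onConnected: @escaping () -> Void) {
    let helloMessage = "hello".data(using: .utf8)!

    // Send the handshake bytes to the server.
    connection.send(content: helloMessage, completion: .contentProcessed { error in
        if let error = error {
            print("Handshake send error: \(error)")
            return
        }
        // Expect the server to echo the bytes straight back.
        connection.receiveMessage { content, _, _, _ in
            if content != nil {
                // Echo received: safe to start generating camera frames.
                onConnected()
            }
        }
    })
}
```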

When those frames are generated, the rest of the application knows to call send on our UDPClient class and pass it an array of data objects representing the video frames that we're trying to send. Because we're going to be doing a lot of send operations in very quick succession, we're going to do them within a block that we passed in connection.batch. Within this block we're going to go through every frame in that array of data objects and pass each one to connection.send.

Similarly to above, we use the completion handler to check for any errors that were encountered while sending.
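The batched send just described might be sketched like this (frames is a hypothetical array of Data objects handed over by the camera pipeline):

```swift
import Network

func send(frames: [Data], on connection: NWConnection) {
    // Batch the sends so the framework can coalesce the work
    // for many packets enqueued in quick succession.
    connection.batch {
        for frame in frames {
            connection.send(content: frame, completion: .contentProcessed { error in
                if let error = error {
                    print("Send error: \(error)")
                }
            })
        }
    }
}
```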

And that's it.

We've got our UDPClient class, and we're ready to go. Let's look at the server.

On the server side, we need a listener that can accept the incoming connections.

We need to respond to that handshake that we just sent from the client, and we need to read data off the network so that we can push it up to the display.

Starting with the listener, we simply create an NWListener using the default UDP parameters.

If I wanted to, this is also my opportunity to use those parameters to tell the listener to listen on a specific local port.

But since we're using a Bonjour service, we don't need to do that.

To set up that service, I'm going to set the service property on the listener to a service object of type camera.udp.

Notice that I don't pass the name here because I want the system to provide the default device name for me.

I also provide a block to the serviceRegistrationUpdateHandler, which is going to be called anytime the set of endpoints advertised by the system changes.

Here, I'm interested in the case where an endpoint is added, and if it's of service type, I tell the rest of the application the name that is being advertised, that default device name that I asked the system to provide, so that I can display it in the UI and have my users type it in somewhere else.
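That registration-update handling could be sketched like this (again assuming the "_camera._udp" service type):

```swift
import Network

// Force-try only for brevity in this sketch.
let listener = try! NWListener(using: .udp)

// No explicit name, so the system advertises the device's default name.
listener.service = NWListener.Service(type: "_camera._udp")

// Called whenever the set of endpoints advertised by the system changes.
listener.serviceRegistrationUpdateHandler = { change in
    // We only care about newly added service endpoints here.
    if case .add(let endpoint) = change,
       case .service(let name, _, _, _) = endpoint {
        // The system-chosen default device name, for display in the UI.
        print("Advertising as \(name)")
    }
}

listener.start(queue: DispatchQueue(label: "server"))
```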

I'm going to set a new connection handler on the listener, which will be called every time the listener receives a new incoming connection. I could do some configuration on these connections, but the default settings are fine here, so I simply call connection.start and pass it the queue. Here, I notify the rest of the application that I've received an incoming connection, so it can start warming up the display pipeline and become ready to display the video frames.

I'll also call receive on myself, which we'll implement in a minute, to start reading that data off of the network and shipping it up to the display pipeline. Just like with connections, listeners have state update handlers, which I'll use to check for the ready and the failed states.

I need to remember to start my listener, which I do by calling listener.start and passing it that queue we created up above.

So I've got my listener ready, I just need to read data off the network and implement this receive function.

Here, we start by calling connection.receive and passing it a completion handler.

When data comes in off of that connection, we'll see if we're not yet connected.

If we weren't connected, this is probably that handshake the client starts by sending. We'll simply turn right around and call connection.send, passing that same content back so it will be echoed over to the client.

We then remember that we're connected, and on all subsequent receive callbacks, we will simply tell the rest of the application that we received this frame and it should send it up to the display pipeline so that we can see it on the screen. Finally, if there were no errors, we call receive again so that we receive subsequent frames and send them up to the display, putting together a video from each of these individual images.
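A simplified sketch of that receive loop (using receiveMessage, the datagram-oriented variant of receive, since each callback then delivers one complete UDP datagram; handleFrame is a hypothetical hand-off into the app):

```swift
import Network

final class UDPServerSketch {
    private var isConnected = false

    func receive(on connection: NWConnection) {
        connection.receiveMessage { [weak self] content, _, _, error in
            guard let self = self, let content = content else { return }
            if !self.isConnected {
                // First datagram is the client's handshake: echo it back.
                connection.send(content: content,
                                completion: .contentProcessed { _ in })
                self.isConnected = true
            } else {
                // Subsequent datagrams are video frames for the display.
                // handleFrame(content)  // hypothetical app hand-off
            }
            if error == nil {
                // Keep reading so we get every subsequent frame.
                self.receive(on: connection)
            }
        }
    }
}
```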

So that's it. We've got our UDPClient, we've got our UDPServer, let's try it out.

I'm going to run the client on my phone here, and I'm going to run the server on my Mac so we can see it on the big screen.

Here, the server just came up, and we see that it's advertising as Demo Mac, which is the default name I got when I asked the system to provide one.

That's on my phone. If I tap connect, all of a sudden, I can see video frames being streamed across the network over UDP, live.

So here we just saw how quickly I was able to bring up a UDPClient that could connect to a Bonjour service, send a handshake, wait for that to be processed, take the video frames coming from the camera, and ship them across the network.

The server side brought up a Bonjour listener.

It advertised a service, it received incoming connections, responded to the handshake, and sent them all to the display so that we could see them.

And now to take you through optimizing that data transfer in a little more detail, I'd like to invite Tommy back up to the stage.

Thank you, Eric.

That was a really cool demo, and it's super easy to get this going. So now we've covered the basics: we know how to establish connections outbound and how to receive connections inbound. But the real key part of Network.framework, the killer feature here, is the way that it can optimize your performance and go beyond what sockets was able to do.

And I want to start with the way that you in your application interact with your networking connections in the most basic way, which is just sending and receiving data.

And these calls are very simple, but the nuances of how you handle sending and receiving can really make a huge difference for the responsiveness of your app and how much buffering is going on on the device and the network.

So the first example I want to walk through is when we're sending data, in the application very much like what Eric just showed you, something that's live streaming, something that is generating data on the fly.

But in this case, let's talk about when we're sending it over a TCP stream, a TCP stream that can back up on the network, that has a certain window that it can send.

So how do we handle this? So here's a function to send a single frame. This is some frame of data that your application has generated.

And the way that you send it on the connection is you simply call connection.send and pass that data.

Now if you're used to using sockets to send on your connections, you're either using a blocking socket, in which case, if you have a hundred bytes of data to send and there's not room in the send buffer, it'll actually block your thread and wait for the network connection to drain out; or you're using a nonblocking socket, in which case that send may not send your complete data. It'll say, oh, I only sent 50 bytes of it. Come back some other time to send the next 50 bytes.

This requires you and your application to handle a lot of state about how much you have actually done of sending your data.

So the great thing about a network connection is you can simply send all of your data at once, and you don't have to worry about this, and it doesn't block anything.

But then, of course, you have to handle what happens if the connection is backing up, because we don't want to just send a ton of data unnecessarily into this connection if you want a really responsive, live stream of data.

And the key here is that callback block that we give you.

It's called contentProcessed.

And we'll invoke it whenever the network stack consumes your data.

So this doesn't mean that the data has necessarily been sent out or acknowledged by the other side.

It's exactly equivalent to the time in which a blocking socket call would return to you, or when the nonblocking socket call was able to consume all of the bytes that you sent.

And in this completion handler, there are two things you can check for.

First, you can check for an error.

If there is an error, that means something went wrong while we were trying to send your data. Generally it indicates an overall connection failure.

Then, if there wasn't an error, this is the perfect opportunity to go and see if there's more data for your application to generate. So if you're generating live data frames, go and fetch another frame from the video stream, because now is the time when you can actually enqueue the next packets.

This allows you to pace all of your data out.

And so as you see here, we essentially form a loop of using this asynchronous send callback to continue to drain data out of our connection and handle it really elegantly.
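To make the loop concrete, here's a minimal Swift sketch of that pattern. The `nextFrame()` function is hypothetical, standing in for however your app produces data:

```swift
import Foundation
import Network

// Hypothetical source of frames; stands in for your capture pipeline.
func nextFrame() -> Data? { nil }

func sendNextFrame(on connection: NWConnection) {
    guard let frame = nextFrame() else { return }
    connection.send(content: frame, completion: .contentProcessed { error in
        if let error = error {
            // Generally indicates an overall connection failure.
            print("send failed: \(error)")
            return
        }
        // The stack has consumed this frame: the equivalent of a blocking
        // send returning. Now is the time to enqueue the next one.
        sendNextFrame(on: connection)
    })
}
```

Because the next send is only enqueued from the completion handler, the connection's back pressure naturally paces how fast frames are produced.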

The other thing I want to point out about sending is the trick that Eric showed earlier that's great for UDP applications that are sending multiple datagrams all at one time.

So if you have a whole bunch of little tiny pieces of data that you need to send out or essentially individual packets, you can use something that we've added called connection.batch.

So a UDP socket previously could only send one packet at a time, and this could be very inefficient because if I have to send a hundred UDP packets, these are each a different system call, a different copy, and a context switch down into the kernel.

But if you call batch, within that block you can call send, or actually receive, as many times as you want, and the connection will hold off processing any of the data until you finish the batch block. It will try to send all of those datagrams as one single batch down into the system, ideally with just one context switch down into the kernel, and send them out the interface.

This allows you to be very, very efficient.
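As a sketch, batching a set of datagrams looks like this, assuming `connection` is a ready UDP NWConnection:

```swift
import Foundation
import Network

func send(_ datagrams: [Data], on connection: NWConnection) {
    connection.batch {
        for datagram in datagrams {
            connection.send(content: datagram, completion: .contentProcessed { error in
                if let error = error { print("send failed: \(error)") }
            })
        }
    }
    // The sends are held until the batch block returns, then flushed
    // together, ideally with a single trip down into the kernel.
}
```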

So that's sending.

Receiving, like sending, is asynchronous, and the asynchronous nature gives you the back pressure that allows you to pace your app.

So in this case, I have a TCP-based protocol, and it's very common for apps when they're reading to essentially want to be reading some type of record format.

Let's say that your protocol has a header of 10 bytes that tells you some information about what you're about to receive, maybe the length of the body that you're about to receive.

And so you want to read that header first and then read the rest of your content, and maybe your content's quite long. It's a couple megabytes let's say.

Traditionally with a socket, you may try to read 10 bytes.

You may get 10 bytes, you may get less. You have to keep reading until you get exactly 10 bytes to read your header.

And then you have to read a couple megabytes, and you'll read some, and you'll get a whole bunch of different read calls and essentially go back and forth between your app and the stack.

With an NWConnection, when you call receive, you provide the minimum data that you want to receive and the maximum data.

So you could actually specify if you want to receive exactly 10 bytes because that's your protocol, you can just say, I want a minimum of 10 and a maximum of 10. Give me exactly 10 bytes.

And we will only call you back when either there was an error in reading on the connection overall or we read exactly those 10 bytes.

Then you can easily read out whatever content you need from your header, read out the length, and then, let's say you want to read a couple megabytes, you do essentially the same thing to read your body: you just pass exactly that amount to receive. This allows you to avoid going back and forth between the stack and your app and instead get a single callback when all of your data is ready to go.

So it's a really great way to optimize the interactions.
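Here's a sketch of that header-then-body pattern. The 10-byte header and the length-parsing helper are assumptions standing in for your own record format:

```swift
import Foundation
import Network

// Hypothetical helpers for a made-up record format with a 10-byte header.
func parseBodyLength(from header: Data) -> Int { Int(header[0]) }
func handle(body: Data) { }

func readRecord(from connection: NWConnection) {
    // Ask for exactly 10 bytes: minimum and maximum are both 10.
    connection.receive(minimumIncompleteLength: 10, maximumLength: 10) { header, _, _, error in
        guard let header = header, error == nil else { return }
        let bodyLength = parseBodyLength(from: header)
        // Then ask for exactly the body, however large; one callback
        // fires when all of it has arrived.
        connection.receive(minimumIncompleteLength: bodyLength,
                           maximumLength: bodyLength) { body, _, _, error in
            guard let body = body, error == nil else { return }
            handle(body: body)
        }
    }
}
```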

Beyond sending and receiving, there are a couple of advanced options that I'd like to highlight in your network parameters that allow you to configure your connection to get very good startup time as well as behavior on the network when you're actually sending and receiving.

So the first one is one that we've talked about many times here at WWDC, which is ECN. This is explicit congestion notification.

It gives you a way to smooth out your connection by having the network let the end host know when there's congestion on the network so we can pace things out very well.

The great thing is that ECN is enabled by default on all of your TCP connections.

You don't have to do anything.

But it's been very difficult in the past to use ECN with UDP-based protocols.

And so I'd like to show you how you can do that here. The first thing you do is that you create an ipMetadata object.

ECN is controlled by flags that go in an IP packet, and so you have this ipMetadata object that allows you to set various flags on a per-packet basis. You can wrap this up into a context object, which describes all of the options for the various protocols that you want to associate with a single send, as well as the relative priority of that particular message.

And then you use this context as an extra parameter into the send call besides just your content.

So now when you send this, any packet that's going to be generated by this content will have all the flags that you wanted marked.

So it's really easy.

And you can also get these same flags whenever you're receiving on a connection. You'll have the same context object associated with your receives, and you'll be able to read out the specific low-level flags that you want to get out.
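Putting both directions together, a sketch of per-packet ECN on a UDP connection might look like this (the identifier string is just a label of your choosing):

```swift
import Foundation
import Network

func sendMarked(_ datagram: Data, on connection: NWConnection) {
    let metadata = NWProtocolIP.Metadata()
    metadata.ecn = .ect0  // mark this packet as ECN-capable

    let context = NWConnection.ContentContext(identifier: "ecn",
                                              metadata: [metadata])
    connection.send(content: datagram, contentContext: context,
                    completion: .contentProcessed { _ in })
}

func receiveMarked(on connection: NWConnection) {
    connection.receive(minimumIncompleteLength: 1, maximumLength: 65535) { _, context, _, _ in
        // The same low-level flags come back in the receive context.
        if let ip = context?.protocolMetadata(definition: NWProtocolIP.definition)
                        as? NWProtocolIP.Metadata {
            print("received packet with ECN: \(ip.ecn)")
        }
    }
}
```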

Similarly, we have service class.

This is a property, also available in URLSession, that defines the relative priority of your traffic. It affects the way traffic is queued on the local interfaces when we're sending, as well as how the traffic is treated on Cisco Fastlane networks. You can mark the service class on your entire connection by using the service class property in your parameters object.

In this case, we show how to use the background service class, and this is a great way to mark that your connection is relatively low priority. We don't want it to get in the way of user interactive data.

So we really encourage you if you have background transfers, mark them as a background service class.

But you can also mark service class on a per packet basis for those UDP connections.

Let's say that you have a connection in which you have both voice and signaling data on the same UDP flow.

In this case, you can create that same IP metadata object that we introduced before, mark your service class now instead of the ECN flags, attach it to a context, and send it out. And now you're marking the priority on a per-packet basis.
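Both forms can be sketched like this; the host, port, and message here are placeholders:

```swift
import Foundation
import Network

// Connection-wide: mark a whole transfer as background priority.
let parameters = NWParameters.tls
parameters.serviceClass = .background
let transfer = NWConnection(host: "example.com", port: 443, using: parameters)

// Per-packet: on a shared UDP flow, mark one datagram as signaling
// while the voice traffic keeps its own class.
func sendSignaling(_ message: Data, on connection: NWConnection) {
    let metadata = NWProtocolIP.Metadata()
    metadata.serviceClass = .signaling
    let context = NWConnection.ContentContext(identifier: "signaling",
                                              metadata: [metadata])
    connection.send(content: message, contentContext: context,
                    completion: .contentProcessed { _ in })
}
```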

The other way that you can optimize your connections is to reduce the number of round trips that it takes to establish them.

So here I want to highlight two approaches to do this.

One is enabling fast open on your connections.

So TCP fast open allows you to send initial data in the first packet that TCP sends out, in the SYN, so that you don't have to wait for a whole handshake to start sending your application data.

Now in order to do this, you need to enter into a contract from your application with the connection saying that you will be providing this initial data to send out.

So to enable this, you mark allow fast open on your parameters.

You then create your connection, and then, before you call start, you can actually call send and get your initial data queued up. Now I want to point out here that the completion handler is replaced by a marker that this data is idempotent, and idempotent means that the data is safe to be replayed, because initial data may get resent over the network, and so you don't want it to have side effects if it gets resent.

Then, you simply call start, and as we're doing the connection bring up, all the attempts that we mentioned before, we will use that initial data if we can to send in TCP Fast Open.

There is one other way I want to point out to use TCP Fast Open that doesn't require your application to send its own data.

If you're using TLS on top of TCP, the first message from TLS, the client hello, can actually be used as the TCP Fast Open initial data.

If you want to just enable this and not provide your own Fast Open data, simply go into the TCP-specific options and mark that you want to enable Fast Open there, and it will automatically grab the first message from TLS to send out during connection establishment.
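Here's a sketch of both variants; the host and the initial payload are placeholders:

```swift
import Foundation
import Network

// Variant 1: your app supplies the initial data, which must be idempotent
// (safe to replay if the network resends it).
let parameters = NWParameters.tls
parameters.allowFastOpen = true
let connection = NWConnection(host: "example.com", port: 443, using: parameters)
let initialData = Data("hello".utf8)  // placeholder initial data
connection.send(content: initialData, completion: .idempotent)  // before start
connection.start(queue: .main)

// Variant 2: let the TLS client hello ride in the SYN by enabling
// Fast Open in the TCP-specific options instead.
let tcpOptions = NWProtocolTCP.Options()
tcpOptions.enableFastOpen = true
let tlsParameters = NWParameters(tls: NWProtocolTLS.Options(), tcp: tcpOptions)
```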

There's another thing that you can do to optimize your connection establishment and save a roundtrip, and this is something that Stuart mentioned in the previous session, which we're calling Optimistic DNS.

This allows you to use previously expired DNS answers that may have had a very short time to live, and try connecting to them while we do a new DNS query in parallel.

So the addresses that you had previously received may still be valid, even though they did expire. If you mark the expired DNS behavior to be allow, then when you call start, we'll try connecting to those addresses first and not have to wait for the new DNS query to finish.

This can shave off a lot of set-up time from your connection, but if your server has indeed moved addresses, because we're trying multiple different connection options, if that first one doesn't work, we will gracefully wait for the new DNS query to come back and try those addresses as well.

So this is a very simple way that if it's appropriate for your server configuration, you can get a much faster connection establishment.
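Enabling it is a one-line parameter change; the host here is a placeholder:

```swift
import Foundation
import Network

// Sketch: allow previously expired DNS answers to be tried while a
// fresh query runs in parallel.
let parameters = NWParameters.tls
parameters.expiredDNSBehavior = .allow
let connection = NWConnection(host: "example.com", port: 443, using: parameters)
connection.start(queue: .main)
// If the old addresses no longer work, establishment gracefully falls
// back to the results of the new DNS query.
```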

The next area I want to talk about for performance is something that you don't actually need to do anything in your app to get.

This is something that you get totally for free whenever you use URLSession or Network.framework connections, and this is user-space networking.

So this is something that we introduced last year here at WWDC, and it's enabled on iOS and tvOS. This is where we're avoiding the socket layer entirely, and we've moved the transport stack into your app. So to give you an idea of what this is doing, I want to start with what the legacy model of the stack generally is.

So let's say that you're receiving a packet off the network.

It's the WiFi interface.

That packet will come into the driver, will be sent into the TCP receive buffer within the kernel, and then when your application reads on a socket, that's going to do a context switch and copy the data up from the kernel into your app, and then generally if you're doing TLS, it will have to get another transformation to decrypt that data before you can actually send it up to the application.

So how does this look when we do user space networking? So as you can see, the main change is that we've moved the transport stack, TCP and UDP, up into your app.

So what does this give us? Now, a packet comes in off the network and into the driver like before, but we move it into a memory-mapped region that your application can automatically scoop packets out of, without doing a copy, without doing an extra context switch, and start processing the packets immediately.

This way the only transformation we're doing is the decryption that we have to do anyway for TLS.

This really can reduce the amount of CPU time that it takes to send and receive packets, especially for protocols like UDP in which you're going to be sending a lot of packets back and forth directly from your application.

So to show how this works and the effect it can have, I want to show you a video that was taken using the same app that Eric showed you earlier to demonstrate UDP performance with user space networking.

So in this example, we're going to have two videos running simultaneously.

The device on the left is receiving a video stream from an application that was written using sockets.

And the device on the right is going to be receiving exactly the same video stream from a device that has an app written using Network.framework so it can take advantage of the user space networking stack.

And in this case, we're streaming the video.

It's just raw frames. It's not compressed. It's not great quality or anything, but there's a lot of packets going back and forth.

And we chose specifically for this demonstration to not lower the quality when we hit contention or when we couldn't send packets fast enough or to not drop anything but just to slow down if we had to.

Now this is probably not what your app would do in real life, but it highlights exactly the difference in the performance between these two stacks.

So let's see it right now.

So there's exactly the same data, exactly the same frames being sent over as fast as they possibly can, across this network, and we see the one on the right is pretty easily outstripping the one on the left.

And in fact, if you look at the difference, it's 30 percent less overhead that we're seeing on the receiver side alone.

And this is due to the huge difference that we see in the CPU percentage that it takes to send and receive UDP packets when you compare sockets and user space networking.

Now, of course this is just one example. This is not going to be what every app is going to be like, because you're going to be compressing differently. You're going to be already trying to make your connections more efficient.

But if you have an app that's generating live data, especially if you're using UDP to send and receive a lot of packets, I invite you to try using Network.framework within your app and run it through instruments.

Measure the difference in CPU usage that you have when you're using Network.framework versus sockets, and I think you'll be really happy with what you see.

So the last topic we want to talk about today is how we can solve the problems around network mobility, and this is a key area that we're trying to solve with Network.framework.

And the first step of this is just making sure that we start connections gracefully.

So we already mentioned this, but I want to recap a little bit.

The waiting state is the key thing to handle network transitions when your connection is first coming up.

It will indicate that there's a lack of connectivity, or that the connectivity changed while you were in the middle of doing DNS or TCP. We really encourage you: please avoid using APIs like reachability to check the network state before you establish your connection.

That will lead to race conditions and may not provide an accurate picture of what's actually happening in the connection.

And if you need to make sure that your connection does not establish over a cellular network, don't check up front is the device currently on a cellular network, because that could change.

Simply restrict the interface types that you want to use using the NWParameters.

So once you've started your connection and you're in the ready state, there are a series of events that we will give you to let you know when the network has changed.

The first one is called connection viability.

So viability means that your connection is able to send and receive data out of the interface it's using; it has a valid route.

So to give a demonstration of this, let's say that you started your connection when the device was associated with a WiFi network.

Then, your user walks into the elevator, they don't have a signal anymore.

At this point, we will give you an event, letting you know that your connection is no longer viable.

So what should you do at this point? Two things.

We recommend that if it's appropriate for your app, you can let the user know that they currently have no connectivity.

If they're trying to send and receive data, it's not going to work right now.

But, don't necessarily tear down your connection.

At this point, you don't have any better interface that you could use anyway, and that first WiFi interface may come back.

Oftentimes, if you walk back out of an elevator onto the same WiFi network, your connection can resume right where you left off.

So the other event that we give you is the better path notification.

So let's take that same scenario in which you connected over the WiFi network.

You walk out of a building let's say, and now you no longer have WiFi, but you do have the cellular network available to you.

At this point, we'll let you know two things.

First, that your connection is not viable like before, but we'll also let you know that there is now a better path available.

If you connected again, you would be able to use the cellular network.

And the advice here is, if it's appropriate for your connection and you can resume the work that you were doing before, attempt to migrate to a new connection. But only close the original connection once that new connection is fully ready.

Again, the WiFi network may come back, or the connection over cellular may fail.

And the last case I want to highlight here is a case in which you connect initially over the cellular network, and then the user walks into a building and now they have WiFi access.

In this case, your connection, the original one, is totally fine. You're still viable, but you now also have a better path available.

In this case, again, if you can migrate your connection, this is probably a good time to try to establish a new connection and move your data over.

That will save the user their data bill. But, continue to use the original connection until you have the new one fully established.

Just to show how this looks in code, we have the viability update handler that you can set in your connection, we'll give you a boolean back to let you know whenever you're viable or not, and a better path update handler to let you know when there's a better path available or is no longer available.
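A minimal sketch of installing those two handlers, assuming an already created connection, looks like this:

```swift
import Foundation
import Network

func installMobilityHandlers(on connection: NWConnection) {
    connection.viabilityUpdateHandler = { isViable in
        if !isViable {
            // No usable route right now. Consider telling the user, but
            // keep the connection: the original network may come back.
        }
    }
    connection.betterPathUpdateHandler = { betterPathAvailable in
        if betterPathAvailable {
            // A preferable network is available. If your protocol can
            // resume, bring up a new connection and migrate, closing
            // this one only once the new one is fully ready.
        }
    }
}
```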

And now the better solution to all of this to handle network mobility is something that we've talked about in previous years, which is multipath connections, Multipath TCP.

So if you're able to enable Multipath TCP on your server, and you enable it on the client side with the multipathServiceType in your parameters, then your connection will automatically migrate between networks as they come and go.

It's a great seamless experience that doesn't require any work in your application to handle.

And this is also the same service type that's available in URLSession.

A couple points I want to highlight here, specific to Network.framework.

If you restrict the interface types that you allow to be used with your NWParameters, that will apply to MPTCP as well. So you can still opt out of using the cellular network with a multipath connection, and instead we'll just seamlessly migrate between different WiFi networks as they become available.
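As a sketch, a multipath connection that stays off cellular (the host is a placeholder, and the server must support MPTCP too) could be set up like this:

```swift
import Foundation
import Network

let parameters = NWParameters.tls
parameters.multipathServiceType = .handover
parameters.prohibitedInterfaceTypes = [.cellular]
let connection = NWConnection(host: "example.com", port: 443, using: parameters)
connection.start(queue: .main)
// The connection now migrates between permitted networks on its own.
```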

Also, the connection viability handler that I mentioned before behaves slightly differently with Multipath TCP. Because we'll automatically move your connection for you whenever the network changes, your connection is only not viable when you have no network available to you at all. So between waiting for connectivity, viability, better path, and MPTCP, we really hope that all of the use cases in your apps for tools like SCNetworkReachability to check network changes manually have been replaced.

However, we do recognize that there are some scenarios in which you still want to know what's the available network, when does it change.

For that Network.framework offers a new API called the NWPathMonitor.

So the Path Monitor, instead of watching reachability and trying to predict the reachability of a given host, simply lets you know the current state of the interfaces on your device and when they change.

It allows you to iterate all of the interfaces that you can connect over in case you want to make a connection over each one, and it will let you know whenever those networks do change.

So this can be very useful if you want to update your UI to let the user know, are they connected at all. And as Stuart mentioned in the previous session, there could be scenarios in which the user has a long form to fill out and they don't necessarily want to go and fill out something just to realize that there's no connectivity anyway.

So use Network Path Monitor in any of these scenarios in which just having a waiting connection is not enough.
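A small sketch of watching the network state with NWPathMonitor:

```swift
import Foundation
import Network

let monitor = NWPathMonitor()
monitor.pathUpdateHandler = { path in
    if path.status == .satisfied {
        print("connected; on WiFi: \(path.usesInterfaceType(.wifi))")
        // Iterate the interfaces you could connect over.
        for interface in path.availableInterfaces {
            print("available interface: \(interface.name)")
        }
    } else {
        print("no connectivity")  // e.g. update your UI here
    }
}
monitor.start(queue: .main)
```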

So between all of these things, we'd really like to see people move off of reachability and handle network transitions more gracefully than ever before.

So with that, I'd like to invite Josh back up to the stage to let you know how you can get involved and start adopting Network.framework.

Thank you, Tommy.

So I've got a great new API for you that we think you're going to love.

We'd like to talk about the things that you can do to start using it today, but first I want to talk about a few things that we'd like you to stop doing so we can really take advantage of the new technologies like user space networking.

If you're on macOS, and you have a Network Kernel Extension, and there's something you're doing in that Network Kernel Extension that you can't do any other way, please get in touch with us right away.

We need to provide you a better alternative because Network Kernel Extensions are not compatible with User Space Networking.

We wanted to give you a heads-up that with URLSession, FTP and file URLs are no longer going to be supported for Proxy Automatic Configuration. Going forward, the only supported URL schemes will be HTTP and HTTPS.

There are a number of APIs at the CoreFoundation layer that we would like you to stop using.

They will be deprecated eventually. They are not yet marked as deprecated.

These are the CFStreamCreatePairWith functions, anything related to sockets, as well as CFSocket.

These cannot take advantage of a lot of the connection establishment that we've put in there with the new Network.framework, and they can't take advantage of the new User Space Networking. So we really want you to move off of these to take advantage of the incredibly robust connectivity improvements that you'll get with Network.framework and URLSession and the improved performance.

There are some foundation APIs as well that we'd like you to move away from.

If you're using any of these NSStream, NSNetService, or NSSocketPort APIs, please move to Network.framework or URLSession.

Finally, if you're using SCNetworkReachability, we feel that the Wait for Connectivity model is a much better model, so we'd really like you to move to that.

And for those few cases where Wait for Connectivity isn't the right answer, NWPathMonitor is a much better solution going forward.

So now that we've talked about some things we'd like you to stop doing, I want to focus on the things we really want to see you do.

Going forward, the preferred APIs on our platforms for networking are URLSession and Network.framework.

URLSession is really focused on HTTP, but Stream Task provides pretty simple access to TCP and TLS connections.

If you need something more advanced, Network.framework gives you great support for TCP, TLS, UDP, DTLS.

It handles listening for inbound connections as well as outbound connections, and we've got Path Monitor to handle some of the mobility stuff.

Next steps, we really want to see you adopt these, adopt Network.framework and URLSession.

Your customers are going to appreciate how much better and more reliably your connections are established, and they'll appreciate the longer battery life from the better performance.

While you're working on these, focus on how you're handling your sending and receiving to really optimize that performance.

And take the time to get support for viability and better path changes in there. It can make all the difference for providing a seamless networking experience.

Now we know that Network.framework doesn't support UDP Multicast yet, so if you're doing UDP Multicast, we'd really like to understand your use cases so we can take those into account going forward.

In addition, if you have any other questions or enhancement requests, we'd love to hear from you.

Contact developer support or better yet, meet us in one of the labs. We have a lab after lunch at 2 p.m. and another one tomorrow morning at 9 a.m. For more information, see this URL.

Don't forget the labs tomorrow morning and after lunch. Thank you so much and have a great WWDC.