In December, we wrote about researcher Bob Briscoe's problems with the way TCP's notion of fairness treats different types of applications. Specifically, unattended peer-to-peer bulk data transfer applications (e.g., BitTorrent) use bandwidth 24/7 rather than intermittently and open up hundreds of connections, while interactive applications, such as web browsers, are far less bandwidth- and connection-intensive; yet to users, both types of connections cost the same.

It turns out that Briscoe has even less love for TCP's congestion control than he let on in the previously cited Internet Draft. In another one, he pretty much wipes the floor with the current state of the art in congestion control:

The outstanding barrier to realistic resource allocation for the Internet is purely religious. In much of the networking community, you have to put fairness in terms of flow rates, otherwise your work is 'obviously' irrelevant. At minimum, you are an outcast, if not a heretic. But actually basing fairness on flow rates is a false god—it has no grounding in philosophy, science, or for that matter 'commercial reality'.

In 1964, an engineer named Paul Baran wrote a series of memoranda proposing a network where locations were connected through redundant links and where information was exchanged in packets. If a link between two locations were to go down, the packets would be rerouted through another location. Only a few years earlier, the digital T1 phone trunk had been invented, where the bits of 24 different phone calls were carefully assigned a fixed place in a hierarchy to emulate a switched connection. This packet switching network of Baran's could never work, according to Ma Bell's engineers of the day.

There's congestion, and how you handle congestion

Maybe Ma Bell really did know what they were talking about, and 40 years later, the chickens have come home to roost. Again.

In the mid-1980s, the ARPANET—or Internet, there wasn't much of a difference back then—suffered from massive congestion for about a year. Congestion is the situation where the network receives more traffic from its users than it can transport to where it needs to go. Internet routers deal with congestion by simply removing (dropping) the excess packets. In the original design of the TCP/IP protocols, the Transmission Control Protocol (TCP) just retransmitted the lost packets at a fixed rate without reacting to the congestion, which didn't help matters. The problem was fixed by implementing four congestion control algorithms in TCP (slow start, congestion avoidance, fast retransmit, and fast recovery) that make sure it doesn't start to send packets too fast, too soon, and that it slows down when it thinks there is congestion. These algorithms take lost packets as an indication of congestion in the network. Since the late 1980s, they have been refined, but not drastically changed.
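As a rough sketch of the core idea behind those algorithms (not TCP's actual implementation), the following toy loop shows the additive-increase/multiplicative-decrease behavior that makes a sender back off on loss; the window sizes and loss pattern are made up for illustration:

    # Toy additive-increase/multiplicative-decrease (AIMD) loop, the idea at the
    # heart of TCP congestion avoidance. Numbers are illustrative, not real TCP.
    def aimd(round_trips, loss_in_round, cwnd=1.0):
        """cwnd is the congestion window in packets per round trip."""
        history = []
        for rtt in range(round_trips):
            if loss_in_round(rtt):
                cwnd = max(1.0, cwnd / 2)   # multiplicative decrease on loss
            else:
                cwnd += 1.0                 # additive increase otherwise
            history.append(cwnd)
        return history

    # Example: a loss every tenth round trip keeps the window oscillating
    # around an equilibrium instead of growing without bound.
    print(aimd(30, lambda rtt: rtt % 10 == 9))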

The dynamics of TCP congestion control algorithms are such that every session gets roughly equal bandwidth. That's fair for applications that behave the same way, but not so much when you compare a web user to a BitTorrent user. (See the article from December for details.) Basically, TCP scales back its transmission rate until it reaches an equilibrium between bandwidth use and packet loss. More sessions mean higher packet loss, so that equilibrium is reached at a lower speed, and that lower speed applies to all TCP sessions sharing the bottleneck (assuming other variables, such as the round-trip time, are equal).
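That equilibrium can be made concrete with the well-known back-of-the-envelope approximation for TCP throughput: roughly the maximum segment size divided by the round-trip time times the square root of the loss rate. The numbers below are illustrative:

    # Back-of-the-envelope TCP throughput estimate (the "Mathis formula"),
    # ignoring a constant factor close to 1: rate ~ MSS / (RTT * sqrt(p)).
    from math import sqrt

    def tcp_rate_bps(mss_bytes, rtt_seconds, loss_rate):
        return mss_bytes * 8 / (rtt_seconds * sqrt(loss_rate))

    # Quadrupling the loss rate halves each flow's speed; meanwhile, a user who
    # opens four connections gets roughly four times the bandwidth of a user
    # with one connection under the same conditions.
    print(round(tcp_rate_bps(1460, 0.1, 0.001) / 1e6, 2), "Mbps per flow at 0.1% loss")
    print(round(tcp_rate_bps(1460, 0.1, 0.004) / 1e6, 2), "Mbps per flow at 0.4% loss")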

Briscoe argues that the current way of doing things isn't working: peer-to-peer applications eat up an inordinate amount of bandwidth. This creates counterproductive incentives for ISPs: if they install extra bandwidth, most users don't gain much speed, so doing this can be seen as a waste of money. Instead, ISPs opt to install bandwidth management products. This of course irks heavy peer-to-peer users and net neutrality proponents alike. It also won't help when (if?) the time comes that people start dropping their cable TV subscriptions the same way they've been dropping their landline subscriptions, and millions of people start downloading the latest episode of Lost as soon as it's put up on the iTunes Store. The underlying bandwidth requirements aren't going away.

Bits of different value

Earlier this week, ZDNet's George Ou wrote a lengthy article on Briscoe's efforts and on the ridiculousness of a model where users pay metered bandwidth charges (he cited steep metered broadband plans in Australia). However, it looks like the two are strongly related. An important underlying principle in congestion control and network optimization is that every user has a "utility function": a little bandwidth is worth a lot per bit. An SMS text message can easily cost 0.01 cents per bit, and is evidently worth that much to the user, or he wouldn't be sending it. Nobody would be pirating DVDs at this price, though, as it works out to almost $1,000,000 per GB. So the "utility" of extra bits decreases as you get more of them. The trouble is, the core of the Internet has no idea whether any particular packet holds those expensive SMS bits or those throw-away DVD pirating bits.
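To make that arithmetic explicit (the message size and price below are ballpark figures, not actual tariffs):

    # Back-of-the-envelope: what SMS-style pricing per bit would mean per gigabyte.
    # A 160-character text in 7-bit GSM encoding is about 1,120 bits; at roughly
    # $0.10 per message, that is on the order of 0.01 cents per bit.
    sms_price_dollars = 0.10
    sms_bits = 160 * 7
    price_per_bit = sms_price_dollars / sms_bits
    print(round(price_per_bit * 100, 4), "cents per bit")       # ~0.0089
    print(round(price_per_bit * 8e9), "dollars per gigabyte")   # ~$700,000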

The ability to see this difference is exactly the thing that Bellhead engineers have been putting in their protocols for the past 40 years. An X.25, Frame Relay, or ATM network has no problem giving every user exactly the quality of service (QoS) that he or she paid for—no more, no less. Unfortunately, these protocols also have enormous overhead and couldn't be made to scale to the Internet's current size, or even its size a decade ago.

The "Nethead" engineers over at the IETF haven't been sitting on their hands all this time, either. They invented QoS mechanisms of their own, such as Integrated Services (IntServ) and Differentiated Services (DiffServ). Hopefully, this research, in addition to that of Briscoe and others, will help with the development of QoS products that give interactive users their fair share of the bandwidth when they need it, while unused capacity can be used to its fullest by peer-to-peer users. But in order to be able to do that, there must be some incentive for peer-to-peer users to let their downloads take a lower priority—when the mechanisms to do so become available.
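As a purely illustrative sketch of what "lower priority" could look like from the application side, a peer-to-peer client could mark its own traffic with a low-priority DiffServ code point; whether routers along the path honor that marking is entirely up to the ISP, and the socket option below assumes a typical Linux setup:

    # Illustrative only: mark a socket's traffic with DSCP CS1, a code point
    # often used as a low-priority "scavenger" class. The network may ignore it.
    import socket

    CS1_TOS = 0x20   # DSCP CS1 (decimal 8) shifted into the IP TOS byte

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, CS1_TOS)   # Linux/BSD
    # ...then connect and transfer bulk data as usual; unmarked interactive
    # traffic can be given precedence whenever the link is congested.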