Building and using dedicated networks

As we recently reported, the Internet's backbone should be able to scale to handle the sheer volume of traffic that it's expected to face in the foreseeable future. But a number of factors complicate any analysis based on the simple volume figures. Many services, such as VoIP and streaming video, create expectations of guaranteed bandwidth that may be tough to maintain in the face of vast volumes of spam and P2P traffic; everything may get there, but not necessarily when we'd like it to. Meanwhile, problems with the "last mile" networks can obscure the capacity of the network backbone.

The academic world has faced similar issues for a while, and will soon be facing a flood of data from the biggest news in physics: the activation of the Large Hadron Collider. The data gathered at the LHC, located at CERN outside of Geneva, will be distributed to a worldwide grid of computer clusters for analysis, which will require sustained transfers well in excess of 10 gigabits per second. To get a sense of how the academic world is solving its networking needs and what that might mean for the future of general networking, we spoke with executives at Internet2 and the European network provider DANTE.
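For a sense of scale, a quick back-of-the-envelope calculation (ours, not CERN's) shows what a link sustained at that rate actually delivers over a day:

```python
# Rough arithmetic only: what a sustained 10 gigabit-per-second link
# moves in a day. The LHC grid's real transfer schedules are more complex.

LINK_GBPS = 10                          # sustained rate, gigabits per second
bytes_per_second = LINK_GBPS * 1e9 / 8  # 1.25 GB/s
seconds_per_day = 24 * 60 * 60          # 86,400

terabytes_per_day = bytes_per_second * seconds_per_day / 1e12
print(f"{terabytes_per_day:.0f} TB per day")  # -> 108 TB per day
```

That's roughly a hundred terabytes a day for every 10 gigabits per second of sustained capacity, and the LHC's requirement is for transfers well in excess of that single-link rate.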

Academic network structure

[Image: DANTE's GÉANT2 network]

Despite differences in detail, both Internet2 and DANTE take similar approaches to providing academic users with dedicated, high-bandwidth capacity. The groups lease what's termed "dark fiber": strands on commercial fiber networks that have been laid but never lit. They then purchase the equipment needed to light that fiber. There's nothing special about the equipment itself; the providers buy the same hardware, from companies like Juniper and Alcatel, that commercial providers use.

Both groups typically take the traffic from the dedicated fiber and hand it off to local service providers. The precise details of how that happens differ due to geopolitical realities. In the US, where even widely separated sites fall within a single national border, these local providers are generally individual companies or organizations. In Europe, with its multitude of national boundaries, DANTE hands data off to National Research and Education Networks, or NRENs. These groups are responsible for actually getting the data into the individual research centers.

The networking capacity is typically divided three ways. Some is allocated to a normal, packet-based network, with traffic contending for bandwidth just as it does on commercial networks. Both providers also described the allocation of dedicated point-to-point links, in which a set amount of bandwidth is reserved and guaranteed. Finally, Internet2 maintains parallel capacity dedicated to experimental use, designed to test new protocols for enhanced data transfer.
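As a rough illustration of how the guaranteed tier differs from the contended one, here's a minimal sketch; the class names, capacities, and admission rule are hypothetical, not taken from Internet2's or DANTE's actual provisioning systems:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical model of the three-way capacity split described above.

class ServiceClass(Enum):
    BEST_EFFORT = "packet-switched IP; traffic contends for bandwidth"
    DEDICATED = "point-to-point circuit; bandwidth guaranteed"
    EXPERIMENTAL = "parallel capacity for testing new protocols"

@dataclass
class Circuit:
    service: ServiceClass
    reserved_gbps: float = 0.0  # only meaningful for DEDICATED circuits

def admit(link_capacity_gbps: float, existing: list[Circuit],
          request: Circuit) -> bool:
    """Admit a dedicated circuit only if its guarantee still fits the link.

    Best-effort and experimental traffic reserve nothing, so they are
    always admitted and simply contend for whatever is left over.
    """
    if request.service is not ServiceClass.DEDICATED:
        return True
    reserved = sum(c.reserved_gbps for c in existing
                   if c.service is ServiceClass.DEDICATED)
    return reserved + request.reserved_gbps <= link_capacity_gbps
```

The key design point is that a dedicated circuit subtracts from the pool up front, so its bandwidth is guaranteed no matter how congested the best-effort tier gets.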

So far, this structure has easily handled the academic world's capacity demands. As Dai Davies of DANTE put it, "in earlier years, network capacity was the issue—that's not true anymore." Davies suggested that even a significant increase in demand wouldn't be a problem; once the structure is in place, "adding capacity incrementally is cheap," he said.

[Image: Internet2 links major US universities]

Success stories in science and networking

Both DANTE and Internet2 were happy to emphasize a number of cases where the networking capacity has clearly aided the scientific community. One example both cited was the use of the network for radio astronomy. Facilities like the Very Large Array pioneered the use of interferometry, which combines very fine timing information on signals received by physically separated receivers to reconstruct an image with far higher resolution than any of the individual receivers could manage alone. That reconstruction requires all of the data to be on a single computer system at the same time.
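Our sources didn't walk through the math, but the standard rule of thumb for interferometers (angular resolution scales as the observing wavelength divided by the separation, or baseline, between receivers) shows why spreading receivers apart pays off. A quick illustrative calculation, with figures of our choosing:

```python
# Rule of thumb: an interferometer's angular resolution is roughly
# wavelength / baseline. The dish size and baselines below are
# illustrative, not figures from the article.

RAD_TO_ARCSEC = 206_265.0  # arcseconds per radian

def resolution_arcsec(wavelength_m: float, baseline_m: float) -> float:
    """Approximate diffraction-limited resolution, in arcseconds."""
    return (wavelength_m / baseline_m) * RAD_TO_ARCSEC

# A single 25 m dish observing the 21 cm hydrogen line:
print(resolution_arcsec(0.21, 25))      # ~1,700 arcsec
# Dishes spread across a 36 km VLA-scale baseline:
print(resolution_arcsec(0.21, 36_000))  # ~1.2 arcsec
```

The catch is that longer baselines mean physically distant receivers, and all of their raw data still has to end up in one place.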

Initially, this required the individual radio telescopes to ship their data on tape to a single data center, a process that could take months. Davies described how the cost of dedicated networking bandwidth wound up being in line with the actual shipping fees, while allowing the data to arrive at the center in real time. About the only ones who resisted the change, he suggested, were the few observatories that didn't have access to the dedicated network.

The staff at Internet2 (we spoke with CTO Rick Summerhill and deputy operating officer Steve Cotter) were also enthusiastic about some of the networking research being tested on their infrastructure. They specifically described work designed to limit the impact of the packet routing and acknowledgements that form the basis of current TCP/IP communications. To get around the problem that both sending and acknowledgement are sensitive to traffic conditions, researchers are testing a system in which the initial packets of a large data transfer are used to gauge current traffic conditions. If conditions are favorable, most of the transfer then occurs via a single large, point-to-point data flow, avoiding the overhead of the acknowledgements.
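Neither Summerhill nor Cotter tied this to a specific named protocol, so the following is only a sketch of the general pattern as they described it, under our own assumptions: probe the path with a few small acknowledged round-trips, and if conditions look favorable, stream the bulk of the data without per-packet acknowledgements. The host, port, threshold, and echo behavior are all hypothetical:

```python
import socket
import time

# Hypothetical sketch of the probe-then-stream pattern. Assumes a
# cooperating receiver at HOST:PORT that echoes each probe packet.

HOST, PORT = "receiver.example.org", 9000
PROBE_COUNT = 8
RTT_THRESHOLD_S = 0.05  # treat the path as "favorable" below 50 ms
CHUNK = 1400            # stay under a typical Ethernet MTU

def probe_path() -> float:
    """Estimate path conditions with a few small TCP round-trips."""
    rtts = []
    with socket.create_connection((HOST, PORT)) as s:
        for _ in range(PROBE_COUNT):
            start = time.monotonic()
            s.sendall(b"probe")
            s.recv(16)  # wait for the receiver's echo
            rtts.append(time.monotonic() - start)
    return sum(rtts) / len(rtts)

def bulk_send(data: bytes) -> None:
    """Stream the payload over UDP: no per-packet acknowledgements."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        for i in range(0, len(data), CHUNK):
            s.sendto(data[i:i + CHUNK], (HOST, PORT))

def transfer(data: bytes) -> None:
    if probe_path() < RTT_THRESHOLD_S:
        bulk_send(data)  # favorable path: one large, unacknowledged flow
    else:
        ...              # fall back to an ordinary acknowledged transfer
```

The trade-off mirrors what the researchers are after: the acknowledged probe phase is sensitive to traffic conditions by design, while the bulk phase avoids the overhead of per-packet acknowledgements.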

Both DANTE and Internet2 also discussed how the technology they develop for specialized uses makes its way back into the commercial arena. In Internet2's case, the work they emphasized was in data-control software designed to keep traffic on the Layer 1 network, and out of routers, for as long as possible. Davies, for his part, described how his organization is frequently involved in developing the hardware that will eventually run future versions of its networks.