NASA goes 'one step short of bleeding edge' with trial network

Climate simulation center tests new 40G Ethernet equipment for research network

The NASA Center for Climate Simulation is putting some of the first 40G Ethernet technology through its paces to see if equipment using the new high-speed networking standards is ready for its high-performance research network.

Engineers at the supercomputing facility, housed at the NASA Goddard Space Flight Center in Greenbelt, Md., began working with beta versions of the 40G Virtual Interface Module from Extreme Networks on a laboratory network this month and will be doing a public demonstration at the Supercomputing 2010 conference Nov. 13-19 in New Orleans. The modules will return to Goddard for more testing, said J. Patrick Gary, leader of the center’s High End Computer Networking Team.

Gary described the technology, built on the newly ratified IEEE 802.3ba standard for 40/100G Ethernet, as “one step short of bleeding edge.”

The modules have largely met expectations in early testing, Gary said, and he expects to use them when production models become available early next year. At an announced price of about $1,000 per 40G port, the modules are somewhat more expensive than 10G Ethernet equipment but will provide an efficient alternative to aggregating individual 10G links, he said.


Ethernet is a family of local-area networking standards. The Institute of Electrical and Electronics Engineers ratified a standard for 40 gigabits/sec and 100 gigabits/sec Ethernet in June. Previously, the highest speed under the IEEE 802.3 standard had been 10 gigabits/sec. One of the earliest announced 40G Ethernet standard products is the Extreme Networks module for its X650 switch.

Climate research requires the accumulation and manipulation of large datasets in the gigabyte and terabyte range. Goddard is home to two of the world’s premier climate modeling groups, at the Greenbelt facility and the Goddard Institute for Space Studies in New York. It also collaborates with other organizations, such as the National Oceanic and Atmospheric Administration.

“This is the kind of problem that takes supercomputing capacity,” Gary said. It also takes high-performance networking to move the data. Goddard’s Greenbelt campus has between 16,000 and 20,000 computers in more than 30 buildings, with a local-area network that provides connections from 10 megabits/sec to 1 gigabit/sec. There also is a second high-performance science and engineering network for the 5 percent to 10 percent of the scientists who need access to speeds of 1 to 10 gigabits/sec.

“We’ve been deploying this kind of networking for at least five years, and we’re beginning to get requests to look beyond that,” Gary said. The next step is 40/100G Ethernet, and Gary’s team has established a laboratory network to test the new technology. “This stuff doesn’t happen overnight,” he said.

Networks can achieve higher throughput rates by aggregating 10G links. But aggregation algorithms assign each traffic flow to only one link at a time, so using multiple 10G pipes is not as efficient as a single, larger pipe. With 40G Ethernet, “you know that the flows are not going to get queued up behind each other because of the link aggregation algorithm,” Gary said.
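The flow-pinning behavior Gary describes can be sketched in a few lines. The sketch below is a generic illustration of hash-based link aggregation, not the algorithm of any particular switch: each flow's addresses and ports are hashed to select one member link, so every packet of that flow stays on the same 10G link and a single flow can never exceed one link's capacity. All names here (`assign_link`, `NUM_LINKS`) are hypothetical.

```python
import hashlib

NUM_LINKS = 4  # e.g., four aggregated 10G links standing in for one 40G pipe


def assign_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Hash a flow's 4-tuple to pick one member link.

    Because the hash is deterministic, every packet of a given flow
    lands on the same link -- which preserves packet ordering, but
    also means one flow is capped at a single link's bandwidth.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_LINKS


# Two large flows may happen to hash to the same 10G link while other
# links sit idle -- the queuing inefficiency a native 40G link avoids.
flows = [
    ("10.0.0.1", "10.0.0.2", 5001, 80),
    ("10.0.0.3", "10.0.0.4", 5002, 80),
]
for flow in flows:
    print(flow, "-> link", assign_link(*flow))
```

A real switch typically hashes in hardware over similar header fields; the key point is the same deterministic flow-to-link mapping.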

The new module from Extreme Networks provides four 40G Ethernet ports as an option to its X650 10G Ethernet switch. Now that the standards have been finalized, “what is challenging is that the switch has to have the computing power to do this,” said Darius Goodall, senior product marketing manager at Extreme Networks. The technology has reached the point that “we’re ready to go to a customer and put our money where our mouth is.”

Gary said initial tests have been satisfactory but work remains to be done on the software suite that enables the module to work with other equipment. “It’s not there yet, but it’s coming along, and things are very positive,” he said.