Engineers at Caltech and the University of Victoria in Canada have smashed their own internet speed records, achieving a memory-to-memory transfer rate of 339 gigabits per second (42GB/s), 187Gbps (23GB/s) over a single duplex 100-gigabit connection, and a max disk-to-disk transfer speed of 96Gbps (12GB/s). At a sustained rate of 339Gbps, such a network could transfer around four million gigabytes (4PB) of data per day — or roughly 200,000 Blu-ray movie rips.
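If you want to check that daily figure yourself, the arithmetic is simple. Here's a minimal Python sketch; the 339Gbps rate is from the record, while the 20GB-per-rip average is our own assumption for illustration:

```python
# Sanity-check of the daily-volume claim above. The sustained rate comes
# from the record; the 20GB-per-rip average is an assumption, not a figure
# from the researchers.
GBPS = 339                    # sustained transfer rate, gigabits per second
SECONDS_PER_DAY = 86_400
BLURAY_RIP_GB = 20            # assumed average size of one Blu-ray movie rip

bits_per_day = GBPS * 1e9 * SECONDS_PER_DAY
gigabytes_per_day = bits_per_day / 8 / 1e9    # decimal gigabytes

print(f"{gigabytes_per_day:,.0f} GB/day")     # ~3,661,200 GB, i.e. ~4PB
print(f"~{gigabytes_per_day / BLURAY_RIP_GB:,.0f} Blu-ray rips")  # ~183,060, i.e. ~200,000
```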

These record-breaking demonstrations took place at Caltech's booth at the SuperComputing 2012 (SC12) conference in Salt Lake City, Utah. In the booth, Caltech set up the mother of all high-speed networks: a handful of IBM x3650 M4 servers, each equipped with 16 OCZ Vertex 4 SSDs, connected to a Juniper MX480 router (which has a total capacity of 1.92Tbps, or 240GB/s). From there, three 100-gigabit fiber-optic links connected the Caltech booth in Utah to the CANARIE, BCNet, Internet2, StarLight, and CENIC networks, which in turn transported data to end points at the Caltech university campus, the University of Victoria, and the University of Michigan. At each end point there was another bunch of IBM servers, loaded up with SSDs.

Updated: The University of Victoria contacted us with more information about the hardware setup. It also included a Data Direct Networks (DDN) storage system with a total of 288 15,000rpm SAS (Serial Attached SCSI) drives, plus two 24-bay 2U SuperMicro chassis housing yet more OCZ Vertex 4 drives. Finally, there were ten 1.2TB Fusion-io PCIe storage cards, installed in a couple of SuperMicro servers. Just FYI: a single 1.2TB Fusion-io card costs somewhere in the region of $30,000.

In the case of the 339Gbps memory-to-memory record, we are talking about an aggregate of all three universities connecting to the Caltech booth in Utah. If you look at the graph below, and add together the inbound and outbound peaks, you get 339Gbps. The peak aggregate disk-to-disk transfer speed across the three 100-gigabit links was 187Gbps.

Over a single duplex 100-gigabit link between the University of Victoria and Salt Lake City, a memory-to-memory transfer rate of 187Gbps was obtained — just beating out last year's record of 186Gbps. (The figure can exceed 100Gbps because the link is full duplex: data flows in both directions at once, and the two directions are added together.)

The max disk-to-disk transfer rate of 96Gbps (12GB/s) was achieved between Salt Lake City and the University of Victoria. If we break it down, each IBM server was able to read data at 38Gbps (4.75GB/s) and write at 24Gbps (3GB/s) — so in the case of 96Gbps, we're probably looking at three or four IBM servers working in tandem on each end of the link.
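That "three or four" estimate falls straight out of the per-server rates quoted above. A quick sketch of the arithmetic — the read/write figures are from the article, the ceiling division is just our illustration:

```python
import math

# Rough sketch of the server fan-out estimate above. The per-server
# read/write rates are the quoted figures; everything else is simple
# ceiling division for illustration.
LINK_GBPS = 96               # disk-to-disk record across the link
READ_GBPS_PER_SERVER = 38    # each IBM server reads from its SSDs at 38Gbps
WRITE_GBPS_PER_SERVER = 24   # ...and writes to them at 24Gbps

readers = math.ceil(LINK_GBPS / READ_GBPS_PER_SERVER)   # sending end
writers = math.ceil(LINK_GBPS / WRITE_GBPS_PER_SERVER)  # receiving end
print(readers, writers)      # -> 3 4, hence "three or four" servers per end
```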

These LAN(d) speed records are all very impressive, but what's the point? Put simply, the scientific world deals with vast amounts of data — and that data needs to be moved around the world quickly. The most obvious example is CERN's Large Hadron Collider; in the past year, the high-speed academic networks connecting CERN to the outside world have transferred more than 100 petabytes of data. It is because of these networks that we can discover new particles, such as the Higgs boson.

In essence, Caltech and the University of Victoria have taken it upon themselves to ride the bleeding edge of high-speed networks so that science can continue to prosper. There's also the distinct possibility that advances to Internet2, which is slowly expanding to encompass hundreds of educational and scientific institutions around the world, might eventually trickle down to consumers, too — in much the same way that the original ARPAnet became the internet.

Now read: Will 100Mbps internet connections destroy the web as we know it?