John Carmack tweeted,

I can send an IP packet to Europe faster than I can send a pixel to the screen. How f’d up is that?

And if this weren’t John Carmack, I’d file it under “the interwebs being silly”.

But this is John Carmack.

How can this be true?

To avoid discussions about what exactly is meant in the tweet, these are the questions I would like answered:

How long does it take, in the best case, to get a single IP packet sent from a server in the US to somewhere in Europe, measured from the moment the software triggers the packet to the point where it is received by software above the driver level? (A rough way to measure this is sketched after these two questions.)

How long does it take, in the best case, for a pixel to be displayed on the screen, measured from the point where software above the driver level changes that pixel’s value?
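For concreteness, the packet half could be measured entirely from user space by timing a UDP round trip and halving it. A minimal sketch, assuming a reachable echo service on a European server (the host name below is a placeholder, not a real endpoint):

```python
import socket
import time

ECHO_HOST = "echo.example.eu"  # placeholder: any UDP echo server in Europe
ECHO_PORT = 7                  # classic echo port (RFC 862)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.connect((ECHO_HOST, ECHO_PORT))

samples = []
for _ in range(20):
    start = time.perf_counter()
    sock.send(b"x")    # one minimal packet, timed from the socket call...
    sock.recv(1)       # ...to the moment the echo reaches user space
    samples.append(time.perf_counter() - start)

# Take the minimum of many samples for the best case, halve for one way.
print(f"one-way estimate: {min(samples) / 2 * 1000:.1f} ms")
```

Taking the minimum over many samples approximates the best case asked about above; halving the round trip assumes a roughly symmetric path.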

Even assuming that the transatlantic connection is the finest fibre-optic cable that money can buy, and that John is sitting right next to his ISP, the data still has to be encoded into an IP packet, travel from main memory across to his network card, go from there through a cable in the wall into another building, probably hop across a few routers there (but let’s assume it needs just a single relay), get photonized for the trip across the ocean, be converted back into an electrical impulse by a photosensor, and finally be interpreted by another network card. Let’s stop there.
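Plugging in numbers shows why even that perfect cable has a hard floor. A back-of-envelope sketch, assuming a ~6,000 km route (New York to London is about 5,600 km along the great circle; real cables run longer) and light travelling at roughly two thirds of c in glass:

```python
C_VACUUM_KM_S = 299_792.458    # speed of light in vacuum, km/s
FIBRE_INDEX = 1.5              # typical refractive index of optical fibre
DISTANCE_KM = 6_000            # assumed transatlantic route length

fibre_speed = C_VACUUM_KM_S / FIBRE_INDEX   # ~200,000 km/s in glass
one_way_ms = DISTANCE_KM / fibre_speed * 1000

print(f"propagation alone: {one_way_ms:.0f} ms one way")  # ~30 ms
```

Routers, serialization and the two endpoint stacks each add somewhere between microseconds and a millisecond, so a realistic best case lands around 35 to 45 ms one way.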

As for the pixel, it is a simple machine word that gets sent across the PCI Express bus, written into a buffer, and then flushed to the screen. Even accounting for the fact that a “single pixel” probably results in the whole frame buffer being transmitted to the display, I don’t see how this can be slower: it’s not like the bits are transferred “one by one”; rather, they are consecutive electrical impulses transferred without latency between them (right?).
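To put my doubt in numbers, here is the arithmetic I would expect for the pixel path, assuming a 60 Hz display. The per-stage figures are illustrative assumptions, not measurements of any particular setup:

```python
REFRESH_HZ = 60
frame_ms = 1000 / REFRESH_HZ   # 16.7 ms to scan out one full frame

# Stages between changing a pixel's value and seeing it glow:
vsync_wait_ms = frame_ms       # up to one full frame waiting for vblank
scanout_ms = frame_ms          # the pixel may sit on the last scanline
panel_ms = 16                  # assumed scaler/response buffering; varies widely

total_ms = vsync_wait_ms + scanout_ms + panel_ms
print(f"plausible worst case for one pixel: ~{total_ms:.0f} ms")  # ~50 ms
```

Whether it is fair to count vsync and the monitor’s internal processing against the pixel is, I suppose, the crux of the question.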