I was thinking about sorting algorithms in software, and possible ways one could surmount the O(n log n) roadblock. I don't think it IS possible to sort faster in a practical sense, so please don't think that I do.

With that said, it seems that with almost all sorting algorithms, the software must know the position of each element. That makes sense; otherwise, how would it know where to place each element according to the sorting criteria?

But when I compared this thinking with the real world, a centrifuge has no idea what position each molecule is in when it 'sorts' the molecules by density. In fact, it doesn't care about the position of each molecule. However, it can sort trillions upon trillions of items in a relatively short period of time, because each molecule follows the laws of density and gravity, which got me thinking.
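As an aside, software does have a small family of 'physical' sorts in this spirit; the textbook example is bead sort (also called gravity sort), which only works for non-negative integers. Each value is modelled as a row of beads that simply falls under gravity, and no value is ever compared against another. A minimal Python sketch (the function name and the simulation strategy here are just illustrative):

```python
def bead_sort(values):
    """Bead (gravity) sort for non-negative integers.

    Column c holds one bead for every value greater than c.  After the
    beads 'fall', the i-th longest row is the number of columns that still
    contain more than i beads.  No two values are ever compared directly.
    """
    if not values:
        return []
    if any(v < 0 for v in values):
        raise ValueError("bead sort only handles non-negative integers")
    # How many beads settle in each column (column c means "value > c").
    poles = [sum(1 for v in values if v > c) for c in range(max(values))]
    # Read the settled rows back out, longest row (largest value) first.
    settled = [sum(1 for p in poles if p > i) for i in range(len(values))]
    return settled[::-1]  # ascending order

# Example: bead_sort([5, 3, 8, 1, 2]) -> [1, 2, 3, 5, 8]
```

Of course, simulated in software this costs O(n * max(values)) work rather than O(n log n); the "gravity does it for free" part only holds if the hardware really is doing the falling for you.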

Would it be possible, with some overhead on each node (some value or method tacked on to each of the nodes), to 'force' the order of the list? Something like a centrifuge, where each element cares only about its relative position in space (in relation to the other nodes). Or does this violate some rule of computation?
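Just to make the question concrete, here is a minimal Python sketch of what that overhead might look like (the `Node` class and the `settle` function are hypothetical names, not anything from a real library): each node carries its own key, and the only rule is "compare yourself with your immediate neighbour and swap if you're out of order", repeated until nothing moves. Run sequentially, this is just bubble sort with O(n^2) comparisons, so the per-node overhead alone doesn't buy anything; the centrifuge's speed comes from applying that local rule everywhere at once.

```python
from dataclasses import dataclass

@dataclass
class Node:
    key: float  # the "density" each node carries as its extra overhead

def settle(nodes):
    """Repeatedly apply a purely local rule: each node looks only at its
    immediate right-hand neighbour, and the pair swaps if out of order.
    Stop when a full pass produces no swaps (the list has 'settled')."""
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(nodes) - 1):
            if nodes[i].key > nodes[i + 1].key:
                nodes[i], nodes[i + 1] = nodes[i + 1], nodes[i]
                swapped = True
    return nodes

# Example: settle([Node(5), Node(3), Node(8), Node(1)]) -> keys 1, 3, 5, 8
```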

I think one of the big points brought up here is that nature's quantum-mechanical effects apply to all particles in parallel, simultaneously.

Perhaps classical computers inherently restrict sorting to the domain of O(n log n), whereas quantum computers may be able to cross that threshold into O(log n) algorithms that act in parallel.

The point that a centrifuge is basically a parallel bubble sort seems to be correct; run in parallel, that has a time complexity of O(n).
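For reference, the usual name for that parallel bubble sort is odd-even transposition sort: n rounds, where each round compares disjoint neighbouring pairs, so a machine with roughly n/2 comparators could do each round in constant time, giving O(n) parallel time (the total work is still O(n^2)). A small Python sketch, simulating the rounds sequentially:

```python
def odd_even_transposition_sort(a):
    """Parallel bubble sort: n rounds, alternating between comparing
    (even, even+1) pairs and (odd, odd+1) pairs.  The pairs within a round
    are disjoint, so with one comparator per pair each round is O(1)
    parallel time, and n rounds always suffice."""
    a = list(a)
    n = len(a)
    for round_no in range(n):
        start = round_no % 2  # even rounds: (0,1),(2,3),...; odd rounds: (1,2),(3,4),...
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

# Example: odd_even_transposition_sort([5, 3, 8, 1, 2]) -> [1, 2, 3, 5, 8]
```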