If we have a long sequence of, say, integers and we want to test it for randomness, we can measure its Shannon entropy: the higher the entropy, the more disordered or random the sequence is. Alternatively, we could apply Chaitin's compressibility test. If you can generate the sequence with a program that takes up less space than the sequence itself, then the sequence has redundancy; if you cannot compress the sequence, then it contains maximal information. But a truly random sequence is incompressible, so it seems that maximal information corresponds to maximal disorder. My first question is: what is the best framework to adopt so that we can accommodate this view without getting confused?

A related question concerns the digits of pi. They are not random in Chaitin's sense, because we can compress the sequence by writing a short algorithm that generates pi. But suppose I hand you the digits of pi with the first 100 digits deleted: could you compress that sequence? I can, because all I have to do is write down an algorithm for pi and discard the first hundred digits. To you, however, the sequence looks totally random; it might take you forever to guess that it is pi in disguise, so you would quickly give up trying to compress it and conclude that it is incompressible. So another question is whether Chaitin's viewpoint is truly a useful one, since it seems to depend on one's foreknowledge: in other words, ignorance about the data can affect your viewpoint.
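To make the two tests in the first paragraph concrete, here is a minimal Python sketch (standard library only, my own illustration rather than anything canonical) that compares the empirical Shannon entropy of the symbol frequencies with a zlib compression ratio; the latter is only a crude, computable stand-in for the uncomputable program-size complexity that Chaitin's test appeals to, and the sequences and function names are invented for the example.

```python
import math
import random
import zlib
from collections import Counter

def shannon_entropy(seq) -> float:
    """Empirical Shannon entropy of the symbol frequencies, in bits per symbol."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size: a rough, computable proxy for
    the program-size (Chaitin/Kolmogorov) complexity of the data."""
    return len(zlib.compress(data, 9)) / len(data)

# A highly structured sequence and a pseudorandom one of the same length.
structured = bytes([1, 2, 3, 4] * 2500)
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10_000))

for name, data in (("structured", structured), ("pseudorandom", noisy)):
    print(f"{name:>12}: entropy = {shannon_entropy(data):.2f} bits/symbol, "
          f"zlib ratio = {compression_ratio(data):.3f}")
```

On the structured sequence the entropy is low and zlib shrinks it dramatically; on the pseudorandom one the entropy is close to 8 bits per byte and zlib achieves essentially no compression, which is the "maximal information corresponds to disorder" effect described above.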
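And here is a sketch of the "pi with the first 100 digits deleted" scenario, assuming the third-party mpmath package is available for generating digits of pi (that choice, and the 1000-digit sample size, are just my assumptions for illustration). A general-purpose compressor treats the shifted digits and genuinely random digits about the same, yet this short script is itself a compact description of the shifted sequence for anyone who knows the rule.

```python
import random
import zlib
from mpmath import mp   # third-party dependency, assumed available for this sketch

def zlib_ratio(s: str) -> float:
    """Compressed size / original size for an ASCII digit string."""
    b = s.encode("ascii")
    return len(zlib.compress(b, 9)) / len(b)

mp.dps = 1200                            # work with ~1200 significant digits of pi
digits = str(mp.pi).replace(".", "")     # "31415926535..."
mystery = digits[100:1100]               # delete the first 100 digits, keep the next 1000

random.seed(0)
noise = "".join(random.choice("0123456789") for _ in range(1000))

print("shifted pi digits:", round(zlib_ratio(mystery), 3))
print("random digits:    ", round(zlib_ratio(noise), 3))
# Both ratios come out nearly identical: zlib only exploits the digit
# frequencies, not the hidden rule "these are the digits of pi from
# position 101 onwards", which this script encodes in a few lines.
```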