Sat, Mar. 26th, 2011, 11:41 am

A bit like Jaime did with Ruby ( http://linbsd.org/ ), I've used the Python test scripts to compare different OSes.



All OSes are installed on the same machine (each on its own partition) with default install parameters. All hardware remained the same during the tests. The machine is a four-year-old PC with a 2.4 GHz AMD CPU and 2 GB of RAM.



Tests were run with the latest Python 2 release: 2.7.1.



To avoid binary package glitches, I downloaded and compiled Python myself on each OS by executing the standard ./configure; make. The compiler and tools are the defaults provided by each OS.



To get the time spent on each test, I added just one line to the regrtest.py script provided within the Python tarball:

line 581: print test_times

In other words, at the end of the run, this prints a list of tuples: (time spent, test name).
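To give an idea of what that added line produces, here is a small sketch of how such a list of (time spent, test name) tuples can be sorted and printed. The actual figures below are made up for illustration, not taken from the benchmark runs.

```python
# Illustrative sample of the (time spent, test name) tuples that the
# added "print test_times" line emits at the end of a regrtest.py run.
test_times = [
    (0.06109, "test_abc"),
    (0.00120, "test_abstract_numbers"),
    (1.49062, "test_aifc"),
]

# Sort slowest-first and print a readable summary line for each test.
for elapsed, name in sorted(test_times, reverse=True):
    print("%-25s %8.5f s" % (name, elapsed))
```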



I've tested it with FreeBSD 8.2, NetBSD 5.1, Ubuntu 10.10 and Archlinux. The Archlinux system was fully updated just before the tests, so I'll call it Arch 03/2011.



In detail, the command used after compilation was:

./python -Wd -3 -E -tt Lib/test/regrtest.py -ucpu



(This is what gets executed if you perform a "make test".) I've focused only on the CPU set of tests.



The global results are:

------------------------



Netbsd 5.1 wins 64

FreeBSD 8.2 wins 14

Ubuntu 10.10 wins 84

Arch 03/2011 wins 196



But having two results very close to each other does not really mean that one OS is significantly better than the others. To take this into account, I've introduced a tolerance principle: every OS whose result is very close (within x percent) to the best result also wins the test.
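The tolerance rule above can be sketched in a few lines of Python. This is a minimal illustration, assuming the per-test times are already collected into a dict of OS name to seconds (None for a skipped test); the example figures are the test_aifc numbers that appear in the detailed results.

```python
# Sketch of the "tolerance principle": every OS whose time is within
# `tolerance` percent of the best time also wins the test.
def winners(times, tolerance):
    """times: dict of OS name -> seconds (None if the test was skipped).
    Returns the list of OSes within `tolerance` percent of the best time."""
    valid = {os_: t for os_, t in times.items() if t is not None}
    if not valid:
        return []  # "no winners" for tests that ran nowhere
    best = min(valid.values())
    return [os_ for os_, t in valid.items()
            if (t - best) / best * 100.0 <= tolerance]

# Example with the test_aifc figures from the detailed results:
times = {"Netbsd 5.1": 1.49062, "FreeBSD 8.2": 1.17668,
         "Ubuntu 10.10": 1.19231, "Arch 03/2011": 1.15902}
print(winners(times, 0))           # only the best OS wins
print(sorted(winners(times, 5)))   # everyone within 5% also wins
```

With tolerance 0 only Arch 03/2011 wins; with 5% tolerance FreeBSD 8.2 (delta 1.52%) and Ubuntu 10.10 (delta 2.87%) win as well, matching the detailed report.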



If we accept a tolerance of 5%, the results are:

Netbsd 5.1 wins 97

FreeBSD 8.2 wins 34

Ubuntu 10.10 wins 145

Arch 03/2011 wins 245



If we accept a tolerance of 10%, the results are:

Netbsd 5.1 wins 136

FreeBSD 8.2 wins 54

Ubuntu 10.10 wins 186

Arch 03/2011 wins 275







The detailed results are:

----------------------------------

(LiveJournal does not allow me to post the full reports, because they are too long. I thus post a subset of them. Please contact me (william _dot os4y at_ gmail dot_ com) if you want the full ones by email.)



- With tolerance 0:

=============

test_abc
  Netbsd 5.1:   0.06109, delta = 0.00%
  FreeBSD 8.2:  0.06670, delta = 9.18%
  Ubuntu 10.10: 0.06814, delta = 11.54%
  Arch 03/2011: 0.08548, delta = 39.92%
  winner is: Netbsd 5.1

test_abstract_numbers
  Netbsd 5.1:   0.00120, delta = 0.16%
  FreeBSD 8.2:  0.00256, delta = 114.16%
  Ubuntu 10.10: 0.00140, delta = 17.51%
  Arch 03/2011: 0.00119, delta = 0.00%
  winner is: Arch 03/2011

test_aepack
  Netbsd 5.1: NA  FreeBSD 8.2: NA  Ubuntu 10.10: NA  Arch 03/2011: NA
  no winners

test_aifc
  Netbsd 5.1:   1.49062, delta = 28.61%
  FreeBSD 8.2:  1.17668, delta = 1.52%
  Ubuntu 10.10: 1.19231, delta = 2.87%
  Arch 03/2011: 1.15902, delta = 0.00%
  winner is: Arch 03/2011

test_al
  Netbsd 5.1: NA  FreeBSD 8.2: NA  Ubuntu 10.10: NA  Arch 03/2011: NA
  no winners

test_anydbm
  Netbsd 5.1:   0.00736, delta = 0.00%
  FreeBSD 8.2:  0.01428, delta = 94.00%
  Ubuntu 10.10: 1.17291, delta = 15838.95%
  Arch 03/2011: 0.03939, delta = 435.24%
  winner is: Netbsd 5.1

...



- With 5% tolerance:

================

test_abc
  Netbsd 5.1:   0.06109, delta = 0.00%
  FreeBSD 8.2:  0.06670, delta = 9.18%
  Ubuntu 10.10: 0.06814, delta = 11.54%
  Arch 03/2011: 0.08548, delta = 39.92%
  winner is: Netbsd 5.1

test_abstract_numbers
  Netbsd 5.1:   0.00120, delta = 0.16%
  FreeBSD 8.2:  0.00256, delta = 114.16%
  Ubuntu 10.10: 0.00140, delta = 17.51%
  Arch 03/2011: 0.00119, delta = 0.00%
  winners are: Netbsd 5.1, Arch 03/2011

test_aepack
  Netbsd 5.1: NA  FreeBSD 8.2: NA  Ubuntu 10.10: NA  Arch 03/2011: NA
  no winners

test_aifc
  Netbsd 5.1:   1.49062, delta = 28.61%
  FreeBSD 8.2:  1.17668, delta = 1.52%
  Ubuntu 10.10: 1.19231, delta = 2.87%
  Arch 03/2011: 1.15902, delta = 0.00%
  winners are: FreeBSD 8.2, Ubuntu 10.10, Arch 03/2011

test_al
  Netbsd 5.1: NA  FreeBSD 8.2: NA  Ubuntu 10.10: NA  Arch 03/2011: NA
  no winners

test_anydbm
  Netbsd 5.1:   0.00736, delta = 0.00%
  FreeBSD 8.2:  0.01428, delta = 94.00%
  Ubuntu 10.10: 1.17291, delta = 15838.95%
  Arch 03/2011: 0.03939, delta = 435.24%
  winner is: Netbsd 5.1

...





- With 10% tolerance:

==================

test_abc
  Netbsd 5.1:   0.06109, delta = 0.00%
  FreeBSD 8.2:  0.06670, delta = 9.18%
  Ubuntu 10.10: 0.06814, delta = 11.54%
  Arch 03/2011: 0.08548, delta = 39.92%
  winners are: Netbsd 5.1, FreeBSD 8.2

test_abstract_numbers
  Netbsd 5.1:   0.00120, delta = 0.16%
  FreeBSD 8.2:  0.00256, delta = 114.16%
  Ubuntu 10.10: 0.00140, delta = 17.51%
  Arch 03/2011: 0.00119, delta = 0.00%
  winners are: Netbsd 5.1, Arch 03/2011

test_aepack
  Netbsd 5.1: NA  FreeBSD 8.2: NA  Ubuntu 10.10: NA  Arch 03/2011: NA
  no winners

test_aifc
  Netbsd 5.1:   1.49062, delta = 28.61%
  FreeBSD 8.2:  1.17668, delta = 1.52%
  Ubuntu 10.10: 1.19231, delta = 2.87%
  Arch 03/2011: 1.15902, delta = 0.00%
  winners are: FreeBSD 8.2, Ubuntu 10.10, Arch 03/2011

test_al
  Netbsd 5.1: NA  FreeBSD 8.2: NA  Ubuntu 10.10: NA  Arch 03/2011: NA
  no winners

test_anydbm
  Netbsd 5.1:   0.00736, delta = 0.00%
  FreeBSD 8.2:  0.01428, delta = 94.00%
  Ubuntu 10.10: 1.17291, delta = 15838.95%
  Arch 03/2011: 0.03939, delta = 435.24%
  winner is: Netbsd 5.1

...