Posted by Sander_Marechal on Dec 10, 2007 3:28 PM EDT

LXer Linux News; By H.Kwint (The Netherlands)


LXer Feature: 10-Dec-2007

Back in 1999, the first PC entered our house, preloaded with Windows 98. One of the things I liked about it was the defragmentation screen, where little blocks representing data on the hard drive moved across the screen for what seemed like an eternity. At the time, defragmentation seemed like a logical maintenance requirement for any filesystem. However, when I started using Linux four years later, I was told that with Linux I didn't need to defragment my filesystems anymore, since Linux filesystems don't get fragmented in the first place. That left me puzzled, but after a few years of using Linux without defragmenting my filesystems - and without any problems! - defragmentation started to seem like something antique. Nonetheless, I still wondered how on earth the 100k+ files in Gentoo's portage tree - rewritten every time I synchronize it - didn't fragment my filesystem. Or was my filesystem fragmented without me knowing it? Only recently I found a script that can actually put numbers to these gut feelings, and the results were quite surprising, in my opinion.

It was in an LXer discussion that one of our readers suggested all those portage updates and the temporary files of the compilation process lead to serious fragmentation of a Gentoo Linux system. It's true: looking at the output of an 'emerge' process, a tremendous number of files is created, copied or moved, and finally deleted - not to mention what happens when the portage tree is updated. For normal usage, a Gentoo Linux system probably performs one of the largest numbers of file operations of this kind. Therefore, I figured, the filesystems on a Gentoo Linux system were the ideal test to see whether Linux filesystems could live up to my expectations. However, since most people believe Linux filesystems don't need defragmentation, it was quite hard to find a tool that measures filesystem fragmentation. Nonetheless, I found one in the Gentoo forums.
Here's the great part: you can use it yourself to measure the fragmentation of your own filesystems! I don't know under what license the content of the Gentoo forums is published, so I'll provide a short HOWTO instead. Credits go to Gentoo user _droop_ for producing this Perl script.



Copy the Perl code from the forum and paste it into your favourite editor. Save it, for example as frag.pl, and change the file mode so it becomes executable:

$ chmod 750 frag.pl

Then run it on a filesystem:

$ ./frag.pl <<argument>>



The argument should be the mount point of one of your mounted filesystems if you want to know the fragmentation of a particular filesystem, like / or /usr. Perhaps more interesting is that you can fill in any directory you like. For example, I tested it on the directory where my P2P program occasionally puts some music, /home/kwint/music - only to find out that the average mp3 in that directory consists of more than 50 fragments! It seems you must be root to run the scan, so I advise using sudo. A typical outcome would be: $ sudo ./frag.pl /root

Password:

3.2258064516129% non contiguous files, 1.04838709677419 average fragments.
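The two statistics the script reports are simple to compute: the percentage of files that consist of more than one fragment, and the mean number of fragments per file. As a rough illustration of the idea - this is my own sketch, not _droop_'s actual Perl code, which you should take from the forum thread - here is how those numbers can be derived in Python from the per-file extent counts reported by the filefrag tool (part of e2fsprogs):

```python
import re

# Matches filefrag's one-line summary, e.g. "/root/big.iso: 3 extents found"
FILEFRAG_LINE = re.compile(r"^(?P<name>.*): (?P<extents>\d+) extents? found$")

def parse_filefrag(lines):
    """Extract the fragment (extent) count per file from filefrag output."""
    counts = []
    for line in lines:
        m = FILEFRAG_LINE.match(line.strip())
        if m:
            counts.append(int(m.group("extents")))
    return counts

def fragmentation_stats(counts):
    """Return (% of non-contiguous files, average fragments per file)."""
    if not counts:
        return 0.0, 0.0
    non_contiguous = sum(1 for c in counts if c > 1)
    return 100.0 * non_contiguous / len(counts), sum(counts) / len(counts)

# In practice you would feed it real output, e.g. from:
#   filefrag $(find /root -type f)    (needs e2fsprogs, usually root)
sample = [
    "/root/.bashrc: 1 extent found",
    "/root/notes.txt: 1 extent found",
    "/root/big.iso: 3 extents found",
]
pct, avg = fragmentation_stats(parse_filefrag(sample))
print(f"{pct}% non contiguous files, {avg} average fragments")
```

With that interpretation, the /root result above means only a small fraction of the files consist of more than one fragment, and the average file is nearly contiguous.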





Now let's look at some actual numbers from my two Gentoo boxes. In both cases the portage tree has been synchronized more than fifty times, and both boxes have been in use for about a year and a half. The results: less than 4% of the files on my /usr and /var partitions are fragmented. On Gentoo, /var is where the actual compiling takes place; once compiled, Gentoo moves the files to /usr. The files there consist of at most 1.11 fragments on average. I would consider that a very neat score - especially since I use the -notail and -noatime options for ReiserFS, and it seems those options make ReiserFS more susceptible to file fragmentation.

Back to my results: I'm sure the Windows defragmentation utility would say I don't need to defragment. But what if I _would_ like to defragment my filesystem - would that be possible? Well, though there's no official 'compiled' application for it as far as I could find, you can try Con Kolivas' script. For example, my /home partition contains 13G of files. Some of them are rather large, like one-gigabyte VMware image files and some Linux ISOs; all of it is on ReiserFS again, by the way. The results here were really bad: 6 fragments per file on average. This seemed like a good testing ground for Con's script, so I gave it a spin, and the results were quite stunning. I first tested it on my /home/kwint/music directory, where the average number of fragments per file was the 50+ I mentioned above. After running Con's script this was reduced to 1.1, and the percentage of non-contiguous files dropped from 70% to 10%! It took only seven seconds to complete, and only 21 large files were touched. For your information, I'm running 2x5400rpm PATA disks in software RAID0 by means of EVMS here.

Since this was a success, I decided I might as well 'defragment' my whole /home partition. This time over 6000 files had to be defragmented, which of course took far more time: 22 minutes eventually.
It was worth it: while the percentage of fragmented files only dropped from 8% to 7% - that must be because only some large files were defragmented - the average number of fragments per file dropped from 6 to 1.5.
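The core idea behind this kind of userspace defragmentation is simple: rewriting a file from scratch gives the filesystem's allocator a fresh chance to lay it out contiguously. Here is a minimal Python sketch of that rewrite-and-replace idea - my own simplification for illustration, not Con Kolivas' actual shell script; it ignores sparse files, ownership details, and files that change while being copied:

```python
import os
import shutil
import tempfile

def rewrite_in_place(path):
    """'Defragment' a file by rewriting it: the fresh copy is usually
    allocated contiguously, then atomically replaces the original."""
    dirname = os.path.dirname(os.path.abspath(path))
    # Create the temporary copy in the same directory, so it lives on the
    # same filesystem and the final rename is atomic.
    fd, tmp = tempfile.mkstemp(dir=dirname)
    os.close(fd)
    try:
        shutil.copy2(path, tmp)   # copy data plus mtime and permissions
        os.replace(tmp, path)     # atomically swap the new copy in
    except BaseException:
        os.unlink(tmp)            # clean up the temp file on failure
        raise
```

Note the usual caveat with this approach: it needs enough free space for a full second copy of the largest file, and a nearly full filesystem may give the new copy no better layout than the old one.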



Conclusion

Though I only tested two PCs, here's what I found: file fragmentation mainly happens in filesystems that contain some large files. You don't have to worry about your /usr or /var directories being fragmented, since they don't contain many large files. As the results show, it's worth the effort to try Con Kolivas' defragmentation script. Please, if you try the scripts, share your findings and react in the threads below!