...making Linux just a little more fun!

The Front Page
By Heather Stern

Ubuntu Linux is a Debian-derivative distribution of Linux aimed at being fairly easy for someone new to the Linux environment - the ordinary user - and at having a somewhat different, and thereby hopefully faster, version release process. Their motto is "Linux for Human Beings", and Canonical Ltd. in the UK supports them professionally.

This image was inspired by the IRC chat clients found in the Ubuntu default installation (or perhaps their live CD), whose quitline proclaims "set your system free with Ubuntu!" or something like that. It led an IRC friend to joke about herds of wildelaptops wandering free across the African plains... Did you know that the word "wildebeest" refers to a gnu? Or that the GNU project's SourceForge equivalent is named Savannah? I found the opportunity to wish everyone a visual "Happy GNU Year" too good to pass up.

The African herd that this image describes consists of an image from Dr. Steve Barrett's safari photos (I hope he doesn't mind) and some pictures of various laptops. Specifically, this shows an Asus 3300, a Toshiba Tecra 8200, and clipart of a Powerbook and something more generic. That's right, kids; Ubuntu has a PPC edition that runs quite nicely on Powerbooks. I've found that the more generic a laptop, the better Linux's chances seem to be... but don't hold your breath about any softmodems. The Blue Wildebeest is a common breed, but the IBM imagery I found didn't seem to suit the angles I needed.

May your gnu year be just a little more fun than last year...

Heather is Linux Gazette's Technical Editor and The Answer Gang's Editor Gal. Heather got started in computing before she quite got started learning English. By 8 she was a happy programmer, by 15 the system administrator for the home... Dad had finally broken down and gotten one of those personal computers, only to find it needed regular care and feeding like any other pet. Except it wasn't a Pet: it was one of those brands we find most everywhere today...

Heather is a hardware agnostic, but has spent more hours as a tech in Windows-related tech support than most people have spent with their computers. (Got the pin, got the jacket, got about a zillion T-shirts.) When she discovered Linux in 1993, it wasn't long before the home systems ran Linux regardless of what was in use at work. By 1995 she was training others in using Linux - and in charge of all the "strange systems" at a (then) 90-million-dollar company. Moving onwards, it's safe to say Linux has been an excellent companion and breadwinner...

She took over the HTML editing for "The Answer Guy" in issue 28, and has been slowly improving the preprocessing scripts she uses ever since. Here's an autobiographical filksong she wrote called The Programmer's Daughter.



More 2 Cent Tips!

See also: The Answer Gang's Knowledge Base and the LG Search Engine

Just Bad and Wrong... yet cool

BSS and Data Segment

email issue

Running isoLINUX from a REALLY old machine

Tuppence Tip: URL scraper.

Global home movies with cron and Imagemagick

Just Bad and Wrong... yet cool

Rick Moen (

Question by ()

Answered By: Jimmy O'Regan, Ben Okopnik

Rick Moen (LG Contributing Editor)

"bash is terrifying me again": http://www.livejournal.com/users/mendel/128965.html

Bash pseudo-sockets: Oh my!

...followed by: "Dude, you've got a smiley in your prompt."

[Jimmy] Followed by:

fishsticks$ false

fishsticks$

[Raj] How did he do that? I want something like that too. If anyone knows, do tell.

[Jimmy]

jimregan@jimmy ~ $ echo $PS1
\[\033]0;\w\007 \033[32m\]\u@\h \[\033[33m\w\033[0m\] $
jimregan@jimmy ~ $ smiley () { if [ $? == 0 ]; then echo ':)'; else echo ':('; fi; }
jimregan@jimmy ~ $ PS1="\$(smiley) \h$ "
:) jimmy$ false
:( jimmy$ true
:) jimmy$

Um... I should probably explain that. $? holds the exit status of the last process: zero for success, non-zero for failure; $PS1 is the primary prompt. It's \$(smiley) because you want bash to evaluate it every time: $(smiley) would just evaluate it at the time you set the variable, as would putting the contents of the smiley function into PS1. \h is the shorthand for $(hostname).

[Ben] Darn it, Jimmy, you always beat me. What, you have a fast connection or something?

[Jimmy] My connection, in "the cheque is in the mail" speed, is 115.2 Kbps, so no.

[Ben] My way was just slightly different:

# More-or-less undocumented Bash-ism - a C-style trinary operator
smiley () { echo -e ":\\$(($??50:51))"; }

BSS and Data Segment

Mike Zheng (

Question by tag (tag from lists.linuxgazette.net)

Answered By: Lew Pitcher

Mike Zheng (mail2mz from gmail.com)

Hi All,

For a program, the BSS contains uninitialized variables and Data contains initialized variables. Why do we want to separate these two categories of variables?

[Lew] It's an optimization thing. If each binary contained an image of 'data' that included the uninitialized variables, then the binary would be larger in size than if the image did not include uninitialized variables. If you don't map these variables to your 'load image' (the contents of the binary), but instead map them to memory at execution time, your binaries can be smaller without a loss of functionality.

The 'typical' (conceptual) memory map (once a program has been loaded into memory) looks something like this...

+----------------+
|  stack space   |
|       :        |
|       v        |
|                |
|                |
|       ^        |
|       :        |
|   heap space   |
| .. .. .. .. .. |
|      BSS       |
+----------------+
|      DATA      |
|                |
+----------------+
|     (CODE)     |
|                |
+----------------+

(On most implementations, CODE is kept in a separate address space from DATA/BSS/heap/stack.)

The heap 'grows' up from the end of BSS towards the highest address, and the stack 'grows' down from the highest address towards the end of the BSS. Heap allocation code keeps a boundary between the top end of the heap and the bottom end of the stack.

The 'loadable' part of the binary is the CODE and DATA part. BSS is how the loader accounts for the room needed to hold the uninitialized data.

Some programs clear the BSS themselves; is that necessary?

[Lew] A mere convenience, nothing more. HTH

email issue

J.Bakshi (

Question by tag (tag from lists.linuxgazette.net)

Answered By: Jimmy O'Regan, Rick Moen

J.Bakshi (hizibizi from spymac.com)

Hi list,

First, I wish you all a very ******HAPPY NEW YEAR******. Here is a technical query: frequently I get some advertisement e-mails with an EMPTY "From" header and an EMPTY "To" header. Even the "detail-header-view" of sylpheed-claws can't show any header information. What is the technology which makes this possible, and how can we prevent such e-mails?

[Jimmy] The technology that makes this possible is called [drumroll]... SMTP! To see how it works, use telnet:

$ telnet smtp.o2.ie 25
220 smtp1.o2.ie -- Server ESMTP (Sun ONE Messaging Server)
MAIL FROM: test@example.com
250 2.5.0 Address Ok.
RCPT TO: jimregan@o2.ie
250 2.1.5 jimregan@o2.ie OK.
DATA
354 Enter mail, end with a single ".".
From:
To:
Subject: Test

A test
.
250 2.5.0 Ok.
QUIT
221 2.3.0 Bye received. Goodbye.

Now, I check my mail. In Mozilla Mail, it shows up as this:

Subject: Test
Date: 11:45

A test

View->Message Source: See attached mbox.txt

What'll be interesting now will be to see how many mail apps that use mboxes are broken by that.

[Rick] J.Bakshi's other question was: "...how can we prevent such emails?" The answer is [drumroll]... don't use SMTP e-mail. Or alter your receiving SMTP server's filtering rulesets to reject or drop incoming mails not meeting certain technical requirements, including possession of key header lines. Generally, this requires running your own SMTP host (mail transfer agent) -- rather like A.J. Liebling's dictum that freedom of the press is limited to those who own one.

Thanks to both Rick Moen and Jimmy O'Regan for their explanations. Jimmy's answer is a bonus, as he explained through a good practical example. I have just come to know of mixmaster. Though I haven't gone through its documentation yet, "aptitude show mixmaster" shows that it can prevent the recipient knowing your email address.
[Rick] Er, you might have a use for Mixmaster, but the preceding discussion didn't seem to suggest that, and I fear that you might be misunderstanding its intended purpose. Mixmaster is a client-server software tool for people running MTAs (SMTP machines), and their users, with the client piece allowing the users to issue pseudonymous e-mails, which the mixmaster daemon (server) piece then remails on their behalf.

It would be very strange to send your routine, normal outgoing mail through Mixmaster, specifically because it would greatly obscure the identity of the sender. Further, and more to your original point, it would do nothing to prevent you from also receiving spammer-and-malware-generated junkmails at your real e-mail address, such as the garbage mail you cited that had a couple of null interior headers.

As Ben knows, I've long been of the opinion that pretty much all "hide from spammers" strategies (including, well, "preventing the recipient knowing your address") are a dumb idea, really don't work worth a damn, and interfere with legitimate Internet usage to a degree I find unacceptable. At least for those of us who control our own MTAs, there are effective countermeasures that don't involve self-concealment in any way.

Thanks, Rick. You have provided a short as well as good article on Mixmaster and its application (for beginners, if I may be permitted to say). Now I can understand that as a workstation owner (and not an MTA) I don't need Mixmaster. Thanks again.

Running isoLINUX from a REALLY old machine

R.M.Deal (
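Rick's earlier suggestion - rejecting incoming mail that lacks key header lines - is normally expressed in the MTA's own ruleset language, but the idea itself is simple. Purely as an illustration (this is a hypothetical sketch, not from the thread, and not a substitute for real MTA configuration), a shell function that fails a message with a missing or empty From:/To: header might look like:

```shell
#!/bin/sh
# Hypothetical sketch: check that a message's From: and To: headers
# exist and are non-empty. A real deployment would do this inside the
# MTA's own filtering rulesets, as Rick describes.
check_headers() {
    # the header section ends at the first blank line
    headers=$(printf '%s' "$1" | sed '/^$/q')
    for h in From To; do
        # require the header to exist and have something after the colon
        if ! printf '%s\n' "$headers" | grep -Eiq "^$h:[[:space:]]*[^[:space:]]"; then
            echo "reject: missing or empty $h header"
            return 1
        fi
    done
    echo accept
}

good='From: a@example.com
To: b@example.com
Subject: Test

A test'

bad='From:
To:
Subject: Test

A test'

check_headers "$good"           # accept
check_headers "$bad" || true    # reject: missing or empty From header
```

The "bad" message above is exactly the shape Jimmy produced in his telnet session: headers present but empty.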

Question by tag (tag from lists.linuxgazette.net)

Answered By: Ben Okopnik

R.M.Deal (deal from kzoo.edu)

I just read the enlightening article "Booting Knoppix from a USB Pendrive via Floppy" by Ben Okopnik in issue #116 of the Linux Gazette.

[Ben] Well, thank you for the compliment - and I'm glad you enjoyed the article.

I have a similar problem but with an older machine, a Toshiba Satellite Pro 410 CDT, for which I have an external floppy drive. My problem is that the BIOS does not give me the option of booting from a CD. I can boot from either the hard disk or from the floppy.

[Ben] Have you looked for a BIOS update? I'm not saying that one is definitely available, but if it is, then that would be the simplest solution. Take a look at Toshiba's site and see if there's a download available.

Great suggestion. I shall do that. Now, while I found one article on booting Knoppix from floppies, I wonder if it would be a big deal to modify the script developed by Ben to produce a Knoppix boot, not for a USB pen drive (this portable has NO USB port) but from a floppy. By the way, the Toshiba has Windows 95 installed as its operating system and has only 16 MB of RAM, so running even KNOPPIX may be a challenge.

[Ben] The problem is that, shortly after I wrote the article, the information in it became outdated: the next release of Knoppix had a minimum size for the kernel/modules/etc. that was much too large to fit on a floppy. However, for your application, there's an easy answer - or at least there used to be (I don't have a Knoppix CD handy to check): just fire up your Wind0ws, look at the Knoppix CD, and there should be a batch file there called "mkfloppy.bat". Run it; after it writes a boot floppy for you, simply reboot, leaving both the floppy and the CD in the machine.

Well, almost.
I could not get anything under my Windows boot on the portable (a laptop only if you have strong quadriceps), so I booted my LINUX system (SuSE 9.3) with KNOPPIX and couldn't find a mkfloppy.bat there.

[Ben] That wouldn't have helped anyway, since all the commands in the batch file are DOS programs.

Right, but I could not find the batch file from W95 or SuSE either. However, in a file knoppix-cheatcodes.txt, the procedure to handle a floppy-only boot is described, using a LINUX command in KNOPPIX named mkbootfloppy. However, when I boot with KNOPPIX, I can find no such file, only a mkboot. That mkboot command looks right, but when I run it with a blank floppy installed, after writing to the floppy drive, I get an error message - in German. Now, I am living in Germany (Weimar) and am taking a course in German, but I do not have the resources to fully translate technical German.

[Ben] Whoops... Perhaps you could post that error? My technical German has risen above zero (in part, due to Knoppix), and there are several German-speaking folks here who could help.

OK. Attached is a script of what happened when I tried to produce the boot floppy. It was interesting making the script in KNOPPIX and transferring it to /mnt/hda2/home/ralph/ after mounting /mnt/hda2. Please note that I am working with Knoppix V3.8.1 (2005/04/0 . I am not updating right now because I cannot get DSL in Weimar despite having had it in Munich, and so am having to use ISDN (hence my switch from Fedora to SuSE), on which I can get no flat rate! Once DSL is available ("tomorrow"), I'll update my version.

Script wurde gestartet: Mo 21 Nov 2005 10:21:56 CET
root@4[~]# mkboot
Insert a floppy diskette into your boot drive, and press <Return>.
Creating a lilo bootdisk...
cp: Schreiben von ,,/tmp/boot15663/vmlinuz": Auf dem Gerät ist kein Speicherplatz mehr verfügbar
root@4[~]# exit
Script beendet: Mo 21 Nov 2005 10:23:11 CET

[Ben] "No space left on device". At least the version with the umlauts, etc.
- "Auf dem Gerät ist kein Speicherplatz mehr verfügbar" - means that. It seems this is not all that uncommon: searching the Net produces a number of hits. In some cases, it's produced by bad floppies - but in other cases, it's a result of bad communication with an external floppy drive. I'd suggest making the floppy from your Wind0ws, just to see if it works. Having two OSes can be handy for resolving "is it hardware or software?" questions.

I plan on working on deciphering that error message, but meanwhile I took the floppy that resulted from that action and tried to boot my portable with it. Unfortunately, it only booted into W95; it seems the boot floppy is faulty. You referred to two floppy disks, and so did the cheatcodes.txt file, but there was no prompt to put in a second floppy. It may be time for me to contact the programmer of KNOPPIX directly to see what the problem is. I'm guessing that the cheatcodes.txt file was written for an earlier version of KNOPPIX and doesn't describe the procedure correctly for the new version. Did I say LINUX was fun??

[Ben] Sure. Part of the fun is the ability to make things happen that aren't the norm.

Second, related, question: does anyone recommend a version (old if necessary) of LINUX that would work well as the operating system on the Toshiba? If these are in some FAQ, I'd appreciate a link.

[Ben] I find that Puppy Linux (http://www.goosee.com/puppy) works well on older machines, although I haven't done a broad range of experimentation with it; it has a nice desktop and a sufficient range of installed applications that I don't find it restrictive. Other folks here may have other suggestions.

Ain't LINUX fun,
Ralph

[Ben] [grin] Actually, yeah. I like it, m'self.

Tuppence Tip: URL scraper.

Thomas Adam (

Question by TAG (tag from lists.linuxgazette.net)

Thomas Adam (thomas from edulinux.homeunix.org)

Hello --

This is really an on-going issue from my post regarding urlview and the logging of URLs. I've since decided to take a different approach, and thus far this method works quite nicely. I'm now using 'multi-gnome-terminal' (MGT), 'multitail', 'gmrun', plus a helper script. The overall aim of all of this was to be able to:

Capture URLs from various #channels that I am in (done via irssi already).

Open them up in a web browser.

You might wonder what's so hard about this -- the problem is that X11 forwarding on my server takes forever -- it is only a poor P166 with 64MB of RAM, after all. The irssi session resides on the server, so I needed a way of pseudo-opening the URLs as though the request originated on my workstation.

It's unfortunate that I have to use MGT, since it is a memory hog, but needs must. I use it because it has the ability to automatically hotlink URLs -- so that actions can be assigned to them when they're clicked on. Based on this premise, filling in the gaps was easy. I mount my server's filesystem via 'shfs' -- which I'm now using as a replacement for NFS. I really like it (and it's a lot less buggy than lufs, and its ilk). This way, I can use multitail to keep an eye on that file.

The URL logging script that I use from irssi is "url_log.pl" [1]. All of the logged entries are in the format:

Sat 10 Dec 2005 00:57:08 GMT nick #chan URL

... and I wanted multitail to colourise the output, as it does for other files. That was easy -- just create a new colourscheme for it in /etc/multitail.conf:

###URLlog
colorscheme:urllog
cs_re:green:^... .. ... ....
cs_re:magenta:..:..:.. ...
cs_re:red:.* \#fvwm (http|https|ftp)://.*$
cs_re:blue:.* \#bash (http|https|ftp)://.*$
cs_re:cyan:.* \#elinks (http|https|ftp)://.*$
cs_re:yellow:.* \#hug (http|https|ftp)://.*$
scheme:urllog:/mnt/home/n6tadam/.irssi/urls/url

So to break this down a bit, remember that a typical entry from this file will look like:

Sat 10 Dec 2005 00:57:08 GMT nick #chan http://myfoo.com

Hence: "Sat 10 Dec 2005" will appear in green, "00:57:08 GMT" in magenta, and the rest of the line will appear in whichever colour is matched by the channel the URL was quoted in. So, it looks pretty. The line:

scheme:urllog:/mnt/home/n6tadam/.irssi/urls/url

... should obviously be changed to match whichever file is going to hold the URLs from the url_log.pl script.
The next stage was to determine what happens when I click on a URL (I say click -- the shortcut to opening a URL via MGT is 'CTRL + middleclick'). I didn't want everything to be sent to my browser. This is where the "gmrun" utility comes in useful [2]. For those of you that have never used it, it's a very handy and customisable tool. One of the features it has is pre-defining prefixes for certain applications. So, for instance, I could enter into gmrun:

man:bash

... and depending on what I had told gmrun to do with the 'man' prefix, it would open up the bash man page. Neat, eh?

So I wanted to have a separate program to open up images and URLs (it's quite often the case that people post links to screenshots that I don't want to open in a browser, but would just rather 'see'). I needed to use a helper script to do this, as gmrun accepts no command-line options. The trick I used (in order to make it appear directly in the gmrun window, as though I had typed it) was to append it to gmrun's history file -- if set correctly, gmrun will display the last entered command. No biggie, here it is:

See attached runvia.sh.txt

(Saved as ~/bin/runvia.sh -- and chmod 700 ~/bin/runvia.sh)

So, I'm able to flag to gmrun that if the URL I am clicking on is an image, then tell it so; else, flag it to open up in elinks (this is my primary browser -- although I wanted a specific handler for it). But in order for that script to process the URL that we clicked on from MGT, we need to tell MGT to perform that action. This is easier than you'd think, and involves editing the file $HOME/.gnome/Gnome, such that:

default-show=runvia.sh "%s"

Going back to gmrun, we lastly need to tell it what action to take for the 'elinks:' and 'image:' prefixes. That information is stored in /etc/gmrunrc, although I copy this to ~/.gmrunrc, personally, and edit it, so that for the image handler:

URL_image = sh -c 'feh %s'

('feh' has the ability to read images via http).
And for the elinks handler:

URL_elinks = sh -c '${TermExec} elinks -remote "%s" && FvwmCommand "All (*ELinks*) FlipFocus"'

"${TermExec}" is a variable defined further up in the file that looks like this:

Terminal = rxvt
TermExec = ${Terminal} +sb -ls -e

... and that's it. It seems to be working really well. Since I use FVWM, I wanted (when I had decided to click on a URL) to focus the web browser -- hence the reason why I'm using FvwmCommand. This is optional, of course. Although, to continue on a similar theme, the style of the "gmrun" dialogue window is set to the following:

Style Gmrun GrabFocus

... so that when it pops up, I can hit enter, knowing that the Gmrun window will always have the focus, to execute whatever is inside it.

You can see a screenshot [3] of the url-logger in action. Hope someone finds this useful, or can derive other ideas from it.

[1] http://www.irssi.org/scripts/html/url_log.pl.html
[2] http://www.bazon.net/mishoo/gmrun.epl
[3] http://edulinux.homeunix.org/~n6tadam/fvwm/ss/url-logger.png

Global home movies with cron and Imagemagick

Karsten M. Self (
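The runvia.sh helper in the URL-scraper tip above is attached rather than inlined. Purely as an illustration of the dispatching idea it describes (this is a hypothetical sketch, not Thomas's actual script), a helper that prefixes a clicked URL for gmrun might look like:

```shell
#!/bin/sh
# Hypothetical sketch of a runvia.sh-style dispatcher: choose an
# 'image:' or 'elinks:' gmrun prefix based on the URL's extension.
# (The real attached script also appends the result to gmrun's
# history file so it shows up pre-typed in the gmrun window.)
dispatch() {
    url=$1
    case "$url" in
        *.png|*.jpg|*.jpeg|*.gif) echo "image:$url" ;;
        *)                        echo "elinks:$url" ;;
    esac
}

dispatch http://example.com/shot.png    # image:http://example.com/shot.png
dispatch http://example.com/page.html   # elinks:http://example.com/page.html
```

With the URL_image and URL_elinks handlers from the tip in place, gmrun would then route each prefix to feh or elinks respectively.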

Question by tag (tag from lists.linuxgazette.net)

Karsten M. Self (karsten from linuxmafia.com)

I was just showing this to Heather. I've been trying to figure out what's going on with our strange California weather (tropical rainstorms for the past two weeks) and decided that moving satellite images were more interesting than static ones. Among other discoveries, we've been getting a tropical express mainlined to us over the Pacific for the past several weeks, and the movies make this very apparent.

There are two parts to this tip: first, you want to start grabbing satellite images as they're available; then you want to string them together to view them. Updates are 30 minutes and 3 hours for regional and hemispheric views, respectively.

For the fetch, I use cron to schedule downloads, storing them in a large local area, 'data/weather', with a timestamp added to the filename. Recipe: add to your personal crontab:

See attached crontab.txt

To animate images, say, from the past five days, you can animate 'find' results. To look at west-coast infrared images:

animate -pause 2 -delay 25 $( find . -mtime -5 -name WCIR\* | sort )

There's some cleanup you may need to do, as the downloaded images occasionally get out of order. I've found deleting duplicates seems to fix this pretty readily. You can also experiment with image enhancements via ImageMagick, such as increasing size and contrast, and reducing noise, in the downloaded images.

This page edited and maintained by the Editors of Linux Gazette
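The crontab for the satellite-image tip above is attached rather than inlined. Purely as an illustration of the shape of such an entry (the URL and path here are placeholders, not the ones Karsten actually used), a timestamped half-hourly fetch might look like:

```
# Hypothetical crontab entry - placeholder URL and path.
# Fetch the west-coast IR image every 30 minutes, stamping the filename.
# Note that % is special in crontabs and must be escaped as \%.
*/30 * * * * wget -q -O "$HOME/data/weather/WCIR-$(date +\%Y\%m\%d-\%H\%M).jpg" http://example.org/wcir-latest.jpg
```

The animate command in the tip then picks these files up by mtime and name pattern.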

HTML script maintained by Heather Stern of Starshine Technical Services, http://www.starshine.org/

The Answer Gang
Linux Gazette 122: The Answer Gang (TWDT)

The Answer Gang

By Jim Dennis, Jason Creighton, Chris G, Karl-Heinz, and... (meet the Gang) ... the Editors of Linux Gazette... and You! We have guidelines for asking and answering questions. Linux questions only, please.

We make no guarantees about answers, but you can be anonymous on request.

See also: The Answer Gang's Knowledge Base and the LG Search Engine

Contents:

¶: Greetings From Heather Stern
DirectPad Joystick
LN -- have I gone brain dead?
System V or BSD?
Yikes. --or-- Missing libraries

...and how to find them

Greetings from Heather Stern

DirectPad Joystick

From Triyan W. Nugroho
Answered By: Peter Knaggs

Hello Gang, Here in Indonesia we have a very cheap joystick using the DirectPad Pro driver (it's actually just a modified Playstation joystick).

[Peter] Does it connect to the parallel port of your Linux machine? If so, you might find the README and README-parport in the following helpful to get an understanding: ftp://atrey.karlin.mff.cuni.cz/pub/linux/joystick/joystick-1.2.15.tar.gz If, on the other hand, it connects to the game port of your sound card, don't use the parallel port driver, of course :)

Yes, it does connect to the parallel port. I have used it for a long time in Windows, but still have no idea how to use it with Linux.

[Peter] Did you read these pages? http://atrey.karlin.mff.cuni.cz/~vojtech/input http://www.linuxgames.com/joystick

Yes. But the documentation in the Linux kernel says that the kernel has support for DirectPad joysticks, using the gamecon module.

[Peter] Yes, "gamecon" is likely the right module to use.

So I think I don't have to compile the package provided in the abovementioned pages...

[Peter] True. Still, they're useful in that they combine the userland tools and the kernel drivers all in one place with some guides and FAQ and other such. It gives a good starting point, and from there you can work on the specific issues of the 2.6 kernel, once you know how to get the device to work correctly in a plain 2.4 kernel.

And I'm still confused, because the packages in these pages use different filenames.

[Peter] Yes, well, filenames change over time; code hardly changes all that much.

For example, the Linux kernel uses joydev.c, but in these pages it is joystick.c. And for DirectPad joysticks, the Linux kernel uses gamecon.c, while in these pages it is joy-console.c.

[Peter] Those would have been the names during the 2.4 series kernel.

I have been Googling and found that the Linux kernel actually has support for it.
And then I found several pieces of documentation in the Linux kernel, but I always fail :(

[Peter] What did you try, and how did it fail, specifically?

cd /dev
rm js*
mkdir input
mknod input/js0 c 13 0
mknod input/js1 c 13 1
mknod input/js2 c 13 2
mknod input/js3 c 13 3
ln -s input/js0 js0
ln -s input/js1 js1
ln -s input/js2 js2
ln -s input/js3 js3

[Peter] Well, those entries would be created by udev if you were using a 2.6.12 or later kernel, once the module was loaded correctly. It seems, though, that the module "gamecon" isn't loading for some reason. Perhaps because it isn't finding the parallel port device? Or because the parallel port is being used by another module (have you checked that the "lp" module isn't loaded)? Are you sure the parallel port is enabled in the BIOS? Do you happen to have a parallel port printer, so you can test that printing works OK, or some other way to know if the parallel port is actually enabled? You mentioned the joystick worked in Microsoft, but it wasn't clear if it was on the same machine.

But after every reboot, my js* is always gone. How can I fix it?

[Peter] Not sure about that; there could be many reasons. It's no harm to create the devices manually, but unless the module is loaded successfully they will just give ENODEV when you try to use them.

root@devel:/home/i2c# modprobe joydev
root@devel:/home/i2c# modprobe gamecon
FATAL: Error inserting gamecon (/lib/modules/2.6.8-2-386/kernel/drivers/input/joystick/gamecon.ko): No such device

[Peter] Have a look in "dmesg" to see if there are any other messages. Also, can you see anything mentioning "parport" in your boot messages? If so, can you send it, e.g.:

parport: PnPBIOS parport detected.
parport0: PC-style at 0x378 (0x778), irq 7, using FIFO [PCSPP,TRISTATE,COMPAT,ECP]

That would tell you the IRQ line to look for in /proc/interrupts corresponding to the parallel port.

I've tested on the 2.4 and 2.6 series, but no difference...

[Peter] Hmm, so it looks like more debugging will be needed.
Have a look in the sources: you can add some printk(KERN_INFO lines along the path from module_init(gc_init); so add some tracing to gc_init and see why the gc_probe function isn't happy. You should see your debug additions come out in "dmesg" when you modprobe gamecon, and you can keep adding more debug tracing, doing rmmod gamecon and modprobe gamecon to try again. Be careful with your code, though, as you're likely to crash your kernel sometimes doing this sort of thing.

Can you explain step-by-step how to configure the kernel to work with this joystick? I use Debian 3.1 and kernel 2.6.18.

[Peter] I wouldn't have the same hardware as you, so I couldn't provide a step-by-step guide. My guess would be that the joystick stuff might be easier to get to work using a 2.4 series kernel rather than the 2.6 series. I haven't been following the development in the 2.6 series much on joysticks.

My sound card (Intel i810 onboard) doesn't work on 2.4, so I have to use the 2.6 series :(

[Peter] Well, it is still worthwhile to learn how (or even if) the device will work with the 2.4 kernel driver, even if you can't listen to music while you're doing the investigation :) But it sounds like it doesn't work in 2.4 as you say, so it's easiest to debug what's going on using 2.6 for now.

[Peter] Could you try instead:

modprobe gamecon map=0,1

Looking in ./drivers/input/joystick/gamecon.c, it seems it needs the parallel port number and the pad number. I'm guessing that would be counting from 1, but maybe it should be map=0,0

module_param_array_named(map, gc, int, &gc_nargs, 0);
MODULE_PARM_DESC(map, "Describers first set of devices (<parport#>,<pad1>,<pad2>,..<pad5>)");

Solved! First I had to remove the lp module. I also stopped the CUPS service, because my printer is connected via the parallel port.

[Peter] Thanks, I didn't think of stopping CUPS :)

In the 2.4 kernel, I try:

#modprobe gamecon gc=0,7

where 0 means parallel port 0, and 7 means that it is a Playstation joystick.
[Peter] The "map=0,7" would be what to use for 2.6; judging from the sources, they mention that the "gc=" parameter is deprecated.

When I tried to play xgalaga using my joystick, the ships did some strange movements.

[Peter] Instead of trying a game directly, could you build the "jstest" and "jscal" programs, if you don't have them already? From the README in the download mentioned before:

...............
2.5 Verifying that it works

For testing the joystick driver functionality, there is the jstest program. You run it by typing:

jstest /dev/js0

And it should show a line with the joystick values, which update as you move the stick and press its buttons. The axes should all be zero when the joystick is in the center position. They should not jitter by themselves to other close values, and they also should be steady in any other position of the stick. They should have the full range from -32767 to 32767. If all this is met, then it's all fine, and you can play the games. :) If it's not, then there might be a problem. Try to calibrate the joystick, and if it still doesn't work, read the drivers section of this file, the troubleshooting section, and the FAQ.

2.6 Calibration

For most joysticks you won't need any manual calibration, since the joystick should be autocalibrated by the driver automagically. However, with some analog joysticks that do not use linear resistors, or if you want better precision, you can use the jscal program:

jscal -c /dev/js0
...............

Maybe it still needs some configuration, or it's a bug in the kernel (the documentation in the 2.4 kernel says that support for the Playstation joystick is still under development).

[Peter] Well, to debug we'd need a better description of the symptoms than "ships did some strange movement in xgalaga"; I mean, any kernel developer would probably find that quite insufficient :)

But I'm quite happy to see that my Linux box is now more fun with joystick support :) I haven't had time to try this on the 2.6 kernel.
Anyway, thanks very much for the help :)

[Peter] Could be that in 2.6 the driver would read more frequently from the parport than in 2.4, resulting in a more responsive joystick, so it could be worth a try. Especially to get sound to work at the same time, as you mentioned.

LN -- have I gone brain dead?

From Bob van der Poel
Answered By: Breen Mullins, Thomas Adam, Ben Okopnik

I'm trying to do something pretty simple, I thought, and am near the "tear out my hair" stage... All I want to do is to create a symbolic link to an existing directory:

bob$ ln -s tmp foo
bob$ ls -l foo
lrwxrwxrwx 1 bob bob 3 Dec 1 15:21 foo -> tmp/

No, foo is not a link to tmp. Well, it sort of is:

bob$ file foo
foo: symbolic link to `tmp'

But,

bob$ ls foo
foo@

[Breen] It looks like you're somehow passing -F -d to the ls command. What does 'alias ls' tell you?

Oh, that was easy! Here I've been struggling trying to get the *&*(^ ln command to work, and it was working all along! Yup, ls was aliased:

alias ls
alias ls='ls -F --color=auto'

Guess how quickly I'm going to fix that! Dumb, dumb, dumb. Don't know who... the packager who snuck in the -F, or me :)

[Breen] Moral: always check for aliases.

[Ben] For some of us - those who have their "mv", "cp", and "rm" aliased to automatically use "-i", for example - it's the automatic response (sticking, say, 'rm' into a 'for' loop doesn't work too well if you're going to get asked "Are you sure?" for a couple hundred files.) However, there's other stuff of this ilk in shells that can really drive you nuts. In the past, I've been tripped up by looking for programs that didn't exist (they turned out to be functions that the sysadmin put in /etc/profile), have been driven almost insane by CDPATH ('cd foo' would go to '/usr/local/foo/bar/zotz/qux/argle/bargle/will/this/never/end/foo' instead of ./foo), and have become Quite Annoyed by per-directory source files - although '.exrc' isn't really a shell-related gadget. But still.
:) The Daemons of Unix are wily, subtle, and enjoy sucking out your brains through your ears if you let them. They're also a very strange shade of mauve with green dots, so avoid the psychedelic drugs and all will be well. Intoxicants and computing do NOT mix well... remember, never drink and derive.

[Breen] Fedora Core 3 (in a fit of complete brain-death) shipped with vi unconditionally aliased to /usr/bin/vim.

[Ben] Gaah! I've run into similar idiot's handiwork on a Solaris system - although, to be fair, it was the local sysadmin who perpetrated this one. He had set root's $SHELL to "/usr/bin/bash" - 'cause, y'know, "sh" is just so annoying (and .profile is just some stupid thing that nobody ever uses anyway, right?) Then, one fine morning, the "/usr" slice failed to mount...

[Breen] I've just taken another look at this -- at some point Fedora fixed the bug. /etc/profile.d/vim.sh now looks like this:

if [ -n "$BASH_VERSION" -o -n "$KSH_VERSION" -o -n "$ZSH_VERSION" ]; then
  [ -x /usr/bin/id ] || return
  [ `/usr/bin/id -u` -le 100 ] && return
  # for bash and zsh, only if no alias is already set
  alias vi >/dev/null 2>&1 || alias vi=vim
fi

which is much more reasonable behavior. As originally shipped, the two tests after "then" were missing.

[Breen] Which doesn't matter much until you're trying to repair a system which doesn't have /usr/ mounted (because you borked /etc/fstab when tweaking your partitioning scheme). Remembering to type /bin/vi is easier when you're not sweating bullets.

...doesn't give me a listing of the contents of tmp. Ummm, what am I doing wrong here?????

[Thomas] Nothing. Although you should qualify which directory you created the link in. Indeed, what does:

ls -l ./foo

produce as output? Indeed, did you do that the right way around?

% ln -s /tmp foo && ls ./foo

System V or BSD?
From Adam S Engel

Answered By: Thomas Adam, Rick Moen, Mike Orr

I was hunting down a "fugitive" process and accidentally hit the BSD command ps aux (thinking it was an accident, because I'm using Mandriva, which I thought was based on System V)

[Thomas] In terms of init level structures, then it is, yes.

and received something like this:

~ 495 --> ps aux | head -5
USER     PID %CPU %MEM  VSZ RSS TTY STAT START TIME COMMAND
root       1  0.0  0.0 1560 476 ?   S    Nov30 0:00 init [5]
root       2  0.0  0.0    0   0 ?   SN   Nov30 0:00 [ksoftirqd/0]
root       3  0.0  0.0    0   0 ?   S<   Nov30 0:00 [events/0]
root       4  0.0  0.0    0   0 ?   S<   Nov30 0:00 [khelper]

But then, just for the hell of it, I hit the System V command, ps -ef, and got this:

~ 494 --> ps -ef | head -5
UID      PID PPID C STIME TTY TIME     CMD
root       1    0 0 Nov30 ?   00:00:00 init [5]
root       2    1 0 Nov30 ?   00:00:00 [ksoftirqd/0]
root       3    1 0 Nov30 ?   00:00:00 [events/0]
root       4    1 0 Nov30 ?   00:00:00 [khelper]

I thought most Linuxes, particularly the "big ones" like Fedora, Debian, SuSE, Mandriva, were based on System V.

[Thomas] They are, but the ps command is an oddity in that sense. Still useful, though.

Even so, my MacOSX, based loosely on BSD, will not accept the ps -ef command. Can a system (Mandriva in this case) be comprised of both?

[Rick] All modern Unixes (except *BSD, which are holdouts) are the result of (or inspired by) a shotgun marriage of those two cultures (plus SunOS), called System V R4, released by AT&T in 1988. Quoting from the Unix FAQ:

From SVR3: sysadmin, terminal I/F, printer (from BSD?), RFS, STREAMS, uucp
From BSD: FFS, TCP/IP, sockets, select(), csh
From SunOS: NFS, OpenLook GUI, X11/NeWS, virtual memory subsystem with
  memory-mapped files, shared libraries (!= SVR3 ones?)
ksh
ANSI C
Internationalization (8-bit clean)
ABI (Application Binary Interface -- routines instead of traps)
POSIX, X/Open, SVID3

Even prior to that, retrofitting "BSD enhancements" onto System V-based systems was a nearly ubiquitous customisation: Sys V R4 simply acknowledged that reality, bowing to the technical community's mandate. The general architectural description of Sys V R4 got abstracted out as the POSIX.1 specification, which served as the blueprint for much of what Torvalds and co. (and FSF, and others) then performed in building up what today we call GNU/Linux systems: Linux distributions generally are an independent implementation of (most of) the POSIX spec, which in turn was based largely on Sys V R4. And so, here we are.

The rc.init files seem, from what I recall from Redhat, to be System V.

[Sluggo] See "man ps". The 'ps' Linux uses has two sets of command-line options to please everybody. There are also verbose long options, and two sets of output formats. Linux generally follows the System V "style" but deviates whenever it feels like it.

[Rick] And System V wasn't exactly System V, any more, after SysV R3. ;-> SysV R4 was, after all, one of computing's biggest exercises in syncretism.[1]

[Jimmy] The footnote is missing, but I assume it was to contain a definition of syncretism: Attempted union of principles or parties irreconcilably at variance with each other. (dict.org)

[Sluggo] The network stack is closer to BSD's.

[Rick] Amen to that. Back when there was a Caldera Systems, Inc. (remember them?), they kept trying to get Torvalds to accept a huge patch to add the System V "STREAMS" interprocess I/O system. Torvalds turned them down flat, repeatedly, saying (and I paraphrase): "Why would we need that bloatware in our kernel, when Berkeley sockets already do that same job at least as well?"

[Sluggo] The init system is not part of "Linux" (the kernel) but is chosen by each distribution.
Most distros use System V style, and even those that don't, like Slackware, have compatibility directories so that vendor-compiled commercial programs can be installed without too much hassle. However, the init strategy is one of the most varying things between distributions.

I just got a new Dell Dimension basic "cheap" no-frills computer, but still, its 80 gigs of hard-drive space are far more than I need, so I'm installing SuSE, Mandriva, and a third. It was going to be Debian, but now I'm thinking of FreeBSD. Can anyone tell me the major difference between FreeBSD and Linux?

[Rick] Speaking in very broad terms, FreeBSD was designed and is maintained by people who never liked AT&T Unix or any of its derivatives and independent reimplementations at all, and who thus were unsatisfied by the System V R4 compromise. Of course, that's hardly all there is to it: if you'd been sued, more or less baselessly, by a Fortune 500 corporation for copyright infringement, you'd probably develop an attitude problem, too.

The difference in init scripts, you know about. A minority of Linux distributions have offered BSD init scripts, too, all along - e.g., Slackware - for those who like them. Additionally, there are such a large number of other small but significant architectural differences that I cannot hope to remember them all. The BSDs tend to favour a "BSD slice"-type partition map, as opposed to the IBM/Microsoft-style partition map more typical of Linux systems (at least on IA32/x86-64). Their directory layout is a bit different, not very close to the Filesystem Hierarchy Standard promoted by the Linux community. They tend to use variants on UFS as their main filesystem. (FreeBSD favours a variant called FFS = Fast Filesystem, incorporating Marshall Kirk McKusick's softupdates instead of journaling.
Softupdates are a clever and effective way to get most of journaling's advantages in protecting metadata following crashes or power loss -- but sadly do nothing about the problem of long fsck times after such events.) They tend not to have anything resembling /proc. They favour csh over Bourne variants such as bash. They favour nvi over vim. They default to use of the wheel group (which Linuxes usually lack, mostly because Stallman at FSF doesn't like the concept). In general, their prejudices tend to come across to Linux users as a bit old-fogey-ish. They offer both binary packages and a very successful build-from-source architecture called the "ports" system. The closest thing to the latter on Linux would be Gentoo's portage and the GAR build system (and similar things in other build-from-source Linux distros). They have their own libc (not based on GNU's, as Linux's always has been) and their own implementation of make. Otherwise, a lot of the userspace toolsets are literally exactly the same (except compiled to a different ELF binary format). Spend some time reading "The FreeBSD Handbook", online: it's a landmark in clarity for technical documentation, and worth reading on that basis alone.

I've heard that FreeBSD is known for its security and other network capabilities.

[Rick] FreeBSD is not especially known for security. (You might be thinking of OpenBSD.) I would say that it is best known for scalable network performance under heavy load, and for stability. It's also known for having a relatively narrow scope of hardware compatibility (i.e., driver selection), compared to Linux, and for relatively slow and conservative development: you may or may not find FreeBSD 6.0-STABLE's driver support for your new Dell Dimension's chipsets to be adequate. Specific comparisons of FreeBSD with typical Linux systems have been done repeatedly, but tend to have the problem of being out of date (which doesn't stop people from alluding to them, out of ignorance).
E.g., often-heard claims about better FreeBSD VM performance rely on Linux 2.4.x results, back when Linux had a real problem, there. For that matter, Linux's entire TCP/IP stack has been thrown out and rewritten four or five times during the kernel's history, so you can imagine the large opportunity for outdated comparisons, there. It used to be the case that FreeBSD's SCSI subsystem gave significantly better performance than that of the Linux kernels, but I suspect that difference, too, has been eliminated. What probably does remain is the matter of NFS/autofs/amd: If you're a heavy NFS user, you'll find that FreeBSD's NFSv3 implementation still has fewer bugs than Linux's, though the gap is pretty small by now. [Sluggo] Some people say BSD's network stack can handle a higher load. [Rick] That's largely on the basis of data like this 1999 event: http://www.bafug.org/news/NewRecord.html At that time, Walnut Creek CD-ROM set the world record for most bytes of network traffic processed in 24 hours by a single host: One single-processor PII box (a then-famous FreeBSD ftp server) handled 1.39 terabytes. (This burst of traffic was, ironically, occasioned by the release on that machine of Slackware 4.0.) Fast-forward: About a year ago, I helped build what became the second-fastest computational cluster in the world, "Thunder", a set of 1024 quad-Itanium2 Linux boxes. The interconnects for node-to-node data passing used Quadrics cards, and I don't have figures handy but it was a stupendous level of network traffic. [Sluggo] But both systems are widely used in high-performance mission-critical situations, so the difference is a bit academic. Isn't Linux being used for rocket control somewhere? Linux comes in a wide variety of flavors from user-compiled hacker systems (Gentoo) to turnkey Windows clones (Linspire), with RPM/DEB distros in between. BSD sticks to the user-compiled route. [Rick] Misconception. 
It's actually probably more common, in practice, to install software from the binary packages than to build it from ports. I can tell you that BSD admins will often resort to packages when there are local build problems, if nothing else.

[Sluggo] BSD also tends to have less support for new hardware, and last time I checked it had an incompatible disk-partitioning scheme, making dual booting a problem.

[Rick] FYI: FreeBSD (at least) is perfectly happy booting from an IBM/Microsoft-style partition table. For that matter, you can boot Linux (but not MS-Windows) from a BSD "slice"-style disklabel.

[Sluggo] But if a user-compiled distro appeals to you and you like the BSD traditions, FreeBSD would be worth looking into. That seems to be the most popular one for new users, although NetBSD and OpenBSD are also free and would be worth comparing.

[Rick] Don't overlook DragonFly BSD. http://linuxmafia.com/pipermail/conspire/2005-October/001504.html

Missing libraries ...and how to find them

From Benjamin A. Okopnik

Answered By: Jimmy O'Regan, Neil Youngman, Peter Knaggs

All of a sudden, lots and lots of stuff - including 'vi' - is crashing when I try to bring it up. The error message I get is "error while loading shared libraries: libpangocairo-1.0.so.0: cannot open shared object file: No such file or directory". Worst of all, grepping the Debian ls-lR doesn't show any such thing - and searching the Net shows lots of people having the same problem and not being able to find a package that contains it. This is not sounding good.

Folks, if any of you could take a look in your /usr/lib (that's where 'strace' tells me these progs are looking for it) and send me a copy of your libpangocairo-1.0.so.0 - assuming that somebody somewhere has it - I'd be very grateful. Meanwhile, I'm quite annoyed and puzzled - how the heck can so much stuff depend on a lib that's not available??? Sigh. I hope I don't end up having to reinstall my entire system.
That would be a really, really big problem while I'm on the road.

Update: I've found an RPM that contains libpangocairo - presumably, it's something near what I need. Converting it wasn't useful, since it was going to put the files into a different directory - so I just copied out the files and put 'em into /usr/lib. Result: well, I've got Vim, Mozilla, and Firefox back. On the other hand, "mdh" (my MailDoHickey from Freshmeat that I've been using for a year or more) segfaults; so does "gqview". My best guess as to the cause of this: earlier, I did an "apt-get update" and "apt-get dist-upgrade", and I recall seeing "libc" (and a few other libs) go flying by in the list of installed packages. If that's what it is, then I'm a bit shocked: I've never had Debian break my install before, simply via an update. More tomorrow, since it's almost 2 a.m. here.

[Jimmy] New version of gvim? Pango is Gnome's framework for internationalised text (bidi, strange fonts, etc.); Cairo is a vector drawing library (like DPS or Apple's Display PDF (Quartz?)). All text in Gtk is now rendered through Pango, so everything that depends on Gtk in any way is going to depend on it. It doesn't seem to be in Debian yet.

Ah. I see. A few days ago, the maintainer of Jpilot got back to me about a bug that I'd filed, and asked me to recompile Jpilot from source with the latest libpisock library. However, Debian's "official" method of creating a package from source is this nightmarish chase of dependencies, all alike... and Gtk+, Cairo, Pango, and a few other things (all from pretty much the same place - the gtk.org FTP server.) However, everything worked OK back then - including the new version of JPilot. Something in the recent update must be conflicting with the "cutting edge" libs. It still isn't looking good.
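One quick sanity check in a situation like this is to ask the runtime linker's cache whether it can see the missing soname at all. A minimal sketch, assuming a glibc system with `ldconfig` at /sbin/ldconfig (the default soname below is the one from the error message; any other name can be passed as an argument):

```shell
#!/bin/sh
# Look up a soname in the runtime linker's cache (/etc/ld.so.cache).
# '/sbin/ldconfig -p' prints every library the linker currently knows about.
soname=${1:-libpangocairo-1.0.so.0}

if /sbin/ldconfig -p | grep -F "$soname"; then
    echo "$soname: known to the linker cache"
else
    echo "$soname: NOT in the linker cache"
fi
```

If the library shows up here but programs still crash, the file is present but broken or mismatched; if it doesn't show up, no amount of restarting will help until the right package is installed and ldconfig has been re-run.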
I've tested a few other GTK-based apps - gtkpool, gtksee, gtop - and they work, although gtop throws out a cryptic warning:

glibtop: glibtop_get_swap (): Client requested field mask 0001f, but only have 00007.

I have no idea what all those libs may have overwritten. Fixing it is going to take some thought. :|

[Jimmy] Um... specifically, the Cairo backend for Pango doesn't seem to be in Debian yet, though Pango is.

So, now that I'm actually awake, and possess a functioning brain - in contrast to last night - I have a plan of attack that should let me get past all this bull with the grace of a matador (and without using OLE even once.)

1) Pick a bunch of GTK-based apps and run each one. Add those that fail to a list. Then:

for app in $list
do
    ldd `which $app` | perl -wlne 's/^.* => (\S+) .*/$1/; /gtk|pango|cairo/ && print' >> list.txt
    sort -uo list.txt list.txt
done

while read n; do readlink $n; done < list.txt

This should give me a list of all the relevant libs - running this for 'mdh' and 'gtk-gnutella' already shows some promise - that may need to be reinstalled. It may require that I manually remove the offending lib (sometimes, installing the right version doesn't do anything unless the old lib is removed), but that shouldn't be too difficult; the list won't be all that long. Running the above for 'mdh' and 'gtk-gnutella' shows:

/usr/lib/libcairo.so.1
/usr/lib/libcairo.so.2
/usr/lib/libgtk-x11-2.0.so.0
/usr/lib/libpango-1.0.so.0
/usr/lib/libpangocairo-1.0.so.0
/usr/lib/libpangoft2-1.0.so.0
/usr/lib/libpangox-1.0.so.0
/usr/lib/libpangoxft-1.0.so.0

I'm betting that one of those - libgtk-x11-2.0.so.0, anybody? - is the bugger that's busting my chops. More tests later; gotta run to work NOW.

Update: got it fixed. Most likely. At least, GTK apps now run without complaining. I looked at the list that I got as a result, and ran "dlocate" over the "root name" bits (everything up to the first '.'); other than the obvious, most of the rest pointed to libglib2.0-0.
I reinstalled it and removed the newer versions - i.e., the current lib names mostly look like "libpangoft2-1.0.so.0.801.1", while the newer (broken) ones look like "libpangoft2-1.0.so.0.1001.0" - and life is good again. Whew. I'm going to be doing more testing - i.e., by removing "libpangocairo", which should not be getting pulled in by anything - but it seems all right now.

[Neil] There was a neat tip on TAG a while back about the LD_DEBUG environment variable. I think it could be useful in identifying the exact problem.

neil ~ 15:08:12 501 > LD_DEBUG=help ls
Valid options for the LD_DEBUG environment variable are:

  libs        display library search paths
  reloc       display relocation processing
  files       display progress for input file
  symbols     display symbol table processing
  bindings    display information about symbol binding
  versions    display version dependencies
  all         all previous options combined
  statistics  display relocation statistics
  unused      determined unused DSOs
  help        display this help message and exit

To direct the debugging output into a file instead of standard output
a filename can be specified using the LD_DEBUG_OUTPUT environment variable.
neil ~ 15:50:43 502 >

Oh, good one! I wish I'd remembered it. I used 'strace' to see what was going on; unfortunately, it didn't show enough detail to be of use. The above may well do that; I'll use it to do a little troubleshooting, just to make sure that this is resolved. Thanks, Neil!

[Peter] Ulrich Drepper has a quite readable guide to writing shared libraries; he's been maintaining it for quite a while now, and in Jan 2005 put out this: http://people.redhat.com/drepper/dsohowto.pdf It's probably more intended for folks actually writing shared libs in the first place, but it's a good one for debugging.
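Neil's LD_DEBUG tip works on any ordinary dynamically linked command, which makes it handy for exactly this kind of which-library-came-from-where puzzle. A small sketch, assuming glibc (the exact trace wording can vary between versions):

```shell
#!/bin/sh
# Make the dynamic linker narrate its library search for a simple command.
# The trace goes to stderr, so redirect that to a file and inspect it.
LD_DEBUG=libs ls /dev/null > /dev/null 2> libs.trace

# Every required library appears as a "find library=..." line,
# followed by the candidate paths the linker actually tried.
grep 'find library' libs.trace
```

Run against a broken binary, the trace shows exactly which directories were searched for the missing library before the loader gave up - the detail that strace's view of the same open() calls tends to bury.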

A New Windoze Notebook — Now What? Knoppix!

By Edgar Howell

Introduction

For a brief period of time, this office had been Windows-free - in theory, at least. The combo printer requiring '95 had stopped using black ink, and '95 had stopped booting, making both obsolete. And '98 on the old notebook doesn't shut down properly, which turns booting into an adventure. Just hadn't quite got around to dumping all this junk.

Then fate sent a project my way that required some variety of Windows. The one tool I absolutely need I got familiar with many years ago under OS/2. Unfortunately, it no longer exists in any such incarnation. So it was time to clench my teeth, say good-bye to dreams of a notebook completely unexposed to Windows, and order a Dell with whatever is current from Redmond nowadays.

The Plan

We ordered the Dell with twice the hard-drive and 4 times the memory advertised, so we just reduce the partition a bit, put SuSE 10.0 on there, and be up and running in nothing flat. This is something I've done for years. One of my favorite commercial tools had been Partition Magic. Pretty easy to reduce a partition in size and use the free space for some other purpose. But as Robert Burns put it:

The best laid schemes o' Mice an' Men,
Gang aft agley.

The Problem

If in the past I have ever said anything nice about products from Redmond, I really do regret it. If anything, I have certainly been far too polite in just using words like "junk". There are other 4-letter and longer words far more suitable. Consider: you buy a machine with one pre-installed operating system on an 80GB drive, and cfdisk reports:

                First       Last
 # Type        Sector      Sector  Offset     Length  File System Type (ID)  Flags
-- ------- ----------- ----------- ------ ----------- ---------------------- -----
 1 Primary           0      176714     63      176715 Dell Utility (DE)      none
 2 Primary      176715   146544929      0   146368215 HPFS/NTFS (07)         Boot
   Pri/Log   146544930   146560994      0       16065 free                   none
 3 Primary   146560995   156296384      0     9735390 CP/M / CTOS / ... (DB) none

Mount the 3 primary partitions, and df shows:

Filesystem        1K-blocks     Used Available Use% Mounted on
[...]
/UNIONFS/dev/sda1     88136     6654     81482   8% /mnt/sda1
/UNIONFS/dev/sda2  73184104  7132524  66051580  10% /mnt/sda2
/UNIONFS/dev/sda3   4858184  3167052   1691132  66% /mnt/sda3

Why is there more than one primary partition? The hardware architecture only supports 4; they're valuable! None is particularly full. And that gap in the middle?! Can we spell "fragmentation"? Define it? Maybe people living in the wild and woolly world of Windows are used to things like this and don't consider it fraudulent when deprived of about 14GB out of 80. But to have that much real estate taken away without even asking!? Consider the percentage if I hadn't chosen a drive twice the size of normal. Is there any need to point out the bizarre file system types? And I don't do NTFS!

Tabula Rasa

But it gets better - or worse. Back in '95 days, you got a CD with the operating system, probably also pre-installed on the machine. With '98, the system was pre-installed and you got a so-called "Recovery CD" along with it. At some point in time you were forced to use the infernal thing, and that was when you learned that it wouldn't just restore the operating system but would return the entire partition to the status it had originally had when the machine left the manufacturer. Like good-bye data, if you don't have adequate back-ups and/or another bootable partition (this was long before Knoppix). But if you modified the partition size in order to install something else as well, the "Recovery CD" fails — after it has formatted the partition it can't use! And the last Partition Magic I bought couldn't help, because instead of running under DOS it required Windows. A perverse Catch-22. Been there, done that, wore out the T-shirt.

But at least you had a CD with the system software on it. Dell/Microsoft didn't even provide one! Just how expensive are CD-ROMs today? In bulk?
That's what those other partitions are all about. In other words, it is impossible to do something simple like format the drive, allocate partitions, and install software. So now what?!

Tools and Toolboxes

In spite of having used other fine tools in the past to work through problems, Knoppix has become my toolbox of choice because it gives me an environment I am familiar with, GNU/Linux, and lots and lots of tools. Of course, the first thing I did — after minimal setup, as little as possible — was to change the boot sequence on the notebook to enable using Knoppix. I had to know how to do this (F2 and DEL early in the boot process are good candidates): other than one large piece of paper (roughly twice legal size) describing the external features of the hardware, Dell included no significant printed documentation. It's all on the hard-drive.

Above, you have already seen some of the information available from use of the tools on the Knoppix CD/DVD. By the way, here I am using a DVD with Knoppix 4.0. So far, all we have been able to do is to confirm that we have a major problem. Bizarre HD configuration. Unfamiliar file system. So let's see what tools Klaus Knopper included in his toolbox:

knoppix@3[knoppix]$ apropos NTFS
libntfs-gnomevfs (8) - Module for GNOME VFS that allows access to NTFS filesystems.
mkntfs (8)           - create a NTFS 1.2 (Windows NT/2000/XP) file system
ntfscat (8)          - concatenate files and print them on the standard output
ntfsclone (8)        - Efficiently clone, image or restore an NTFS filesystem
ntfscluster (8)      - identify files in a specified region of an NTFS volume.
ntfsfix (8)          - tool for fixing NTFS partitions altered by the Linux kernel NTFS driver.
ntfsinfo (8)         - dump a file's attributes
ntfslabel (8)        - display/change the label on an ntfs file system
ntfsls (8)           - list directory contents on an NTFS filesystem
ntfsprogs (8)        - several tools for doing neat things with NTFS partitions
ntfsresize (8)       - resize an NTFS filesystem without data loss
ntfsundelete (8)     - recover a deleted file from an NTFS volume.
smbcquotas (1)       - Set or get QUOTAs of NTFS 5 shares
knoppix@3[knoppix]$

Plan B

From looking at a couple of the man pages it would seem that this collection of tools has been around for 10 years or so, well beyond release 0.0 by now. So let's use ntfsclone to push an image of the bootable partition over the LAN onto another machine. Once we have that as back-up we can use ntfsresize to shrink the partition down to something reasonable. After that it should be easy enough to create an extended partition with a couple of logical partitions for Linux. I'm not at all familiar with these tools but it is always fun to learn something new. And the time is right.

Backing Up

Again, nothing ever quite works the way one expects. Making the image was no exception. Here is one of the unsuccessful attempts:

knoppix@3[knoppix]$ su
root@3[knoppix]# ntfsclone --save-image --output - /dev/sda2 | gzip -c | \
ssh -l web lohgopc2 'cat > /DATA/NO_BACKUP/DELL/sda2.img.gz'
ntfsclone v1.9.4
The authenticity of host 'lohgopc2 (192.168.0.102)' can't be established.
RSA key fingerprint is 65:32:cc:81:8f:eb:73:24:7b:b3:18:a8:66:fa:7c:ae.
Are you sure you want to continue connecting (yes/no)?
NTFS volume version: 3.1
Cluster size       : 4096 bytes
Current volume size: 74940522496 bytes (74941 MB)
Current device size: 74940526080 bytes (74941 MB)
Scanning volume ...
100.00 percent completed
Accounting clusters ...
Space in use       : 7304 MB (9,7%)
Saving NTFS to image ...
root@3[knoppix]#

After "Saving NTFS to image ..." nothing further happened and I had to cancel with CTRL-C.
Here is what finally worked:

knoppix@1[knoppix]$ su
sudo: unable to lookup Knoppix via gethostbyname()
root@1[knoppix]# ntfsclone --save-image --output - /dev/sda2 | gzip -c | \
ssh -l web lohgopc2 'cat > /DATA/NO_BACKUP/DELL/sda2.img.gz'
ntfsclone v1.9.4
Password:
NTFS volume version: 3.1
Cluster size       : 4096 bytes
Current volume size: 74940522496 bytes (74941 MB)
Current device size: 74940526080 bytes (74941 MB)
Scanning volume ...
100.00 percent completed
Accounting clusters ...
Space in use       : 7304 MB (9,7%)
Saving NTFS to image ...
Warning: No xauth data; using fake authentication data for X11 forwarding.
100.00 percent completed
Syncing ...
root@1[knoppix]#

Other than the obvious difference between success and failure, from a quick glance at the output it isn't clear why one worked and one didn't. But look at what comes just before the line "NTFS volume version: 3.1" in each attempt. On the unsuccessful attempt, I had failed to note that ssh wanted confirmation that it was OK to establish a connection with the other machine. It was patiently waiting for me to enter "yes" while I was scratching my head trying to figure out why nothing was happening! Success the next morning — a good night's sleep can do a world of good — didn't include that question, because I had already told ssh that it was OK and it had "permanently" (until the next boot with Knoppix) recorded this fact. Refreshed and thinking clearly, I remembered that the password was needed and saw the request from ssh.

Resizing

Now that our safety net was in place, the first thing to do was to check out the partition:

knoppix@4[knoppix]$ sudo ntfsresize --info /dev/sda2
sudo: unable to lookup Knoppix via gethostbyname()
ntfsresize v1.9.4
NTFS volume version: 3.1
Cluster size       : 4096 bytes
Current volume size: 74940523008 bytes (74941 MB)
Current device size: 74940526080 bytes (74941 MB)
Checking filesystem consistency ...
100.00 percent completed
Accounting clusters ...
Space in use       : 7304 MB (9,7%)
Collecting shrinkage constrains ...
Estimating smallest shrunken size supported ...
File feature         Last used at      By inode
$MFT                 : 16790 MB        0
Multi-Record         : 8393 MB         9
You might resize at 7303708672 bytes or 7304 MB (freeing 67637 MB).
Please make a test run using both the -n and -s options before real resizing!
knoppix@4[knoppix]$

Rather than reducing the partition as far as possible, let's follow that advice and see what it looks like if we leave some room for data and the tool that has yet to be installed:

root@0[knoppix]# ntfsresize --no-action --size 10G /dev/sda2
ntfsresize v1.9.4
NTFS volume version: 3.1
Cluster size       : 4096 bytes
Current volume size: 74940523008 bytes (74941 MB)
Current device size: 74940526080 bytes (74941 MB)
New volume size    : 9999995392 bytes (10000 MB)
Checking filesystem consistency ...
100.00 percent completed
Accounting clusters ...
Space in use       : 7304 MB (9,7%)
Collecting shrinkage constrains ...
Needed relocations : 395906 (1622 MB)
Schedule chkdsk for NTFS consistency check at Windows boot time ...
Resetting $LogFile ... (this might take a while)
Relocating needed data ...
100.00 percent completed
Updating $BadClust file ...
Updating $Bitmap file ...
Updating Boot record ...
The read-only test run ended successfully.

OK, go for it.

root@0[knoppix]# ntfsresize --size 10G /dev/sda2
ntfsresize v1.9.4
NTFS volume version: 3.1
Cluster size       : 4096 bytes
Current volume size: 74940523008 bytes (74941 MB)
Current device size: 74940526080 bytes (74941 MB)
New volume size    : 9999995392 bytes (10000 MB)
Checking filesystem consistency ...
100.00 percent completed
Accounting clusters ...
Space in use       : 7304 MB (9,7%)
Collecting shrinkage constrains ...
Needed relocations : 395906 (1622 MB)
WARNING: Every sanity check passed and only the DANGEROUS operations left.
Please make sure all your important data had been backed up in case of an unexpected failure!
Are you sure you want to proceed (y/[n])? y
Schedule chkdsk for NTFS consistency check at Windows boot time ...
Resetting $LogFile ... (this might take a while)
Relocating needed data ...
100.00 percent completed
Updating $BadClust file ...
Updating $Bitmap file ...
Updating Boot record ...
Syncing device ...
Successfully resized NTFS on device '/dev/sda2'.
You can go on to shrink the device e.g. with 'fdisk'.
IMPORTANT: When recreating the partition, make sure you
 1) create it with the same starting disk cylinder
 2) create it with the same partition type (usually 7, HPFS/NTFS)
 3) do not make it smaller than the new NTFS filesystem size
 4) set the bootable flag for the partition if it existed before
Otherwise you may lose your data or can't boot your computer from the disk!
root@0[knoppix]#

Now that we have reduced the size of the area used within the physical partition (comparable to a de-frag under Windows, maybe), it is safe to use

cfdisk /dev/sda

Festplatte: /dev/sda
Größe: 80026361856 Bytes, 80,0 GB
Köpfe: 255   Sektoren pro Spur: 63   Zylinder: 9729

Name    Flags   Part. Typ  Dateisystemtyp         [Bezeichner]   Größe (MB)
---------------------------------------------------------------------------
sda1            Primäre    Dell Utility                               90,48
sda2    Boot    Primäre    NTFS                   []              10001,95
sda5            Logische   Linux ext2                             10010,17
sda6            Logische   Linux                                  10001,95
                Logische   Freier Bereich                         14928,89
sda7            Logische   FAT16                                  10001,95
sda8            Logische   Linux swap / Solaris                   20003,89
sda3            Primäre    CP/M / CTOS / ...                       4984,52

[ Bootbar] [Löschen ] [ Hilfe ] [ Maxim. ] [Ausgabe ] [ Ende ] [ Typ ] [Einheit.] [Schreib.]

(This is from a German-localised Knoppix: "Festplatte" is the disk, "Größe" the size, "Primäre/Logische" primary/logical partitions, and "Freier Bereich" free space.)

This was the step where extreme care was called for. A mistake with ntfsresize produces an error message or warning; the partition table gives you no such safety net. To me, the partition table is comparable to the base register in some assembler languages: something that cannot be verified, a promise, not a guarantee. Mess it up and you may never be able to recover. Pay very close attention to the changes you make, and back out if there is anything at all that you don't fully understand!

Finishing Touches

Once again, in the white hat, Linux wins!
The backup was unneeded, although I never would have continued without one. When booted into Windows, it did whatever it is that it does while it is doing what it does when one doesn't know what it is doing. So that worked. I guess.

Creating the extended partition and a couple of logical partitions inside it is quite straightforward and doesn't deserve further discussion here. Although I grew up with fdisk, I will repeat the recommendation in the documentation to use cfdisk instead. Interactive, and keeping one apprised of current status, it was very easy to get used to.

Postscript

In retrospect it wasn't necessary to jump back and forth between normal user and root as in the cut-and-paste from the screens, above. But there is a time and place for each. On more than one occasion I have failed to notice that I had returned to a virtual terminal with root privileges. Bad! The preferred idiom with Knoppix is "sudo". You won't need a root password, but it keeps you aware of where you are and what you are doing.

After having gone through this exercise, it occurred to me that a better long-term solution might be to back up the Windows partition as described after major changes, i.e. installation of the tool I need. The other two as well, of course, just in case something goes wrong. And then, when the time comes that the Windows partition becomes unusable and restoring is necessary, just do it from the network. That should make those other two partitions obsolete and recover a bunch of disk space. Way down the road.

The modem is presumably one of those nefarious "winmodems". I didn't check it out; I'm on ISDN at the moment, not analog. The network card works without problem. And the Kensington mouse at the USB port was recognized by Knoppix right off the starting blocks. It is really neat: the cable winds up into it, to the length needed, from zero to whatever. I may even take it along when on the road. The Dell hardware is actually very nice.
If you could just get it without having to pay for an operating system you normally don't need... Edgar is a consultant in the Cologne/Bonn area in Germany. His day job involves helping a customer with payroll, maintaining ancient IBM Assembler programs, some occasional COBOL, and otherwise using QMF, PL/1 and DB/2 under MVS. (Note: mail that does not contain "linuxgazette" in the subject will be rejected.)



Benchmarking Filesystems Part II By Justin Piszcz

Introduction

After the last article was published, I received more than a dozen requests for a second filesystem benchmark using the 2.6 kernel. Since that time, I have converted entirely to XFS for every Linux machine I use, so I may be a bit biased regarding the XFS filesystem. I tried to keep the hardware roughly the same. Instead of a Western Digital 250GB drive and a Promise ATA/100 controller, I am now using a Seagate 400GB drive and a Maxtor Promise ATA/133 controller. The physical machine remains the same; there is an additional 664MB of swap, and I am now running Debian Etch. In the previous article, I was running Slackware 9.1 with custom-compiled filesystem utilities. I've added a small section in the beginning that shows the filesystem creation and mount times, and I've also added a graph showing these new benchmarks. After the first round of benchmarks, I received a slew of e-mails asking for the raw numbers. The numbers are now included in tables at the end of this article for both the last and the current set of benchmarks.

What's new?

1) Unify the graphs so all but one are the same type.

2) Run tests with a recent distribution and the 2.6.14.4 kernel.

3) Include the ReiserFS version 4 benchmarks.

4) Include the raw data in matrix form at the bottom of this page.

5) Include three additional graphs:

a) Creation time to make the actual filesystem.

b) Time it takes to mount the filesystem.

c) The amount of space available after formatting with the default options.

Specifications

Hardware

COMPUTER: Dell Optiplex GX1
CPU: Pentium III 500MHZ
RAM: 768MB
SWAP: 2200MB
CONTROLLER: Maxtor Promise ATA/133 TX2 - IN PCI SLOT #1
DRIVES USED: 1] Seagate 400GB ATA/100 8MB CACHE 7200RPM
             2] Maxtor 61.4GB ATA/66 2MB CACHE 5400RPM
DRIVE TESTED: The Seagate 400GB.

Software

LIBC VERSION: 2.3.5
KERNEL: linux-2.6.14.4
COMPILER USED: gcc-4.0.3
EXT2: e2fsprogs-1.38/sbin/mkfs.ext2
EXT3: e2fsprogs-1.38/sbin/mkfs.ext3
JFS: jfsutils-1.1.8/sbin/mkfs.jfs
REISERFSv3: reiserfsprogs-3.6.19/sbin/mkreiserfs
REISERFSv4: reiser4progs-1.0.5/sbin/ (Used patch reiser4-for-2.6.14-1.patch w/ libaal-1.0.5 + reiser4progs-1.0.5)
XFS: xfsprogs-2.6.36/sbin/mkfs.xfs

Tests Performed

001] Create 10,000 files with touch in a directory.
002] Run 'find' on that directory.
003] Remove the directory.
004] Create 10,000 directories with mkdir in a directory.
005] Run 'find' on that directory.
006] Remove the directory containing the 10,000 directories.
007] Copy kernel tarball from other disk to test disk.
008] Copy kernel tarball from test disk to other disk.
009] Untar kernel tarball on the same disk.
010] Tar kernel tarball on the same disk.
011] Remove kernel source tree.
012] Copy kernel tarball 10 times.
013] Create 1GB file from /dev/zero.
014] Copy the 1GB file on the same disk.
015] Split a 10MB file into 1000/1024/2048/4096/8192 byte pieces.
016] Copy kernel source tree on the same disk.
017] Cat a 1GB file to /dev/null.

NOTE1: Between each test run, a 'sync' and a 10 second sleep were performed.
NOTE2: Each filesystem was tested on a cleanly made filesystem.
NOTE3: All filesystems were created using the default options.
NOTE4: All tests were performed with the cron daemon killed and with 1 user logged in.
NOTE5: All tests were run 3 times and the average was taken; if any results were questionable, the tests were re-run and checked against the previous average for consistency.
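As an illustration only (this is not the author's actual harness), the discipline described in the notes can be sketched in shell. The directory name is arbitrary, and N and the sleep are deliberately smaller than the article's 10,000 files and 10-second pause so the sketch runs quickly:

```shell
#!/bin/sh
# Sketch of the benchmark procedure from the notes: run a test three
# times, sync and pause between runs, report the average. Test 001
# (create many empty files) is shown; N=10000 and sleep 10 in the
# article, reduced here for speed.
N=100
DIR=/tmp/fstest.$$
total=0
for run in 1 2 3; do
    mkdir -p "$DIR"
    start=$(date +%s)
    i=1
    while [ "$i" -le "$N" ]; do
        : > "$DIR/f$i"          # create an empty file, like touch(1)
        i=$((i + 1))
    done
    elapsed=$(( $(date +%s) - start ))
    total=$((total + elapsed))
    rm -rf "$DIR"
    sync
    sleep 1                      # article: sleep 10 (NOTE1)
done
echo "average: $((total / 3)) seconds over 3 runs"
```

A real run would use /usr/bin/time for sub-second resolution; date +%s is used here only to keep the sketch dependency-free.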
Creating the Filesystems

EXT2

p500:~# mkfs.ext2 /dev/hde1
mke2fs 1.38 (30-Jun-2005)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
48840704 inodes, 97677200 blocks
4883860 blocks (5.00%) reserved for the super user
First data block=0
2981 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
        78675968

Writing inode tables: done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or 180 days,
whichever comes first.  Use tune2fs -c or -i to override.
p500:~#

EXT3

p500:~# mkfs.ext3 /dev/hde1
mke2fs 1.38 (30-Jun-2005)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
48840704 inodes, 97677200 blocks
4883860 blocks (5.00%) reserved for the super user
First data block=0
2981 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
        78675968

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or 180 days,
whichever comes first.  Use tune2fs -c or -i to override.
p500:~#

JFS

p500:~# mkfs.jfs -q /dev/hde1
mkfs.jfs version 1.1.8, 03-May-2005
Format completed successfully.
390708801 kilobytes total disk space.
0.03user 1.01system 0:02.62elapsed 40%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+224minor)pagefaults 0swaps
p500:~#

REISERFS V3

p500:~# mkreiserfs -q /dev/hde1
mkreiserfs 3.6.19 (2003 www.namesys.com)

A pair of credits:

Continuing core development of ReiserFS is mostly paid for by Hans Reiser from money made selling licenses in addition to the GPL to companies who don't want it known that they use ReiserFS as a foundation for their proprietary product. And my lawyer asked 'People pay you money for this?'. Yup. Life is good. If you buy ReiserFS, you can focus on your value add rather than reinventing an entire FS.

Chris Mason wrote the journaling code for V3, which was enormously more useful to users than just waiting until we could create a wandering log filesystem as Hans would have unwisely done without him.

Jeff Mahoney optimized the bitmap scanning code for V3, and performed the big endian cleanups.

p500:~#

REISERFS V4

p500:~# mkfs.reiser4 -y /dev/hde1
p500:~#

XFS

p500:~# mkfs.xfs -f /dev/hde1
meta-data=/dev/hde1              isize=256    agcount=16, agsize=6104825 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=97677200, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
p500:~#

Benchmark Set 1 of 4

Both extended filesystem revisions take a backseat to their counterparts.

ReiserFS takes a VERY long time to mount the filesystem. I included this test because I found that it can actually take minutes to hours to mount a ReiserFS filesystem on a large RAID volume.

People always complain about how hard drive manufacturers equate 1,000 kilobytes to a megabyte; well, the filesystem is also part of the problem!

Benchmark Set 2 of 4

In the first test, ReiserFSv3 continues to lead the pack, with ReiserFSv4 and JFS not far behind.

ReiserFSv4 is now the slowest filesystem to search for files; previously, it was XFS.

XFS is no longer the slowest filesystem for removing many files. However, EXT2 and EXT3 still win by far.

Similar to the first test, EXT2 and EXT3 lag behind the other journaling filesystems.

Contrary to the first set of benchmarks I ran, it appears XFS has slightly improved; ReiserFSv4 is now the slowest, with ReiserFSv3 behind it. Also, in the previous round EXT3 had suffered a huge performance hit in this area; it is now comparable to EXT2.

It appears ReiserFS v3 and v4 have some serious issues deleting large numbers of directories.

In the first benchmark round, EXT2 won this test; now XFS is the new leader.

This benchmark represents how fast the tarball can be read from each filesystem. Surprisingly, XFS now exceeds EXT3.

ReiserFSv3 won the last benchmarking round; however, EXT2 and EXT3 now dominate this test.

The best journaling file system here used to be ReiserFSv3; however, JFS now wins the benchmark.

ReiserFSv3 used to be the winner; it has now been ousted by EXT2 and EXT3.

The biggest difference here is that most of the filesystems have evened out compared to the first benchmarking test.

This test has remained much the same.

Once again, mostly the same; however, it is important to note that the performance of EXT2 and EXT3 is much closer this time.

EXT2 and EXT3 take the biggest hit up to about 4096 bytes, which is surprising.

EXT2 and EXT3 now outperform ReiserFSv3. In most of these benchmarks thus far, ReiserFSv4 is the slowest; however, here we see that V4 is about 12 seconds faster than V3.

JFS continues to lead this test with EXT2 and EXT3 not far behind.

The following represents the combined test times.

Benchmark Set 3 of 4

The following charts show the CPU utilization for each of the tests in this set.

Here is a line chart representing all of the test times.

Benchmark Set 4 of 4

Here is a composite of the total test time for all tests.

File Benchmark II Data

Seconds Elapsed

Test  Task                                      EXT2   EXT3   JFS    REISERv3  REISERv4  XFS
001   Touch 10,000 Files                        48.25  48.25  34.59  33.59     34.08     37.47
002   Find 10,000 Files                          0.03   0.03   0.03   0.03      0.07      0.04
003   Remove 10,000 Files                        0.16   0.16   1.64   1.84      2.98      2.51
004   Make 10,000 Directories                   49.76  49.87  34.32  33.74     34.68     37.17
005   Find 10,000 Directories                    0.65   0.65   0.63   1.07      1.46      0.72
006   Remove 10,000 Directories                  1.66   1.67   3.58  43.48    119.42      5.39
007   Copy Tarball from Other to Current Disk    5.17   5.15   5.74   5.12      7.34      4.26
008   Copy Tarball from Current to Other Disk    6.96   7.00   6.97   6.89      8.21      6.69
009   UnTAR Kernel 2.6.14.4 Tarball             14.92  15.19  27.64  26.92     21.45     40.81
010   TAR Kernel 2.6.14.4 Source Tree           14.05  14.08  13.05  33.49     25.82     36.19
011   Remove Kernel 2.6.14.4 Source Tree         2.47   2.64   6.17   5.65     10.15      9.10
012   Copy 2.6.14.4 Tarball 10 Times            39.48  38.29  39.13  45.15     62.16     46.34
013   Create a 1GB File                         15.02  15.02  15.12  15.96     25.40     15.87
014   Copy a 1GB File                           36.87  36.51  38.54  47.60     50.63     41.25
015   Split 10M File into 1000 Byte Pieces      57.26  57.77   2.99   4.35      2.95      4.87
016   Split 10M File into 1024 Byte Pieces      28.73  28.97   2.24   4.04      2.61      4.01
017   Split 10M File into 2048 Byte Pieces       7.02   6.98   1.39   2.26      1.55      1.95
018   Split 10M File into 4096 Byte Pieces       1.85   1.83   0.67   1.05      0.99      0.98
019   Split 10M File into 8192 Byte Pieces       0.58   0.58   0.36   0.56      0.62      0.57
020   Copy 2.6.14.4 Kernel Source Tree          10.02  10.06  35.76  31.64     20.17     43.42
021   CAT 1GB File to /dev/null                 18.90  18.59  18.00  37.33     21.37     18.70

CPU Utilization

Test  Task                                      EXT2   EXT3   JFS    REISERv3  REISERv4  XFS
001   Touch 10,000 Files                        99.00  99.00  99.00  99.00     99.33     99.00
002   Find 10,000 Files                         94.00  93.00  94.00  95.00     97.00     95.66
003   Remove 10,000 Files                       98.00  98.66  73.66  99.00     99.00     91.66
004   Make 10,000 Directories                   98.00  97.33  99.00  99.00     99.66     99.00
005   Find 10,000 Directories                   99.00  99.00  99.00  99.00     99.00     99.00
006   Remove 10,000 Directories                 99.00  99.00  88.66  99.00     99.00     97.00
007   Copy Tarball from Other to Current Disk   74.66  74.66  76.00  74.66     61.33     92.33
008   Copy Tarball from Current to Other Disk   60.00  59.33  59.33  62.00     86.00     62.66
009   UnTAR Kernel 2.6.14.4 Tarball             42.33  41.33  27.33  53.00     80.00     26.00
010   TAR Kernel 2.6.14.4 Source Tree           44.00  43.66  51.33  26.66     48.66     21.00
011   Remove Kernel 2.6.14.4 Source Tree        39.66  36.66  33.00  89.33     88.33     63.66
012   Copy 2.6.14.4 Tarball 10 Times            79.33  80.66  93.33  74.33     73.00     90.33
013   Create a 1GB File                         56.00  55.66  67.33  57.00     50.00     64.33
014   Copy a 1GB File                           42.00  42.00  47.00  37.33     52.00     49.33
015   Split 10M File into 1000 Byte Pieces      99.00  99.00  64.33  96.33     98.00     86.33
016   Split 10M File into 1024 Byte Pieces      99.00  99.00  77.33  97.66     99.00     97.00
017   Split 10M File into 2048 Byte Pieces      99.00  99.00  64.00  96.66     99.00     97.33
018   Split 10M File into 4096 Byte Pieces      99.00  99.00  69.33  99.00     99.00     97.33
019   Split 10M File into 8192 Byte Pieces      99.00  99.00  87.00  89.66     99.00     97.66
020   Copy 2.6.14.4 Kernel Source Tree          65.33  65.00  21.33  41.33     70.33     25.33
021   CAT 1GB File to /dev/null                 26.33  27.00  27.33  36.66     46.33     30.00

File Benchmark I Data

Conclusion

With this second round of filesystem benchmarks, I hope everyone is now satisfied with benchmarks run on the 2.6 kernel. What I glean from these benchmarks is that EXT2 and EXT3 are now roughly the same speed in the majority of the tests. It also appears that XFS has improved in the majority of the tests. ReiserFSv3 has slowed in many of the tests, with ReiserFSv4 being the slowest in most of them. It is important to note that JFS has improved in some of the tests. Personally, I still choose XFS for filesystem performance and scalability.
Seconds Elapsed

Test  Task                                      EXT2   EXT3    JFS     REISER  XFS
001   Touch 10,000 Files                        34.14  40.98   24.50   23.03   27.13
002   Find 10,000 Files                          0.02   0.02    0.02    0.02    0.04
003   Remove 10,000 Files                        0.14   0.36    1.37    1.76    2.52
004   Make 10,000 Directories                   73.53  148.46  24.11   23.42   29.19
005   Find 10,000 Directories                    0.44   0.66    0.41    0.87    0.59
006   Remove 10,000 Directories                  1.97  56.31    4.05    5.37    6.70
007   Copy Tarball from Other to Current Disk    3.26   5.70    4.62    6.08    4.12
008   Copy Tarball from Current to Other Disk    3.36   3.45    3.49    3.53    3.45
009   UnTAR Kernel 2.4.26 Tarball               24.49  31.73   34.64   12.36   23.79
010   TAR Kernel 2.4.26 Source Tree             17.86  23.40   27.06   22.81   24.85
011   Remove Kernel 2.4.26 Source Tree           4.24   7.26   10.86    3.18    4.48
012   Copy 2.4.26 Tarball 10 Times              18.28  46.68   38.17   49.16   26.22
013   Create a 1GB File                         18.93  22.35   28.87   25.80   20.49
014   Copy a 1GB File                           45.04  62.48   54.46   71.06   55.89
015   Split 10M File into 1000 Byte Pieces      53.35  62.70    8.11    4.15    4.34
016   Split 10M File into 1024 Byte Pieces      26.24  32.61    7.63    3.90    3.96
017   Split 10M File into 2048 Byte Pieces       6.40   7.94    2.83    2.22    1.95
018   Split 10M File into 4096 Byte Pieces       1.65   2.10    0.50    0.65    0.97
019   Split 10M File into 8192 Byte Pieces       0.51   0.69    0.31    0.47    0.53
020   Copy 2.4.26 Kernel Source Tree            24.43  30.65   25.68   22.19   26.53
021   CAT 1GB File to /dev/null                 21.70  23.52   20.40   23.28   21.13

CPU Utilization

Test  Task                                      EXT2   EXT3    JFS     REISER  XFS
001   Touch 10,000 Files                        97.66  96.66   92.66   97.66   97.00
002   Find 10,000 Files                         98.33  107.33  109.66  85.66   80.66
003   Remove 10,000 Files                       95.66  98.33   76.66   97.66   91.33
004   Make 10,000 Directories                   63.00  33.00   96.00   97.66   90.66
005   Find 10,000 Directories                   96.66  97.00   99.33   95.33   98.66
006   Remove 10,000 Directories                 99.00   6.33   87.66   98.00   89.33
007   Copy Tarball from Other to Current Disk   74.66  79.00   79.33   90.33   81.66
008   Copy Tarball from Current to Other Disk   67.00  64.66   64.00   70.66   75.66
009   UnTAR Kernel 2.4.26 Tarball               16.00  18.66   15.33   78.33   31.66
010   TAR Kernel 2.4.26 Source Tree             21.66  22.66   17.00   32.00   18.33
011   Remove Kernel 2.4.26 Source Tree          11.00  18.33    9.00   96.66   77.33
012   Copy 2.4.26 Tarball 10 Times              96.66  67.33   66.33   96.00   92.00
013   Create a 1GB File                         35.66  57.66   32.33   84.33   39.00
014   Copy a 1GB File                           30.33  41.33   40.66   53.33   40.00
015   Split 10M File into 1000 Byte Pieces      97.66  96.66   22.33   89.33   92.33
016   Split 10M File into 1024 Byte Pieces      97.66  97.33   20.33   89.00   95.66
017   Split 10M File into 2048 Byte Pieces      95.33  97.33   26.00   88.66   95.66
018   Split 10M File into 4096 Byte Pieces      98.33  98.00   80.00   97.33   92.33
019   Split 10M File into 8192 Byte Pieces      99.33  98.33   87.00   96.00   94.66
020   Copy 2.4.26 Kernel Source Tree            19.33  17.33   18.00   44.33   25.00
021   CAT 1GB File to /dev/null                 26.00  26.00   30.00   30.00   33.66
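The composite total-time chart can be reproduced from raw numbers like the ones above. As an illustration (not the author's script), here is an awk one-liner summing each per-filesystem column; only three rows of the Benchmark II elapsed-time data (tests 013, 014, 021) are embedded:

```shell
#!/bin/sh
# Sum each column (EXT2 EXT3 JFS REISERv3 REISERv4 XFS) over the
# embedded rows; the full table would give the composite totals.
awk '{ for (i = 1; i <= NF; i++) sum[i] += $i }
     END { for (i = 1; i <= 6; i++) printf "%.2f ", sum[i]; print "" }' <<'EOF'
15.02 15.02 15.12 15.96 25.40 15.87
36.87 36.51 38.54 47.60 50.63 41.25
18.90 18.59 18.00 37.33 21.37 18.70
EOF
```

For these three rows the totals come out to 70.79 70.12 71.66 100.89 97.40 75.82, matching a by-hand addition of the table entries.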

/dev/fanout : A One-To-Many Multiplexer By Bob Smith Introduction

Build and Install /dev/fanout

How /dev/fanout Works

Security and Obsolescence

Introduction

This article describes a Linux module that replicates its input on all of its outputs, a so-called "fanout" or "one-to-many" multiplexer.

Purpose: The purpose of fanout is to give Linux a simple broadcast IPC. Our own purpose for writing the module was to distribute log messages to one or more processes that want to be notified when an event occurs. We use /dev/fanout, a web server, and XMLHttpRequest on a web client to build an alarm system with multiple web interfaces running simultaneously. One nice feature of our alarm system is that the web interfaces don't use polling but still update automatically when a new alarm system message arrives.

Common Approaches to Broadcast: The two most common broadcast mechanisms in Linux are signals and UDP packets. You can broadcast a signal to a group of related processes using the kill command with a PID of zero. This works well if all of the processes are related and if the program knows what action is required on your signal. Signals will not work for our application, since there is no way to directly route a signal from a web server to a web client, and because web servers do not know that we want to redraw certain web screens on a particular signal. We can also broadcast events using UDP or TCP. We've built event servers which accept TCP connections and broadcast event information down each accepted connection. We use XMLHttpRequest to request a PHP page that opens the TCP connection and waits for the event. While this approach works well, it requires yet another process and has the slight extra burden of an additional TCP connection for each web client.

A Better Broadcast Approach: A better approach would be to have something like a FIFO, but instead of having all of the listeners compete for the single copy of the input message, have all of the listeners get their own copy.
Consider the following bash dialog:

mkfifo event_fifo
cat event_fifo &
cat event_fifo &
cat event_fifo &
echo "Hello World" > event_fifo
Hello World

The message appears only once, since only one instance of the cat command is given the fifo output. Now let's consider the same experiment using fanout:

cat /dev/fanout &
cat /dev/fanout &
cat /dev/fanout &
echo "Hello World" > /dev/fanout
Hello World
Hello World
Hello World

The message now appears once for each of the three listening cat commands. We use bash commands just to illustrate what fanout does. Its real power lies in letting many different programs get identical copies of a data stream.

Which fails: send or receive? No matter how hard we try to avoid it, one day we'll find a reading process that can not keep up with the writing process. Allocating more memory postpones the problem but does not eliminate it. When this problem occurs we have two choices: apply back pressure to the writer, causing the writer to block, or let the readers miss some output. The problem with blocking the writing process is that you may affect other parts of the system. Our original purpose was to build an alarm system and we chose to route all event notifications through syslogd. Since we have /dev/fanout as a target in the syslog.conf file, blocking the writer would block syslogd and defeat the whole purpose of our alarm system. The author of fanout very deliberately chose to cause the reader to fail when it can not keep up. Data is stored in a circular buffer and if a reader can not keep up with the writer, it will eventually ask for data that is no longer in the circular buffer. The fanout device returns an EPIPE error to the reader when this happens. In our application for /dev/fanout we are happy to protect syslogd at the expense of the web clients when we are forced to choose one over the other.
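The overflow rule just described boils down to a single comparison. Here is a toy model of it in shell (the variable names mirror the driver's; the byte counts are invented for the example and the 16k buffer size is the module's default):

```shell
#!/bin/sh
# Toy model of fanout's overflow check: a reader gets EPIPE when the
# data it wants has already been overwritten, i.e. when its resume
# position has fallen more than buf_sz bytes behind count.
buf_sz=16384     # size of the circular buffer (module default, 16k)
count=50000      # total bytes ever written to the device
offset=30000     # where a slow reader wants to resume reading
if [ "$offset" -lt $((count - buf_sz)) ]; then
    echo "reader too far behind: EPIPE"
else
    echo "requested data still in the buffer"
fi
```

With these numbers the oldest byte still in the buffer is 50000 - 16384 = 33616, so a reader asking for byte 30000 is out of luck.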
Build and Install /dev/fanout

The source code to the fanout module is available as a compressed tarball fanout.tgz, or you can pick up the individual files, fanout.c and Makefile. Build the module with the following commands:

cd /usr/src/linux
tar -xzf fanout.tgz
cd fanout
make

When you install the module you can set the size of the circular buffer and the verbosity of the printk messages. The default buffer size is 16k and the default debug level is 2. A debug level of 3 traces all calls in the module and a debug level of 0 suppresses all printk messages. Here is an example that overrides the default values for buffer size and debug level:

insmod ./fanout.ko buf_sz=8192 debuglvl=3

Fanout uses a kernel-assigned major number, so you need to look at /proc/devices to see what was assigned. The following lines create all ten of the possible instances of a fanout device.

MAJOR=`grep fanout /proc/devices | awk '{print $1}'`
mknod /dev/fanout c $MAJOR 0
mknod /dev/fanout1 c $MAJOR 1
mknod /dev/fanout2 c $MAJOR 2
mknod /dev/fanout3 c $MAJOR 3
mknod /dev/fanout4 c $MAJOR 4
mknod /dev/fanout5 c $MAJOR 5
mknod /dev/fanout6 c $MAJOR 6
mknod /dev/fanout7 c $MAJOR 7
mknod /dev/fanout8 c $MAJOR 8
mknod /dev/fanout9 c $MAJOR 9

If all has gone well, the "Hello World" example given above should now work for you.

How /dev/fanout Works

This section is a high-level design review of the fanout module. We explain the design and architecture, and relate specific lines of code in the module to the overall design. The key to understanding how fanout works is to know a little about how read() works. If you were to open a disk file and make five read() calls with each call reading a thousand bytes, you would expect the next read to give you the data starting with byte 5000. Internally, the operating system keeps a counter, called f_pos, that remembers where you are in the file.
Once you've read the first 5000 bytes, you don't normally want to read them again, and since you aren't likely to ask for them again, fanout can forget them. The mechanism used to remember only the most recent data is a circular queue. The fanout device uses the count variable to keep track of how many bytes have been written so far. At quiescence, the readers have all read count bytes (count and f_pos are equal), and the readers are now asking for data starting at *offset (which also equals count). When a writer adds data to the queue, the count variable is incremented by the amount added. Each of the readers must now wake and read the bytes between *offset and count. After adding data to the queue, a writer wakes any sleeping readers with the call to wake_up_interruptible() in fanout_write().

Buffer overflow: One of the fundamental decisions to make in a design is what to do when a reader can not keep pace with the writers. In many designs you would apply flow control to the writers to slow them down to keep pace with the slowest reader. The fanout device, however, returns an error to the slow reader. Specifically, the reader gets an EPIPE error when it requests data that is no longer in the circular buffer (i.e. *offset < count - buf_sz, where buf_sz is the number of bytes in the circular buffer). A reader does not immediately get an EPIPE after opening a fanout device that's been operating for a while, because in the file open routine, fanout_open(), we explicitly force the reader to be caught up with the writers. The line of code that does this is:

filp->f_pos = fodp->count;

Code notes: It is said that programmers can read code and know what it does, but they can not read a variable and know what it means. So instead of reviewing the code, we are going to review the variables. The fanout module supports up to NUM_FO_DEVS instances of a fanout device. NUM_FO_DEVS is currently set to ten.
Each instance of a fanout device is described by the following data structure:

struct fo {
    char *buf;               /* points to circular buffer */
    int indx;                /* where to put next char recv'd */
    loff_t count;            /* number chars received */
    wait_queue_head_t waitq; /* readers wait on this queue */
    struct semaphore wlock;  /* write lock to keep buf/indx sane */
};

Let's look at each of these variables in turn:

buf: The buf variable points to the start of the buf_sz number of bytes allocated for the circular queue. The memory is not allocated until the first open() on the device, and the memory is allocated using kmalloc(). Allocated memory is not freed until the module is unloaded.

indx: This variable gives the location of where to place the next byte in the circular queue. It is updated by fanout_write() as bytes are added to the queue. When indx gets to buf_sz, it wraps back to zero.

count: This variable is the total number of bytes written to the device. It is updated only by fanout_write(), and a reader has data to read when count is not equal to *offset.

waitq: When a reader has no new data to read, it blocks until new data is available. Specifically, the reading process sleeps in a call to wait_event_interruptible(). The writer's call to wake_up_interruptible() causes the readers to wake and continue execution with the lines of code immediately after the wait_event_interruptible().

wlock: While writers are writing to the circular queue, there is a short time during which the count and indx variables are not yet consistent with the data in the queue. During this window of inconsistency another writer might run and inadvertently corrupt the queue. The wlock mutex prevents this by locking out other writers while one writer is updating the queue.

One final note on the code is the use of the *private_data in the file structure. Fanout uses this variable to store a pointer to the struct fo appropriate to that file.
The FanOut Device Pointer (fodp) is usually retrieved at the start of a routine with a line of code like this:

struct fo *fodp = (struct fo *)filp->private_data;

Known bugs: While there may be several implementation bugs, the one possible design bug is that fanout assumes that the file offset counter never wraps. This should probably be fixed.

Security and Obsolescence

Security: In our use of /dev/fanout we let a web server read directly from it so that web clients can be updated when an event occurs. Giving the web server direct access to a device file is considered a security risk. You generally don't want your web server to follow symbolic links, and you don't want the file system with the web server root directory to allow device nodes in it. (The file system should be mounted with the nodev option.) The fear is that if an attacker breaks Perl, PHP, or some other component of the web server, the attacker might be able to create /dev/hda1, /dev/mem, or some other critical device. We will be using /dev/fanout in an appliance where, after boot, we can at least drop the system capability CAP_MKNOD when we drop the other system capabilities.

Obsolete already? It is widely anticipated that one of the next releases of the 2.6 kernel will include two new system calls, tee() and splice(). These calls will probably make obsolete the approach used to build fanout, and might make /dev/fanout entirely obsolete. This might be a good thing since, from a security point of view, it might be better to create a one-to-many multiplexer as a variant of a FIFO that can be attached to a nodev mounted file system. More information on tee() and splice() is available in an article at the Linux Weekly News (http://lwn.net/). The article number is 118750 and you can get to it directly here.

Bob is an electronics hobbyist and Linux programmer. He is one of the authors of "Linux Appliance Design", to be published by No Starch Press.



Stepper motor driver for your Linux Computer By Sreejith N An introduction intended for people with no prior device driver knowledge This article is intended for those newbie Linux users who wish to use their Linux-box for some real work. I will also share some interesting experiments that I did with my AMD machine. INIT Learning new stuff is fun, but can be a bit frustrating. So, you want to write a device driver. The name itself is high-tech! You have some skills in the C programming language and want to explore the same. Also, you've written a few normal programs to run as processes in user space, and now you want to enter kernel space - where the real action takes place. Why Linux device drivers? The answer is, For fun

For profit (Linux is HOT right now, especially embedded Linux)

Because you can! The source is with you. Although it is possible to learn device driver coding by reading some books and PDFs written by the masters, this is a complicated and time-consuming approach. We will take the quick and easy approach, which is: Find some pre-written, working code

Understand how this code works

Modify it to suit our needs

Let's make an easy start with some fundamentals.

Stepper motor basics

Stepper motors are special direct-current (DC) motors, typically used in applications like camera zoom drive and film feed, fax machines, printers, copying machines, paper feeders/sorters, disk drives and robotics. A DC stepper motor translates current pulses into motor rotation. A typical unipolar (single voltage) motor contains four winding coils. Applying voltage to these coils forces the motor to advance one step. In normal operation, two winding coils are activated at the same time, causing the motor to move one step clockwise. If the sequence is applied in reverse order, the motor will run counterclockwise. The speed of rotation is controlled by the frequency of the pulses. A typical full step rotation is 1.8 degrees, or 200 steps per rotation (360 degrees). By changing the time delay between successive steps, the speed of the motor can be regulated, and by counting the number of steps, the rotation angle can be controlled.

Bit Pattern for Full Step Mode

         Green  Blue  Orange  Red    Hex Output Value
Step 0     1     0      1      0           A
Step 1     1     0      0      1           9
Step 2     0     1      0      1           5
Step 3     0     1      1      0           6

Hardware ideas

The circuit diagram for the drive is shown below. The circuit consists of four TIP122 power transistors (T1, T2, T3 and T4), 220Ω resistors (R1, R2, R3 and R4), 3.3KΩ resistors (R5, R6, R7 and R8), 1N4148 freewheeling diodes (D1, D2, D3 and D4), and one LM7407 buffer chip (IC1). The 7407 buffer used here is a hex-type open-collector high-voltage buffer. The 3.3KΩ resistors are the pull-up resistors for the open-collector buffer. The input for this buffer comes from the parallel port. The output of the buffer has a higher current capacity than the parallel port output, which is necessary for triggering the transistor; it also isolates the circuit from the PC parallel port and hence provides extra protection against potentially dangerous feedback voltages that may occur if the circuit fails.
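The excitation table above can be walked through in a few lines of shell. This is an illustration only, not the driver: on real hardware each value would be written to the parallel port data register at 0x378, while here the patterns for eight successive steps are simply printed.

```shell
#!/bin/sh
# Toy walk through the full-step excitation sequence from the table
# above (A, 9, 5, 6, repeating). On real hardware each value would be
# written to the port at 0x378 by the driver; here we only print it.
step=0
while [ "$step" -lt 8 ]; do
    case $((step % 4)) in
        0) pattern=A ;;   # Green+Orange energized (binary 1010)
        1) pattern=9 ;;   # Green+Red    energized (binary 1001)
        2) pattern=5 ;;   # Blue+Red     energized (binary 0101)
        3) pattern=6 ;;   # Blue+Orange  energized (binary 0110)
    esac
    echo "step $step -> 0x$pattern"
    step=$((step + 1))
done
# At 1.8 degrees per step these 8 steps turn the shaft 14.4 degrees;
# 200 steps make one full revolution.
```

Reversing the order of the patterns would spin the motor the other way, and the delay inserted between steps sets the speed.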
The diode connected across the supply and the collector is used as a freewheeling diode and also protects the transistor from the back EMF of the motor inductance. The motor used in my experiments (and documented here) was an STM 901 from Srijan Control Drives. During normal operation, the output pattern from the PC drives the buffer, and the corresponding transistors are switched on. This leads to the conduction of current through those coils of the stepper motor which are connected to the energized transistors. This makes the motor move forward one step. The next pulse will trigger a new combination of transistors, and hence a new set of coils, leading to the motor moving another step. The scheme of excitation that we have used here has already been shown above.

How do we interface the hardware with the Linux-box?

You can use either the parallel port or the serial port for this purpose. We will be using the parallel port as a digital interface between the PC and the hardware (stepper motor drive). The parallel port can be considered as a register, and I/O operations can be done simply by writing bit patterns (numbers like 0xA, 10, '1010', etc.) to this register. The base address of the parallel port is 0x378. The PC parallel port is a 25-pin D-shaped female connector in the back of the computer. It is normally used for connecting a computer to a printer, but many other types of har