How careless, we’d forgotten to configure log rotation. So our application had gone with a default designed for a less verbose age, rotating files as soon as they exceeded a megabyte in size, and never throwing any of them away. Oh, and it was putting these log files at the root of the file system where they’d somehow gone unnoticed for some time. As a consequence, the file system had become clogged up with squillions of files.

$ cd /
$ ls
...
server.log.736624
server.log.736625
server.log.736626
server.log.736627
...

How many files, exactly?

$ ls | wc -l
^C

No time to wait. Too many! We had to act fast.

We changed the log rotation configuration to something more appropriate, restarted the application, and set about cleaning up. Now, this is when you don’t want to open a file browser and drag files into the trash can, not unless you like watching egg-timers. The desktop metaphor fails when you have squillions of files on your desk. Alarmingly, the shell complains too.
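On a Linux box the fix would typically be a logrotate stanza along these lines — a sketch only, assuming the machine uses logrotate at all; the path, size, and retention count here are illustrative, not the configuration we actually deployed:

```
# /etc/logrotate.d/server -- illustrative sketch
/var/log/server/server.log {
    size 100M        # rotate once the file passes 100 MB
    rotate 10        # keep at most ten old logs, discard the rest
    compress         # gzip rotated logs
    missingok        # no error if the log is absent
    notifempty       # don't rotate an empty file
}
```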

$ rm server.log.*
-bash: /bin/rm: Argument list too long
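The complaint comes from the kernel, not from rm: the shell expands the glob into one enormous argument list, and execve(2) caps the combined size of arguments plus environment. The cap varies by system, but you can ask for it:

```shell
# ARG_MAX is the kernel's limit on the bytes of argv + environment
# handed to execve(2); a glob over squillions of files blows past it.
getconf ARG_MAX
```

This is exactly why piping filenames through xargs works where a single rm does not: xargs batches the names into several invocations, each under the limit.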

At this point, a clear head and a steady hand are needed. I use pathname expansion and rm all the time and I’m confident the commands I type will have the right effect. But in my current situation — as root user, in the root directory, on a machine running an unfamiliar flavour of Unix, about to combine find with xargs and rm — I grow nervous.

How do I stop find from descending into subdirectories? -maxdepth, I think, but level 0 or 1? Is -print required? Should I create a scratch directory and practise?
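A rehearsal costs a minute and risks nothing. Something like this, on disposable files in a throwaway directory (the /tmp/scratch path is just an example), answers all three questions at once:

```shell
# Rehearse on disposable files before running anything as root in /.
mkdir -p /tmp/scratch && cd /tmp/scratch
touch server.log.1 server.log.2 keep.me

# Dry run: -print just lists what would match, deleting nothing.
# -maxdepth 1 keeps find at this level; 0 would match only '.' itself.
find . -maxdepth 1 -name 'server.log.*' -print

# Output looks right? Now do it for real.
find . -maxdepth 1 -name 'server.log.*' | xargs rm -f
ls    # keep.me survives; the server.log.* files are gone
```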

Enough questions already! Are you a man or a man(1) reader?

$ find / -maxdepth 1 -name 'server.log.*' | xargs rm -f
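Two variations worth knowing, assuming a GNU or BSD find (both support these extensions, though a stranger Unix might not):

```shell
# Let find do the deleting itself, skipping xargs entirely:
find / -maxdepth 1 -name 'server.log.*' -delete

# Or NUL-delimit the names so the pipeline survives filenames
# containing spaces or newlines:
find / -maxdepth 1 -name 'server.log.*' -print0 | xargs -0 rm -f
```

For machine-generated names like server.log.736624 the plain pipe is fine, but the -print0 | xargs -0 habit is the one that never bites you.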

Done!