Deep within the recesses of my hard disk, since time immemorial, a set of Linux tips lay in wait, lurking in a dark folder, until the fateful day when they were unleashed upon an unsuspecting world . . .

One of the most commonly used commands on the console is clear, but that's an awful lot to type to get a screen full of nothing. Instead, just press Ctrl+L: it sends a form feed and gives you (metaphorically) a blank sheet of paper.

Picture this: you want to know the size of a file in a directory which has a large number of files in it, but when you do an ls -la, the names scroll off the screen before you have a chance to read them! So what do you do? Of course, the textbook answer is, more or less, to use more or less, but it's often easier to use the console's scrollback history. Just press Shift+PgUp and Shift+PgDn to view whatever content has scrolled off the top of the screen.
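If the listing is long, you can also hand it straight to a pager instead of relying on scrollback; a quick sketch (using /etc as a stand-in for your crowded directory):

```shell
# Page through a long listing instead of letting it scroll away;
# when stdout is not a terminal, less simply passes the text through.
ls -la /etc | less

# Or sort by the size column (field 5) so the biggest files come first:
ls -la /etc | sort -k5 -rn | head -n 20
```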

It's also handy for viewing boot time messages spewed by the kernel, which are not shown by dmesg (if you boot into a console by default rather than X).

Note: The scroll history of a console is lost every time you switch away from it. You can scroll back only a limited number of lines (200 at most, and the exact number varies), but you will always have at least two screens' worth.

In bash (saying 'bash shell' is a pleonasm), pressing Ctrl+R lets you search for any previously used command in your history that matches the characters you type next. Each character you type narrows the search, until no more matches are available (at which point you'll hear a beep).

For example:

Ctrl+R l    matches 'ls -la'

Ctrl+R le   matches 'less README'

Ctrl+R Red  matches 'cd /mnt/cdrom/RedHat/RPMS'

The characters searched can occur anywhere within the matching command, and need not be only at the beginning.
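If you'd rather see all the matches at once instead of cycling through them one by one with Ctrl+R, you can grep the history instead; a sketch, assuming the default history file location:

```shell
# In an interactive shell, the history builtin prints the same list
# that Ctrl+R searches:
#   history | grep less
# From a script, search the saved history file directly
# (~/.bash_history by default; your HISTFILE may point elsewhere):
grep 'less' "${HISTFILE:-$HOME/.bash_history}"
```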

Most folks know that a process can be killed by pressing Ctrl+C, but that doesn't always work. In those cases, try Ctrl+\: it sends SIGQUIT instead of SIGINT, and usually the program dies (with a core dump).
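Programs that trap or ignore SIGINT often leave SIGQUIT at its default disposition, which is why Ctrl+\ works when Ctrl+C doesn't. You can deliver the same signal to any process you own with kill; a small sketch:

```shell
# Start a long-running process in the background...
sleep 100 &
pid=$!
# ...and deliver SIGQUIT to it, just as Ctrl+\ would in the foreground:
kill -QUIT "$pid"
wait "$pid"
echo "exit status: $?"   # 131, i.e. 128 + 3 (signal 3 is SIGQUIT)
```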

On MS-DOS, the Scroll Lock key never worked in the way the original IBM PC designers intended it to.

However, on a Linux console, pressing Scroll Lock locks the scrolling (duh!) until you press it again.

Note that it only affects output to the screen; the system continues to accept keystrokes, buffering them, and any I/O not related to the screen carries on in the background.

bash provides a feature called command completion: when you type a few characters and press Tab, you are shown the matching commands available in your PATH. If there is more than one match, you'll hear a beep, and pressing Tab again will show you a list of all the possible matches. If you press Tab after entering a command name, it works the same way, except that instead of executable commands, you are shown the files and directories within the current directory.

This feature is part of the Readline library, and is available in all programs that use it (e.g. gdb, the GNU Debugger).
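You can see the same completion lists non-interactively with bash's compgen builtin, which prints the completions bash would offer for a given prefix; a sketch:

```shell
# compgen -c lists matching command names from your PATH:
bash -c 'compgen -c ls' | sort -u

# compgen -f completes filenames against a prefix:
bash -c 'compgen -f /etc/pass'
```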

The default PC beep is quite shrill and irritating.

On Linux, you can modify the beep pitch and duration by sending ANSI codes to the console.

Send these codes with the following commands, substituting your own values for m (the frequency in Hz) and n (the duration in milliseconds); the first sets the pitch, the second the duration:

echo -e '\033[10;m]'
echo -e '\033[11;n]'



Personally, I find a 70 Hz beep nice and soothing.

If you want to run a command on a number of combinations of strings (say you wish to create directories named foo1, foo2, foo3, bar1, bar2, bar3, baz1, baz2, baz3), you can use bash's brace expansion feature and type a command like this:

mkdir {foo,bar,baz}{1,2,3}

The shell generates all possible combinations from the elements within the braces, expanding the above to:

mkdir foo1 foo2 foo3 bar1 bar2 bar3 baz1 baz2 baz3

You can nest brace groups, and prefix or suffix them with any string you please. Note that there must be no spaces inside the braces, or bash will not expand them.
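A safe way to experiment is to echo the pattern first, and run the real command only once the expansion looks right; a sketch (bash):

```shell
# Preview the expansion before committing to it:
echo {foo,bar,baz}{1,2,3}
# → foo1 foo2 foo3 bar1 bar2 bar3 baz1 baz2 baz3

# Brace groups nest, and take any prefix or suffix:
echo src/{lib,bin/{debug,release}}
# → src/lib src/bin/debug src/bin/release
```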

The usual way most beginners set up their Linux mount points is to make a single large partition and mount it on the root directory. This seems convenient, because you don't have to mess with multiple partitions, but you might want to at least put /usr and /usr/src on separate partitions, and also mount them read-only. The advantage is that after a crash or power failure, you avoid the nasty fscks on /usr and /usr/src, which typically contain the largest amount of data. It's convenient to set up a couple of aliases to remount them read-write, so that you can recompile the kernel or install a package; once the write operations complete, you remount them read-only again.
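The read-only mounts can be made permanent in /etc/fstab; a sketch, assuming (as in the aliases below) that /usr lives on /dev/hda3 and /usr/src on /dev/hda4:

```
# /etc/fstab (fragment) -- adjust devices and fs type to your own layout
/dev/hda3   /usr       ext2    ro,defaults    1 2
/dev/hda4   /usr/src   ext2    ro,defaults    1 2
```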



On my system, I use the following aliases in my .bashrc:

alias Uw='mount -o remount,rw,defaults /dev/hda3 /usr'
alias Sw='mount -o remount,rw,defaults /dev/hda4 /usr/src'
alias Ur='mount -o remount,ro,defaults /dev/hda3 /usr'
alias Sr='mount -o remount,ro,defaults /dev/hda4 /usr/src'

The most destructive command in Unix folklore is

rm -rf /

This not only destroys your whole installation, but also any files on other mounted filesystems. Of course, this won't happen if you aren't logged on as root, but many users work as root anyway (a very bad habit). For example, you might intend to type

rm -rf /home/auser/foo

but leave a space between '/' and 'home', leading to the above-mentioned cataclysm. It doesn't matter if the file permissions are read-only: when you use the -f option while logged on as root, the files are deleted regardless of permissions. But there is a way to actually nail some files down, so that even root can't delete them by mistake. For this you can use the chattr command, which allows you to set extended filesystem attributes on ext2fs (distinct from those you set using chmod). The most useful attribute is the 'immutable' flag, which prevents deletion unless the attribute is explicitly reset. Set this attribute on /bin/*, /sbin/*, /usr/bin/*, and /usr/sbin/*, and also on your backed-up files, to save a lot of grief when you inevitably screw up. Read the man pages to learn about other useful attributes that chattr can apply.
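The difference between ordinary mode bits and the immutable flag is easy to demonstrate: rm -f happily removes a read-only file you own, while an immutable file survives even root's rm. The chattr part of this sketch needs root and an ext2-style filesystem, and /etc/precious.conf is just a made-up name:

```shell
# rm -f removes a write-protected file without even prompting:
touch /tmp/victim
chmod 444 /tmp/victim
rm -f /tmp/victim            # gone, despite the read-only mode bits

# The immutable flag is stronger (root + ext2-style fs required):
#   chattr +i /etc/precious.conf
#   rm -f /etc/precious.conf       # fails: Operation not permitted
#   chattr -i /etc/precious.conf   # clear the flag to allow changes
#   lsattr /etc/precious.conf      # inspect which flags are set
```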

Linux never crashes. Except, uh... well, sometimes . . .

When you're hacking the kernel, or using buggy hardware or experimental drivers, it can happen. Most of these crashes are not a complete breakdown; usually only the screen is blanked out or the keyboard is locked up, while the kernel still breathes. To escape from these situations safely, you can use a feature of the kernel known as the Magic SysRq key.

To use it press Alt+SysRq+key where key is one of the following:

H - Show all available functions (help)

B - Reboot the system immediately

S - Sync disks (flush unwritten data)

U - Remount all filesystems read-only

O - Power off the system

K - Kill all tasks on the current virtual console

The typical sequence is S, then U, and finally B (sync, remount read-only, reboot); this will get you a somewhat clean shutdown. This feature may not be enabled in your kernel by default, so add the line 'kernel.sysrq = 1' to the /etc/sysctl.conf file to enable it.

Many a time, an errant C/C++ program (usually the XFree86 server, or one of those buggy Gnome programs) will go kaput with the following message:

Segmentation fault (core dumped)

What it means is that the program tried to access memory outside its data segment(s). In Unix land, memory was referred to as core, a throwback to the days when mainframes used magnetic core memory. So the program is terminated, and an image of its memory, along with lots of other baggage, is dumped to a file named 'core'. For those folks among us who debug, this file is useful for analyzing the crash, but for the average user it's just junk that takes up valuable space. To avoid this, you can use the ulimit internal command of bash in one of your startup files, to set the maximum size of a core dump. The following command tells the system not to dump any cores bigger than 0 bytes (i.e., no cores are dumped):

ulimit -c 0

You can also set various other program limits with ulimit; read the bash documentation for more details.

Disclaimer: Many of these tips require you to be logged on as root. This is 2007 and I wrote this in 1999; much has changed, and I doubt many users still use the console a lot, but I'm sure at least some of these tips will be useful to someone. As far as I can tell they all still work today, but your mileage may vary depending on the distribution of Linux that you use.
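Since ulimit is a shell builtin, the setting applies to the current shell and everything started from it; a quick sketch:

```shell
# Show the current core file size limit (in 512-byte blocks):
ulimit -c
# Disable core dumps for this shell and its children:
ulimit -c 0
ulimit -c    # now prints 0
```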