For reasons beyond the scope of this entry, today I feel like writing down a broad and simplified overview of how modern Linux systems boot. As a sysadmin who has stubbed his toe here repeatedly, I'm going to especially focus on points of failure.

The system loads and starts the basic bootloader somehow, through either BIOS MBR booting or UEFI. This can involve many steps on its own and any number of things can go wrong, such as unsigned UEFI bootloaders on a Secure Boot system. These failures are generally the most total: the system reports there's nothing to boot, or it repeatedly reboots, or the bootloader aborts with what is usually a cryptic error message. On a UEFI system, the bootloader needs to live in the EFI system partition, which is a FAT filesystem (in practice, almost always FAT32). Some people have had luck making this a software RAID mirror with the right superblock format; see the comments on this entry.
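(As a very rough sketch of the RAID mirror trick, not a recipe I'm promising works on your firmware: the approach people describe is to use mdadm metadata format 1.0, which puts the RAID superblock at the end of the partition so the firmware still sees a plain FAT32 filesystem. The device names here are made up:

    mdadm --create /dev/md/esp --level=1 --raid-devices=2 --metadata=1.0 \
          /dev/sda1 /dev/sdb1
    mkfs.vfat -F 32 /dev/md/esp
    # then mount /dev/md/esp wherever your distribution expects the ESP,
    # commonly /boot/efi

The obvious caveat is that if the firmware ever writes to one member on its own, the two halves of the mirror quietly diverge.)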

The bootloader loads its configuration file and perhaps additional modules from somewhere, usually your /boot but also perhaps your UEFI system partition. Failures here can result in extremely cryptic errors, a drop into a GRUB shell, or ideally a message saying 'can't find your menu file'. The configuration file location is usually hardcoded, which is sometimes unfortunate if your distribution has picked a bad spot. For GRUB, this spot has to be on a filesystem and storage stack that GRUB understands, which is not necessarily the same as what your Linux kernel understands. Fortunately GRUB understands a lot these days, so under normal circumstances you're unlikely to run into this. (Some GRUB setups have a two stage configuration file, where the first stage just finds and loads the second one. This allows you more flexibility in where the second stage lives, which can be important on UEFI systems.)
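(As a hedged illustration of the two stage trick, a first stage grub.cfg can be little more than 'find the filesystem that holds the real configuration and hand over to it'. The UUID and the path here are invented for the example:

    # first-stage grub.cfg, living on the EFI system partition
    search --no-floppy --fs-uuid --set=root aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
    configfile ($root)/grub2/grub.cfg

The 'search' command sets $root to whatever filesystem has that UUID, and 'configfile' then reads and runs the real configuration from it.)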

Using your configuration file, the bootloader loads your chosen Linux kernel and an initial ramdisk into memory and transfers control to the kernel. The kernel and initramfs image also need to come from a filesystem that your bootloader understands, but with GRUB the configuration file allows you to be very flexible about how they're found and where they come from (and it doesn't have to be the same place as grub.cfg, although on a non-UEFI system both are usually in /boot ). There are two things that can go wrong here: your grub.cfg can have entries for kernels that no longer exist, or GRUB can fail to locate and bring up the filesystem where the kernel(s) are stored. The latter can happen if, for example, your grub.cfg has the wrong UUIDs for your filesystems. It's possible to patch this up on the fly so you can boot your system.
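(For illustration only, here is roughly what patching things up at the GRUB command line can look like; the partition, kernel version, and root device are all made up, and the exact paths depend on whether your /boot is its own filesystem:

    grub> ls                            # list the disks and partitions GRUB can see
    grub> ls (hd0,gpt2)/                # poke around until you find your kernels
    grub> set root=(hd0,gpt2)
    grub> linux /vmlinuz-6.1.0 root=/dev/sda3
    grub> initrd /initramfs-6.1.0.img
    grub> boot

Once the system is up, you want to regenerate grub.cfg properly so you don't have to do this again.)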

The kernel starts up, creates PID 1, and runs /init from the initramfs as PID 1. This process and the things it runs then flail around doing various things, with the fundamental goal of finding and mounting your real root filesystem and transferring control to it. In the process of doing this it will try to assemble software RAID devices and other storage stuff like LVM, perhaps set sysctls, and so on. The obvious and traditional failure mode here is that the initramfs can't find or mount your root filesystem for some reason; this usually winds up dropping you into some sort of very minimal rescue shell. If this happens to you, you may want to boot from a USB live image instead; they tend to have more tools and a better environment. (Sometimes the reasons for failure are obscure and annoying.)

On many traditional systems, the initramfs /init was its own separate thing, often a shell script, and was thus independent from and different from your system's real init. On systemd based systems, the initramfs /init is actually systemd itself, and so even very early initramfs boot is under systemd's control. In general, a modern initramfs is a real (root) filesystem that processes in the initramfs will see as / , and its contents (both configuration files and programs) are usually copied from the versions in your root filesystem. You can inspect the whole thing with lsinitrd or lsinitramfs .

Update: It turns out that the initramfs init is still a shell script in some Linux distributions, prominently Debian and Ubuntu. The initramfs init being systemd may be a Red Hat-ism (Fedora and RHEL). Thanks to Ben Hutchings in the comments for the correction.

How the initramfs /init pivots into running your real system's init daemon on your real system's root filesystem is beyond the scope of this entry. The commands may be simple (systemd just runs ' systemctl switch-root '), but how they work is complicated.

(That systemd is the initramfs /init is convenient in a way, because it means that you don't need to learn an additional system to inspect how your initramfs works; instead you can just look at the systemd units included in the initramfs and follow along in the systemd log.)
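(To make the inspection bit concrete, with the caveat that the image names and paths are your distribution's business and may differ on your machine:

    # dracut based distributions (Fedora, RHEL, and friends):
    lsinitrd /boot/initramfs-$(uname -r).img | less
    lsinitrd -f /etc/fstab              # print a single file from the default image (it may be empty)

    # initramfs-tools based distributions (Debian, Ubuntu):
    lsinitramfs /boot/initrd.img-$(uname -r) | less

On a systemd-in-the-initramfs setup, the interesting things to look for are the .service and .target files and what they depend on.)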

Your real init system starts up to perform basic system setup, bringing the system to what we think of as its normal base state; basically, this is everything you usually get if you boot into a modern single user mode. This does things like set the hostname, remount the root filesystem read-write, apply your sysctl settings (from the real root filesystem this time), configure enough networking so that you have a loopback device and the IPv4 and IPv6 localhost addresses, have udev fiddle around with hardware, and especially mount all of your local filesystems (which includes activating underlying storage systems like software RAID and LVM, if they haven't been activated already in the initramfs). The traditional thing that fails here is that one or more of your local filesystems can't be mounted. This often causes the process to abort and drop you into a single user rescue shell environment. (On a systemd system the hostname is actually set twice, once in the initramfs and then again in this stage.)
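(Two hedged illustrations of this stage, with the UUID and mount point invented: on a systemd system you can see what has to be mounted before the system considers its local filesystems to be up with

    systemctl list-dependencies local-fs.target

and if a filesystem is genuinely optional, marking it 'nofail' in /etc/fstab keeps its absence from aborting the boot into a rescue shell:

    UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  /data  ext4  defaults,nofail  0  2

Whether you actually want a machine to come up without one of its filesystems is a local policy question.)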