I'm kidding, although it did seem this book would never be outdated - until EFI came along. Still worth a read though, as a sort of primer.
BIOS - this is the old way of doing things, though newer UEFI firmware can also emulate BIOS booting through something called the CSM (Compatibility Support Module), so a newer machine can still use the old system.
After the firmware (either BIOS, or UEFI with CSM enabled) finishes initializing, it loads the first 512 bytes from the boot device into memory and hands control to them. These are 512 raw bytes from the start of the drive, read independently of any partitions or partition schemes. This sector is called the Master Boot Record or MBR - it's closely related to the MBR partition scheme, but I'm pretty sure the partition scheme doesn't matter at this stage; all that matters is that those 512 bytes are enough to continue the boot process.
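To make the 512-byte layout concrete, here's a small Python sketch (run on synthetic data, not a real drive) that splits an MBR sector into its classic regions: 446 bytes of boot code, a 64-byte partition table (four 16-byte entries) and the 2-byte 0x55AA boot signature the firmware checks for:

```python
def parse_mbr(sector: bytes):
    """Split a 512-byte MBR sector into its three classic regions.

    Layout: 446 bytes of boot code, a 64-byte partition table
    (four 16-byte entries), and the 2-byte signature 0x55AA.
    """
    if len(sector) != 512:
        raise ValueError("an MBR sector is exactly 512 bytes")
    boot_code = sector[:446]
    entries = [sector[446 + i * 16 : 446 + (i + 1) * 16] for i in range(4)]
    signature = sector[510:512]
    if signature != b"\x55\xaa":
        raise ValueError("not a bootable MBR: missing 0x55AA signature")
    return boot_code, entries, signature

# Synthetic example: empty boot code and partition table, valid signature.
sector = bytes(510) + b"\x55\xaa"
code, parts, sig = parse_mbr(sector)
print(len(code), len(parts), sig.hex())  # 446 4 55aa
```

The firmware itself only ever checks that final signature - everything else in the sector is opaque code and data as far as it's concerned.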
Typically those 512 bytes are the bare minimum needed to load further code from the drive - not enough to implement any partition scheme or filesystem support, so they usually just contain raw offsets of data on disk (which the bootloader's installer figures out and writes during OS install/update). That code loads the second stage - either the full bootloader, or another small stub that loads a third stage in a similar manner.
Either way, at this point the bootloader (GRUB, for example) is loaded. It typically understands partitions, filesystems and/or extra peripherals (USB drives, etc. - and iPXE, for example, speaks many network protocols, so you could technically boot over HTTPS at this point). It then reads its configuration from the filesystem, which points to a kernel and initrd (usually located on the same filesystem, which is your /boot partition), loads them into memory and passes control to the kernel.
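As an illustration, the configuration at this stage might look like the following hypothetical grub.cfg entry - the UUIDs, kernel version and file names are made-up placeholders, not from any real system:

```
# Hypothetical minimal grub.cfg entry -- UUIDs and file names are placeholders.
menuentry "Linux" {
    search --no-floppy --fs-uuid --set=root 1234-ABCD
    linux  /vmlinuz-6.1.0 root=UUID=abcd-ef01 ro quiet
    initrd /initrd.img-6.1.0
}
```

The `search` line locates the filesystem holding the kernel, then `linux` and `initrd` name the files to load and the arguments to hand to the kernel.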
The kernel then takes over, uses its own device drivers to (re)initialize the hardware, actually "mount" the partitions read-write and start the OS.
EFI - this is the "new" way of doing things.
The firmware itself now understands the concept of partition schemes and filesystems - as per the spec it should at least be able to read FAT32 filesystems, but implementations are free to support more or even have an entire network stack in the firmware.
EFI boot revolves around EFI binaries - .efi files containing actual executable code. These can be arbitrary in size, not limited to 512 bytes like the MBR was. EFI boot typically goes together with the GPT partition scheme, but I don't see why it couldn't also work with MBR, and some firmware implementations may support that.
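For the curious, .efi files are PE32+ images - the same container format as Windows executables. Here's a rough Python sketch, run on a synthetic byte blob, of the two magic values you'd check for:

```python
def looks_like_efi_binary(data: bytes) -> bool:
    """Rough check that a blob is a PE image, the format .efi files use.

    PE images start with the DOS "MZ" magic, and the 4-byte little-endian
    offset stored at 0x3C points to the "PE\\0\\0" header.
    """
    if len(data) < 0x40 or data[:2] != b"MZ":
        return False
    pe_offset = int.from_bytes(data[0x3C:0x40], "little")
    return data[pe_offset : pe_offset + 4] == b"PE\x00\x00"

# Synthetic stub: MZ magic, PE header placed right after the 64-byte DOS header.
stub = bytearray(0x44)
stub[0:2] = b"MZ"
stub[0x3C:0x40] = (0x40).to_bytes(4, "little")
stub[0x40:0x44] = b"PE\x00\x00"
print(looks_like_efi_binary(bytes(stub)))  # True
```

A real firmware of course validates far more than this (machine type, subsystem, and under Secure Boot, signatures), but these magics are the first thing it looks at.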
When the firmware finishes, it will try the following:
- entries in NVRAM, each pointing to a (GPT) partition UUID, a path (to the .efi file) and optional arguments to pass to it. If the firmware finds a valid entry, it passes control to that binary and that's the end. The NVRAM can be modified by the OS (on Linux, look for the efibootmgr command).
- failing the NVRAM approach, it looks for "/EFI/BOOT/BOOTX64.EFI" on the EFI System Partition of the chosen storage drive - the ESP is a FAT32 partition (on a GPT-partitioned disk) with a special partition type (gdisk code "EF00"). If the firmware finds that file, it passes control to it and that's the end.
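A small aside: "EF00" is just gdisk's two-character shorthand; in the GPT itself the ESP is marked by a well-known partition type GUID. A tiny Python sketch of that check:

```python
import uuid

# Well-known GPT partition type GUID for the EFI System Partition;
# gdisk's "EF00" code is shorthand for this same type.
ESP_TYPE_GUID = uuid.UUID("C12A7328-F81F-11D2-BA4B-00A0C93EC93B")

def is_esp(partition_type_guid: str) -> bool:
    """True if a GPT partition-type GUID marks an EFI System Partition."""
    return uuid.UUID(partition_type_guid) == ESP_TYPE_GUID

print(is_esp("c12a7328-f81f-11d2-ba4b-00a0c93ec93b"))  # True
```

This is how the firmware (and tools like `lsblk -o NAME,PARTTYPE`) recognize the ESP regardless of what the partition is labeled.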
As you can see, a major change here is that there's now persistent state related to the boot process stored in NVRAM, accessible to and modifiable by the OS. This opens the possibility, for example, of a single drive having multiple boot entries (at the firmware level), whereas BIOS boot always mandates 1 drive = 1 boot record, and multiple boot entries have to be managed by whatever bootloader you choose. This is a major source of confusion if you're not aware of it.
It also opens the door to a situation where an NVRAM entry points to a non-standard path and nothing exists at the default "/EFI/BOOT/BOOTX64.EFI". The machine with the NVRAM entry will boot just fine (because the entry records the non-standard path), but should that entry be deleted or the drive moved to a different machine, the system won't boot, since there's no entrypoint at the standard path. For this reason, most OSes also leave something at the default spot so the drive can still boot on its own - and I guess they could rely entirely on that and not even bother creating an NVRAM entry.
With EFI boot supporting multiple entries per drive at the firmware level, you technically don't even need a bootloader anymore.
The Linux kernel itself can be built as a valid EFI binary (this is called EFISTUB), so you can use the ESP as your /boot partition, put your kernel & initrd there, and use efibootmgr (or some other way of writing to NVRAM) to register the path of that kernel along with additional kernel arguments such as the path to the initrd, the UUID of the root filesystem, etc. If you compile the initrd and command line into the kernel itself, you can put it at "/EFI/BOOT/BOOTX64.EFI" and it will boot directly, no NVRAM entries needed - this also makes your installation portable across machines.
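For example, registering an EFISTUB kernel might look something like the following - the disk, partition number, kernel path and root UUID here are placeholders, and this needs root plus a working efivarfs on a real EFI machine:

```sh
# Hypothetical EFISTUB boot entry -- all names/UUIDs are placeholders.
# Paths in --loader and in the initrd= argument are relative to the ESP
# root and use backslashes, as the firmware expects.
efibootmgr --create --disk /dev/sda --part 1 \
    --label "Linux (EFISTUB)" \
    --loader '\vmlinuz-6.1.0' \
    --unicode 'root=UUID=abcd-ef01 rw initrd=\initrd.img-6.1.0'
```

Afterwards, `efibootmgr -v` should list the new entry along with the firmware's boot order.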
However, most distros still use GRUB or something similar (systemd-boot, etc.) as a bootloader, which they put in the ESP - either at a custom path (with a corresponding NVRAM entry created at install) or at "/EFI/BOOT/BOOTX64.EFI"; most do both, to ensure the system still boots if the drive is moved or the NVRAM entry is erased for some reason. This also lets them unify bootloader-level configuration and abstract away firmware differences: once the bootloader runs, the process is up to the bootloader itself and no longer depends on how the machine booted in the first place - in GRUB's case it's essentially identical across BIOS and UEFI.