(the rabbit hole I started with was a job ad for ex-Googlers "who hated waiting for the ad batch runs to complete" - so batch runs are still a thing :-)
https://storage.googleapis.com/pub-tools-public-publication-data/pdf/65b514eda12d025585183a641b5a9e096a3c4be5.pdf
The key idea is that any human-written change only ever touches ~1-100 files, so there's no need to store or maintain the entire source tree in your local persistence store. By only ever working on deltas, you can have highly efficient distributed and cached builds. This architecture imposes significant constraints: you must express your build graph in a modular way [2], you colocate your distributed FUSE backend and build system for speed, and all development happens over the network. But it comes with many benefits at scale: near-perfect caching, instant "branch switching", fast builds even with deep and wide dep graphs, and excellent affected-target determination, so flakes/outages in one part of the codebase don't affect the rest of it.
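The delta idea can be sketched in a few lines. This is a toy illustration, not how Google's system is actually implemented: a plain dict stands in for the FUSE-backed read-only snapshot of the monorepo, and `DeltaWorkspace` is a hypothetical name. The point is that the local state is only the handful of files the user touched; everything else is served from the shared snapshot.

```python
class DeltaWorkspace:
    """Toy sketch of a delta-based working copy over a remote snapshot."""

    def __init__(self, snapshot):
        self.snapshot = snapshot   # full tree, read-only, never copied locally
        self.delta = {}            # only the ~1-100 files the user touched

    def read(self, path):
        # Local edits win; everything else falls through to the snapshot.
        if path in self.delta:
            return self.delta[path]
        return self.snapshot[path]

    def write(self, path, contents):
        self.delta[path] = contents

    def changed_paths(self):
        # This tiny set is all that needs to be stored, shipped to remote
        # builders, or intersected with the build graph to find affected
        # targets.
        return set(self.delta)


ws = DeltaWorkspace({"lib/base.py": "x = 1", "app/main.py": "print(x)"})
ws.write("lib/base.py", "x = 2")
print(ws.read("lib/base.py"))    # local edit shadows the snapshot
print(ws.changed_paths())        # only the delta, never the whole tree
```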
[1] - https://dl.acm.org/doi/pdf/10.1145/2854146
[2] - https://bazel.build/