Using a vector as an example (where O(n) operations are actually very fast in practice when n < X), how large does n get in real workloads before it's worth reaching for specialized structures like skip lists and radix-balanced trees?
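One way to get a feel for X is to measure it on your own hardware. Here's a minimal sketch (the sizes and the batch of 1,000 lookups are arbitrary choices, not from the question) that compares an O(n) linear scan of a list against binary search on a sorted copy, standing in for a more specialized structure:

```python
import bisect
import random
import timeit

for n in (100, 1_000, 10_000, 100_000):
    data = random.sample(range(10 * n), n)
    sorted_data = sorted(data)
    targets = random.choices(data, k=1_000)

    # O(n) membership test via linear scan of an unsorted list
    linear = timeit.timeit(lambda: [t in data for t in targets], number=10)

    # O(log n) membership test via binary search on the sorted copy
    binary = timeit.timeit(
        lambda: [sorted_data[bisect.bisect_left(sorted_data, t)] == t
                 for t in targets],
        number=10)

    print(f"n={n:>7}: linear {linear:.4f}s  binary {binary:.4f}s")
```

For small n the linear scan often wins because of cache friendliness and lower constant factors; the crossover point depends heavily on element size, access pattern, and hardware.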
In the IPv4 Internet, there are about 830,000 prefixes (network addresses) announced, and the number is growing [1]. Routers at ISPs and sophisticated customers maintain full (“default-free”) routing tables with at least one entry for each prefix.
These tables are consulted for packet-forwarding decisions (a longest-prefix match per packet), so lookups have to be fast. Traditionally a radix tree (trie) is used, but some routers use other data structures or specialized hardware.
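For a sense of why a trie fits here, a minimal sketch of longest-prefix match over IPv4 prefixes with a binary trie (the prefixes and next hops below are made-up examples, not real routes):

```python
import ipaddress

class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]  # one child per bit (0 or 1)
        self.next_hop = None          # set only where a prefix ends

def insert(root, prefix, next_hop):
    net = ipaddress.ip_network(prefix)
    bits = int(net.network_address)
    node = root
    for i in range(net.prefixlen):
        bit = (bits >> (31 - i)) & 1
        if node.children[bit] is None:
            node.children[bit] = TrieNode()
        node = node.children[bit]
    node.next_hop = next_hop

def lookup(root, addr):
    bits = int(ipaddress.ip_address(addr))
    node, best = root, None
    for i in range(32):
        if node.next_hop is not None:
            best = node.next_hop        # remember the longest match so far
        node = node.children[(bits >> (31 - i)) & 1]
        if node is None:
            break
    else:
        if node.next_hop is not None:
            best = node.next_hop
    return best

root = TrieNode()
insert(root, "10.0.0.0/8", "hop-A")
insert(root, "10.1.0.0/16", "hop-B")
print(lookup(root, "10.1.2.3"))   # hop-B (the longer prefix wins)
print(lookup(root, "10.9.9.9"))   # hop-A
```

A sorted vector would give you exact-match lookups easily, but longest-prefix match over hundreds of thousands of overlapping prefixes is exactly the access pattern tries handle well.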
> The ultimate goal is to fit the entire Pinterest graph in memory. Obviously we can’t fit the entire 100 billion edges in RAM, but once we prune down to 20 billion edges, we end up with a graph that’s 150GB.
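The quoted numbers roughly check out as back-of-envelope arithmetic, assuming each edge is stored as approximately one 64-bit node id in a compressed adjacency list (an assumption on my part, not stated in the quote):

```python
edges = 20e9
bytes_per_edge = 8                       # assumed, not from the quote
print(edges * bytes_per_edge / 1e9, "GB")  # ~160 GB, in the ballpark of 150 GB
```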
Because something like a relational database can have 1M elements in a B-tree, but the whole data structure doesn't have to be in memory at the same time.
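A rough sketch of why a disk-resident B-tree works: with a high branching factor, even millions of keys give a tree only a few levels deep, so a lookup touches only a handful of pages. The branching factor of 200 below is an assumed value (it depends on page size and key size):

```python
import math

n = 1_000_000
branching = 200                        # assumed keys per node
height = math.ceil(math.log(n, branching))
print(f"~{height} levels -> ~{height} page reads per lookup")  # ~3
```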
Or operating system page tables as another example.
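Page tables are effectively radix trees keyed by the virtual address. For example, on x86-64 with 4 KiB pages, a 48-bit virtual address splits into four 9-bit indices (one per level) plus a 12-bit page offset; the address below is made up for illustration:

```python
vaddr = 0x00007F3A_1234_5678

offset = vaddr & 0xFFF
indices = [(vaddr >> (12 + 9 * level)) & 0x1FF for level in range(4)]
# indices[0] is the lowest-level page-table index, indices[3] the PML4 index
print([hex(i) for i in reversed(indices)], hex(offset))
```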