HACKER Q&A
📣 lambdadmitry

Why are M1-based Apple laptops so much faster at changing monitors?


There are reports across the web of M1 laptops switching displays or resolutions instantaneously (e.g. [1] or [2]). This can't be explained by lower memory latency or a faster CPU alone, because neither is 30×+ faster, and a speedup of that magnitude would be needed to get from several seconds (the current norm across platforms and GPU vendors) down to under 100 ms. So most likely they do something differently there, skipping work instead of doing the same work faster. What is that? Is it somehow dependent on their hardware, or is it something like what Intel did to MKL on AMD?


  👤 Someone Accepted Answer ✓
My guess is that most systems (almost?) completely start from scratch setting up the hardware on resolution switches.

You need that code anyway for every resolution, and it means you need only n paths for n different resolutions, not the n×(n-1) you need for switching from any of them to any of the others. It also makes handling the various setups completely independent. There can’t be some left-over state from the previous resolution setting that interferes with how the current setting works.
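The path-count argument can be illustrated with a toy sketch (hypothetical mode names, not real driver code): resetting to a known power-on state needs one setup routine per mode, while switching directly between modes would need a handler for every ordered (from, to) pair.

```python
# Toy illustration of the path-count argument, not real driver code:
# a full reset needs one setup routine per mode; direct switching
# needs a transition handler for every ordered (from, to) pair.
from itertools import permutations

modes = ["640x480", "1280x1024", "1920x1080", "3840x2160"]

reset_paths = len(modes)                          # n setup routines
direct_paths = len(list(permutations(modes, 2)))  # n * (n - 1) transitions

print(reset_paths, direct_paths)  # 4 modes -> 4 vs 12 paths
```

With 16 modes the gap is 16 vs 240, which is why "tear down and reconfigure from scratch" is the easier code to keep correct.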

It also might be a left-over from CRT days, when the hardware couldn’t seamlessly switch to a different clock frequency, and doing the full restart on the software side didn’t make much of a difference (yes, lots of hardware could switch horizontal resolution even on a line-by-line basis, but that, AFAIK, was always at the same number of vertical lines and frames/second, using ratios between horizontal resolutions with small denominators such as ½ or ¼).

This is one of those things where it takes a company like Apple to not only notice that this can be improved, but also decide it’s worth spending resources on.


👤 wfleming
I’m largely guessing here, but I think at least one aspect of the hardware contributing to this is that the M1 shares the same RAM between the CPU and GPU. Normally that’s not the case, so a display change requires recalculating the display buffer for the entire screen and then copying it to the GPU. The M1 can skip that copy phase; it just tells the GPU where the buffers already are in RAM. That definitely can’t account for multiple seconds, but it’s probably part of the story.
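A rough way to picture the copy-vs-share difference is this toy sketch (not how any real driver works — just discrete VRAM modeled as a copied buffer versus unified memory modeled as a zero-copy view):

```python
# Toy contrast, not real driver code: with discrete VRAM, a CPU-rendered
# frame must be copied into GPU memory; with unified memory, the GPU can
# be pointed at the same buffer the CPU already wrote.
framebuffer = bytearray(3840 * 2160 * 4)  # one 4K RGBA frame, ~33 MB

def upload_discrete(cpu_fb):
    return bytes(cpu_fb)       # full-frame copy into "VRAM"

def upload_unified(cpu_fb):
    return memoryview(cpu_fb)  # zero-copy view of the same memory

vram = upload_discrete(framebuffer)
shared = upload_unified(framebuffer)
framebuffer[0] = 0xFF          # CPU writes a pixel after the "upload"
print(vram[0], shared[0])      # copy misses the write; shared view sees it
```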

We also know that Apple spent time optimizing very specific operations in their CPU - optimizing retain/release operations on objects is the one that’s gotten mentioned a lot. Maybe they spent similar time optimizing things in the GPU for changing modes/outputs?


👤 fsflyer
I haven't seen an M1 Mac in person yet. Does the hardware actually switch resolution on the M1 Macs? Or does it only change the software scaling and leave the hardware alone? I suspect the laptop panels just stay in their native resolution. What about external monitors?

macOS sends a notification to each app that the screen size changed, so a faster CPU speeds that up, and lots of cores can process those notifications in parallel.