HACKER Q&A
📣 stephenheron

Why hasn't x84 caught up with Apple M series?


Hi,

My daily workhorse is an M1 Pro that I purchased on release day, and it has been one of the best tech purchases I have made; even now it deals with anything I throw at it. My daily workload regularly involves an Android emulator, an iOS simulator, and a number of Docker containers running simultaneously, and I never hear the fans. Battery life has taken a bit of a hit, but it is still very respectable.

I wanted a new personal laptop and was debating between a MacBook Air and a Framework 13 with Linux. I wanted to lean into learning something new, so I went with the Framework, and I must admit I am regretting it a bit.

The M1 was released back in 2020, and I bought the Ryzen AI 340, one of AMD's newest 2025 chips, so AMD has had five years of extra development. I had expected them to get close to the M1 in terms of battery efficiency and thermals.

The Ryzen uses TSMC's N4P process, compared to the M1's older N5. I managed to find a TSMC press release showing the performance/efficiency gains from the newer process: “When compared to N5, N4P offers users a reported +11% performance boost or a 22% reduction in power consumption. Beyond that, N4P can offer users a 6% increase in transistor density over N5.”

I am sorely disappointed; using the Framework feels like using an older Intel-based Mac. If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, and if I open a YouTube video the fans will often spin up.

Why haven't AMD/Intel been able to catch up? Is x86 simply unable to keep up with the ARM architecture? When can we expect an x86 laptop chip to match the M1 in efficiency/thermals?!

To be fair, I haven't tried Windows on the Framework yet, so it might be my Linux setup being inefficient.

Cheers, Stephen


  👤 roscas Accepted Answer ✓
RISC vs CISC. Why do you think a mainframe is so fast?

ARM is great. Those M-series machines are the only Apple hardware I would buy used and put Linux on.


👤 bigyabai
All Ryzen mobile chips (so far) use a homogeneous core layout. If heat/power consumption is your concern, AMD simply hasn't caught up to the big.LITTLE-style architecture Intel and Apple use.

In terms of performance, though, those N4P Ryzen chips have knocked it out of the park for my use cases. It's still a great architecture for desktop/datacenter applications.


👤 netvarun
s/x84/x86/

👤 trashface
I may be out of date or wrong, but I recall that when the M1 came out there were claims that x86 could never catch up because of an instruction-decoding bottleneck (x86 instructions are variable-length), which the M1 doesn't have, or can work around by decoding in parallel. Because of that bottleneck, x86 needs other tricks to get speed, and those run hot.
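
For illustration, here's a quick way to see the decode problem on a Linux box (a sketch; it assumes bash's printf and binutils' objdump, and the hex bytes are a hand-assembled toy function). The five x86-64 instructions occupy 1, 3, 5, 1, and 1 bytes, so a decoder can't know where instruction N+1 starts until it has sized instruction N; every AArch64 instruction is exactly 4 bytes, so a wide front end can slice the byte stream at fixed offsets and decode in parallel:

    # push rbp; mov rbp,rsp; mov eax,42; pop rbp; ret
    printf '\x55\x48\x89\xe5\xb8\x2a\x00\x00\x00\x5d\xc3' > demo.bin
    # disassemble the raw bytes and note the varying instruction lengths
    objdump -D -b binary -m i386:x86-64 demo.bin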

👤 daemonologist
I think this is partially down to Framework being a very small and new company that doesn't have the resources to make the best use of every last coulomb, rather than an inherent deficiency of x86. Larger companies like Asus and Lenovo are able to build more efficient laptops (at least under Windows), while Apple (with very few product SKUs and full vertical integration) can push things even further.

notebookcheck.com does pretty comprehensive battery and power efficiency testing - not of every single device, but they usually include a pretty good sample of the popular options.


👤 dmitrygr
There is one positive to all of this: finally, we can stop listening to people who keep saying that Apple Silicon is ahead of everyone else only because they have access to a better process. There are now chips on better processes than the M1 that still deliver much worse performance per watt.

👤 blacksmith_tb
I considered getting a personal MBP (I have an M3 from work), but picked up a Framework 13 with the AMD Ryzen 7 7840U. I have Pop!_OS on it, and while it isn't quite as impressive as the MBP, it is radically better than other Windows/Linux laptops I have used lately. Battery life is quite good, ~5 hr or so: not on par with the MBP, but still good enough that I don't really have any complaints (and being able to upgrade the RAM / SSD / even the mainboard is worth some tradeoff to me, whereas my employer will just throw my MBP away in a few years).

👤 dapperdrake
How much do you like the rest of the hardware? What price would seem OK for decent GUI software that runs for a long time on battery?

I am learning x86 in order to build nice software for the Framework 12's i3-1315U (Raptor Lake), going through the optimization manuals for Intel's E-cores (apparently Atom-derived) and AMD's compact "5c" cores. The efficiency cores on the M1 MacBook Pro are awesome. Getting Debian or Ubuntu with KDE to run like that on a FW12 would be mind-boggling.


👤 al_borland
I’ve been thinking a lot about getting something from Framework, as I like their ethos around repairability. However, I currently have an M1 Pro which works just fine, so I've been kicking the can down the road while worrying that a Framework just won't be up to par with what I'm used to from Apple. Not just the processor, but everything. Even in the Intel Mac days, I ended up buying an Asus Zephyrus G14, which had nothing but glowing reviews from everyone. I hated it and sold it within 6 months. There is a level of polish that I haven't seen on any x86 laptop, which makes it really hard for me to venture outside of Apple's sandbox.

👤 pengaru
The M1 MacBook Pro I used at work for several months, until my Ubuntu ThinkPad P14s (Ryzen 7 7840U, 32GB RAM) arrived, didn't seem particularly amazing.

The only really annoying thing I've found with the P14s is the CrowdStrike junk killing battery life when it pins several cores at 100% for an hour. That never happened on macOS. These are corporate-managed devices I have no say in, and the Ubuntu flavor of the corporate malware is obviously far worse implemented in terms of efficiency and impact on battery life.

I recently built myself a 7970X Threadripper and it's quite good perf/$, even for a Threadripper. If you build a gaming-oriented 16-core Ryzen, the perf/$ is ridiculously good.


👤 wslh
Most probably because it is not impacting Microsoft's sales?

👤 altairprime
M1’s efficiency/thermals performance comes from having hardware-accelerated core system libraries.

Imagine that you made an FPGA do x86 work, and then you wanted to optimize libopenssl, or libgl, or libc. Would you restrict yourself to only modifying the source code of the libraries but not the FPGA, or would you modify the processor to take advantage of new capabilities?

As a made-up example: when the iPhone 27 comes out, it won't support booting iOS 26 or earlier, because the drivers necessary to light it up aren't yet published; and, similarly, it can have 3% less battery weight because they optimized the display controller to DMA more efficiently through changes to its M6 processor and the XNU/Darwin 26 DisplayController dylib.

Neither Linux, Windows, nor Intel has shown any capability to plan and execute such a strategy outside of video codecs and network I/O cards. GPU hardware acceleration is tightly controlled and defended by AMD and Nvidia, who want nothing to do with any shared strategy, and neither Microsoft nor Linux generally has shown any interest whatsoever in hardware-accelerating the core system to date, though one could theorize that the Xbox is exempt from that, especially given the Proton chip.

I imagine Valve will eventually do this, most likely working with AMD to get custom silicon that implements custom hardware accelerations inside the Linux kernel that are both open source for anyone to use and utterly useless elsewhere, since their correct operation hinges on the custom silicon. I suspect Microsoft, Nintendo, and Sony already do this with their gaming consoles, but I can't offer any certainty on this paragraph of speculation.

x86 isn't able to keep up because x86 isn't updated annually across software and hardware alike. M1 is what x86 could have been if it were versioned and updated, without backwards compatibility, as often as Arm was. It would be like saying "Intel's 2026 processors all ship with AVX-1024 and hardware-accelerated DMA, and the OS kernel must be compiled for the new ABI to boot on them". The wreckage across the x86 ecosystem would be immense, and Microsoft would boycott them outright to try to protect itself from having to work harder to keep up, just like Adobe did with Apple M1, at least until their userbase started canceling subscriptions en masse.

That's why there are so many Arm Linux architectures: for Arm, this is just a fact of everyday life, and that's what gave the M1 such a leg up over x86. Not having to support anything older than your release date means you can focus on the sort of boring incremental optimizations that wouldn't be permissible in the "must run assembly code written twenty years ago" environment assumed by Linux/Windows today.


👤 ac29
One downside of Framework is that they use DDR instead of LPDDR. This means you can upgrade or replace the RAM, but it also means the memory is slower and more power-hungry.

It's also probably worth putting the laptop in "efficiency" mode (15W sustained, 25W boost, per Framework). The difference in performance should be fairly negligible compared to balanced mode for most tasks, and it will use less energy.
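
If you're on Linux, a minimal sketch of doing this by hand (assuming power-profiles-daemon for the profile switch; the RyzenAdj flags take milliwatts, the 15W/25W values mirror Framework's stated efficiency-mode limits, and whether the platform honors them is hardware-dependent):

    # switch the system power profile (power-profiles-daemon)
    powerprofilesctl set power-saver

    # or pin sustained/boost package power directly with the third-party RyzenAdj
    sudo ryzenadj --stapm-limit=15000 --slow-limit=15000 --fast-limit=25000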



👤 sandreas
In my opinion AMD is on a good path to at least comparable performance to the MacBooks by copying Apple's architectural decisions. Unfortunately, their jump onto the latest AI hype train did not suit them well for efficiency: the Ryzen 7840U was significantly more efficient than the Ryzen AI 7 350 [1].

However, with AMD Strix Halo, aka the AMD Ryzen AI Max+ 395 (PRO), there are notebooks like the HP ZBook Ultra G1a and tablets like the Asus ROG Flow Z13 that come close to the MacBook power/performance ratio[2], because they use high-bandwidth soldered-on memory, which allows for GPUs with shared VRAM, similar to Apple's strategy.

Framework has not managed to put this chip in a notebook yet, but shipped a desktop variant. They also pointed out that there was no way to use LPCAMM2 or any other modular RAM tech with that machine, because it would have slowed it down / increased latencies to an unusable degree.

So I'm pretty sure the main reason for Apple's success is the deeply integrated architecture, and I'm hopeful that AMD's next generation of Strix Halo APUs might provide this with higher efficiency, and that Framework adopts these chips in their notebooks.

Regarding the deeply-thought-through integration, there is a story I often tell: Apple used to make iPods. These supported audio playback control with their headphone remotes (e.g. EarPods), which are still available today. These used a proprietary ultrasonic chirp protocol[3] to identify Apple devices, and they supported volume control and complex playback-control actions. You could even navigate through menus via VoiceOver with a long press and then use the volume buttons to navigate. To this day, via their USB-C-to-audio-jack adapters, these still work on nearly every Apple device released after 2013, and the wireless earbuds also support parts of this. Android has tried to copy this tiny little engineering wonder, but to this day they have not managed to get it working[4]. They instead focus on their proprietary "long press should start 'Hey Google'" thing, which is ridiculously hard to intercept/override in officially published Android apps... what a shame ;)

1: https://youtu.be/51W0eq7-xrY?t=773

2: https://youtu.be/oyrAur5yYrA

3: https://tinymicros.com/wiki/Apple_iPod_Remote_Protocol

4: https://github.com/androidx/media/issues/2637


👤 DuckConference
They're big, expensive chips with a focus on power efficiency. AMD's and Intel's chips on the big and expensive side tend to be optimized for higher power ranges, so they don't compete well on efficiency, while their more power-efficient chips tend to be optimized for size/cost.

If you're willing to spend a bunch of die area (which directly translates into cost) you can get good numbers on the other two legs of the Power-Performance-Area triangle. The issue is that the market position of Apple's competitors is such that it doesn't make as much sense for them to make such big and expensive chips (particularly CPU cores) in a mobile-friendly power envelope.


👤 PaulKeeble
I tend to think it's putting the memory on the package. On-package memory gives the M1 Max over 400GB/s, a good 4x what a usual dual-channel x86 CPU gets, and the latency is half that of going out to a DRAM slot. That is drastic; I remember when the memory controller was first folded into the CPU by AMD with the Athlon 64, and it brought similarly big improvements in performance. It also reduces power consumption a lot.
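
For a rough sense of where your own machine sits, sysbench includes a synthetic memory benchmark (a sketch; it is single-threaded by default, so expect numbers well below the theoretical peak bandwidth):

    # sequential write throughput; the MiB/sec figure is at the end of the output
    sysbench memory --memory-block-size=1M --memory-total-size=10G run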

The cost is flexibility, and I think for now they don't want to move to fixed RAM configurations. The X3D approach from AMD gets a good chunk of the benefits by just stacking lots of cache on the CPU die.

Apple got a lot of performance out of not a lot of watts.


👤 todotask2
x86 has long been the industry standard and can't be removed, but Apple could move away from it because they control both hardware and software.

👤 hnaccountme
Apple tailors their software to run optimally on their hardware. Other OSes have to work on a variety of platforms, which limits the amount of hardware-specific optimization they can do.

👤 chvid
I don't think there is a single thing you can point to. But overall, Apple's hardware/software stack is highly optimized and closely knit, and each component is in general the best the industry has to offer. It is sold cheaply because they make money on volume and an optimized supply chain.

Framework does not have the volume, it is optimized for modularity, and the software is not as optimized for the hardware.

As a general-purpose computer Apple is impossible to beat, and it will take a paradigm shift for that to change (a completely new platform, similar to the introduction of the smartphone). Framework has its place as a specialized device for people who enjoy flexible hardware and custom operating systems.


👤 gigatexal
I think the Ryzen AI Max+ 395 gets really close in terms of performance per watt.

👤 ben-schaaf
Battery efficiency comes from a million little optimizations in the technology stack, most of which come down to using the CPU as little as possible. As such, the instruction set architecture and process node aren't usually that important when it comes to your battery life.

If you fully load the CPU and calculate how much energy an AI 340 needs to perform a fixed workload, and compare that to an M1, you'll probably find similar results; but that only matters for your battery life if you're doing things like Blender renders, big compiles, or gaming.

Take for example this battery-life gaming benchmark for an M1 Air: https://www.youtube.com/watch?v=jYSMfRKsmOU. 2.5 hours is about what you'd expect from an x86 laptop, possibly even worse than the FW13 you're comparing here. But turn down the settings so that the M1 CPU and GPU are mostly idle, and bam, you get 10+ hours.

Another example would be a ~5-year-old mobile Qualcomm chip. It's on a worse process node than an AMD AI 340, much much slower, with significantly worse performance per watt, and yet it barely gets hot and sips power.

All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding. I've had to enable GPU video decoding on my FW16, and I haven't noticed the fans on YouTube since.
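
If you want to check this on your own machine, a minimal sketch (vainfo ships in libva-utils; the Firefox pref is standard, but Chromium's Linux decode flags have changed across versions, so treat that one as an assumption to verify):

    # list the codecs your GPU can decode in hardware (H.264/HEVC/VP9/AV1)
    vainfo

    # Firefox: set media.ffmpeg.vaapi.enabled = true in about:config
    # Chromium: hardware decode sits behind a feature flag on Linux, e.g.
    chromium --enable-features=VaapiVideoDecodeLinuxGL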


👤 pythonRon
Does the M series have a flat memory model? If so, I believe that may be the difference. I'm pretty sure the entire x86 family still pages RAM access which (at least) quadruples activity on the various busses and thus generates far more heat and uses more energy.

👤 mmcnl
A lot of insightful comments already, but there are two other tricks I think Apple is using: (1) the laptops can get really hot before the fans turn on audibly, and (2) the fans are engineered to be super quiet, so even when they run at low RPM you won't hear them. This makes the M-series seem even more efficient than it is.

Also, the MacBook Pros especially have really large batteries, on average larger than the competition's. This increases the battery runtime.


👤 noelwelsh
Like a few other comments have mentioned, AMD's Strix Halo (AI Max 380 and above) is the chip family closest to what Apple has done with the M series. It has integrated memory and a decent GPU. A few iterations of this should be comparable to the M series (and should make local LLMs very feasible, if that is your jam).

👤 tannhaeuser
I always thought it was Apple's on-package DRAM latency that contributes to its speed relative to x86, especially for local LLM usage (inference, though not necessarily training), but with the answers here I'm not so sure.

👤 musicale
> a number of Dockers containers running simultaneously and I never hear the fans, battery life has taken a bit of a hit but it is still very respectable.

Note that those Docker containers are running in a Linux VM!

Of course they are on Windows (WSL2) as well.


👤 j45
One was built from the ground up more recently than the other.

Looking beyond Apple/Intel, AMD recently came out with a CPU that shares memory between the GPU and CPU like the M processors do.

The Framework is a great laptop - I'd love to drop a Mac motherboard into something like that.


👤 FrankyHollywood
Backward compatibility.

Intel provides processors for many vendors and many OSes. Changing to a new architecture is almost impossible to coordinate. Apple doesn't have this problem.

Actually, in the 90s Intel and Microsoft wanted to move to a RISC architecture, but Compaq forced them to stay on x86.


👤 purpleidea
Honestly, I have serious FOMO about this. I am never going to run a Mac (or worse: Windows); I'm 100% on Linux. But I seriously hate that I can't reliably work at a coffee shop for five hours, not even doing that much other than some music, coding, and a few compiles of Go code.

My Apple friends get 12+ hours of battery life. I really wish Lenovo + Fedora or whoever would get together and make that possible.


👤 out_of_protocol
On the efficiency side, there's a big difference in the OS department. The recently released Lenovo Legion Go S handheld comes in both SteamOS (which is Arch, btw) and Windows 11 versions, allowing a direct comparison of the efficiency of AMD's Z1 Extreme chip under load with a limited TDP. The difference is huge: with SteamOS, fps is significantly higher, and at the same time the battery lasts a lot longer.

Windows does a lot of useless crap in the background that kills battery and slows down user-launched software


👤 Panzer04
Software.

If you actually benchmark these chips on a computational workload, I'd imagine the newer chip should handily beat the old M1.

I find both Windows and Linux have questionable power management by default.


👤 danb1974
MacBooks come from a "phone/tablet hardware evolved into desktop" mindset (low power, high performance). x86 hardware is the other way around (high power, we'll see about performance).

That being said, my M2 beats the ... out of my twice-as-expensive work laptop when compiling an Arduino project. Literal jaw drop the first time I compiled on the M2.


👤 mschuster91
> I am sorely disappointed, using the Framework feels like using an older Intel based Mac. If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

A big thing is storage. Apple uses extremely fast storage attached directly to the SoC, physically very close to it. In contrast, most x86 systems use storage that's socketed (which adds physical signal runtime) and that goes via another chip (the southbridge). That means that, unlike Macs, which can use storage as swap without much practical impact, x86 devices pay a serious performance penalty for swapping.

Another part of the issue, when it comes to cooling, is that Apple is virtually the only laptop manufacturer that makes solid full-aluminium frames, whereas most x86 laptops are made out of plastic or, for higher-end ones, magnesium alloy. That gives Apple the advantage of being able to use the entire frame to cool the laptop, allowing far more thermal input before saturation occurs and the fans have to activate.


👤 dlcarrier
That's a Chrome problem, especially on extra-powerful processors like Strix Halo. Apple is very strict about power consumption in the development of Safari, but Chrome is designed to make use of all unallocated resources. This works great on a desktop computer, making it faster than Safari, but the difference isn't that significant, and it results in a lot of power draw on mobile platforms. Many simple web sites will peg a CPU core even when not in focus, and it really adds up with multiple tabs open.

It's made worse on the Strix Halo platform, because it's a performance-first design, so there are more resources for Chrome to take advantage of.

The closest browser to Safari that works on Linux is Falkon. Its compatibility is even lower than Safari's, so there are a lot of sites where you can't use it, but on the ones where you can, your battery usage can be an order of magnitude less.

I recommend using Thorium instead of Chrome; it's better, but it's still Chromium under the hood, so it doesn't save much power. I use it on pages that refuse to work on anything other than Chromium.

Chrome doesn't let you suspend tabs, and as far as I could find there aren't any plugins to do so; it just kills the process when there aren't enough resources and reloads the page when you return to it. Linux, however, can suspend processes, and you can save a lot of battery life if you suspend Chrome when you aren't using it.

I don't know of any GUI for this, although most window managers make it easy to assign a keyboard shortcut to a command. Whenever you aren't using Chrome but don't want to deal with closing and re-opening it, run the following command (and ignore the name; it doesn't kill the process):

    killall -STOP google-chrome
When you want to go back to using it, run:

    killall -CONT google-chrome
This works for any application. The RAM usage will remain the same while suspended, but the process won't draw power reading from or writing to RAM, and its CPU usage will drop to zero. The windows will remain open and the window manager will handle them normally, but what's inside won't update, and clicks won't do anything until the process is resumed.
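
A tiny toggle script makes this comfortable to bind to a single window-manager shortcut (a sketch; it assumes the process name is google-chrome and uses the "T" state that ps reports for stopped processes):

    #!/bin/sh
    # Suspend Chrome if it's running; resume it if it's stopped.
    if ps -o stat= -C google-chrome | grep -q '^T'; then
        killall -CONT google-chrome   # stopped -> resume
    else
        killall -STOP google-chrome   # running -> suspend
    fi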

👤 aorth
Thanks for the honest review! I have two Intel ThinkPads (2018 and 2020) and I've been eyeing the Framework laptops for a few years as a potential replacement. It seems they do keep getting better, but I might just wait another year. When will x86 have the "alien technology from the future" moment that M1 users had years ago already?

👤 pharrington
I don't know, but I suspect the builds of the programs you're using play a huge factor here. Depending on the Linux distro and package manager you're using, you might not be getting programs compiled with the latest x86_64 optimizations.
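
Two things worth checking, as a sketch (the loader query needs glibc 2.33+ and the loader path varies by distro): which x86-64 microarchitecture level your CPU supports, and whether your hot code is compiled for it. Most distros still target the baseline x86-64 level, while -march=x86-64-v3 enables AVX2/FMA and friends:

    # report which x86-64-v* levels this CPU/glibc combination supports
    /lib64/ld-linux-x86-64.so.2 --help | grep supported

    # recompile performance-critical code for a newer baseline
    gcc -O2 -march=x86-64-v3 -o myprog myprog.c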

👤 asno3030
They are pretty similar when comparing the latest AMD and Apple chips on the same node. Apple's buying power means they get new nodes earlier than AMD, usually by 6-9 months.

Windows, on the other hand, is horribly optimized, not only for performance but also for battery life. You see somewhat better results from Linux, but again, it takes a while for all of the optimizations to trickle down.

The tight integration between the chip, the operating system, and targeted compilation all comes together to make a tightly optimized product. Comparing raw compute and efficiency, however, the AMD products tend to match the capability of any given node.


👤 lawn
> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

I've got the Framework 13 with the Ryzen 5 7640U, and I routinely have dozens of tabs open, including YouTube videos, Docker containers, and a handful of Neovim instances with LSPs, and neither the fans nor heat have ever been a problem (except when I max out the CPU with a heavy compilation).

The issue you're seeing isn't x86 lacking; it's something else in your setup.


👤 teekert
I think it is getting close: [0]

(Edit: I read lower in the thread that the software platform also needs to know how to make efficient use of this performance per watt, i.e., by not taking all the watts it can get.)

[0] https://www.phoronix.com/review/ryzen-ai-max-395-9950x-9950x...


👤 Aissen
You could probably install Asahi Linux on that M1 Pro and run comparative benchmarks. Does it still feel different? (Serious question.)

👤 rs186
> using the Framework feels like using an older Intel based Mac

Your memory serves you wrong. The experience with Intel-based Macs was much worse than with recent AMD chips.


👤 atwrk
To me it simply looks like Apple buys out the first year of every new TSMC node, and that is the main reason the M series is more efficient. Strix Halo (N4P) has, according to Wikipedia, a transistor density of about 140 MTr/mm2, while the M4 (N3E) has about 210 MTr/mm2. Isn't the process node alone enough to explain the difference? (Plus software optimizations in macOS, of course.)

👤 fafhnir
I have the same experience here with my MacBook Air M1 from 2020 with 16GB RAM and 512GB SSD. After three years, I upgraded to a MacBook Pro with M3 Pro, 36GB of RAM, and 2TB of storage. I use this as my main machine with 2 displays attached via a TB4 dock.

I work in IT, and all the new machines for our company cross my desk for evaluation; I have observed the exact same points as the OP.

The new machines are either fast, loud, and hot, with poor battery life, or slow and "warm", with moderate battery life.

But I have yet to see a business laptop, whether ARM, AMD, or Intel, that can even compete with the M1 Air, not to speak of the M3 Pro! And don't get me started on all the issues with crappy Lenovo docks, etc.

It doesn't matter whether I install Linux or Windows. The funny thing is that some of my colleagues have ordered a MacBook Air or Pro and run their Windows or Linux in a virtual machine via Parallels.

Think about it: Windows 11 or Linux in a VM is faster, snappier, quieter, and has longer battery life than those systems running natively on a business machine from Lenovo, HP, or Dell.

Well, your mileage may vary, but IMHO there is no alternative to a Mac nowadays, even if you want to use Linux or Windows.


👤 munchlax
I can build myself a new amd64 box for just under €200, or under €100 with used parts. Some older Dell and Lenovo laptops even work with coreboot.

A MacBook Air sets me back €1000, enough to buy a used car, and AFAICT it is much more difficult to get Linux fully working on it than on my €200 amd64 build.

Why hasn't Apple caught up?


👤 KingOfCoders
1. Memory soldered to the CPU

2. Much more cache

3. No legacy code

4. High frequencies

The engineers at AMD are just as good as those at Apple, but the two markets demand different chips, and they get different chips.

For some time now the market has been talking about energy efficiency, and we see:

1. AMD soldering memory close to the CPU

2. Intel and AMD adding more cache

3. Talks about removing legacy instructions and bit widths

4. Lower out of the box frequencies

It will take more market pressure and more time, though.


👤 gettingoverit
> might be my Linux setup being inefficient

Given that videos spin up the fans, there is likely a problem with your GPU setup on Linux (missing hardware video decoding), and I'd expect an improvement if you managed to fix it.

Another thing is that Chrome on Linux tends to consume an exorbitant amount of power with all its background processes, inefficient rendering, and disk I/O, so updating it to one of the latest versions and enabling "memory saving" might help a lot.

Switching to another scheduler, reducing the interrupt rate, etc., probably helps too.

Linux on my current laptop reduced battery runtime to roughly a twelfth of what it was under Windows, and a bunch of optimizations like these improved the situation to something like a sixth, i.e. it's still very bad.

> Is x86 just not able to keep up with the ARM architecture?

Yes and no. x86 is inherently inefficient, and much of the progress over the last two decades has been about offloading computation to more specialized and efficient coprocessors. That's how we got GPUs, and DMA on M.2 and Ethernet controllers.

That said, it's unlikely that x86 specifically is what wastes your battery. I would rather blame Linux: I suspect its CPU frequency/power drivers misbehave on some CPUs, and unfortunately I have no idea how to fix that.


👤 jryan49
What is your power-profile setting? Is it on balanced or performance? Install powertop and see what is up. What distro are you using? The Linux drivers for the new AMD chips might stink because the chips are so new, and Linux drivers for laptops stink in general compared to Windows. I know the WiFi on my 11th-gen laptop still doesn't work right, even with the latest kernel and power saving disabled on the WiFi.
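
For reference, the usual powertop workflow looks like this (a sketch; --auto-tune flips every tunable to its power-saving setting, which can occasionally break a device such as a USB mouse, so review the Tunables tab first):

    # interactive view: per-process/per-device power estimates and a Tunables tab
    sudo powertop

    # apply all power-saving tunables in one shot
    sudo powertop --auto-tune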

👤 hoppp
I don't think a spinning fan is a negative. It means the cooling is functioning effectively.

Apple often lets the device throttle before it turns on the fans, for "better UX"; Linux plays no such mind games.


👤 BirAdam
First, Apple did an excellent job optimizing their software stack for their hardware. This is something few companies have the ability to do, as most target a wide array of hardware. It is even more impressive given the scale of Apple's hardware: the same kernel runs on a Watch and a Mac Studio.

Second, the x86 platform has a lot of legacy, and each x86 instruction is translated into RISC-like micro-ops. This is an inherent penalty that Apple doesn't have to pay, and it is also why Rosetta 2 can achieve "near native" x86 performance: both platforms translate x86 instructions.

Third, there are architectural differences even with the instruction-decoding step removed from the discussion. Apple Silicon has a huge out-of-order buffer, and its decoder is 8-wide vs x86's typical 4-wide. From there, the actual logic is different, the design is different, and the packaging is different. AMD's Ryzen AI Max 300 series does get close to Apple by using many of the same techniques, like unified memory and tossing everything onto the package; where it loses is due to all of the other differences.

In the end, if people want crazy efficiency, Apple is a great answer and delivers solid performance. If people want the absolute highest performance, then something like a Ryzen Threadripper, EPYC, or even the higher-end consumer AMD chips are great choices.


👤 throwaway_20357
Cinebench points per Watt according to a recent c't CPU comparison [1]:

  Apple M1: 23.3
  Apple M4: 28.8
  Ryzen 9 7950X3D (from 2023, best x86): 10.6
All other x86 were less efficient.

The Apple CPUs also beat most of the respective same-year x86 CPUs in Cinebench single-thread performance.

[1] https://www.heise.de/tests/Ueber-50-Desktop-CPUs-im-Performa... (paywalled, an older version is at https://www.heise.de/select/ct/2023/14/2307513222218136903#&...)


👤 76SlashDolphin
There's a lot of trash-talking of x86 here, but I feel like it's not x86 or Intel/AMD that are the problem, for the simple reason that Chromebooks exist. If you've ever used a Chromebook with the Linux VM turned on, it can basically run everything you can run on Linux, doesn't get hot unless you actually run something demanding, has very good idle power usage, and actually sleeps properly. All this while running on the same i5 that would overheat and fail to sleep under Windows or default Linux distros. This means it is very much possible for x86 to achieve similar runtimes and heat output to an M-series Mac; you just need two things:

- Properly written firmware. All Chromebooks are required to use coreboot and must meet very strict implementation-quality requirements set by Google. Windows laptops don't have that and very often have very annoying firmware problems, even in the best cases like ThinkPads and Frameworks. Even on samples from those good brands, just the s0ix self-tester has personally given me glaring failures in basic firmware capabilities.

- A properly tuned kernel and OS. ChromeOS is Gentoo under the hood, and every core service is, afaik, recompiled for the CPU architecture with many optimisations enabled. I'm pretty sure the kernel is also tweaked for battery life and desktop usage. Default installations of popular distros will struggle to match this, because they ship pre-compiled and need to support devices other than ultrabooks.

Unfortunately, it seems like Google is abandoning the project altogether, seeing as they're dropping Steam support and merging ChromeOS into Android. I wish they'd instead make another Pixelbook and work with Adobe and other professional software companies to make their software compatible with Proton + Wine; then we'd have a real competitor to the M1 MacBook Air, which nothing outside of Apple can match still.


👤 donatj
> I haven’t tried Windows on the Framework yet it might be my Linux setup being inefficient.

My experience has been the contrary: moving to Linux from Windows a couple of months ago doubled my battery life and killed almost all the fan noise.


👤 jarpineh
I wonder what the difference is between the efficiency of the MacBook display and the Framework's. While the CPU and GPU draw considerable power, they aren't usually working at 100% utilization; the display, however, has to draw power all the time, possibly at high brightness in daytime. All(?) MacBooks have high-resolution displays, which should be much more power-hungry than the Framework 13's IPS panel. The Pro models use mini-LED, which needs even more power.

I did ask an LLM for some stats about this. According to Claude Sonnet 4 through VS Code (for what that's worth), my MacBook's display can consume the same or even more power than the CPU does for "office work". Yet my M1 Max 16" seems to last a good while longer than whatever it was I got from work this year. I'd like to know how those stats are produced (or whether they are hallucinated...). There doesn't seem to be a way to read the display's power usage on M-series Macs, so you'd need to devise a testing regime, display off versus display at 100% brightness, to get some indication of its effect on power use.
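
On the Linux side you can approximate the display's share by sampling the battery's discharge rate at different brightness levels (a sketch; some batteries expose current_now/voltage_now instead of power_now, and brightnessctl is just one of several backlight tools):

    # discharge rate in microwatts; sample once at each brightness level
    cat /sys/class/power_supply/BAT*/power_now

    # set the panel to maximum, then minimum, brightness between samples
    brightnessctl set 100%
    brightnessctl set 5%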


👤 bsenftner
Well, there is a major architectural reason why the entire M series appears to be "so fast", and that is the unified memory, which eliminates the buffer-to-buffer copying that is probably over half of what a chip with a non-unified memory architecture is doing at any given time. On M-series chips you just reference the data where it is, and you're done.