HACKER Q&A
📣 larsiusprime

Is Moore's Law over, or not?


Every once in a while I see an article that seems to claim that Moore's Law is over, or slowing down, or about to be over. Then I see some counter-claim, that no, if you account for added cores, or GPUs, or some other third thing, that actually it's still right on track. This cycle has repeated every year for like the past 10 years, but the last few years feel like things have really started to slow down. Maybe that was partially illusory with the chip slowdown from the pandemic, but I figure now that we're several years out we should be able to say for sure.

It also seems like a pretty important question to answer, because it has big implications for the advancement of AI technology, which has everyone so freaked out.

So what's the consensus around here? Is Moore's Law actually over yet, or not?


  👤 kpw94 Accepted Answer ✓
Earlier today on HN there was a submission about the great CPU stagnation. In the blog post was an interesting link: https://raw.githubusercontent.com/karlrupp/microprocessor-tr...

This graph shows me that while, yes, technically Moore's law of doubling transistors per "thing Intel or AMD sells you" is still holding, it has ended for single-threaded workloads. Moore's law is only holding due to core count increases.

For everyday users running multiple simple programs/apps, that's fine. But for truly compute-heavy workloads (think CAD software or any other heavy processing), developers turned to GPUs to get their compute power improvements.

Writing programs that take full advantage of the core count increase is simply impossible for most workloads (see Amdahl's law: the serial fraction of a program caps its achievable speedup no matter how many cores you add). So even if one wanted to rearchitect programs to take full advantage of the overall transistor count growth from ~2005 to now, they wouldn't be able to.
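
To make the Amdahl's law point concrete, here's a minimal Python sketch (the 95% parallel fraction is purely an illustrative assumption):

    # Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
    # where p is the fraction of the work that can run in parallel.
    def amdahl_speedup(p, cores):
        return 1.0 / ((1.0 - p) + p / cores)

    for cores in (1, 4, 16, 64, 256):
        print(f"{cores:>3} cores -> {amdahl_speedup(0.95, cores):.1f}x")
    # 1 -> 1.0x, 4 -> 3.5x, 16 -> 9.1x, 64 -> 15.4x, 256 -> 18.6x

Even with 95% of the work parallelizable, the speedup can never exceed 1/0.05 = 20x, no matter how many cores you throw at it.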

Compare with pre-2005, where one just had to sit and wait to see their CPU-heavy workloads improve... It's definitely a different era of compute improvements.


👤 btilly
Nvidia thinks that Moore's Law is dead. https://arstechnica.com/gaming/2022/09/do-expensive-nvidia-g...

Intel, by contrast, says that Moore's Law is still alive. But Intel is technologically behind, and it is easier to improve when there is someone to learn from, so maybe there is a wall that they haven't yet hit.

Regardless, it is a very different law than when I was young, when code just magically got faster each year. Now we can run more and more things at once, but the timing for an individual single-threaded computation hasn't really improved in the last 15 years.


👤 hooby
I think it's worth mentioning that "Moore's Law" is not actually a "law". It's just an observation of a historical trend.

Moore posited in 1965 that the number of transistors per chip would roughly double every year - something he himself called "a wild extrapolation" in a later interview.

Actual development proved slower than that, so in 1975 he revised his prediction to transistors doubling every two years - so the original "Moore's Law" was already dead by then. The revised prediction proved more long-lived, in part because manufacturers were actively planning their development and releases around fulfilling that expectation - making it sort of a self-fulfilling prophecy.

There was another slowdown around 2010 though - with actual development falling behind "schedule" since then.

But neither the "doubling" nor the "year" or "two years" was ever anywhere near precise to begin with, so the question "is it dead" depends highly on how much leeway you are willing to give.

If you demand strict doubling every year - that's been dead since before 1975.

If you demand roughly doubling every two years - that's probably mostly dead since 2010.

If you allow for basically exponential growth but with a tiny bit of slow down over time - then it's still alive and well.

There can be no precise answer to the question - since the whole prediction was so imprecise to begin with. I don't think there's any benefit to getting hung up on drawing a supposedly precise line in the sand...
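
To see how much that leeway matters, here's a toy Python calculation comparing 48 years of growth (1975-2023) under slightly different doubling periods:

    # Cumulative growth factor after 48 years, per doubling period.
    for years_per_doubling in (1, 2, 2.5, 3):
        growth = 2 ** (48 / years_per_doubling)
        print(f"double every {years_per_doubling}y -> {growth:.1e}x")
    # every 1y -> 2.8e+14x, every 2y -> 1.7e+07x,
    # every 2.5y -> 6.0e+05x, every 3y -> 6.6e+04x

A small change in the assumed cadence compounds into many orders of magnitude of difference, which is part of why "is it dead" has no crisp answer.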


👤 martinpw
Moore's law is still going strong. But when people talk about Moore's Law ending they normally mean a wider set of trends, specifically:

- Transistor count doubles every ~24 months (Moore's law) - still going strong

- Total power stays ~constant (Dennard Scaling) - no longer holds

- Total cost stays ~constant - don't know if there is a name for this, but it no longer holds either

The real magic was when all three trends worked together: you would get more transistors for the same power and the same total cost. As the last two trends have fallen off, the benefit of Moore's law by itself is reduced.

There is still some effect from the last two trends - power per transistor and cost per transistor are still dropping, just at a slower rate than density is increasing. So power budgets and costs continue to grow. Hence 450W GPUs and 400W CPUs.
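
For reference, a minimal sketch of why those trends used to come together for free under idealized Dennard scaling (the per-generation scale factor k = 1.4 is the textbook idealization, not a measured value):

    # Idealized Dennard scaling: linear dimensions, voltage, and capacitance
    # all shrink by 1/k per generation while switching frequency rises by k.
    k = 1.4
    C, V, f = 1.0, 1.0, 1.0                # normalized per-transistor values
    P_old = C * V**2 * f                   # dynamic power per transistor
    P_new = (C / k) * (V / k)**2 * (k * f)
    print(P_new / P_old)                   # ~0.51 = 1/k^2 per transistor
    print((P_new / P_old) * k**2)          # ~1.0: density rises k^2, so power
                                           # per unit area stays constant

When the voltage term stopped shrinking, that last line stopped holding - hence the growing power budgets.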


👤 tdsanchez
I worked at Intel when the first mobile Pentiums were being developed. Back then, gate oxides were 10-12 atoms thick. That was nearly 30 years ago and feature size was 350nm or .35 micron.

Today's 3nm processes use three-dimensional gates with film thicknesses on the order of 5 to 8 atoms, and the feature sizes are smaller than the wavelength of the light used to expose the wafers' mask reticles, which rely on light interference to print features smaller than the EUV wavelength of around 13.5nm.

Getting much smaller than 1nm using these techniques is going to run into fundamental physical limits within a decade, and that limit will probably be around 0.5nm feature size.

The next frontier in silicon will be building three dimensional chips and IBM is a pioneer in 3D stacking of CMOS gates.


👤 sanxiyn
Transistors are still scaling, for now. SRAM is not! TSMC N3E SRAM density is equal to N5 SRAM density. This is something of an inflection point.

https://www.tomshardware.com/news/no-sram-scaling-implies-on...


👤 Curzel
Dead, but we're heading into a new paradigm of hardware accelerators (encoders, decoders, AI-optimized chips, multi-die GPUs, ...) and new packaging systems (M1 SoCs, GPU chiplets, ...), all powered by what was previously the big bottleneck: the interconnects between these components (AMD's Infinity Fabric, Apple's UltraFusion, ...)

👤 epicureanideal
According to Jim Keller, not dead:

https://www.youtube.com/live/oIG9ztQw2Gc?feature=share

This isn’t the best recording on YouTube but it’s late and I couldn’t quickly find the other one.


👤 gpjanik
No. Moore's Law has been declared dead since at least 2010, and nothing has changed in terms of the increase in transistor density on chips. It's still doubling every two years, just as stably as in the 90s.

https://ourworldindata.org/grapher/transistors-per-microproc...

(the stagnation of 2019-2020 has nothing to do with technology; it's COVID)


👤 brancz
In its original form, as in "transistors double every two years", it certainly already seems over. Apple silicon is at 5nm today, and Intel claims they'll conquer 3nm by 2030, so even these numbers already don't fit the doubling cadence anymore.

We'll probably have a couple more innovations and might get to making a transistor out of a single atom (a silicon atom is 0.262nm; a carbon atom is 0.3nm).

5nm / 2^4 ≈ 0.3nm
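
The same arithmetic in Python (taking marketing node names at face value, which other comments here dispute):

    import math
    # Halvings of linear feature size left from "5nm" to ~0.3nm (one atom):
    print(math.log2(5 / 0.3))  # ~4.1 halvings; at ~2 years each, ~8 years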

So I don't think we're done making faster hardware just yet, but we're certainly getting to the boundaries of what appears to be physically possible.


👤 ly3xqhl8g9
Moore's Law has become a snowclone [1], the current iteration is:

"AI's Rule: Just as Moore's Law unfolds, the language models might expand, doubling the size and inference capability every [insert timeframe], revolutionizing communication and comprehension in unprecedented ways." (generated by ChatGPT)

[1] "a cliché and phrasal template that can be used and recognized in multiple variants", https://en.wikipedia.org/wiki/Snowclone


👤 d110af5ccf
No, it is not over (yet). The node is nonsense. https://ieeexplore.ieee.org/document/9150552 DOI: 10.1109/MSPEC.2020.9150552

Edit: I have no idea why anyone would downvote a link to this article. It directly answers the question with a decent level of technical detail. We are nowhere near single atom sized features yet, despite what node names might lead you to believe. There's still quite a ways to go.


👤 bjourne
The original Moore's law was about transistors per chip AT CONSTANT COST. That has ceased to be true as design and fabrication costs are increasing exponentially. However, the number of transistors per chip (regardless of cost) has still tracked Moore's law.

What is definitely over is Dennard scaling. As transistors got smaller, it used to be possible to reduce the current used to drive them. That in turn made it possible to increase the clock frequency without frying the chip, since heat dissipation is proportional to the frequency and the drive current. That's not possible anymore, because leakage current and other parasitics (electrical noise, essentially) do not scale with transistor size. In the past you could take a 486DX, overclock it from 25 to 50MHz, and it would "magically" get about twice as fast. That is not possible anymore, and chips are unlikely to ever run much faster than 5GHz.
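
A minimal numeric sketch of that breakdown, using the standard dynamic-power relation P ≈ a·C·V²·f (the k = 1.4 shrink factor is the textbook idealization; the point is what happens once leakage forbids lowering V any further):

    # With V stuck, a shrink no longer keeps power density flat - it grows,
    # so frequency had to stop rising.
    k = 1.4
    C, V, f = 1.0, 1.0, 1.0          # normalized; V can no longer shrink
    P_per_transistor = (C / k) * V**2 * (k * f)
    print(P_per_transistor * k**2)   # ~1.96x power per unit area per shrink

Roughly doubling power density every generation was untenable, so clock speeds stalled instead.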

However, Moore's law still provides performance because you can fit more cores, larger caches, specialized circuits, SIMD units, etc, on the same chip.


👤 fulafel
The graph in https://en.wikipedia.org/wiki/Moore's_law seems to be carrying on in the same direction.

People have been predicting its end for a long time.

Note also that it's about transistor cost, not about CPU performance - people sometimes conflate the two because performance used to be more closely correlated with transistor count.


👤 namelosw
I thought Moore's Law was dead long ago. I don't understand why some people still bring it up from time to time.

I remember reading in a magazine when I was a kid that Pentium 4 Extreme failed to reach 4.0 GHz in 2003 or 2004.

Since then, it took Intel quite some years to hit 4.0 GHz. Instead, the industry shifted to multi-core CPUs, starting with the Core 2 series.

Do multi-core CPUs count? I would say it's a bit of a stretch. It's more about horizontal scaling, where multi-CPU or even cluster setups work in similar ways - there's no hard limit on how many CPUs you can add as long as you can cool them down. You can also make the system much larger and sparser, then put it in a large box to deal with the heating problem.

P.S. From the perspective of programming paradigms, people would then find that "share nothing" and "message passing" are the way to harness concurrency and multi-core programming, after getting burned again and again by shared memory. These disciplines of not sharing RAM further make multi-core programming feel like programming on multi-CPU systems or clusters.
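
A minimal share-nothing sketch in Python - multiprocessing's Pool ships copies of the data between processes by message passing, so no RAM is shared:

    from multiprocessing import Pool

    def square(n):
        # Pure function: each worker gets its own copy of n, so there is
        # no shared state to lock or corrupt.
        return n * n

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            print(pool.map(square, range(10)))  # [0, 1, 4, 9, ..., 81]

The same code shape scales from one multi-core box to a cluster precisely because nothing depends on shared memory.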


👤 ksec
Well, as you read the comments here, everyone seems to have a different definition of Moore's Law.

Let's measure transistors in a chip without caring about die size - then you can just use a larger die to keep the Moore's Law narrative alive. Well, at some point that won't work, because your maximum die size is still ~840mm2 due to the reticle limit.

Then what? There are chiplets - or what about packaging all the dies together using CoWoS or EMIB? Yep. More transistors per "chip", because the definition of a chip just changed from a single die to multiple dies.

Or, finally, there's the media narrative - or Intel and AMD's PR, or even how Jim Keller uses it - in which any continuous improvement in transistor density per mm2 is considered as following Moore's law.

>So what's the consensus around here?

Generally speaking, HN is a very poor source of information on anything hardware. I would not use any consensus on HN as the final say on the subject.


👤 TanjB
When Moore wrote in 1965, commercial use of MOS was 10 years in the future, and Dennard scaling would not become widely understood and start stirring interest in CMOS until 15 years in the future. So he was actually observing an era much like now, with multiple chiplets inside the can and all sorts of random improvements that had an emergent trend. The Dennard era, which gave Moore's law its main impulse, was about 20 years long. Maybe 25 years if you include controlling tunnel leakage by introducing Hf-based dielectrics, and FinFETs, since they sort of crinkle the surface of the chip to give you double the area and otherwise obey classic Dennard laws of constant power per unit area.

But even during the Dennard era there were a bunch of big random innovations needed to keep things going. CMP allowing the number of metal routing layers to balloon, keeping distances under control. Damascene metals allowing much finer metals carrying heavier currents. Strained channels for higher frequency and better balance between P and N. Work-function-biased fully depleted transistors to avoid the stochastic problems with doping small channels. Etc.

So what really happened is not that Moore ended. We still have a wealth of random improvements (where "wealth" is the driving force) which contribute to an emergent Moore improvement. But the large change is that Dennard ended, which had given us scaling at constant power. Although some of the random improvements do improve energy efficiency per unit of computation, they are not, overall, holding the line on power per cm2. At the end of the classic Dennard era we were around 25W/cm2, but now we commonly have 50W/cm2 in server chips, and there are schemes in the works to use liquid cooling up to hundreds of W/cm2.

Well, ok. But does that kill Moore? Not if it keeps getting cheaper per unit function. And by that I do not mean per transistor. But as long as that SoC in your phone keeps running a faster radio, drawing better graphics, understanding your voice, generating 3D audio, etc., and is affordable by hundreds of millions of consumers, Moore remains undead.


👤 IshKebab
It is dead in the sense that transistor density isn't doubling every 18-24 months. It is still increasing, but not by very much.

People are starting to get performance improvements by increasing chip sizes and power budgets instead - part of the reason why GPUs are more expensive than they used to be.


👤 hedgehog
It's over, but Intel's marketing department likes to redefine it to mean whatever suits the message of the day [1]. You can read the original paper's definition [2] and do the math on transistor counts for, say, typical desktop processors, and you arrive at something well under 50% year-over-year growth over the last decade.

"The complexity for minimum component costs has in creased at a rate of roughly a factor of two per year (see graph on next page). Certainly over the short term this rate can be expected to continue, if not to increase."

  1. https://www.intel.com/content/www/us/en/newsroom/resources/moores-law.html
  2. https://download.intel.com/newsroom/2023/manufacturing/moores-law-electronics.pdf
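
For context on that "well under 50%" figure, the pure arithmetic converting doubling periods into year-over-year growth:

    # Annual growth rate implied by a given doubling period.
    for years_per_doubling in (1, 2, 3):
        yoy = 2 ** (1 / years_per_doubling) - 1
        print(f"double every {years_per_doubling}y -> {yoy:.0%} YoY")
    # every 1y -> 100% YoY (Moore's 1965 formulation, quoted above)
    # every 2y -> 41% YoY (the 1975 revision)
    # every 3y -> 26% YoY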

👤 g0xA52A2A
Yet another classic Cantrill talk nails this IMHO https://www.youtube.com/watch?v=MtrZJ4UqSn8

👤 sys_64738
It gets modified to suit the current processor trends of the day, with the exclamation that it isn't dead. I don't think most people bother about such silliness, TBH.

👤 jumploops
General purpose CPUs can only get so good. With that said, there are still many advancements that can be made to shrink die sizes and/or pack transistors more closely together (3D is hard due to heat… but…).

I’m excited about photon-based processors, but until that’s a reality we still have a ton of headway for application-specific scaling.

If you rip specific loops out of a general purpose CPU, there are still plenty of gains to be made!


👤 rusticpenn
Moore's law coming to an end gives more opportunities to exploit new ways of using tech - approaches that weren't pursued until now because transistor scaling provided bigger gains. Terms like "More Moore" and "More than Moore" have been around for a long time.

👤 kukkeliskuu
It seems to me that at some point Moore's law became not an independent result of innovation, but rather a target for Intel and co. to hit. Then it became increasingly expensive to hit that target, and ultimately the target-setting process collapsed.

👤 jokoon
It's also crucial not to forget about Wirth's law, which states that software is getting slower more rapidly than hardware is getting faster.

So don't fall for software vendors that want to convince you that you need a faster CPU every x years. You don't.


👤 mensetmanusman
Moore's law normalized to power and cost is dead. Ignoring power and cost, it is continuing along, because people want more transistors even as they get more expensive.

👤 Dalewyn
Claims of the death of Moore's Law have been greatly exaggerated.