HACKER Q&A
📣 hwers

Is the death of Moore's Law just the consequence of no competition?


We clearly can create bigger and stronger CPUs and GPUs. The highest-end stuff Nvidia creates (like the V100) is amazing. I can't imagine the actual manufacturing cost of these things is particularly high, so that leads me to wonder: is the only reason we're seeing stagnant growth that Nvidia et al have a monopoly on the IP to make these things? What's your take?


  👤 PaulHoule Accepted Answer ✓
No, physics is getting in the way.

In the 1980s most computer architectures (C-64, Apple ][) were tightly coupled and didn't let you swap in faster chips as technology got better.

There was an explosive growth in computer power in the 1990s because of "Dennard Scaling" (corrected) in which making chips smaller not only meant more transistors (Moore's Law) but also that those transistors were faster.

Circa 2005 that "free ride" ended and you started getting more cores instead of faster cores.
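The "free ride" described above is just arithmetic on the classic dynamic-power formula P ≈ C·V²·f. A minimal sketch of the textbook Dennard-scaling math, assuming ideal scaling factors (these numbers are an illustration of the idea, not measured data):

```python
def dennard_scale(s: float) -> dict:
    """Ideal (pre-2005) Dennard scaling: shrink every linear
    dimension by factor s (classically ~0.7 per process node)."""
    density = 1 / s**2       # transistors per unit area go up
    frequency = 1 / s        # smaller gates switch faster
    capacitance = s          # smaller gates have less capacitance
    voltage = s              # supply voltage scales down too
    # Dynamic power per transistor: P ~ C * V^2 * f
    power_per_transistor = capacitance * voltage**2 * frequency
    # Power per unit area = power per transistor * transistor density
    power_density = power_per_transistor * density
    return {
        "density": density,
        "frequency": frequency,
        "power_density": power_density,  # stays ~1.0: the "free ride"
    }

print(dennard_scale(0.7))
```

More transistors, each one faster, and the chip doesn't run any hotter. Once voltage stopped scaling down with feature size (leakage got in the way), power density started climbing with each shrink, and the extra transistors went into more cores instead of faster ones.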

Intel struggled with its 10 nm process, which is still not completely successful. Newer processes need a special light source that blows up a droplet of tin into a plasma to produce extreme ultraviolet light. It's all very expensive and makes each generation more of a stretch than the one before.


👤 olliej
TSMC, Samsung, and Intel are all aggressively investing in Fab R&D.

Nvidia, AMD, Apple, Intel, Samsung are all investing billions into chip R&D.

There is no shortage of competition.

As far as Moore's law: it's a prediction about the number of transistors on a chip, not performance. An increasingly large proportion of the transistors on a chip are cache. The construction of high-density memory is generally fabricator IP, as the designs are highly optimized for each company's own fab processes. So again, not a lack of competition.

The reality is the problem is physics: the development of EUV light sources, steppers, and scanners has taken two decades of intensive effort. Creating the light source is astonishingly challenging and ended up requiring a system that fires lasers at droplets of tin falling thousands of times a second. Each droplet has to be shot twice: first a low-power pre-pulse to flatten the droplet into the right shape and position, then a high-power pulse to turn it into a plasma. Of course, blowing up tin creates debris, so they had to develop a way to prevent debris hitting the collector mirror, and each collector mirror takes Zeiss 4 months to manufacture. They can't use lenses, as EUV light can't penetrate them, so they have to use mirrors. Even these near-perfect mirrors each reflect only ~70% of the EUV light.

All this gets you to feature sizes that are getting increasingly close to single-digit numbers of atoms per transistor, at which point you can't make them any smaller.

Your assumption that manufacturing costs aren't particularly high is wrong.

Every high-end fab costs billions of dollars and generally has about a five-year life for high-end (thus high-margin) chips. Actually producing the chips is expensive too: for a high-end chip you're looking at 3 months to produce, and that's 3 months of continuous 24/7 operation. The EUV optics in the $150 million ASML steppers are dumping enough power into the system to etch the mirrors themselves. A single wafer during production can travel 500 thousand miles of track and consume 50 thousand liters of water.

All of those costs are essentially determined per wafer, not per chip, so a larger chip necessarily costs more to manufacture, and the costs pile up: the bigger your chip, the more likely it is to pick up a defect that kills the entire chip, or kills individual execution units, forcing you to disable them (generally resulting in the chip being "binned" as a low-margin budget part). Then you also have geometry: wafers are circular, so the bigger your chip, the less usable space there is on each wafer, further reducing the number of chips you get.
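The geometry-and-yield argument above can be sketched numerically. A toy model, assuming square dies, the standard gross-dies-per-wafer approximation, and a made-up defect density (0.1 defects/cm², purely illustrative):

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic approximation for gross dies on a circular wafer:
    wafer area / die area, minus a perimeter term for edge losses."""
    d = wafer_diameter_mm
    a = die_area_mm2
    return int(math.pi * (d / 2) ** 2 / a - math.pi * d / math.sqrt(2 * a))

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Poisson defect model: probability a die has zero killer defects."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

# Compare small, mid, and GPU-sized dies on a 300 mm wafer.
for area in (100, 400, 800):
    gross = dies_per_wafer(300, area)
    y = poisson_yield(area, 0.001)  # assumed 0.1 defects per cm^2
    print(f"{area} mm^2 die: {gross} gross dies, "
          f"yield {y:.0%}, ~{gross * y:.0f} good dies")
```

Both effects compound: the big die gets fewer candidate positions on the wafer *and* a lower fraction of them come out clean, which is why the cost per good large die grows much faster than the die area itself.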

Manufacturing large, high-end chips is expensive.