HACKER Q&A
📣 sebnun

Why isn't Apple competing with Nvidia?


Today Nvidia added $277B in market value, Wall Street's largest one-day gain in history.

Apple has, in my opinion, made one of the most significant advances in SoC with Apple silicon.

I know dedicated GPUs are a different beast, but wouldn't it make sense for them to invest in a new dedicated GPU line to compete more directly with Nvidia? It is clear there is money to be made there.

Google has their own TPU line, but as far as I know, those can only be used via GCP.

If not Apple, what company has the cash and know-how to dethrone Nvidia?


  👤 samspenc Accepted Answer ✓
They are technically already building GPUs, but specific to their own iOS and Mac hardware.

Google, Microsoft, and Amazon are actually doing something similar: the GPUs and special-purpose chips they are building are designed to run mostly in their own clouds. They are not selling their chips directly to customers, and I doubt they will.

I suspect it will cost Apple and other companies a LOT to build general-purpose GPUs that they can sell directly to consumers, enterprises or data centers.

Actually, take a look at Nvidia's biggest current competitors - AMD and Intel. They have barely moved Nvidia's GPU market share in the past few years, and are unlikely to compete meaningfully even over the next few. It's not just about building better hardware; it's also about the ecosystem (CUDA etc.) built on top of the hardware, which gives Nvidia a moat today and likely will for at least a few more years to come.


👤 runjake
1. Apple isn't interested.

2. Apple is focused on consumers and that's where their strength is.

3. Apple probably isn't capable (without a lot of hiring and re-org). I suppose this point might be debatable, but they'd need to develop GPUs and an open-use library that overthrows CUDA.

4. Apple doesn't do or support modular, commodity hardware.


👤 fbn79
Competing with NVIDIA is not only a matter of building and selling GPUs. If you want to compete with NVIDIA, you need to develop your own GPGPU platform (https://en.wikipedia.org/wiki/CUDA) and convince thousands of developers that their software (built entirely on NVIDIA APIs that have been around since 2007) must be redesigned and reprogrammed to support your new proprietary API and hardware.
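To make that concrete, here is a minimal sketch of the kind of code that is welded to NVIDIA's platform (an illustration only, using Numba's CUDA bindings; it assumes an NVIDIA GPU plus the numba and numpy packages, none of which are mentioned above). The programming model itself - grids, blocks, threads - is NVIDIA's, so moving to another vendor means rewriting against HIP, SYCL, Metal, etc., not just recompiling.

```python
import numpy as np
from numba import cuda

# A CUDA-specific kernel: the grid/block/thread model and the toolchain
# are NVIDIA's, so this code only runs on NVIDIA hardware.
@cuda.jit
def add_one(arr):
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] += 1.0

data = cuda.to_device(np.zeros(1024, dtype=np.float32))
threads = 256
blocks = (data.size + threads - 1) // threads
add_one[blocks, threads](data)
print(data.copy_to_host()[:4])  # [1. 1. 1. 1.]
```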

👤 JSDevOps
Why would it need to? It's like asking why Walmart isn't competing with Vertex Pharmaceuticals or whatever. They make plenty of money doing their own thing.

👤 sircastor
Apple doesn’t want to be in that business. Apple doesn’t want to supply an industry with components, it wants to supply an industry with an entire ecosystem.

Apple’s approach to technology is to provide the whole setup.


👤 koqoo
It will!

Sooner or later, GPUs will be vintage hardware.

See what happened with Bitcoin ;)


👤 ActorNightly
>Apple has, in my opinion, made one of the most significant advances in SoC with Apple silicon.

They didn't. A lot of people seem to be confused about apple silicon.

First and foremost (rant incoming, sorry), Apple is not a technology company, it's a tech jewelry company with extremely good marketing and advertising departments. Nothing they have made over the past decade has been proven to be that good. They just make the products flashy and spend a bunch on advertising in all the right ways. Just look at what they are doing with Apple Vision now: there is a bunch of tech people and celebrities all trying to normalize it. The same shit happened with Apple Silicon: they gave it to the right people with instructions on how to review it, and collected the profits. They have a history of doing exactly this going all the way back to the 90s (with the only exception being the 2008-2010 mobile phone market, before Android caught up).

Now for the tech part: Apple Silicon is a case of hyper-optimized hardware for specific use cases. This is literally nothing new; it's the same reason ASICs took over from GPUs for crypto mining. RISC vs CISC doesn't really matter (https://chipsandcheese.com/2021/07/13/arm-or-x86-isa-doesnt-...). Turns out when you make hardware specific enough for the general use of a laptop (i.e. basically porting phone processors into a laptop use case) you get really good battery life. The reason other chip makers haven't done that is that their chips must work across a much wider range of hardware, so it's impossible to hyper-optimize for all of it. However, AMD has managed to get some neat results in terms of speed and battery life with their 7x chips, which I would argue is a more impressive achievement, because those chips specifically have to work with that wide array of hardware, whereas Apple's don't.

As far as raw processing goes, just like the article said, if you keep all the pipelines full, the chip doesn't really matter. As such, GB6 results show that single-core performance of the M3 Max is pretty much equivalent to Intel's top chip right now at a similar TDP.

Now for your question about Nvidia: Nvidia isn't killing it because they have better hardware engineers (AMD gaming cards are pretty much on par with and sometimes better than Nvidia's, and rendering is all matrix math anyway); they are killing it because they invested A LOT of time into the software piece, making CUDA what it is. That's why they can solder more RAM onto their gaming cards, rebrand them, and sell them for 10x the cost, and people keep paying, because developing on other hardware from scratch means you essentially have to build your own "CUDA" for it. Turns out when you have a good API, people are very keen to develop for it, which makes it the de facto standard. Back in the GPU crypto mining days, AMD was actually the one in demand, because it would do one of the ops in fewer cycles than Nvidia.

Apple will never do this, because if there is one thing Apple has demonstrated over the years that it cannot do, it's write good software. It's actually quite sad how Windows 11 with WSL2 works way better than macOS these days. And despite all the ML hardware in Apple Silicon, there still isn't any mature software support for it - last time I checked, tinygrad (George Hotz's ML framework) actually performs better on Apple Silicon than PyTorch does, whereas it's way behind on Nvidia.
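For context on that maturity point, this is roughly what targeting the Apple Silicon GPU from PyTorch looks like today via the MPS backend (a minimal sketch, assuming a recent PyTorch build with MPS enabled); op coverage is still narrower than on CUDA, which is the gap being described.

```python
import torch

# Apple Silicon GPUs are exposed through PyTorch's "mps" backend.
# Op coverage is narrower than CUDA's, so CPU fallbacks are common.
if torch.backends.mps.is_available():
    device = torch.device("mps")
elif torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

x = torch.randn(4, 4, device=device)
print(x.device, (x @ x).sum().item())
```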

Nvidia most likely will get dethroned either by

1. AMD, because AMD will compete on price as they have historically, and they have decent enough engineers to write drivers that work well under Linux (much better than Nvidia's), so they just need to allocate resources to do the same for compute. HIP has shown good progress.

2. Google, if they ever make their TPUs consumer grade. They have been in the game the longest with TensorFlow and TPU development. TensorFlow isn't as good as PyTorch for development, but it's mature enough to be usable, and if you could buy a TPU and use TensorFlow to interface with it, that's a winning combo (see the sketch at the end of this comment).

My bet is on 1.
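For reference, this is roughly how TPU access works through TensorFlow today, i.e. via a Cloud/Colab TPU runtime rather than a card you can buy (a minimal sketch; the resolver and strategy calls are TensorFlow's public API, the tiny Keras model is just a placeholder). A consumer TPU would presumably plug into the same distribution-strategy machinery.

```python
import tensorflow as tf

# Connect to a Cloud/Colab TPU runtime; there is no consumer TPU to plug in.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Any model built under the strategy scope is replicated across the TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    model.compile(optimizer="adam", loss="mse")
```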


👤 type0
because they don't have to