HACKER Q&A
📣 slmjkdbtl

Will there ever be a vendor agnostic GPU interface?


We moved from the almost write-once-run-anywhere OpenGL to vendor-dependent Vulkan / D3D12 / Metal / WebGPU, which are kinda impossible to support all at once without heavy abstraction layers and shader transpilers. Is there any possibility that we'll get easy cross-platform graphics back in the future?


  👤 pcwalton Accepted Answer ✓
There's mostly no reason why we couldn't have a single API, other than politics. My opinionated takes (opinions mine and not those of my employer, etc.):

Vulkan: Standardized by a standards body (Khronos). In an ideal world we would all be using this.

D3D12: Microsoft likes Direct3D and has the market power to be able to push its own API. However, the situation on Windows isn't as bad as on macOS/iOS because drivers generally support Vulkan as well.

Metal: This mostly exists because Apple refuses to support anything Khronos related, for not-particularly-compelling reasons. It does have the excuse that it was being developed before Vulkan, but in 2021 its existence is not particularly well-motivated.

WebGPU: As an API, WebGPU in some form has to exist because Vulkan is not safe (in the memory-safe sense). For untrusted content on the Web, you need a somewhat higher-level API than Vulkan that automatically inserts barriers, more like Metal (though Metal is not particularly principled about this and is unsafe in some ways as well). That being said, Web Shading Language is just Apple playing politics again regarding Khronos, as the rest of the committee wanted to use SPIR-V. There's no compelling reason why Web Shading Language needs to exist.
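
To make the "automatically inserts barriers" point concrete, here's a minimal TypeScript sketch (assuming any runtime that exposes the standard navigator.gpu entry point): a compute pass writes a buffer and a copy then reads it. In Vulkan you would have to place a vkCmdPipelineBarrier between the two uses yourself; in WebGPU the implementation tracks the hazard and synchronizes for you.

    // Minimal sketch, assuming a WebGPU-capable runtime (browser or Deno).
    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) throw new Error("WebGPU not available");
    const device = await adapter.requestDevice();

    // A storage buffer the compute shader writes...
    const storage = device.createBuffer({
      size: 16,
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
    });
    // ...and a staging buffer for reading the result back on the CPU.
    const readback = device.createBuffer({
      size: 16,
      usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
    });

    const module = device.createShaderModule({
      code: `
        @group(0) @binding(0) var<storage, read_write> data: array<u32, 4>;
        @compute @workgroup_size(4)
        fn main(@builtin(global_invocation_id) id: vec3<u32>) {
          data[id.x] = id.x * 2u;
        }`,
    });
    const pipeline = device.createComputePipeline({
      layout: "auto",
      compute: { module, entryPoint: "main" },
    });
    const bindGroup = device.createBindGroup({
      layout: pipeline.getBindGroupLayout(0),
      entries: [{ binding: 0, resource: { buffer: storage } }],
    });

    const encoder = device.createCommandEncoder();
    const pass = encoder.beginComputePass();
    pass.setPipeline(pipeline);
    pass.setBindGroup(0, bindGroup);
    pass.dispatchWorkgroups(1);
    pass.end();
    // In Vulkan, this copy would need an explicit barrier after the compute
    // write; the WebGPU implementation orders the two uses automatically.
    encoder.copyBufferToBuffer(storage, 0, readback, 0, 16);
    device.queue.submit([encoder.finish()]);

    await readback.mapAsync(GPUMapMode.READ);
    console.log(new Uint32Array(readback.getMappedRange()).toString()); // 0,2,4,6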


👤 enos_feedler
The issue is that hardware systems have evolved in a very dynamic way over the years. Initially GPUs were mostly fixed-function hardware with proprietary APIs for each driver. Then Microsoft was able to flex its platform power to build a vendor-agnostic interface in Direct3D. Now we have Apple doing the same thing with Metal. The next issue is going to be the ever-increasing integration of heterogeneous compute engines in unique ways: Google's Tensor SoC, Apple's A/M chips, etc. AMD is going to increasingly license its IP to build out custom SoCs for companies like Samsung.

The best chance to build a standard on top of all this change is to not view this as a "GPU abstraction" but rather as a collection of compute engines, and the place where that abstraction happens will be the compiler toolchain. IMO (as an ex-NVIDIA compiler guy) the best effort happening right now is MLIR [1]. I don't know if dialects are the right abstraction, but that's where things are at right now. The best example is IREE [2], written at Google, which maps a lot of sophisticated mobile inference workloads onto accelerators: GPUs, neural engines, etc.

[1] https://mlir.llvm.org

[2] https://google.github.io/iree/


👤 slimsag
In my view, WebGPU is the future of cross-platform graphics. I wrote about this, and about why we chose it for a game engine we're developing in Zig [0].

It's the one API that Mozilla, Google, Microsoft, Apple, and Intel can all agree on. Nvidia and AMD are silent, but since it sits on top of Metal/DirectX/Vulkan behind the scenes, they don't really need to care.

Today, Vulkan with MoltenVK (for Apple devices) has more _features_, but even that is not super clear-cut. For example, mesh shaders are supported on AMD cards in DirectX 12 but not Vulkan.

It's not just Apple who continues to push their own API, either. DirectX 13 is likely coming soon, and from what I understand Vulkan gets "ported" features after they land in DirectX.

If you want bleeding edge features, you're going to need to use the native API: Vulkan on Linux, DirectX on Windows, and Metal on Apple.

But if you just want the common denominator? WebGPU native isn't a bad choice IMO. And since it's a cleaner API than Vulkan, you may need less abstraction on top to work with it (see the sketch below).

[0] https://devlog.hexops.com/2021/mach-engine-the-future-of-gra...
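
As a rough illustration of the "cleaner API" point, here's a complete, valid render pipeline in TypeScript (a sketch, assuming a WebGPU runtime, with the render-target format hard-coded for brevity). The comparable Vulkan setup, with its instance, physical and logical devices, render pass, and pipeline layout, typically runs hundreds of lines before the first triangle appears.

    const adapter = await navigator.gpu.requestAdapter();
    const device = await adapter!.requestDevice();

    const shaders = device.createShaderModule({
      code: `
        @vertex
        fn vs(@builtin(vertex_index) i: u32) -> @builtin(position) vec4f {
          var pos = array(vec2f(0.0, 0.5), vec2f(-0.5, -0.5), vec2f(0.5, -0.5));
          return vec4f(pos[i], 0.0, 1.0);
        }
        @fragment
        fn fs() -> @location(0) vec4f {
          return vec4f(1.0, 0.0, 0.0, 1.0); // solid red
        }`,
    });

    const pipeline = device.createRenderPipeline({
      layout: "auto", // bind group layouts inferred from the shaders
      vertex: { module: shaders, entryPoint: "vs" },
      fragment: {
        module: shaders,
        entryPoint: "fs",
        targets: [{ format: "rgba8unorm" }], // or the surface's preferred format
      },
      primitive: { topology: "triangle-list" },
    });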


👤 modeless
WebGPU is the easy cross-platform graphics you're requesting. It's vendor agnostic and not limited to the web. You can use WebGPU from native code, and you won't need your own shader transpilers or abstraction layers. Your code will (soon) just work on any platform.
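
A small sketch of that claim (assumptions: a browser with WebGPU enabled, or Deno, which implements navigator.gpu natively; C, C++, and Rust programs get the same API through wgpu-native or Dawn). Note there is no shader transpiler in the application itself; the single WGSL source is lowered by the WebGPU implementation to SPIR-V, HLSL, or MSL for whichever backend it runs on.

    // The same file, unchanged, in a WebGPU-enabled browser or in Deno.
    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) throw new Error("WebGPU not available on this platform");
    const device = await adapter.requestDevice();

    // One WGSL shader for all platforms; the runtime translates it to the
    // native shading language (SPIR-V, HLSL, or MSL) behind the scenes.
    device.createShaderModule({
      code: `@compute @workgroup_size(1) fn main() { }`,
    });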

👤 KennyBlanken
Vulkan is not vendor-dependent. It runs on AMD, NVIDIA, Intel, and Qualcomm chipsets. It runs on a huge number of operating systems.

(Vulkan on the Pi 4 just got a big performance boost, btw.)

It seems the only gotcha is that Apple is being difficult and not supporting Vulkan on macOS, but MoltenVK is "within a handful of tests" of Vulkan 1.0 conformance.


👤 PeterisP
IMHO the requirement for that to happen would be a major slowdown in the "GPU race": consumers decide they have "enough" power and features, companies can't convince them otherwise and get pushed to commoditize and standardize their products as every product "ticks all the checkboxes", and one of those checkboxes happens to be good standards support. Alternatively, extreme fragmentation might force it; that happened decades ago, when there were very many video card manufacturers, but it's impossible now given the huge barriers to entry for competitive GPUs.

But while companies are able to differentiate their products feature-wise (or even if they can't, but strongly believe they can), the incentives are set against standardization, so it's not likely to happen; GPU vendors will keep pushing vendor-specific features.


👤 glitchc
We don't want a standard.

One size fits all doesn't work for customized hardware, as OpenGL amply demonstrated, which is why it's going away. OpenGL started dying the day Nvidia introduced the GeForce series with its hardware T&L engine. That was followed by shaders, antialiasing, anisotropic filtering, and now ray tracing and DLSS, not to mention innovations on the compute side and the G-Sync/FreeSync implementations. Any kind of standard acts as a bottleneck for new hardware: it's too slow to adapt to new features because it requires review and ratification, and then, of course, an abstraction layer is needed to support hardware missing those features. This adds complexity. Standardized APIs are a dead end, just like standardized CPU ISAs are a dead end. They are trying to solve the same problem in a wrong-headed fashion.

Apple's hardware is completely different from Nvidia's. It makes sense to have a completely different API to fully leverage the silicon.


👤 troymc
Pragmatically, many developers can just use the APIs provided by Unreal Engine, Unity, Godot, Qt, or whatever and let them figure out how to support various GPUs.

👤 pjmlp
OpenGL has hardly ever been a thing on game consoles anyway, and on Windows most vendors had crappy OpenGL drivers versus their DirectX ones.

Also, contrary to common belief, OpenGL is "portable" rather than portable: the spaghetti code of vendor extensions, GPU driver workarounds, differences between GL and GL ES, and divergent shading language compilers leads to multiple code paths that are hardly different from using multiple 3D APIs.


👤 corysama
How are Vulkan and WebGPU vendor dependent?

👤 ksec
>We moved from almost write once run anywhere OpenGL

I don't know who "we" is. But I don't see that the industry "moved" from write-once-run-anywhere to vendor-dependent at all, because that never happened in the first place. I mean, even with OpenGL you had vendor-specific extensions.

That's partly because all this API design happened at a time when GPU hardware was moving fast and constantly changing.

I think in terms of basic functions WebGPU is the closest we are going to get.


👤 Jugurtha
Something like this: https://github.com/plaidml/plaidml ?

👤 astlouis44
WebGPU is the proper answer here; it's truly the future of cross-platform graphics. As developers, we'll be able to ship powerful 3D/VR experiences that "just work" everywhere browsers exist.

The key is getting the major game engines to support these standards, which is what my startup is working on.

We're currently focused on bringing WebGPU + WebXR + improved WASM support, so real-time 3D apps and games can be deployed to end users wherever they are, no walled gardens required. It's also the path to the metaverse, which needs to be built on open standards and protocols in order to be truly accessible to all, and not vendor-locked by an entity that seeks to own it all, like Meta (FB).

If you’re interested in hearing more or want to leverage our platform that’s approaching general availability we’re building at Wonder Interactive, you can join our Discord here:

https://discord.gg/3t8bj5R


👤 oneplane
It will probably not happen as long as there is a commercial incentive not to do it. What you will get is a subset of features that can be made common in a shared interface.

At this time, a GPU manufacturer will conform its hardware to whatever workloads will run on the device, instead of creating a shared API and letting someone else choose how to use it.

Just compare CUDA and OpenCL, which use the GPU for non-graphics workloads: in theory this was a chance to create a universal interface, but in practice it turns out that you can make more money if your special sauce is better than everyone else's, even if it's not better in all scenarios.

It seems the truly agnostic interface will live at the engine level, where a compute library or graphics engine runs on top of a bunch of non-agnostic interfaces, and the real interface you'll be working with is the engine's.


👤 fulafel
We have WebGL, so we already have a near-universally available cross-platform interface. It lags the current hardware a lot, but it seems to be the only actually working cross-platform thing we'll have for a while.

If you look at WebGL implementations, with the heroic amount of driver-bug workarounds found through trial and error over the years and the feature compromises made because of those bugs, it's hard to believe anything will match and replace it for a long time.


👤 kvark
See also the relevant discussion here about the “Point of WebGPU on Native”:

https://news.ycombinator.com/item?id=23079200


👤 mkl95
Yes, and I believe we will have it this decade. The intention is there, so it's a matter of money and time, mostly time.

👤 tester756
Is there something like LLVM IR for GPUs?

👤 pkulak
It _is_ Vulkan: SPIR-V, the binary intermediate representation that Vulkan consumes for shaders and compute kernels, plays exactly that role.

👤 sebow
Vulkan is the best choice right now, but "money talks" when it comes to which API developers "should" use.