Vulkan: Standardized by a standards body (Khronos). In an ideal world we would all be using this.
D3D12: Microsoft likes Direct3D and has the market power to push its own API. However, the situation on Windows isn't as bad as on macOS/iOS, because drivers there generally support Vulkan as well.
Metal: This mostly exists because Apple refuses to support anything Khronos related, for not-particularly-compelling reasons. It does have the excuse that it was being developed before Vulkan, but in 2021 its existence is not particularly well-motivated.
WebGPU: As an API, WebGPU in some form has to exist because Vulkan is not safe (in the memory-safe sense). For untrusted content on the Web, you need a somewhat higher-level API than Vulkan that automatically inserts barriers, more like Metal (though Metal is not particularly principled about this and is unsafe in some ways as well). That being said, Web Shading Language is just Apple playing politics again regarding Khronos, as the rest of the committee wanted to use SPIR-V. There's no compelling reason why Web Shading Language needs to exist.
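To make the "automatically inserts barriers" point concrete, here's a minimal sketch against the JS/TS surface of WebGPU (the native headers mirror the same shape); the pipelines and bind groups are assumed to already exist. The thing to notice is what's absent: there's no explicit barrier between the pass that writes a storage buffer and the pass that reads it, whereas in Vulkan you would issue a vkCmdPipelineBarrier yourself.

```ts
// Hypothetical helper: two compute passes where the second reads a storage
// buffer the first one wrote. The WebGPU implementation tracks resource usage
// and inserts the needed synchronization; no barrier API is exposed to us.
function encodeDependentDispatches(
  device: GPUDevice,
  writePipeline: GPUComputePipeline,
  readPipeline: GPUComputePipeline,
  writeBindGroup: GPUBindGroup,   // binds the buffer as writable storage
  readBindGroup: GPUBindGroup,    // binds the same buffer as read-only storage
) {
  const encoder = device.createCommandEncoder();

  const writePass = encoder.beginComputePass();
  writePass.setPipeline(writePipeline);
  writePass.setBindGroup(0, writeBindGroup);
  writePass.dispatchWorkgroups(64); // produces data into the storage buffer
  writePass.end();

  // No vkCmdPipelineBarrier equivalent here: the implementation sees the
  // write->read dependency on the buffer and synchronizes on our behalf.
  const readPass = encoder.beginComputePass();
  readPass.setPipeline(readPipeline);
  readPass.setBindGroup(0, readBindGroup);
  readPass.dispatchWorkgroups(64);  // consumes the data written above
  readPass.end();

  device.queue.submit([encoder.finish()]);
}
```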
The best chance to build a standard on top of all this change is to not view this as a "GPU abstraction" but rather as a collection of compute engines. At that layer, the place where this happens will be the compiler toolchain. IMO (as an ex-NVIDIA compiler guy) the best effort happening right now is MLIR [1]. I don't know if dialects are the right abstraction, but that's where things are at right now. The best example is IREE [2], which Google is writing to map a lot of sophisticated inference workloads on mobile onto acceleration across GPUs, neural engines, etc.
It's the one API that Mozilla, Google, Microsoft, Apple, and Intel can all agree on. Nvidia and AMD are silent, but it sits on top of Metal/DirectX/Vulkan behind the scenes, so they don't really need to care.
Today, Vulkan with MoltenVK (for Apple devices) has more _features_, but even that is not super clear-cut. For example, mesh shaders are supported on AMD cards in DirectX 12 but not Vulkan.
It's not just Apple who continues to push their own API, either. DirectX 13 is likely coming soon, and from what I understand Vulkan gets "ported" features after they land in DirectX.
If you want bleeding edge features, you're going to need to use the native API: Vulkan on Linux, DirectX on Windows, and Metal on Apple.
But if you just want the common denominator? WebGPU native isn't a bad choice IMO. And it being a cleaner API than Vulkan may mean you need less abstraction on top to work with it.
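As a rough illustration of the "cleaner API" claim, here's what adapter/device/pipeline setup looks like against the JS/TS surface of WebGPU (the native webgpu.h / wgpu-native APIs follow the same shape); the WGSL source string is assumed to come from elsewhere. The equivalent Vulkan bring-up involves instances, physical-device and queue-family selection, explicit layouts, and manual memory management.

```ts
// Minimal WebGPU bring-up for a compute pipeline, as a sketch.
async function initCompute(wgslSource: string): Promise<GPUComputePipeline> {
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) throw new Error("WebGPU is not available");
  const device = await adapter.requestDevice();

  // "auto" lets the implementation derive the pipeline/bind-group layouts
  // from the shader, instead of us declaring them up front as in Vulkan.
  return device.createComputePipeline({
    layout: "auto",
    compute: {
      module: device.createShaderModule({ code: wgslSource }),
      entryPoint: "main", // assumes the WGSL declares a @compute fn main(...)
    },
  });
}
```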
(Vulkan performance on the Pi 4 just got a big boost, btw.)
It seems the only gotcha is that Apple is being difficult and not supporting Vulkan on macOS, but MoltenVK is "within a handful of tests" of Vulkan 1.0 conformance.
But as long as companies are able to differentiate their products feature-wise (or strongly believe they can, even if they can't), the incentives are set against a common standard, so it's not likely to happen; GPU vendors will keep pushing vendor-specific aspects.
One size fits all doesn't work for customized hardware, as OpenGL amply demonstrated, which is why it's going away. OpenGL started dying the day NVidia introduced the GeForce series with the T&L engine. That was followed by shaders, antialiasing, anisotropic filtering and now raytracing and DLSS, not to mention innovations on the compute side and GSync/Freesync implementations. Any kind of standard acts as a bottleneck for new hardware. It's too slow to adapt to new features because it requires review and ratification, and then of course, an abstraction layer is needed to support hardware missing those features. This adds complexity. Standardized APIs are a dead-end, just like standardized CPU ISAs are a dead-end. They are trying to solve the same problem in a wrong-headed fashion.
Apple's hardware is completely different from NVidia. It makes sense to have a completely different API to fully leverage the silicon.
Also, contrary to common belief, OpenGL is "portable" rather than portable: the spaghetti code of vendor extensions, GPU driver workarounds, differences between GL and GL ES, and shading-language compilers leads to multiple code paths hardly different from using multiple 3D APIs.
I don't know who "we" is. But I don't see the industry having "moved" from write-once-run-anywhere to vendor-dependent at all, because write-once-run-anywhere never happened in the first place. I mean, even with OpenGL you have vendor-specific extensions.
Partly because all of these API designs happened at a time when GPU hardware was moving fast and constantly changing.
I think in terms of basic functions WebGPU is the closest we are going to get.
The key is getting the major game engines to support these standards which is what my startup is working on.
We’re currently focusing on bringing WebGPU + WebXR + improved WASM support so real-time 3D apps and games can be deployed to end users wherever they are, no walled gardens required. It’s also the path to the metaverse, which needs to be built on open standards and protocols in order to be truly accessible to all, and not vendor-locked by an entity that seeks to own it all like Meta (FB).
If you're interested in hearing more, or want to leverage the platform we're building at Wonder Interactive (now approaching general availability), you can join our Discord here:
At this time a GPU manufacturer will conform to whatever workloads will run on the device, instead of creating a shared API and letting someone else choose how to use it.
Even just comparing CUDA and OpenCL, which use the GPU for non-graphics workloads: in theory this was a chance to create a universal interface, but in practice it turns out you can make more money if your special sauce is better than everyone else's, even if it's not better in all scenarios.
It seems the truly agnostic interface will be at the engine level, where a compute library or graphics engine runs on top of a bunch of non-agnostic interfaces, and the real interface you work with is the engine's.
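A hypothetical sketch of what that engine-level seam might look like: application code programs against the engine's interface, and each backend (Vulkan, D3D12, Metal, WebGPU, ...) hides its vendor-specific paths behind it. All names here are made up for illustration.

```ts
// Made-up engine-facing abstraction; concrete backends implement it however
// their native API requires, including vendor-specific fast paths.
interface GpuBackend {
  createBuffer(byteLength: number): BufferHandle;
  submit(commands: CommandList): void;
}

type BufferHandle = { id: number };
type CommandList = { ops: readonly unknown[] };

class Engine {
  constructor(private readonly backend: GpuBackend) {}

  // Application code only ever sees this engine-level interface.
  drawFrame(frame: CommandList): void {
    this.backend.submit(frame);
  }
}
```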
If you look at WebGL implementations, with the heroic amounts of driver-bug workarounds found through trial and error over the years and the feature compromises due to said bugs, it's hard to believe anything will match and replace it for a long time.