My understanding is that application developers are required to write these themselves and that not all developers have the inclination or the resources to do so.
The analogy I can make is with the Windows Hardware Abstraction Layer, which provided standardized interfaces to device drivers so developers wouldn't have to deal directly with the quirks of the devices themselves. Microsoft set a standard, or some requirements that had to be fulfilled, and device manufacturers had to meet those standards for their devices to be "Windows certified".
In this case, instead of Wayland providing a standard application interface that developers can write to, application developers have to deal with window elements at a lower level.
Can anyone explain to me what the fundamental differences between X11 and Wayland are in this respect?
What newer, better features is it meant to provide, or what problems is it meant to solve or address relative to X11, and why should it take so long?
Wayland has been in development for 17 years now. I, and I suspect a lot of others too, don't understand how something so fundamental to desktop usage is still not ready to be used "out of the box", so to speak, after so many years. If it were a part-time project mostly being done by volunteers in their spare time, that would be understandable, but it has received corporate backing over the years, apparently being something corporations are ready to bank on.
There is no doubt that if it were a fully corporate project it would have been canned years ago, but somehow some corporate backers have been willing to keep it going, because of the large component of volunteers involved and because it gives them a presence in the free software environment.
Is the "delay" (if that is a good way to describe the situation) due to technical challenges, wrong turns, inadequate resources, complexity or any of the other issues that affect software projects?
I've read a few blog posts over the years about problems developing for Wayland, e.g. https://dudemanguy.github.io/blog/posts/2022-06-10-wayland-xorg/wayland-xorg.html, though I acknowledge that things may have improved since then.
Right now I just want to go back in time to 2007 and understand what Wayland was at its core at its inception, and how its goals, however distant, make it a meaningful alternative to X11 - what makes it a candidate for its corporate support.
I'm getting HTML5 vs Flash vibes: instead of opening up Flash and cleaning it up, or creating a more secure open source alternative, corporate agendas preferred to go for HTML5, which in my view was never a proper substitute for what Flash accomplished. Guys, please don't let this comment become a major digression.
I just want to understand it technically from nuts and bolts upwards. Some comprehensive diagrams comparing it with X11, Windows and macOS would help.
https://en.wikipedia.org/wiki/Windows_Display_Driver_Model
except Microsoft had that up and running in 2006, while politics has kept Linux from doing the same. Specifically, WDDM and Wayland simplify the graphics interface down to an API that lets you blast out a rectangle of pixels. So unlike the old HAL model, you aren't specifying an API where the system draws primitives like lines and circles; instead, the API allows user-space applications to run software on the GPU to make pixel rectangles that either get blasted to the screen or composited.
Microsoft is really good at maintaining backwards compatibility with the UI (I can still run Office '97 on my PC), and they figured out how to make it so old applications don't know what happened.
Incompetence at GUI work is a hallmark of the politics of Linux, just as power management "just not working" is a hallmark of the politics of the Intel-Microsoft-Dell alliance.
> What newer better features is it meant to provide, or what problems is it meant to solve or address in relation to X11 and why should it take so long?
IMHO, Wayland is meant to address a few issues that can't really be addressed in X11.
Integrating the 'display server' with the window manager eliminates some unfixable synchronization issues in the X11 design, mostly around the creation of new windows and how they're placed (or something like that, I don't quite remember), but the downside is that window managers need to integrate a compositor and take on more behavior themselves.
The other big thing is that X11 is a distributed system with no security between parts. Any connected X client can read or generate keyboard events at any time, or read any part of the screen. This is not ideal for lots of reasons, but the Wayland way requires different approaches to make things like screenshots, screen capture, and global hotkeys work. Also, because each window manager/compositor has its own display server (although many use the wlroots library for the base), consensus building is needed or a program that wants a hotkey needs to know how to do it in X and in all the popular Wayland compositors. Consensus building is hard.
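To make that concrete: here's a hedged little sketch (assuming Xlib headers are available; the file name and build line are my own) showing how any X11 client can read the whole screen, with no permission check and no prompt:

```c
/* grab.c - assumed build: cc grab.c -lX11
 * Illustration only: any client connected to the X server may do this. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);        /* connect to the X server */
    if (!dpy) return 1;

    Window root = DefaultRootWindow(dpy);     /* the whole screen */
    XWindowAttributes attr;
    XGetWindowAttributes(dpy, root, &attr);

    /* No capability check, no user prompt: the server just hands over
     * the pixels of everything currently on screen. */
    XImage *img = XGetImage(dpy, root, 0, 0, attr.width, attr.height,
                            AllPlanes, ZPixmap);
    if (img) {
        printf("grabbed a %dx%d screenshot of the root window\n",
               img->width, img->height);
        XDestroyImage(img);
    }
    XCloseDisplay(dpy);
    return 0;
}
```

Under Wayland there's deliberately no universal call like this; capture has to go through compositor-specific protocols or desktop portals that can say no, which is exactly where the consensus-building pain shows up.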
From what I understand, Wayland required a new driver architecture, and drivers for Nvidia hardware were slow to arrive.
X11 was built around networking and has always allowed for a display server located near your monitor and clients distributed elsewhere on a network. This used to work well, but it requires proper client architecture to handle the asynchronous nature of networks for best results. There's a fraction of users (including me) who make use of this in their regular X workflow. Waypipe was slow to arrive, and isn't as convenient as ssh -X, but should help provide for this use case.
There's probably some other stuff. But basically, Wayland is not a drop-in replacement for X11, and X11 works for a lot of people, so they don't see a big reason to switch and disrupt their way of life.
It continues because most of the people who were working on Xorg prefer to work on Wayland. There's a lot of people relying on the work of a few people who do the deep work on display servers, so you know... either you use what they're making, or you use the older things they made, or you go to CLIs. :)
Since then, the X11 protocol got extended and extended, but the "core protocol" is not allowed to change (by design). There are a zillion instructions on how applications can tell the server how to draw stuff, but basically none of these are used anymore.
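For a flavour of that old core-protocol model, here's a hedged sketch (plain Xlib; the file name and build line are assumptions): the client sends drawing requests and the X server does the rasterizing on its side.

```c
/* draw.c - assumed build: cc draw.c -lX11 */
#include <X11/Xlib.h>
#include <unistd.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;
    int screen = DefaultScreen(dpy);

    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     0, 0, 200, 100, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));
    XSelectInput(dpy, win, ExposureMask);
    XMapWindow(dpy, win);

    /* Wait until the window is actually visible before drawing. */
    XEvent ev;
    do { XNextEvent(dpy, &ev); } while (ev.type != Expose);

    /* Core-protocol requests: we describe shapes, the server draws them. */
    GC gc = XCreateGC(dpy, win, 0, NULL);
    XDrawLine(dpy, win, gc, 20, 20, 180, 20);
    XDrawRectangle(dpy, win, gc, 20, 40, 160, 40);
    XFlush(dpy);

    sleep(3);
    XCloseDisplay(dpy);
    return 0;
}
```

Modern toolkits basically never issue requests like XDrawLine anymore; they render everything client-side and only hand finished pixels to the server.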
These days, when it comes to drawing, what clients want to do is very different from the traditional X11 network protocol model. Clients want to create the buffers themselves, draw on them (using GL/Vulkan), and just be able to tell the compositor: "hey, I finished drawing on my buffer, please display it on the screen", without even having to send pixels over the network. This is accomplished by the DRI and Composite extensions (and a lot of glue everywhere), and these clients end up not using a vast part of the X11 protocol.
So based on all that, Wayland's idea is to promote that new model to be the base level. Wayland is also a protocol, but there are no protocol calls such as "draw this line from x:20,y:20 to x:40,y:20". The operations match the new model: "hey, I just finished drawing on my window that has handle 0x30984, can you please display it for me?" (well, actually, it's more like: "when syncobj handle 234 gets signaled, the buffer is yours to display"). The big difference here is that instead of having a server, a client and a WM/compositor, there is just a client and a server/WM/compositor. This simplifies a lot of things and allows for better security, but it creates the problem that, well, Gnome wants its compositor to behave in one way, and KDE wants its compositor to behave in another way, so each one has to be a whole server and not just an application anymore. And that causes the fragmentation that has been haunting us a lot.
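To show what that looks like in code, here's a hedged, minimal sketch against libwayland-client. Everything here (file name, sizes, colors, build line) is my own assumption, and a real client would also need to give the surface an xdg_shell role before any compositor actually shows it; the point is just the shape of the protocol: allocate your own buffer, draw into it, then attach and commit.

```c
/* client.c - assumed build: cc client.c -lwayland-client */
#define _GNU_SOURCE
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>
#include <wayland-client.h>

static struct wl_compositor *compositor;
static struct wl_shm *shm;

static void handle_global(void *data, struct wl_registry *reg, uint32_t name,
                          const char *interface, uint32_t version) {
    /* Pick out the globals we need from what the compositor advertises. */
    if (strcmp(interface, wl_compositor_interface.name) == 0)
        compositor = wl_registry_bind(reg, name, &wl_compositor_interface, 1);
    else if (strcmp(interface, wl_shm_interface.name) == 0)
        shm = wl_registry_bind(reg, name, &wl_shm_interface, 1);
}
static void handle_global_remove(void *d, struct wl_registry *r, uint32_t n) {}
static const struct wl_registry_listener reg_listener = {
    handle_global, handle_global_remove
};

int main(void) {
    struct wl_display *display = wl_display_connect(NULL);
    if (!display) return 1;

    struct wl_registry *registry = wl_display_get_registry(display);
    wl_registry_add_listener(registry, &reg_listener, NULL);
    wl_display_roundtrip(display);

    /* The client owns the buffer: shared memory it renders into itself. */
    const int width = 200, height = 100, stride = width * 4;
    const int size = stride * height;
    int fd = memfd_create("pixels", 0);
    ftruncate(fd, size);
    uint32_t *pixels = mmap(NULL, size, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
    for (int i = 0; i < width * height; i++)
        pixels[i] = 0xFF2060A0;          /* "rendering": a solid fill */

    struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
    struct wl_buffer *buffer = wl_shm_pool_create_buffer(
        pool, 0, width, height, stride, WL_SHM_FORMAT_XRGB8888);

    /* No "draw a line" requests: just "here is my finished buffer". */
    struct wl_surface *surface = wl_compositor_create_surface(compositor);
    wl_surface_attach(surface, buffer, 0, 0);
    wl_surface_damage(surface, 0, 0, width, height);
    wl_surface_commit(surface);
    wl_display_flush(display);

    sleep(3);
    wl_display_disconnect(display);
    return 0;
}
```

Notice the compositor is never asked to draw anything; all it ever sees is a finished buffer plus damage coordinates, and everything about placement, decorations and hotkeys becomes its business rather than a shared server's.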
On top of all that, input handling and accessibility were somewhat underestimated on the Wayland side, and things have been evolving rather slowly. A lot of stuff that is possible to accomplish these days with X11 is still not covered by Wayland. A lot of super-specialized X11 apps will never be ported to Wayland, and there is probably not even a Wayland protocol in existence to support them. This is all, of course, a solvable problem: throw enough money at these issues and they will all disappear (but you may want to dedicate part of this money to bribes, because gathering consensus for some stuff without bribery may be impossible, so your extension proposals may get stuck in limbo). For example: even a filthy-rich behemoth such as Valve has been having problems merging extensions that would allow its games to work better on Linux. Good luck with anything you may need.
I am also deeply disappointed that Wayland has been taking so long to develop, but I don't have anybody specifically to blame. I believe part of the issue is that more money should be thrown at it by I-don't-know-who. The compositor fragmentation is also a painful issue, but going back to insecure X11 protocols doesn't seem like a good alternative. Anyway: I just don't know how to make things better. Let's just try not to blame the developers who have been making herculean efforts to keep all these cards stacked on top of each other. Without them, perhaps we'd still be doing insecure, inefficient '80s-style window drawing and wrapping, moving pixels all over network sockets.
I'd love to be corrected on my views by anybody on this forum.