I haven't looked at this much either, but there's also [kui.nvim](https://github.com/romgrk/kui.nvim), a terminal GUI framework built on top of the Kitty graphics protocol. It seems to escape the TUI constraint of only being able to visualize things with text characters, and can draw elements of any size. There's a [comment](https://new.reddit.com/r/neovim/comments/110znd4/comment/j8f6pb6/?utm_source=share&utm_medium=web2x&context=3) on this [Reddit post showcasing kui.nvim](https://new.reddit.com/r/neovim/comments/110znd4/kuinvim_an_experiment_into_a_real_graphical/) arguing that the benefit of a terminal is precisely that it's *not* a GUI. But if you were to use something like this, how different would it be from just using Obsidian with its various plugins along with [obsidian-bridge.nvim](https://github.com/oflisback/obsidian-bridge.nvim)?
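For context on what "Kitty Graphics" actually is: it's an escape-sequence protocol where the terminal receives base64-encoded image data wrapped in `ESC _ G ... ESC \`, so images travel over the same byte stream as text. A minimal sketch in Python (the `kitty_png_escapes` helper name is mine, and this skips the protocol's placement options and error responses):

```python
import base64

def kitty_png_escapes(png_bytes: bytes) -> bytes:
    """Build kitty-graphics-protocol escape sequences that display a PNG.

    Each sequence is ESC _ G <key=val,...> ; <base64 payload> ESC \\.
    a=T means "transmit and display", f=100 means the payload is PNG.
    Payloads larger than 4096 bytes are split into chunks, with m=1 on
    every chunk except the last (m=0).
    """
    data = base64.standard_b64encode(png_bytes)
    chunks = [data[i:i + 4096] for i in range(0, len(data), 4096)]
    out = b""
    for i, chunk in enumerate(chunks):
        keys = b"a=T,f=100," if i == 0 else b""  # control keys only on the first chunk
        more = b"m=0" if i == len(chunks) - 1 else b"m=1"
        out += b"\x1b_G" + keys + more + b";" + chunk + b"\x1b\\"
    return out

# In a kitty-compatible terminal, writing the result to stdout draws the image:
# sys.stdout.buffer.write(kitty_png_escapes(open("img.png", "rb").read()))
```

The notable part is that the image is still just bytes flowing through the tty stream, which is why this keeps working over SSH without any extra setup.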
So what makes a terminal a terminal, as opposed to a GUI or a full desktop environment? Is it the low resource usage, and is that still low with Kitty Graphics and kui.nvim? Is it the keyboard-centric interaction for higher efficiency? Is it the other benefits of command-line environments, like Unix stdin/stdout piping? If you want full-blown GUIs in a terminal environment, how is that much different from using a GUI app with full keyboard navigation and text inputs? How do you feel about rendering full GUI graphics in a terminal?
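On the piping point: the composability of stdin/stdout is concrete and easy to demonstrate. A sketch (variable names are mine; assumes a Unix system with `sort` and `uniq` on the PATH) of wiring two processes together the way a shell pipeline does:

```python
import subprocess

# Equivalent of the shell pipeline:  printf 'b\na\nb\n' | sort | uniq -c
sort_p = subprocess.Popen(["sort"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
uniq_p = subprocess.Popen(["uniq", "-c"], stdin=sort_p.stdout, stdout=subprocess.PIPE)
sort_p.stdout.close()            # let sort_p get SIGPIPE if uniq_p exits early
sort_p.stdin.write(b"b\na\nb\n") # feed lines into the head of the pipeline
sort_p.stdin.close()
out, _ = uniq_p.communicate()
print(out.decode())              # one count line for "a", one for "b"
```

Neither `sort` nor `uniq` knows or cares what's on the other end of its file descriptors, which is the property GUI apps generally give up.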
Personally I like the idea of rendering graphics in a terminal environment, as it would be overall better than using GUI apps for the reasons listed above, but I'm still feeling reluctant about it.
Compared to a command line, both TUIs & GUIs add discoverability: no need to remember commands or switches, just navigate through menus & see what's there.
I think all these are best regarded as different 'views' on the same underlying machinery. What works best depends on user preference / workflows.
Resource requirements obviously depend on the feature set & implementation.
So instead of clicking through 5 menus on your bank's website to find out when your last deposit was, for example, you could type "What was my last deposit?" into a GPT-style box. The LLM would then perform the same actions as the GUI.
I'm convinced this is the future norm for human-computer interaction. It will be both GUI and natural language text.
* It works over SSH without any extra configuration
* Some servers don't have Xorg installed
* Maybe saves laptop battery by not running a full DE
Most GUIs have evolved into bloated web-rendering apps. So TUIs are probably a statement that an app won't be that, and they're definitely snappier.