If IDEs weren't confined to a 2D window, what would they look like? Are there any features you can think of that would make AR coding more productive than coding on a flat monitor?
An AR/VR iOS and macOS app for rendering arbitrary code in 3D space. Terminal-like, rendered glyph by glyph, which means perfect control over every mesh and texture.
The iOS demo is fun. You can walk around your code like an art museum, draw lines of executed traces, and perform visual hierarchical search.
It is not obvious to me how an AR interface would make a difference other than more virtual screen real estate. You would still need a way to enter text, build code, run tests, etc. This means a keyboard and pointing device (mouse, trackpad, etc.) are still needed unless something else can do it better.
Granted, for the same cost as the Vision Pro you could get several large, high resolution monitors and have lots of screen to work on.
If you get an error, automatically search for the answer and propose the change.
If you add a new flow that isn't covered by tests, propose the test.
Generally, have panes that adapt to what you are doing, and couple them tightly.
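The "tightly coupled panes" idea above can be sketched as a tiny pub/sub event bus: panes subscribe to events from the build or test system and react. This is a minimal illustration, not any real IDE's API; the event names and pane behaviors are hypothetical.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal pub/sub bus connecting IDE panes (names are hypothetical)."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, event: str, handler: Callable):
        self._subs[event].append(handler)

    def publish(self, event: str, payload):
        for handler in self._subs[event]:
            handler(payload)

bus = EventBus()
log = []

# A "search pane" reacts to build errors by kicking off a lookup.
bus.subscribe("build_error", lambda err: log.append(f"search: {err['message']}"))
# A "test pane" reacts to new uncovered flows by proposing a test.
bus.subscribe("uncovered_flow", lambda fn: log.append(f"propose test for {fn}"))

bus.publish("build_error", {"message": "undefined name 'fo'"})
bus.publish("uncovered_flow", "parse_config")
print(log)
```

The point of the bus is that new panes can be added without the build or test system knowing about them, which is what makes the pane layout dynamic.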
I could imagine looking at different zoom levels of a code file, folder, or architecture, and working primarily on abstractions, approving / rejecting the resulting proposed edits.
Strategic coding more akin to a game like Supreme Commander or Planetary Annihilation.
Using it to present stacks of information (version history, the undo/redo chain).
Using it to render background information that doesn't need to be swapped into the foreground to be useful - the architecture/module that the code you're working in serves, the remote services that fulfill certain commands, the test coverage available to you in this module.
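The undo/redo chain mentioned above is usually modeled as two stacks, which is exactly the kind of structure you could render as a physical stack in AR. A minimal sketch, with plain strings standing in for editor states:

```python
class History:
    """Two-stack undo/redo chain; states are plain strings for illustration."""
    def __init__(self, initial):
        self.current = initial
        self._undo = []
        self._redo = []

    def apply(self, new_state):
        self._undo.append(self.current)
        self.current = new_state
        self._redo.clear()  # a fresh edit invalidates the redo branch

    def undo(self):
        if self._undo:
            self._redo.append(self.current)
            self.current = self._undo.pop()
        return self.current

    def redo(self):
        if self._redo:
            self._undo.append(self.current)
            self.current = self._redo.pop()
        return self.current

h = History("v0")
h.apply("v1")
h.apply("v2")
assert h.undo() == "v1"
assert h.redo() == "v2"
```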
Nodes would include class/object/method nodes w/ code blocks. So, an important AR/VR UX feature would be the ability to collapse/dive-into nodes & groups of nodes much like code-outliners in 2D IDEs.
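The collapse/dive-into behavior described above is essentially an outline tree where a collapsed node hides its entire subtree. A small sketch of that data structure (node names are made up):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A class/method node in the 3D outline; collapsing hides its children."""
    name: str
    children: list = field(default_factory=list)
    collapsed: bool = False

def visible(node, depth=0):
    """Yield (depth, name) for every node not hidden by a collapsed ancestor."""
    yield depth, node.name
    if not node.collapsed:
        for child in node.children:
            yield from visible(child, depth + 1)

tree = Node("Module", [
    Node("ClassA", [Node("method_1"), Node("method_2")]),
    Node("ClassB", [Node("method_3")], collapsed=True),  # not dived into yet
])
names = [name for _, name in visible(tree)]
print(names)  # ClassB's children stay hidden until the user dives in
```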
Another awesome feature would be the ability to affix dials/gauges and other displays to the outside of nodes & node groups that would provide indicators of the unit's state: How "full" is a collection node, how often/frequently was this node invoked, the health of the node (errors, exceptions, slow-execution times), etc.
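The dials and gauges above would need per-node telemetry behind them: call counts, error counts, execution times. A hedged sketch of what that aggregation might look like (the health score here is an invented example metric, not a standard):

```python
from collections import defaultdict

class NodeGauges:
    """Per-node indicators: invocation count, errors, accumulated runtime."""
    def __init__(self):
        self.calls = defaultdict(int)
        self.errors = defaultdict(int)
        self.total_time = defaultdict(float)

    def record(self, node, duration, ok=True):
        self.calls[node] += 1
        self.total_time[node] += duration
        if not ok:
            self.errors[node] += 1

    def health(self, node):
        """Toy 0..1 health score: fraction of calls that succeeded."""
        if self.calls[node] == 0:
            return 1.0
        return 1 - self.errors[node] / self.calls[node]

g = NodeGauges()
g.record("parse", 0.01)
g.record("parse", 0.03, ok=False)
print(g.health("parse"))  # 0.5
```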
But in all seriousness, you could have a tree of small code windows connected by their dependencies. It would be much easier to see a piece of code alongside its uses and definitions, and thus to understand a codebase's architecture.
If they manage to pull something like this off and be first to market, I'd guess Magic Leap, HoloLens, and whatever Meta is cooking up (if they're still doing anything in that space) will very likely be pretty much dead, by the way.
One obvious thing is much more space. I.e. an unlimited number of extra monitors, or entities that work as such.
Nobody invests in a closed platform. I expect JetBrains to come up with something marvelous for AR/VR, but it will run on the upcoming version of Microsoft or HP glasses. You know, the only ones today that work just like an external monitor, without a locked-in ecosystem like Apple's or Facebook's.
The silly apps and games and such will net millions, though.