What may be becomes when it happens.
What is the potential, and what is the probability, of each kind of change?
Ideally, advancements would deliver “optimal” efficiency, but their real purpose is useful feature progress, and the bodies rushing forward may well obliviously burn oil or inexcusably deforest the precious remains of our Earth in the pursuit.
The future I hope for is not hover cars or glass roads; it is one where obvious technology disappears into the useful and the tastefully enduring.
Enterprise FOMO may be toxic. Gradual progress at a resourceful pace is just as inevitable.
What do we need?
An improved AI might be one where the user has more involvement in structuring the model’s “super-psyche”: enabling and disabling layered YAML configuration, overlaid with saved and edited sessions. Such a system could continuously re-render everything you’ve ever done and inform you of substantial changes (you still want a report, right?)
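The layering idea can be sketched in a few lines. This is a hypothetical illustration, not any real system: plain dicts stand in for parsed YAML layers, and `merge_layers`, the layer names, and the keys are all assumptions, with later layers (a saved/edited session) overriding earlier ones (a base profile).

```python
# Hypothetical sketch: each layer is a dict standing in for a parsed
# YAML file; later layers override earlier ones, merging nested maps.

def merge_layers(*layers):
    """Deep-merge dict layers; later layers win on conflicting keys."""
    result = {}
    for layer in layers:
        for key, value in layer.items():
            if isinstance(value, dict) and isinstance(result.get(key), dict):
                result[key] = merge_layers(result[key], value)
            else:
                result[key] = value
    return result

# A base "super-psyche" profile, overlaid by an edited session.
base = {"tone": "neutral", "memory": {"sessions": True, "depth": 3}}
session_edit = {"memory": {"depth": 5}}

effective = merge_layers(base, session_edit)
print(effective)  # {'tone': 'neutral', 'memory': {'sessions': True, 'depth': 5}}
```

Enabling or disabling a layer is then just including or excluding its dict from the merge, which is what would let the system re-render your history under a new configuration.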
Or maybe we move away from LLMs and back to deductive processes, for which we have more realistic expectations (recognizing discrete, domain-level success).
Consider an ultimate game server that fulfills requests based on signed policies and provides real-time self-support.
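A minimal sketch of what “requests based on signed policies” could mean, assuming an HMAC shared secret between client and server; the policy format, the secret, and every name here are invented for illustration.

```python
# Hypothetical sketch: the server fulfills a request only if the
# attached policy carries a valid HMAC-SHA256 signature.
import hashlib
import hmac
import json

SECRET = b"server-shared-secret"  # assumed; in reality, provisioned per client

def sign_policy(policy: dict) -> str:
    """Produce a hex signature over a canonical JSON form of the policy."""
    payload = json.dumps(policy, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def handle_request(policy: dict, signature: str) -> bool:
    """Accept the request only when the policy signature verifies."""
    return hmac.compare_digest(sign_policy(policy), signature)

policy = {"action": "spawn_item", "player": "p1"}
sig = sign_policy(policy)
print(handle_request(policy, sig))        # True
print(handle_request(policy, "bad-sig"))  # False
```

The point of the signature is that the server never has to trust the client's claim about what it is allowed to do; a tampered policy simply fails verification.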
Not AGI, but better than just a chatbot.
So, “who knows?” I’m always happy to make something up!