From a more concrete perspective, I'd say that we've reached the point where we can transcend the current idea of what "work" is. We can see beyond the limited view of doing things for monetary profit, and see that certain systems could be implemented much better if different forms of entity-organization and resource-allocation were at play. What is a company if it's run by a computer? What does it serve? I mean, it sounds silly to suggest that some big companies with overlapping domains of operation should consolidate their operations, but we all know that that's where things should be headed in many cases.
So yeah, what's your vision for the future?
> what's your vision for the future?
Honestly, I consider those two pretty different questions. At the very least, I'd approach them very differently in terms of time-scale. What's "top of mind" for me is more about the short-term threats I perceive to our way of life, whereas my "vision for the future" is - to my way of thinking - more about how I'd like things to be in some indeterminate future (that might never arrive, or might arrive long after my passing).
To the first question then: what's on my mind?
1. The rise of authoritarianism and right-wing populism, both in the US and across the world.
2. The increasing capabilities of artificial intelligence systems, and the specter of continued advances exacerbating existing problems of wealth inequality, power imbalances, injustice, and so on.
Combine (1) and (2) and you have quite a toxic stew on your hands in the worst case. Now, I'm not necessarily predicting the worst case, but I wouldn't bet money I couldn't afford to lose against it either. So in the worst case, we wind up in a prototypical cyberpunk dystopia, or something close to it, only probably less pleasant than the dystopias we're familiar with from fiction.
And even if we don't wind up in a straight-up "cyberpunk dystopia", one has to wonder what's going to happen if fears of AI replacing large numbers of white-collar jobs come true. And note that that doesn't have to happen tomorrow, or next year, or 5 years from now. If it happens 15 years, or 25 years, or 50 years from now, the impact could still be profound. So even for those of you who are dismissive of the capabilities of current AI systems, I encourage you to think about the big picture and run some mental simulations with different rates of change and different time scales, as in the sketch below.
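To make that kind of mental simulation concrete, here's a minimal back-of-the-envelope sketch in Python. The displacement rates, the time horizons, and the constant-rate compounding model are all illustrative assumptions on my part, not data or predictions:

```python
# Back-of-the-envelope simulation of job displacement over time.
# All numbers are made up for illustration; nothing here is a forecast.

def displaced_share(annual_rate: float, years: int) -> float:
    """Cumulative share of jobs displaced after `years`, assuming a
    constant annual displacement rate that compounds on whatever jobs
    remain (a simplifying assumption)."""
    return 1.0 - (1.0 - annual_rate) ** years

if __name__ == "__main__":
    rates = [0.005, 0.01, 0.03]   # 0.5%, 1%, 3% of remaining jobs per year
    horizons = [5, 15, 25, 50]    # the time scales mentioned above

    # Print a small table: one row per rate, one column per horizon.
    print("rate/yr" + "".join(f"{y:>7}y" for y in horizons))
    for r in rates:
        cells = "".join(f"{displaced_share(r, y):>8.0%}" for y in horizons)
        print(f"{r:>7.1%}" + cells)
```

The point about time scales falls out immediately: even a modest 1%-per-year displacement rate compounds to roughly 40% of a job category over 50 years.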
If you care about the future, what people say on the Internet is not worth your time. Just make it happen.
End of the day, I see it as a repeat of the 1920s, good and bad. Technology will drive discontent until we figure out how to tame it.
The hopeful version: People get their heads out of their phones only to realize life is more than the next dopamine hit.
The dystopian version: The logical conclusion of what is detailed in the paragraphs above. Where being addicted to a handheld device is not only normal, but expected. Where "what it is that we actually want" is not an individual choice, but a corporate one. Where the idea of technofascism is introduced as "silly to suggest" and then normalized as "but we all know that that's where things should be headed in many cases" (see above).
The "Metaverse" is going to be a more interactive, immersive extension of that device. I also believe that Meta's superintelligence team isn't necessarily about achieving AGI, but rather, creating personable, empathetic LLMs. People are so lonely and seeking friendship that this will be a very big reason to purchase their devices and get tapped into this world.