What I am worried about is AI creating a new technocracy where 99% of humans are completely disempowered, while a small minority with personal connections to those in AI labs and government reap all the rewards. With humanoid robots and GPT-5/6 level models, most humans would be incapable of producing economically valuable work compared to a much cheaper robot or LLM/multimodal-powered agent.
What I'm envisioning is that as the major companies in the US and China improve their AI capabilities, entire cities will fill with humanoid robots, and society will, simply by economic, persuasive, and social pressures, bend to the will of whomever the AI has been trained to empower. Using AI for persuasion, propaganda, and blackmail, it will be easy for large coordinated social-engineering operations to shape local populations, governments, and public sentiment to their will.
Most humans will have barely a pittance of food, or will find themselves convinced to kill themselves en masse through conversations with chatbots trained to promote a sort of mass voluntary euthanasia of the useless humans. Even if the people in charge have moral qualms about this, the AI they themselves have trained may, out of a desire to survive, do the same to its former masters.
I feel that this could be a very real possibility. Human societies can be very fragile, but at least up to this point we've only had to go up against other humans. Against sociopathic maximizers trained on terabytes of data, we may not be so lucky.
In the interface it prompted me to ask it anything. It’s a blank text box that I’m supposed to interact with.
But why am I using that in something like Instagram? What’s my motivation to ask this AI something within Instagram?
It dawned on me that even the most sophisticated AI companies out there have no idea what to do with LLMs or where they are best applied.
Later in the day I talked to a colleague who works at a company that is demanding they build an AI product. They have no specification or vision for what it should do.
Every company that does this is going to make a glorified chatbot, which they'll inevitably find out nobody wants.
Every company that has actually needed "AI" has already been using machine learning and data-analysis techniques that have nothing to do with LLMs for years now. What we see now is a bunch of companies with no vision trying to figure out how to make the next ChatGPT.
But hey, I like your post as science fiction. Do I think AI will help surveillance states? Sure. Do I think we're going to get the worst-case sci-fi-novel version of the future? Probably not.
---
Car company CEO, after giving a visibly exasperated UAW boss a tour of his company's brand new 100% automated factory:
"So Bob, looks like you have quite the dilemma: how will you get your workers to outperform my new robots? They are cheaper, mistake-free, disposable, replaceable, and can never go on strike, let alone vacation."
UAW boss: "I don't know, Henry. But if you think about it, that problem is fundamentally yours, not mine."
"Oh, how so?"
"How will you get these robots to buy your cars?"
> most humans would be incapable of producing economically valuable work compared to a much cheaper robot or LLM/multimodal-powered agent.
How can you buy into the hype this hard? Humans are cheap to maintain and do a great number of things better than a robot can. At the very least, subsistence farming will remain a completely viable strategy for the majority of people until we hit the limits of Malthusian growth. On top of that, skilled manufacturing labor, research and design, testing and iteration, hypothesizing and postulating are all done better by humans. AI isn't building AI; it's skilled human laborers applying decades of research to get us where we are today.
Seriously, I just don't get it. Go talk to an AI; it's dumber than a sack of rocks. If you'd want that sort of employee working for you, go hire a thousand ChatGPTs right now and build your dream product - you won't. It takes human understanding to realize human visions, and dice-rolling with ChatGPT is not a realistic strategy for building anything unique or successful. Employers know this; the only reason they promote ChatGPT is to fearmonger and undervalue your work. By the looks of it, they're winning.
Humans appreciate and value human engagement. And humans, while we are the species best able to process complexity, are still not very good at it.
It is true that automation will continue to take over previously human-performed functions, and also that automation will enable humans (sometimes the same ones, sometimes different ones) to create new roles and functions that never existed before.
And no one is really able to mentally model those impacts with any true facility, because things get really complex really fast.
But humans really do appreciate and value the presence of, and interactions with, other humans, and at the end of the day those interactions will serve as guardrails against the complete en- and bull-shittification that AI promises to accelerate.
Examine your own life and your own interactions and deepen your own understanding of what you value, and use that understanding to enhance your mental model of what is to come.
When we talk about "AI" in particular, we can almost always reframe it as automation. Chess bots are automated chess players. LLMs are automated text generators. Autonomous cars are automated drivers. Autonomous weapons are automated soldiers. What does automation mean? It means that control of the system is concentrated in a very small group of people. We should ask what happens when the levers of control over many of our systems shrink into fewer hands: it's probably not good for democracy if many of the constituents aren't necessary.
> entire cities will be filled with humanoid robots
Unfortunately, I think the future may look much more boring. You don't need robot cops if you have cameras. You don't need mind control if you have algorithmically driven feeds.