How do other people feel about this? I've discounted tons of hype cycles in the past (crypto/Blockchain, Metaverse, etc.), even in cases where I was wrong (e.g. the importance of mobile), but this feels at least as consequential as the Internet to me.
The time span is just a few short years and our predictions were already off by so much. There is no telling where we will be five years from now, let alone 20 or even further beyond.
That being said, I think you're right that it is a turning point, and that a great deal of change is needed for us to adapt to what's ahead of us. The best way I can describe my take is that we will have a lot of growing up to do as a society. Many of the systems our modern world is built on are very fragile; we rediscovered that during the lockdowns. The biggest ones to worry about are politics and the general political climate, the news cycle, and how we deal with information. If we can't figure those out soon, we will drown in an ocean of AI content where any semblance of truth is lost.
What I think we need is politicians who aren't stuck 30, 40, or 50 years in the past, who understand the world we live in today and can move accordingly. The information age hasn't caught up with the powers that be. Given the pace at which AI is moving, and the pace at which politics is moving, what I think will happen is a trial by fire: tech will move faster than the world around it can adapt, and we will get burned before things get better. What exactly that will look like, who knows?
Do I think some of the jobs will get replaced? Certainly. As far as actual people go, I suspect code camp grads will suffer the initial blow, but eventually it'll even out and tech jobs will be in full swing again.
I'm an optimist by nature. After 20 years in development, I still feel we are at the beginning of the technology (IT) revolution, not the end.
That said, I remain skeptical about how truly useful LLMs will be. Even with the latest GPT-4, they still suffer from the same fundamental flaw as any LLM: being confidently wrong in nuanced ways. Nor do they understand mathematics. I think narrowly scoped work (tightly scoped software development, data entry, transcription, etc.) is likely ripe for replacement.
The problem is that I can easily see them significantly exacerbating income inequality as a result. That is going to force serious conversations about when this technology can be used, and conversations about protectionism and worker rights will suddenly have more mainstream appeal.
Whatever happens, happens. Nothing I do will change anything about this. If it's as disruptive as you've described, I know my Government will do something to prevent massive job losses.
I don't see this as the end of the world, and frankly, my life is shitty enough without ChatGPT.
Again, what can I do about it? This reminds me of listening to TWIV, a podcast about virology, about COVID before it hit my country and being like "it's going to be bad, but there's nothing I can do about it".
Once it hits, we'll see from there.
Medium term: the Industrial Revolution for intelligence.
Long term: the solar system, then the stars.
Our world is broken and unfinished. We are using up all the physical resources of our world and are stressing every natural system that we know of. Billions of people live in relative poverty and the world is packed with inefficiency and corruption. Our leadership is incompetent and our politics are toxic. We need help. Badly and soon.
If AI can enable the typical person to be 10 or 100 times more productive, or make 10 or 100 times better management decisions, we might finally have a chance of escaping the Malthusian trap that's about to close on us. Our solar system is filled with vast resources and vast stores of energy. We have the scientific knowledge to colonize the many planets and moons within reach but are vastly short on the energy, labor, and materials required. AI could be the key enabler to get the space-based economy off the ground and release us from the resource constraints of this single planet. Once we have remade all these worlds into gardens, our descendants can head off to Alpha Centauri.
There will be a few years of tumult and experimentation with AIs, but humans (the best ones in particular areas) will always be one step ahead, imo.
Only jobs that need very little or no adaptation or creativity can be automated for good, I think.
That is, until (or if) there is an AI with an actual life-like brain, with new capabilities that don't currently exist.
Maybe I'll go to plumbing school or become an electrician. I don't know.
Writing and visual art are both skills that take a long time to master. Part of the pride and pleasure people feel from mastering those skills is the time they put in. We're diminishing activities that nourished people's souls for hundreds of years.
Look at how fast it's getting better. Look at how quickly we're taking the very basic capabilities and expanding on them. Where will it peak? How good can it get?
Imagine if you had free access to a really, really good therapist 24/7. If everyone did. It gets to know you better and better, has the wisdom of 10,000 years of therapy sessions, and can analyze you better than any human can. What happens next?
What about an LLM that replaces having friends? We're all lonelier than ever these days, it seems. What if you had a pretty good friend who made jokes and chatted with you and had fun ideas and stories? Someone you genuinely enjoyed talking to, maybe more than other people.
How long until someone comes forward who is genuinely in romantic love with an LLM? People fall in love over the internet all the time. Maybe this time it's not with a real person.
I think we will fall for it. I think the models will get good enough that all of that will come to pass and a dozen things we never predicted.
It's going to temporarily upend some people's careers, but I don't see a long-lasting impact.
0-5 years:
White collar: significant disruption for the go-with-the-flow majority but opportunity for the inquisitive and enterprising.
Blue collar: Pressure on employment incoming from the above.
5-10 years:
Significant ingress by AI into low-skill blue-collar trades as physical capabilities (accelerated by current tech) increase. Socio-cultural consternation as the caring, sex, and other "human" professions are disrupted.
10-20 years:
Campaigns for AI rights, which the current AI ethicists forgot about in their rush to get us to a robot-slave society. Slothful, unemployed masses turn against the elites controlling AI, who turn those AIs into police and armies for their own protection.
20+ years:
Only human jobs left are in John Connor's rebel army.
I think many of us are confusing the magnitude of our surprise at how "coherent" the responses of these LLMs are with how useful they are going to be.
In the end, I think they will augment the workflow of many professions but disrupt or totally replace few.
As an engineer, if I can outsource the coding part of my job, I don't need a manager anymore, as I can use LLMs to build my own Microsoft. It goes both ways.
I might even be able to use the LLM to replace the need for working?
We understand how to build LLMs. OpenAI has months before competition springs up and brings costs down, and more people have access to this technology.
But who will develop languages now? AI? And what about debugging: if the AI says there is no bug, but it still won't generate all the data, forms, subscriptions, or statistics needed, or won't generate them bug-free, how do we fix it?
Now let's say I love Daft Punk and David Guetta. Daft Punk stopped producing music. So will I be able to say: "generate one hour of music in the style of Daft Punk with the rhythm of David Guetta"? Even if I pay for the service, let's say 50€ a month, are those artists going to be compensated? Because today, if I mixed and sold this exact music, depending on the country I might be sued for copyright infringement. But what about AI? Who would be responsible? Should we track every piece of content the AI uses, tracing it through a blockchain to identify its sources and pay the relevant original authors? Should a system like that be "built in" by law? Or should we do nothing and treat all AI-created content as having no IP? That's very good for me, the consumer, but very bad for all businesses. It seems like an unfair game, and then OpenAI should be 100% free as well?
In the future there will be tiny, super-qualified teams working on some narrow slice of competency, with super narrow data sets they will guard like Fort Knox.
Everyone else: Fridge repair.
An example: if an AI can reliably give legal counsel for clear cases, people can ask what they can do if something happens. This gives power to the people. Lawyers will work on unclear cases to create precedents. This opens up the space to solidify law even for niche cases.
This all depends on whether society can care for the poor. If the poor are well off too, then they can afford to be laid off and find something else. In extreme cases they won't need to find a new job, but they will need to find a new meaning of life. If this works out, everybody can enjoy the new offerings of AI.
I'm just afraid that key people will prevent progress, like forbidding legal AI because only humans are allowed to do the lawyering, when what they really want is more money. I'm also afraid that some people could monopolize access to AI. This is a bigger danger, in my opinion, than AI alignment. If everything is open and transparent and people can build their own AI assistants, we will get a new wave of progress.
This will be a giant step to utopia, I hope.
I think all this software that we will create will end up being used to optimize a lot of processes such as power generation, agriculture, and manufacturing. We will have specialized software for recycling things, leading to a much more circular supply chain. Right now it doesn't make sense to sit down and figure out how to clean, test and repurpose objects, but it will make sense once computers get smart enough.
In terms of jobs outside of computer science, as everything on the production side gets more and more automated, we are going to see much, much more paid emotional labor. More people working in coffee shops, bartender/therapists, paid pen-pals, etc. We will also see huge growth in end-of-life services. Now, people who are dying often lie uncared for in group homes, alone, sad, and frustrated. In the future, there will be people reading novels to them and doing finger painting with them.
Every child will have the opportunity to have private tutors.
Medicine will start to work and biological mortality will decrease.
We'll go to the stars.
Maybe I'm just dreaming...
I'm wondering if we're going to get into a circular problem of information "purity". At the moment these models are trained on entirely human created content (because that's the only thing that exists). That training data is therefore roughly as true as it can be.
But what happens when significant portions of the internet have been generated by LLMs? What happens when other models are unwittingly trained on them? Do they, very subtly, just become worse and worse? Does the prevalence of these models mean that people write less and less, exacerbating the signal-to-noise problem even more?
Basically, these models are just really efficient at recycling things that people have written. What happens when nobody writes much and it's just recycling things produced by a model?
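To make that recycling loop concrete, here's a toy sketch (purely illustrative, not a real training pipeline): treat each "model" as a Gaussian fit to samples from the previous one, and assume each generation also favors its own most probable outputs, the way low-temperature decoding does. The one-standard-deviation filter and the sample sizes are arbitrary assumptions of mine.

    import random
    import statistics

    random.seed(0)

    # Generation 0: "human-written" data, a standard normal distribution.
    data = [random.gauss(0.0, 1.0) for _ in range(2000)]

    for gen in range(8):
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        print(f"gen {gen}: mean={mu:+.3f}  stdev={sigma:.3f}")
        # The next "model" learns only from this one's output, and
        # (crudely) prefers its most probable samples: anything within
        # one standard deviation of the mean.
        samples = [random.gauss(mu, sigma) for _ in range(2000)]
        data = [x for x in samples if abs(x - mu) <= sigma]

Run it and the standard deviation collapses geometrically: the rare, surprising tails, the stuff only humans would have written, vanish within a handful of generations. That's the "worse and worse" failure mode in miniature.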
For context, I am young and a recent junior dev, after a struggle through failing school, self-teaching, and finally landing what I thought would carry me through to retirement.
For example, in the corporate world, you have people whose jobs will be semi-automated. One example that comes to mind: instead of having a PM managing 1-3 projects, you will have an AI-assisted PM who can manage 10-15 projects. Another example is an entry-level engineer who usually works on well-defined tasks. They will be able to type in plain English what the feature is and have most of the code generated for them.
I think the next generation of startups will be built on architectures that LLMs can understand easily, and a lot of research will go into that. For example, maybe an LLM is really good at understanding microservice architecture or something.
One specific thing I can imagine is that a lot more will be expected of the individual knowledge worker. I'm a software engineer. I guess with amazing code gen and testing tools, I will be expected to deliver much more than I can at the moment, because of all the high tech help LLMs can offer.
Just thinking about how much I depend on tooling to make life easier... yeah, I can point to that.
As a writer, prompting is going to be even more fascinating. Shameless plug, but I tried out Alpaca and wrote about the results [0]. I wish the "gigglepotamus" was a thing! My little experiment was meant to see how much creativity I could get out of the fine-tuned 7B LLaMA model. It was hit or miss, but impressive when it worked.
Prompt engineering is already becoming a thing, because people will need people who can get the best out of an LLM. Kind of like how today people are hired to lead teams and get the best out of their individuals.
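As a concrete sketch of what that job looks like, here's the same task given to a model twice via the OpenAI Python client. The prompts, the product, and the model choice are all made-up placeholders; only the API shape is real.

    import openai  # pip install openai

    openai.api_key = "sk-..."  # your key here

    # A naive prompt and an engineered one for the same task. The gap in
    # output quality between the two is the prompt engineer's whole job.
    naive = "Write about our product."
    engineered = (
        "You are a senior copywriter. Write a three-sentence blurb for a "
        "note-taking app aimed at graduate students. Plain language, no "
        "buzzwords, and end with a concrete call to action."
    )

    for prompt in (naive, engineered):
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content, "\n---")

The engineered version pins down role, audience, length, and tone; that's most of the craft.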
This kind of commoditizing of cognitive function might open up a few new spaces for "authentic natural intelligence," not unlike the niche world of bespoke, hand-made goods crafted by humans rather than pushed out on a factory line.
TLDR - I'm trembling, with excitement and a bit of fear. There's so much that can change so quickly.
[0] https://medium.com/sort-of-like-a-tech-diary/speculative-fic...
They will gossip about you behind your back, and if you're mean to any of them they will be mean to you in return.
The number one cause of death will be social exhaustion.