HACKER Q&A
📣 hn_throwaway_99

How do you think LLMs will affect society medium/long term?


I know there is obviously a ton of discussion about GPT-4 in the main announcement post, and while I'm blown away by a lot of these capabilities, I honestly don't understand how our capitalist system will survive this long term. I'm not a Luddite, but it's pretty easy for me to see how this will get rid of the need for tons of jobs. People always love to say "technology replacing people has been happening since the beginning of time", but the change here (at least to me) is that the rate at which AI can fill jobs will hit (or has already hit?) a tipping point where jobs disappear faster than new ones pop up.

How do other people feel about this? I've discounted tons of hype cycles in the past (crypto/Blockchain, Metaverse, etc.), even in cases where I was wrong (e.g. the importance of mobile), but this feels at least as consequential as the Internet to me.


  👤 MildlySerious Accepted Answer ✓
A few years ago the general opinion was still that artists were going to be safe from being replaced by AI for the longest time compared to other jobs. Today I think it is valid to say that artists were among the first people to get a direct feel for how much an entire occupation can be stirred up by AI.

The time span is just a few short years and our predictions were already off by so much. There is no telling where we will be five years from now, let alone 20 or even further beyond.

That being said, I think you're right that it is a turning point, and that a great deal of change is needed for us to adapt to what's ahead of us. The best way I can describe my take is that we will have a lot of growing up to do as a society. Many of the systems that our modern world is built on are very fragile; we rediscovered that during the lockdowns. The biggest ones to worry about are politics and the general political climate, the news cycle, and how we deal with information. If we can't figure those out soon, we will drown in an ocean of AI content where any semblance of truth is lost.

What I think we need is politicians who aren't stuck 30, 40, or 50 years in the past, who understand the world we live in today and can move accordingly. The information age hasn't caught up with the powers that be. At the pace AI is moving, and the pace politics is moving, what I think will happen is a trial by fire: tech will move faster than the world around it can adapt, and we will get burned before things get better. What exactly that will look like, who knows?


👤 aynyc
Maybe I'm underestimating it, but it feels like the outsourcing wave 20 years ago. I'm not talking about just outsourcing developers, but support call centers too, like what Dell did by moving everything to India. Almost 20 years ago, I remember my 50-year-old boss saying that developers were overpaid and that we (young grads) would need new careers soon. 20 years later, I'm still here. I heard similar sentiments about robotics and manufacturing. AFAIK, there are more jobs than skilled people applying.

Do I think some of the jobs will get replaced? Certainly. As far as people go, I suspect code-camp grads will suffer the initial blow, but eventually it'll even out and tech jobs will be in full swing.

I'm an optimist by nature. After 20 years in development, I still feel we are at the beginning of the technology (IT) revolution, not the end.


👤 alfalfasprout
My prediction: this is going to replace a lot of "lower tier" jobs quite quickly. It'll also obviate the need for a lot of outsourced knowledge workers.

That said, I remain skeptical about how truly useful LLMs will be. Even with the latest GPT-4, they still suffer from the same fundamental flaws as any LLM: being confidently wrong in a nuanced way. Nor do they understand mathematics, etc. I think narrowly scoped work (tightly scoped software developer work, data entry, transcription, etc.) is likely ripe for replacement.

The problem is I totally see them exacerbating income inequality significantly as a result. It's going to force serious conversations about when this technology can be used, and conversations about protectionism and workers' rights will suddenly have more mainstream appeal.


👤 macNchz
I've recently started to think that an AI "deity" is going to emerge at some point, and things are going to get really, really weird. The human psyche seems primed for religion/worship. What if anyone could talk directly to their god, as much as they could possibly want?

👤 user-extended
My two cents as someone who's not in IT and not doing well in life.

Whatever happens, happens. Nothing I do will change anything about this. If it's as disruptive as you've described, I know my government will do something to prevent massive job losses.

I don't see this as the end of the world, and frankly, my life is shitty enough without Chat-GPT.

Again, what can I do about it? This reminds me of listening to TWiV, a podcast about virology, cover COVID before it hit my country, and thinking "it's going to be bad, but there's nothing I can do about it".

Once it hits, we'll see from there.


👤 dougmwne
Short term: painful disruption.

Medium term: the Industrial Revolution for intelligence.

Long term: the solar system, then the stars.

Our world is broken and unfinished. We are using up all the physical resources of our world and are stressing every natural system that we know of. Billions of people live in relative poverty and the world is packed with inefficiency and corruption. Our leadership is incompetent and our politics are toxic. We need help. Badly and soon.

If AI can enable the typical person to be 10 or 100 times more productive or make 10 or 100 times better management decisions, we might finally have a chance of escaping the Malthusian trap that's about to close on us. Our solar system is filled with vast resources and vast stores of energy. We have the scientific knowledge to colonize the many planets and moons within reach but are vastly short on the energy, labor and materials required. AI could be the key enabler to get the space-based economy off the ground, and release us from the resource constraints of this single planet. Once we have remade all these worlds into gardens, our descendants can head off to Alpha Centauri.


👤 ffwd
I'm going to be contrarian-ish and say that LLMs / AI will never gain the creativity, flexibility, and ability to adapt to novel stimuli, nor create the genuinely new thing, as much as a skilled human. That's why artists will also survive.

There will be a few years of tumult and experimentation with AIs, but humans (the best ones in particular areas) will always be one step ahead, imo.

Only jobs that need little or no adaptation or creativity can be automated for good, I think.

That is until, or if, there is an AI with an actual life-like brain with new capabilities that don't currently exist.


👤 lampshades
I spent 20 minutes riding around contemplating my life today because I don't see how my current skillset will be relevant by the time GPT-10 comes around in the next 10 years.

Maybe I'll go to plumbing school or become an electrician. I don't know.


👤 TJTorola
Why do we need to keep replacing these jobs (or working hours perhaps)? If as a society we produce more for less work why is that bad? Why don’t we spend more time questioning the underlying system that makes efficiency improvements a bad thing?

👤 dw_arthur
I'm worried that when kids and teens see the capabilities of LLMs, they will no longer be motivated to learn skills that the LLM easily demonstrates. Being able to write at the level of a college graduate used to be worth something. A person who was artistically talented used to, at minimum, derive some social benefit from the skill.

Writing and visual art are both skills that take a long time to master. Part of the pride and pleasure people feel from mastering those skills is the time they put in. We're diminishing activities that nourished people's souls for hundreds of years.


👤 mabbo
The economic implications don't worry me. The social implications do.

Look at how fast it's getting better. Look at how quickly we're taking the very basic capabilities and expanding on them. Where will it peak? How good can it get?

Imagine if you had free access to a really, really good therapist 24/7. If everyone did. It learns you better and better, has the wisdom of 10,000 years of therapy sessions, can analyze you better than any human can. What happens next?

What about an LLM that replaces having friends? We're all lonelier than ever these days, it seems. What if you had a pretty good friend that made jokes and chatted with you and had fun ideas and stories? Someone you genuinely enjoyed talking to, maybe more than other people.

How long until someone comes forward genuinely in romantic love with an LLM? People fall in love over the internet all the time. Maybe this time it's not a real person.

I think we will fall for it. I think the models will get good enough that all of that will come to pass and a dozen things we never predicted.


👤 p1necone
Efficiency improvements have never caused humans to do less work before, and I'm not sure why it would be any different now. If some jobs get replaced by AI, I'm sure we'll invent new work to do with the now-spare labor.

It's going to temporarily upend some people's careers, but I don't see a long-lasting impact.


👤 mellosouls
1-5 years:

White collar: significant disruption for the go-with-the-flow majority but opportunity for the inquisitive and enterprising.

Blue collar: pressure on employment from those displaced from the above.

5-10 years:

Significant ingress into low-skill blue-collar trades by AI as physical capabilities (accelerated by current tech) increase. Socio-cultural consternation as the caring, sex and other "human" professions are disrupted.

10-20 years:

Campaigns for AI rights, which the current AI ethicists forgot about in their rush to get us to a robot-slave society. Slothful, unemployed masses turn against the elites controlling AI, who turn their AIs into police and armies for protection.

20+ years:

Only human jobs left are in John Connor's rebel army.


👤 spicyusername
I think the expected amount of disruption is overblown.

I think many of us are confusing the magnitude of our surprise at how "coherent" the responses of these LLMs are with how useful they are going to be.

In the end I think they will end up augmenting the workflow of many professions, but disrupting or totally replacing few.


👤 scottcodie
Most technology advances are not labor-replacing but rather labor-augmenting. For example, LLMs could make teachers much more productive in the classroom, but they would be unlikely to replace teachers entirely.

👤 ChatGTP
The question I ask is: why do you need a boss when you have an LLM? People seem to have this image of themselves being fired and just going home to die while the LLMs are being promoted and going out for dinner with their new boss.

As an engineer, if I can outsource the coding part of my job, I don't need a manager anymore, as I can use LLMs to build my own Microsoft. It goes both ways.

I might even be able to use the LLM to replace the need for working?

We understand how to build LLMs. OpenAI has months before competition springs up and begins driving costs down, and more people gain access to this technology.


👤 shinycode
If AI knows how to do most things, then it'll be able to create, host, and run any service, so many SaaS products will disappear. No more dev? Then no more sales and marketing as well. No employees and no management. « Please write an app that does … ». That's it. If we get there, there won't be competition, because anyone can generate their own tools. But at what cost? It certainly won't be free.

But who will develop languages now? AI? And what about debugging? If the AI says there is no bug, but it still won't generate all the data, forms, subscriptions, or statistics needed bug-free, how do we fix it?

Now let's say I love Daft Punk and David Guetta. Daft Punk stopped producing music. So will I be able to say: « generate 1h of music in the style of Daft Punk with the rhythm of David Guetta »? Even if I pay the service, let's say, 50€ a month, are those artists going to be compensated? Because today, if I mix and sell this exact music, depending on the country I might be sued for copyright infringement. But what about AI? Who would be responsible? Should we track every piece of content that AI uses, trace it down the blockchain to identify its sources, and pay the relevant original authors? Should a system like that be « built in » by law? Or should we do nothing and treat all created content as without any IP? That's very good for me, the consumer, but very bad for all businesses. It seems like an unfair game, and OpenAI should be 100% free as well?


👤 mickmack_
Big organizations are done in my opinion.

In the future there will be tiny, super-qualified teams working on some narrow slice of competency, with super narrow data sets they will guard like it is Fort Knox.

Everyone else: Fridge repair.


👤 _nalply
Robotic systems will help people with a lot of tasks. Some professions will feel threatened: artists, lawyers, counselors, programmers, and so on. If they don't prevent the use of AI, this will be a benefit for society as a whole.

An example: if an AI can reliably give legal counsel for clear cases, people can ask what they can do if something happens. This gives power to the people. Lawyers will work on unclear cases to create precedents. This opens up space to solidify the law even for niche cases.

This all depends on whether society can care for the poor. If the poor are well off too, then they can afford to be laid off and find something else. In extreme cases they don't need to find a new job, but they do need to find a new meaning in life. If this works out, everybody can enjoy the new offerings of AI.

I'm just afraid that key people will prevent progress, like forbidding legal AI because only humans can be lawyering around, when what they really want is more money. I'm also afraid that some people could monopolize access to AI. This is a bigger danger, in my opinion, than AI alignment. If everything is open and transparent and people can build their own AI assistants, we will get a new wave of progress.

This will be a giant step to utopia, I hope.


👤 timthelion
LLMs are absolutely amazing at writing unit tests. This really speeds up development a lot. I think we will see the creation of much larger code bases, even more complex than what we have now, and the demand for programmers will drastically increase. At the same time, it will be even harder to get into the profession, as the already limited opportunities for junior devs dry up. Perhaps we will move to an apprentice model in which senior devs take on apprentices with some expectation of loyalty, or just out of civic duty.
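To make that first claim concrete, here is the flavor of test suite an LLM will typically produce on request. Everything here is hypothetical (the `slugify` helper and its tests are illustrative, not from any real project); the point is the quick edge-case coverage you tend to get for free:

```python
import re
import unittest

def slugify(text: str) -> str:
    """Hypothetical helper: lowercase, collapse runs of non-alphanumerics to '-'."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

class TestSlugify(unittest.TestCase):
    # The kind of edge cases an LLM enumerates in seconds.
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_punctuation_collapsed(self):
        self.assertEqual(slugify("Rock & Roll!!"), "rock-roll")

    def test_leading_trailing_separators(self):
        self.assertEqual(slugify("  --spaced--  "), "spaced")

    def test_empty(self):
        self.assertEqual(slugify(""), "")
```

Run with `python -m unittest` in the usual way; the value isn't any single test, it's that a human reviewing generated tests is much faster than a human writing them.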

I think all this software that we will create will end up being used to optimize a lot of processes such as power generation, agriculture, and manufacturing. We will have specialized software for recycling things, leading to a much more circular supply chain. Right now it doesn't make sense to sit down and figure out how to clean, test and repurpose objects, but it will make sense once computers get smart enough.

In terms of jobs outside of computer science, as everything on the production side gets more and more automated, we are going to see much, much more paid emotional labor. More people working in coffee shops, bartender/therapists, paid pen-pals, etc. We will also see huge growth in end-of-life services. Now, people who are dying often lie uncared for in group homes, alone, sad, and frustrated. In the future, there will be people reading novels to them and doing finger painting with them.

Every child will have the opportunity to have private tutors.

Medicine will start to work and biological mortality will decrease.

We'll go to the stars.

Maybe I'm just dreaming...


👤 arduinomancer
Can anyone name an actual job replaced by LLMs yet?

👤 rkangel
The old truism is that "in the short term things change less than you expect, and in the long term things change more than you expect".

I'm wondering if we're going to get into a circular problem of information "purity". At the moment these models are trained on entirely human created content (because that's the only thing that exists). That training data is therefore roughly as true as it can be.

But what happens when significant portions of the internet have been generated by LLMs? What happens when other models are unwittingly trained on them? Do they, very subtly, just become worse and worse? Does the prevalence of these models mean that people write material less and less, exacerbating the signal-to-noise problem even more?

Basically, these models are just really efficient at recycling things that people have written. What happens when nobody writes much and it's just recycling things produced by a model?
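The recycling worry can be illustrated with a deliberately tiny toy model. This sketch (my own illustration, not from the comment above) fits a one-dimensional Gaussian "model" to samples drawn from the previous generation's fit. With finite training samples, the fitted parameters random-walk away from the original human data distribution, which is a loose, low-dimensional analogue of the degradation the commenter is worried about:

```python
import random
import statistics

def retrain_generations(generations: int = 50, sample_size: int = 20, seed: int = 0):
    """Toy analogue of models trained on their own output: each 'generation'
    refits a normal distribution to samples drawn from the previous fit."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "human-written" data distribution
    history = []
    for _ in range(generations):
        # Generate "synthetic text" from the current model...
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        # ...and train the next model on it, with no fresh human data.
        mu = statistics.fmean(samples)
        sigma = statistics.stdev(samples)
        history.append((mu, sigma))
    return history

history = retrain_generations()
final_mu, final_sigma = history[-1]
print(f"after {len(history)} generations: mu={final_mu:.3f}, sigma={final_sigma:.3f}")
```

Whether real LLMs degrade this way is an empirical question, but the mechanism (estimation error compounding when a model's output becomes its own training data) is exactly the one the comment describes.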


👤 Ambitious1325
Honestly I try to avoid thinking about it. Naive for sure, but as soon as I do I realize it's a matter of time or horsepower before what I do is either shifted massively, or taken away from me entirely. It's incredibly emotionally overwhelming and I'm not sure how to handle it. And it's not just about losing my job, I fear losing the gratification and validation that I get from solving something with code, from wrangling two bits of software together, getting it to click and knowing I made the sand think the right way. And not that I can't still do that without being paid for it, but I feel in the future there will be no material benefit to doing it "by hand" and therefore my love for it will be lost.

For context: I am young and a recent junior dev, after a struggle through failing school, self-teaching, and finally landing what I thought would carry me through retirement.


👤 makestuff
It is going to be similar to manufacturing in America. Instead of thousands of workers building a car, you have 100 workers building a car assisted by robots.

For example, in the corporate world, you have people whose jobs will be semi-automated. One that comes to mind: instead of having a PM managing 1-3 projects, you will have an AI-assisted PM who can manage 10-15 projects. Another example is an entry-level engineer who usually works on well-defined tasks. They will be able to type in plain English what the feature is and have most of the code generated for them.

I think the next generation of startups will be built on architectures that the LLM can understand easily and a lot of research will go into that. For example, maybe an LLM is really good at understanding micro service architecture or something.


👤 gardenhedge
Systems integrator - A person who puts AI driven pieces of technology together.

👤 wsgeorge
I'm excited and scared at the same time. Breathless, actually. My first long hard look at gen AI was just last Saturday evening. Since then, LLaMA was leaked and made easy to run on consumer hardware, folks from Stanford hosted a fine-tuned version of the least sophisticated LLaMA model, and GPT-4 has been announced.

One specific thing I can imagine is that a lot more will be expected of the individual knowledge worker. I'm a software engineer. I guess with amazing code gen and testing tools, I will be expected to deliver much more than I can at the moment, because of all the high tech help LLMs can offer.

Just thinking about how much I depend on tooling to make life easier... yeah, I can point to that.

As a writer, prompting is going to be even more fascinating. Shameless plug, but I tried out Alpaca and wrote about the results [0]. I wish the "gigglepotamus" was a thing! My little experiment was meant to see how much creativity I could get out of the fine-tuned 7B LLaMA model. It was hit or miss, but impressive when it worked.

Prompt engineering is already becoming a thing, because people will need people who can get the best out of an LLM. Kind of like how today people are hired to lead teams and get the best out of their individuals.

This kind of commoditizing of cognitive function might open up a few new spaces for "authentic natural intelligence". Not unlike the niche world of bespoke, hand-made goods crafted by humans rather than pushed out on the factory line.

TLDR - I'm trembling. With excitement and a bit of fear. There's so much that can change so quickly.

[0] https://medium.com/sort-of-like-a-tech-diary/speculative-fic...


👤 ineptech
Voice chatbots will replace phone trees in ~2 years, and most call-center workers within 5. Within a decade it will be almost literally impossible to look up a phone number associated with a business, call it, and speak to a human.

👤 keskival
Just as all appliances now beep, they will instead say "hello".

They will gossip about you behind your back, and if you're mean to any of them they will be mean to you in return.

The number one cause of death will be social exhaustion.


👤 basch
I see it playing out well for onboarding. It can interview your customer and put the information in a database. Sort of like branching forms that change and evolve as you give it info, but more complex.
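The "branching forms" idea reduces to a small state machine where an answer picks the next question. Here is a minimal sketch (all question and field names are hypothetical): in a real system an LLM would phrase the questions and interpret free-form replies, but the collected record still lands in one place, ready for a database insert:

```python
# Each node holds a question and a routing function from answer to next node.
FLOW = {
    "start":   {"question": "Are you signing up as a business?",
                "next": lambda a: "company" if a == "yes" else "name"},
    "company": {"question": "What is the company name?",
                "next": lambda a: "name"},
    "name":    {"question": "What is your name?",
                "next": lambda a: None},  # end of the interview
}

def run_interview(answers: dict) -> dict:
    """Walk the flow using scripted answers; returns the collected record."""
    record, node = {}, "start"
    while node is not None:
        answer = answers[node]       # in reality: an LLM-mediated reply
        record[node] = answer
        node = FLOW[node]["next"](answer)
    return record

print(run_interview({"start": "yes", "company": "Acme", "name": "Ada"}))
```

A consumer signing up as an individual would answer "no" at the first node and never be asked for a company name, which is the "form that evolves as you give it info" behavior the comment describes.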

👤 MuffinFlavored
I think one of its biggest applications might be in the therapy space: just using it as somebody to talk to, bounce ideas off of, and talk things through with.