HACKER Q&A
📣 atleastoptimal

What are your AI hot takes?


Everyone seems to have a vague sense of how things will change due to advancements in AI in the next decade. Job disruption, personalized content with generative models, people developing relationships with chatbots, etc.

What are some unconventional predictions you have for the development, impact and overall direction for AI over the next 5 years? Do you think we've hit a plateau, or will there be changes and breakthroughs that most people don't expect?


  👤 rektide Accepted Answer ✓
My reply to the BBC's "AI is the most important tech advance in decades" is my hot take. It's a sad shit show of obfuscating sources & hiding embedded beliefs. It will devalue & destroy meaning & understanding while adding nothing to human intellect.

This bankrupt, poor, shitty, overrun society keeps thinking the ends are what matter. Those people are trash. No one knows what the real ends are; reality is an interactive, integrative process for learning & figuring things out. The only human value is escaping path-dependence & establishing a right to will, but AI keeps presenting pat, simplistic, convenient truths that have no basis and no argumentation for or against, which hem us into a middling existence of being merely truthy. Without reason or cause, it just navigates the most mediocre path.

https://news.ycombinator.com/item?id=35256041


👤 dougmwne
Hot take: AI is dumb. Most humans are dumber. Companies are not really about efficiency, just power and perception. Next up, a million white collar professionals using GPT to email each other verbose nothingness, strangely no one will notice much of a change.

👤 tikwidd
ChatGPT-style AI should be called AC - Artificial Communication. It is trained on an extremely limited selection of utterances - most of our thinking is never written down. But no matter how many utterances are plugged in, the system will never be able to learn how to think. Its capabilities are stochastic and generalising, not systematic and generative.

An utterance is the expression or outcome of an internal thought process. A machine learning model is trained on the outcome, but not the process. Utterances look like thoughts to us; like pareidolia, we can't help but find thoughts in utterances. This is what makes ChatGPT so compelling.

People don't learn how to think either - it's a capability that grows from a genetic endowment. After the next AI winter, when we get over the spectacle of Artificial Communication, we may begin to examine our own generative capacities for thought. We'll know we're on the right track when AI is slow, uncooperative and childish, fails to thrive when exposed to nonsense training data, and requires very little of the right kind of training data to acquire its language.


👤 waboremo
The idea that you will have to train specifically on "AI skills" is so silly it's almost comedic. You are still going to have to be a good writer/programmer/etc., but when anybody can easily imitate one, everything is going to revolve around your reputation as a good [role here]. Good news: everyone still has to learn skills. Bad news: reputation is fragile.

The mix of generative content and every platform relying on insular bubbles to keep people engaged is such a dangerous mix it should have been regulated yesterday.

AI will not hit a plateau for a long while, but local and decentralized tooling will.


👤 leros
Not really a hot take, but I think we're going to find that AI is just another tool to enable humans to do their work.

👤 LinuxBender
I have loads of unpopular opinions, always happy to share them.

What are your AI hot takes?

That AI does not exist and will not in our lifetimes. Big Data scientists have co-opted the term to mean something possessing sentience; some even use it as an all-encompassing term for their field now. I do not blame them: co-opting the term is good marketing.

They have built an amazing subset of machine learning. Language learning has come a long way, and so has AGI research. None of it contains real sentience, just convincing mimicry. Humans barely understand how our own conscious minds work.

True AI would not need to be fed terabytes of data. Like a child, it could observe the world around it, ask intelligent questions, and derive intelligent conclusions, even if sometimes wrong, without being force-fed intelligence. That does not yet exist. Unlike a human, however, true AI would lack bias, censorship, and confident lies. There would be no need for word-play trickery to get it to tell us what it has learned; it would be able to say with confidence what it truly knows and show its work as to how it reached its conclusions.

That is not in any way to suggest that what has been created thus far is not useful. I could envision hundreds of ways to integrate the existing tools into businesses, governments, the military, and beyond. I am curious what rate limits will exist.


👤 quickthrower2
Microsoft/OpenAI is using the drug dealer business model: giving away amazing power for free or cheap, hoping to seek rent later on, once no one can do anything, from writing code to dressing for a date, without a bot spitting something out first.

👤 CM30
AI won't replace that many creative jobs in the long run because the value of many of those creative jobs doesn't come from some objective aspect of the work.

Being technically good isn't what's important here; it's the brand and reputation you build up. People won't just stop buying music from human bands and singers, or art from human artists, or games from human developers and studios because AI can make something 'better', since the creator or brand is like 90% of the selling point.


👤 mikewarot
I think we're going to see a few more large step improvements in training efficiency.

I think hardware reliability might sneak back into the picture, as the odd bit error may prove far more disruptive to these optimized training algorithms than it is today.


👤 johlits
AI will learn to fly an airplane from scratch. And humans will sit in it.