I don't need you to convince me I'm wrong. I just want to know if there are other people here that feel the same way. Thank you for commenting thoughtfully!
What do you expect? Might as well ask an AI to generate that text, same level of information you'll be getting.
If you'd asked experts six months before ChatGPT when we'd have the current capabilities, they would've said we were at least 10 years away.
I'm a freelancer, and all my clients talk about is AI. I think it's cool tech, but also quite overhyped.
But I get it, AI has become the magic box that the masses of "idea guys" can use to realize "the next big thing". No more dealing with devs or designers.
Welp, guess we have to wait for the trough of disillusionment.
Example filters; change them as desired:
# Filter some topics
# regex matches (B/b)lockchain, anything containing "coin", or a title ending in a year 2023-2029
# top (title / url)
news.ycombinator.com##tr.athing span.titleline > a:has-text(/(lockchain|coin|202[3-9]$)/):upward(tr)
# bottom (stats / comments)
news.ycombinator.com##tr.athing span.titleline > a:has-text(/(lockchain|coin|202[3-9]$)/):upward(tr) + *
#
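These appear to be uBlock Origin cosmetic filters (:has-text() and :upward() are uBlock-specific procedural operators). As a hypothetical extension for this thread's topic, the same pair of rules could also hide AI-related titles by adding a few keywords to the regex; the keywords below are only an illustration and will need tuning to avoid false positives:

# hypothetical extension: also hide titles mentioning AI / LLM / GPT (tune keywords to taste)
news.ycombinator.com##tr.athing span.titleline > a:has-text(/(lockchain|coin|\bAI\b|LLM|GPT|202[3-9]$)/):upward(tr)
news.ycombinator.com##tr.athing span.titleline > a:has-text(/(lockchain|coin|\bAI\b|LLM|GPT|202[3-9]$)/):upward(tr) + *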
I am not a luddite by any means, I constantly keep trying out LLMs in order to see if I am missing anything. I am trying to get some utility from them, but I just can't.
"But you can use them to generate code", no, not really. First of all, why would I want to generate code? Code is a liability, I want less code, not more. Also, code is very expressive, I can say exactly what I want in code much more effectively than I can try to explain in English to an LLM. The LLM always misunderstands and generates garbage, garbage that takes more time for me to read, understand and fix, compared to simply writing it in the first place.
"Ah, but you can generate the boring stuff, boilerplate, stuff like that". I don't write any boilerplate, any repetitive things I automate by pricipled things, like more abstract code or by using my very effective text editor skills. Trivial code is easy to get right, by definition, why would I risk getting it wrong by using an LLM?
I do get some utility out of LLMs by asking questions about stuff I don't know about. The LLM's answers are almost always wrong, but they can push me in the right direction by informing me of things I am not aware of. This is not really a feature of LLMs, it's just that Google has become garbage at searching. So yes, LLMs are useful, but only by accident.
Generative "art"... miss me with all that.
For example, here are a couple from my HN favorites list, and I wouldn't mind seeing more of these articles:
* An Intuitive Explanation of Sparse Autoencoders for LLM Interpretability (https://news.ycombinator.com/item?id=42268461)
* Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet (https://news.ycombinator.com/item?id=40429540)
* Refusal in LLMs is mediated by a single direction (https://news.ycombinator.com/item?id=40242939)
80% of shopping is done in stores. Offline retail is still quite successful despite Amazon.
But it was successful because it adapted; most stores that completely ignored the Internet failed.
OTOH, the ones that tried competing directly against Amazon failed at a much higher rate.
That suggests the middle road: neither ignoring AI nor embracing it completely. Instead, concentrate on the many strengths we have as humans.
But I don't find its generative capabilities to be all that impressive. It's generic and uninspired, at least in text generation. I feel the same way about most new things that have emerged from the tech scene since blockchain. I don't even bother with tech news nowadays.
AI's adoption has been quite fast, though. Unlike blockchain, which never went mainstream, AI has captured the casual user segment. It's being used everywhere from recruiting to emails, in offices and in schools. It's hyped, all right, but real people are using it.
AI-related content is seasonal. This new LLM trend (which is now shifting towards “agentic AI”) will eventually fade and be replaced by another shiny new term. Basically, there’s a lot of noise, but every now and then, some gems pop up. Finding those articles is the fun part and the reason why I visit HN every day.
Aside from the fact that a ton of people here make money with AI or are (or will be) heavily invested in AI, there's just the curiosity factor, which I had and then lost for the reasons you described and many more.
My brain clicked in the past three days, and it's annoying that people didn't explain it properly but waited until I reasoned myself into it. It makes me angry, and I'm gonna build a tiny potato cannon this summer and hunt them down :P (I'm not, but I've never built one, so ...)
Humans take a lot of wrong turns in their lives, and it's important not to take that the wrong way. Envy is brutal and makes people do buttloads of dumb stuff, and among those things are all those AI enhancers and data krakens built on top of GPT, and of course all those people who create content and make those this-is-not-marketing videos ... but they all serve some customers, and of course, some Ponzis.
It rarely is what it is in this digital world, just as it always has been in any consumer market. Honesty? Every single person and supplier counts.
Is it all pathetic? Is pathetic bad from all POVs?
A lot of people do it to pussy-grab money out of gullible people, and if you really want to get the benefits of AI, use it to hack these people and take 'em out of their fraudulent businesses so they have a reason to get better, build better, or pivot the fuck away.
I think these hype topics are inherent to HN though, given that the org is fundamentally a venture capital financial institution.
By nature, VC is always riding a wave of hype. Investors put money into a startup in hopes that it generates returns like the next singer on stage in her underwear.
The industry could support smaller returns from thousands of smaller startups, but the big payoff comes with investment in the next big superstar.
These are the same financial forces that guide massive industry consolidation.
Who actually likes Google today? Investors...
Genetics research, especially for more rare conditions with less traditional research funding, is one application where I can’t help but feel excitement.
AI image & song generation in the style of a popular artist is one application where I only feel sadness.
Like so many successful applications of computers, it's a new way of taking a monotonous task and grinding through it quickly. In this way I think it's different from some previous fads (e.g. blockchain), and there is real utility.
I agree though that it's exhausting to read people anthropomorphize and hype it up.
With big advances, the current wave of AI is finally promising to turn this dream into reality.
Besides, it's already quite useful even with its current shortcomings.
--
² Here's a short, very incomplete list:
The Matrix, 2001 A Space Odyssey, Blade Runner, The Terminator, Robocop, Her, Ex Machina, Ghost in the Shell (1995, 2004, 2017), Data in Star Trek, Star Wars, ...
I see people here running those LLMs on their machines, etc., and I have no idea what they're doing with them or how any of that works.
And writing prompts all day doesn't seem fun.
The paradox is that I work at an AI startup with a real product and real clients ;)
So I don't think it's a fad, and as a developer you have to stay alert to the effects it has on your profession.
AI is not useless the way NFTs were.
But at the same time, I don't think Hacker News is that heavily flooded with AI-related posts. Of the 30 posts I see on the home page, 3 are AI-related, one of them being this post complaining about it. The next page has even fewer, with only 1 or so posts about an AI-related topic.
I'm interested in AI tech (not philosophy or prophecies). But HN isn't that great a source for tech details; Reddit and local boards have helped much more.
At the same time, I have to ignore lots of shallow startups and 10x-productivity reports from React form developers, because those spark no interest.
AI hype is everywhere. On HN there is genuinely much higher-quality conversation about AI than in many other places.
So, no, the question in the title doesn't resonate with me. I don't think there is an obsession, and for the AI discussion that does happen, I'm happy with its quality.
But if you follow the thread to the end, this is the beginning of something that will change everything, probably more than the Industrial Revolution even.
The only thing I even remotely use AI for is Codeium in VS Code and that's almost entirely as autocomplete - every time I've tried to use it to generate anything more than a simple function I end up spending as much time going through it and rewriting it to suit my needs as I would have just writing it myself in the first place.
I like writing code, the same way I like writing prose and music and drawing. I will never want or need AI to do these things for me and, if I'm being honest, I'm always gonna be a snob to people who do.
If you don't care about the act of creation and the process of learning how to do it, what's the point? To "generate content"? To make money? Go be a prostitute then, if money is all you care about.
Maybe a life focused on efficiency of generating content works for some people but, fuck me, I'd rather gouge my own eyes out with a spoon.
So yeah, totally with you.
They're all great for getting you started on a (re)search path, provided you take care to validate the information so easily acquired.
It's a fascinating and useful tool, but it's no silver bullet. Calm down and carry on, people, less froth (I'm looking at YOU, Microsoft!).
I flagged this submission for this reason.
Of course there is a lot of lame content and grifting around this, but I'm pretty sure this is not another Web 3.0 / NFT / crypto hype wave; this is here to stay and expand.
* People who like AI or China or Trump or Apple.
* People who hate AI or China or Trump or Apple.
You see AI slammed as much as it is glorified here. And AI is the topic of the industry today; it would be like shoving your head in the ground if you ignored it.
Nope.
Someone took an HN poll last year and a majority agreed "AI" is overhyped.
I see your opinion a lot here on HN, and I wonder if there is a segment of the population in this field who choose to stick their heads in the sand. No doubt there is a lot of hype that won't materialize, but unlike some of the other hype cycles, people are starting to see value from this current cycle already. Certainly at the bleeding edge the cost may exceed the value, but it's only a matter of time before those costs get eaten away.
So no, I don’t understand your feelings, I am excited for the future and if something gets posted on HN that I don’t like, I don’t read or upvote it.