AI has gone through many cycles of “only a human can do X” -> “AI does X” -> “oh, that’s just some engineering, that’s not really human” or “X is no longer in the category of mystical things we can’t explain that only a human can do”.
LLMs are just the latest iteration: “wow, it can do this amazing human-only thing X (write a paper indistinguishable from a human’s)” -> “doh, it’s just some engineering (it’s just a fancy autocomplete)”.
Just because AI is a bunch of linear algebra and statistics does not mean the brain isn’t doing something similar. You don’t like the terminology, but how is reinforcement “learning” not essentially the same as reading books to a toddler, pointing at a picture, and having them repeat what it is?
Start digging into the human with the same engineering view, and suddenly the human also becomes just a bunch of parts. Where is the human in the human once all the parts are explained the way an engineer would explain them? What would be left? The human is computation too, unless you believe in souls or otherworldly mysticism. So why not think that AI, as computation, can eventually be the equal of a human?
GitHub Copilot writing bad code isn’t a knock on AI; it’s realistic: a lot of humans write bad code.
Just like with most of these hype cycles there is an actual useful interesting technology, but the hype beasts take it way overboard and present it as if it's the holy grail or whatever. It's not.
That's tiring, and really annoying.
It's incredibly cool technology, and it is great at certain use cases, but those use cases are somewhat limited. In the case of GPT-3, it's good at generative writing, summarization, information search and extraction, and similar things.
It also has plenty of issues and limitations. Let's just be realistic about it, apply it where it works, and let everything else be. Now it's becoming a joke.
Also, a lot of products I've seen in the space are really really bad and I'm kinda worried AI will get a scam/shitty product connotation.
When I started with the topic I watched a documentary featuring Joseph Weizenbaum ([1]) and felt weirded out that someone would step away from such an interesting and future-shaping topic. But the older I get, the more I feel that technology is not the solution to everything, and AI might actually create more problems than it solves. I still think Bostrom's paperclip maximizer ([2]) lacks a fundamental understanding of the status quo and just generated unnecessary commotion.
[1] http://www.plugandpray-film.de/en/
[2] https://www.lesswrong.com/tag/paperclip-maximizer
Where will the AI hype train go? The internet as we know it already has so much SEO engineered content and content producers chasing that sweet, sweet advertising money that they could all be replaced by mediocre, half-true, outdated content created by bots. So do we have to wait until our refrigerators are "AI powered, predicts your groceries for you!" in order to see the usefulness?
If you have a task or are trying to accomplish something, and the way you do it is by moving a mouse around or typing on a keyboard, then it is very likely that an AI will be able to do that task. Doing so is a more or less straightforward extension of existing techniques in AI. All that is necessary is to record you performing the task, and then an AI will be able to imitate your behavior. GPT-3 can already do this for text, and doing it with trajectories of screen, mouse, and keyboard instead is not fundamentally different.
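For the curious, here is a minimal behavioral-cloning sketch of that idea, with random stand-in data and a toy network (none of this is any real product's pipeline): record (screen, action) pairs from a demonstration, then train a model to predict the demonstrator's action from the screen.

    # Minimal behavioral-cloning sketch. The frames and actions below are
    # random stand-ins for recorded screenshots and logged keyboard/mouse events.
    import torch
    import torch.nn as nn

    class PolicyNet(nn.Module):
        def __init__(self, n_actions):
            super().__init__()
            # Tiny CNN encoder for 84x84 grayscale screen frames (for brevity).
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 8, stride=4), nn.ReLU(),
                nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
                nn.Flatten(),
            )
            self.head = nn.Linear(32 * 9 * 9, n_actions)  # logits over discrete actions

        def forward(self, screens):
            return self.head(self.encoder(screens))

    frames = torch.randn(256, 1, 84, 84)    # stand-in for captured screenshots
    actions = torch.randint(0, 10, (256,))  # stand-in for logged user actions

    policy = PolicyNet(n_actions=10)
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(10):  # plain supervised learning: imitate the demonstrator
        opt.zero_grad()
        loss = loss_fn(policy(frames), actions)
        loss.backward()
        opt.step()

Scaling that same supervised recipe to long trajectories and richer action spaces is, as said above, engineering rather than a fundamentally new idea.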
So yes, it is true that there is a lot of hype right now, but I suspect it is a small fraction of what we will see in the near future. I also expect there will be an enormous backlash at some point.
We are now exposed to companies hyping huge general purpose models with whatever tech is the latest fad, which resonates with the average person who wants to generate memes, etc.
This is impressive only at the surface level. Take a specific application: prompt it to write you an algorithm. Outside of anything it can copy and paste from a textbook, these models will generate bad or incorrect code and then explain why it works.
It's like having an incompetent junior on your team who has the bravado of a senior 10x-er.
That's not to say "AI" doesn't have a purpose, but currently it seems hyped up mostly by salespeople looking for Series A funding or an IPO cash-out. I want to see models developed for specific tasks that will have a big impact, rather than the sleight-of-hand or circus tricks we currently get.
Maybe that time has passed, and general models are the future, and we will just have to wait until they're as good as any specific model built for any task you can ask of them.
It will be interesting what happens when these "general" models are used without much thought and their unchecked results lead to harm. Will we still find companies culpable?
Imagine how the HN users who disagree with that feel. It is beyond fatiguing. I’m frequently reminded of the companies who added “blockchain” to their name and saw massive jumps in their stock price, despite having nothing to do with blockchains¹.
¹ https://www.theverge.com/2017/12/21/16805598/companies-block...
Note that nobody is pretending that ChatGPT is "true" intelligence (whatever that means), but I believe the excitement comes from seeing something that could have real applications (and so, yes, everybody is going to pretend to have incorporated "AI" into their product for the next two years, probably). After 50 years of unfulfilled hopes in the AI field, I don't think it's totally unfair to see a bit of (over)hype.
In other words, if you’re fatigued already, I have some bad news regarding the rest of your life.
(…or take a good step back from the news cycle, check in once or twice a week instead of several times daily. News consumption reduction is good for mental health.)
At the same time, people's actual quality of life and economic standing are going nowhere, there is a fragility that bursts into the open under every stress, politics has become toxic, and the environment is being degraded irreversibly.
Yet people simply refuse to see and they keep chasing unicorns.
Sorry, I couldn't help myself; that was the ChatGPT response to your question. More informatively: AI is clearly at the height of inflated expectations. It will provide a helpful tool. However, it will not push people out of jobs. Furthermore, right now it gives a much better search experience than Google, as it is not yet filled with ads and has not yet been gamed extensively by SEO. It is doubtful this will stay that way.
I've got "AI Fatigue" not in the sense that it is overhyped, but just like "JS Fatigue": It is all very exciting, and new genuinely useful and impressive things are coming up all the time, but it's too much to deal with. I feel like it's difficult to start a product based on AI these days due to the feeling that it will become obsolete next week when something 10x better will come out.
Just like with JS Fatigue back in the day, the reasonable solution for me is something like "let the dust settle a bit before going all-in on the latest cool thing".
- Embedding free text data on safety observations, clustering them together, using text completion to automatically label the clusters, and identifying trends
- Embedding free text data on equipment failures. Some of our equipment failures have been classified manually by humans into various categories. I use the embeddings to train a model to predict those categories for uncategorized failures.
- Analyzing employee development goals and locating common themes, then using this to identify gaps we can fill with training offerings (a rough sketch of the embedding workflow is below).
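That sketch, with toy data and an off-the-shelf sentence-embedding model standing in for whatever is actually in use; the model name and examples are illustrative only:

    # Toy sketch: embed free text, cluster it to surface trends, and reuse the
    # embeddings to extend manually assigned categories.
    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    observations = [
        "valve left open after maintenance",
        "worker not wearing eye protection",
        "spill near loading dock not cleaned up",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works
    X = model.encode(observations)                   # one dense vector per text

    # 1) Unsupervised: group similar observations together to spot trends.
    #    (Labeling the clusters with a text-completion model is omitted here.)
    clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)

    # 2) Supervised: train on the human-labeled subset, predict the rest.
    labeled_texts = ["bearing seized", "seal leaking", "motor burned out"]
    labels = ["mechanical", "leak", "electrical"]
    clf = LogisticRegression(max_iter=1000).fit(model.encode(labeled_texts), labels)
    print(clf.predict(model.encode(["pump seal dripping oil"])))  # likely "leak"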
It's the same kind of people that were hyping cryptocurrencies in the past. People who understand nothing about the technology, but shout the loudest about how amazing it is (probably to make money off of it). Those are also the kind of people that will be the cause of the next AI winter.
But it seems like the current trendline for “AI” is going to be worse. Why be excited about building tools that will undermine democracy and cast doubt on the authenticity of every single photo, video, and audio clip? Now it can be done cheaply, by anyone. It will become good enough that we cannot believe any form of media, and it will make it impossible to determine whether the written word is coming from an actual person. This is going to be weaponized against us.
And at the very least, if you think blogspam sucks now, wait until this becomes 99.9999% of all indexed content. It’s going to jam all of our comms with noise.
But hey it looks great on your resume, right?
Maybe I’m too cynical, would love for someone to change my mind. But you are not alone in your unease.
In the meantime, all the attention and media is easing people into thinking about some difficult questions that we may end up having to deal with sooner than we'd like.
The hype can be annoying, and I'm sure there'll be suckers who lose a lot of money chasing it, but I'm also sure AI will get better, and be better understood too, as a result of all the attention and the attempts to shoehorn it into new roles and environments.
It just feels like a waste of time having read the comment. Even if the information is there, I don't trust the user to be able to distinguish between what's true and what's confidently false. If it's not in my skillset or knowledge base, I assume it's wrong, because I can't tell and can't ask follow-up questions.
Me using it as an assistant? Love it. Others using it as an assistant? I don't trust them to be doing it right.
In any case I want to read your opinion, copy paster, not a robot I could just ask in my own time! Just don't post if you've got no thoughts lol
BUT the rate of change in AI is enormous and it will be a much bigger deal than the internet over the next 10 years. Not because of API wrappers, but because the cost of many types of labor will effectively go to zero.
At least all the previous crazes didn't threaten to replace humans, so I suppose this tech hype bubble is arguably even more irritating.
It seems more likely that we'll surpass the hype than not in the next few decades. I think people have forgotten how quickly technology can move after the last 20 years of relative stability where more powerful hardware didn't really change what a computer can do.
Cloud for this cloud for that! Blockchain for this blockchain for that! Big Data for this, big data for that! Web scale all the things!
The marketing-driven development is exhausting and has done nothing to improve technology. This happened because of 0% interest rates and free money. People have been vying for all the VC money by creating solutions in search of problems, and those end up being useless, because no such problems exist.
To the general public, ChatGPT and the Image Generators 'just appeared,' and appeared in a very impressive and usable form. Of course there were many waves of ML advances leading up to these models, but for many people these tools are their first opportunity to play with ML models in a meaningful way that is easy to incorporate into daily life and with very little barrier to entry.
While the tools are impressive and have many applications, my questions about them concern the volume of information they are capable of producing and our capacity to consume it. Tools can be used to synthesize the information, and tools can act on it, but there is already too much 'noise.' There is a market for entertainment tailored to exact preferences, but it won't provide the shared cultural connection mass media provides. In the workplace, e-mails and documents can be drafted quickly. This is a valuable use case, but one that augments and increases productivity. It will lower the bar necessary for certain jobs and raise productivity expectations, but it will become a tool like Excel rather than a replacement like a factory robot (for now).
The Art of Worldly Wisdom #231 - Never show half-finished things to others. <- ChatGPT managed its release perfectly in this regard.
IMO AI has reached this stage of its lifecycle. There have always been, and still are, valid use cases for AI, but I think the GPT-3 inspired applications we've been seeing as of late are no more than impressive tech demos. It's the first time the general public has seen a glimmer of where AI can go, but it really is just a glimmer at this point.
My advice is to keep your head down and try to be selective with the content you engage with on AI. It seems like every feed refresh I have some unknown Twitter Verified account telling me why swaths of the population will be out of a job soon. The best heuristic I have so far is to ignore AI-related posts/reshares from names I haven't heard of before, but of course that has obvious drawbacks.
It's not AI, it's an IF statement, for crying out loud :-(
But this is the industry we're in, and buzzword-driven headlines and investment are how it goes.
Actual proper AI getting some attention makes a pleasant change tbh :-)
I think putting AI inside everything will give us the opportunity to experience first-hand what a local extremum of a multidimensional function is and how it differs from the global extremum. Our paper gets eliminated because of a glitch in some AI-based curriculum vitae review. Our car loses a wheel because computer vision failed (or we lose our heads, like that one Tesla owner)... Scariest for me is that we are starting to build more and more things whose inner workings we are unable to understand. Hence an intelligence crisis might be creeping slowly into our civilisation, and then bam... like in Janusz Zajdel's Van Troff's Cylinder.
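(For anyone who hasn't met the concept, a toy illustration: plain gradient descent on a one-dimensional function with two valleys lands in a different minimum depending on where it starts, and only one of them is the global one. The function and step size below are arbitrary.)

    # f has two minima: a global one near x = -1.30 and a local one near x = 1.13.
    def f(x):
        return x**4 - 3*x**2 + x

    def grad(x):
        return 4*x**3 - 6*x + 1

    def descend(x, lr=0.01, steps=2000):
        for _ in range(steps):
            x -= lr * grad(x)  # follow the slope downhill
        return x

    for start in (-2.0, 2.0):
        x = descend(start)
        print(f"start={start:+.1f} -> x={x:+.3f}, f(x)={f(x):+.3f}")
    # The two runs settle in different valleys; gradient descent has no idea
    # that the second valley isn't the best there is.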
It will be increasingly tiresome until it becomes commonplace, then the disastrous consequences will become the next tedium.
In a better world, it’d be possible to occasionally pause, take a breath and think about what the models are actually doing, how they’re doing it, and if that’s what we want something to do. However, it’s hard to find space to do so without getting run over by people “moving fast” and breaking things and feels like doing the hard corrective work is so much less rewarded.
I'd rather we have bitcoin crazes, scaling crazes, nosql crazes and GPT crazes than this industry commoditizes itself to hell and I have to spend the rest of my career gluing AWS cognito to AWS lambdas for $55k / year.
At the same time I'm pretty sure that it will wildly change any industry where creativity is critically important and quality control either isn't that important or can be done by amateurs. There is substance at the core of the hype.
It seems too exciting to me and I am eager to see more AI. It's fascinating stuff.
I'm excited for these emerging technologies, but I don't care about any of the products people want to sell based on them. I've spent the past 27 years developing zero-effort self-filtering against spam and hucksters, so I'm not even aware of any AI startups, just as I can't tell you the names of any Bitcoin exchanges. That's just not in my sphere, and I'm not missing out.
Hunker down and have fun. It's incredibly accessible, and you likely have more than you need to get started making it work for you.
Best you can hope if you're a "Y" person is for the marketers to get bored of the current Y and jump to the next one, leaving yours alone.
AI is wide and deep, and its proper uses are so so far removed from mainstream media and the hype-train.
AI still has so many undiscovered areas of usefulness that it will do nothing short of transforming those areas.
But most of the time you hear about Stable Diffusion, see melted faces and weird fingers, and screenshots of ChatGPT.
These, in breadth and depth, are nothing compared to what is possible.
So, no, I am not AI fatigued as I don't pay much attention to these hypes at all.
People are still trying to figure out what the new AIs can and can’t be used for.
Some people will try to build ridiculous products that don’t work, but that’s just part of the learning process and those things will be weeded out over time.
There’s no ‘clean’ path to finding all the useful applications of these new models, so be prepared to be bombarded with AI powered tools for a few more years until the most useful ones have been figured out.
While crypto or VR tech still hasn't arrived in our daily lives, most of my friends are already using tools like ChatGPT on a regular basis.
None of this is new; there's a special magic phrase to attract VCs that changes every few years, and for now AI is it (we've actually been here before; there was a year or so a while back when everything was an "intelligent agent"/chatbot).
AGI could be ML-driven, but most likely it won't be. Neural nets are still AI tech. Even Bayesian inference is weakly AI tech.
The public always misuses words. Words change to match that meaning.
ChatGPT is the "new" booster shot, it's a hell of a boost and this one might stick. What will not stick is the copious amount of wishful thinking and bullshit the usual suspects are bringing in. ChatGPT is a godsend after crypto went bust and the locusts had to go somewhere else.
I suspect we will have to endure a crypto-craze-like environment for a couple of years at least...
When asked for references it cannot refer to any. Scientifically useless?
Until AI can filter fact from fiction, it will continue to frustrate the technical people who rely on absolute truths to keep important systems running smoothly.
That uncritical handling, along with a growing supply of such products, can lead to the next big bullshit bubble.
Machine Learning research isn't "for us." Let the researchers do what they do, and toil away in boring rooms, and eventually, like the internet slowly did, it will be all around us and will be useful.
Personally I enjoy creating language models and agent networks, at work I make predictive models so.. :)
Even if I didn't find the tech fascinating and especially the new emergent features of the big LMs, I would be left in the dust professionally if I ignored it. The tech really works for a lot of stuff.
The only thing we can definitely do better than machines is sad, proud sophistry. “Not real understanding, not real intelligence, just a stochastic parrot”. Sure, keep telling yourself that.
I asked ChatGPT some general questions, and it gave me pretty coherent answers. But when I asked a really specific question, like "How to rewrite the Linux kernel in Lisp", it gave me a seemingly gibberish answer.
This was about two months ago, BTW. Maybe ChatGPT has already learned more stuff and is smarter. Let's see...
And you're not alone; I've felt the same since ~2015.
People love the optimism and the paranoia and uncertainty.
Just wait for it to underdeliver. Investors will get scared and we will be back to calling it machine learning.
Or who knows, maybe there will be an application for blockchains too.
We've seen this pattern many times. And there is money to be made, for sure, but the value might not be there yet.
The sad truth is that ChatGPT is about as good an AI as ELIZA was in 1966, it's just better (granted: much better) at hiding its total lack of actual human understanding. It's nothing more than an expensive parlor trick, IMHO.
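For reference, the heart of the ELIZA trick fits in a dozen lines. The rules below are toy ones for illustration, not the actual 1966 DOCTOR script:

    # ELIZA-style pattern matching: canned templates, zero understanding.
    import re

    RULES = [
        (r"i need (.*)", "Why do you need {0}?"),
        (r"i am (.*)",   "How long have you been {0}?"),
        (r"my (.*)",     "Tell me more about your {0}."),
        (r"(.*)",        "Please, go on."),  # catch-all when nothing matches
    ]

    def respond(text):
        text = text.lower().strip(".!?")
        for pattern, template in RULES:
            m = re.fullmatch(pattern, text)
            if m:
                return template.format(*m.groups())

    print(respond("I am tired of AI hype"))
    # -> How long have you been tired of ai hype?

The difference with ChatGPT is scale and statistics, not the sudden appearance of understanding; at least, that is the claim here.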
Github CoPilot? Great, now I have to perform the most mentally taxing part of developing software, namely understanding other people's code (or my own from 6 months ago...) while writing new code. I'm beyond thrilled ...
So, no, I don't have an AI fatigue, because we absolutely have no AI anywhere. But I have a massive bullshit and hype fatigue that is getting worse all the time.
I just use the features on the iPhone where some photos get enhanced or I can detect and copy text from images.
So far it’s going very well.
bring me npmGPT
With that, all the hype-sters and shady folks rush in and it can quickly become hard to differentiate between what’s good, what’s misplaced enthusiasm, and what’s just a scam.
These scenarios are also a big case study in the Dunning-Kruger effect. I’ve already got folks who haven’t written a line of code in their life trying to “explain” to me why I don’t understand that some random junk AI thing (which isn’t really AI) is the next big thing. Sometimes you just have to sit there and say, “thanks for your perspective”.
ofc HN over-analysing is killing the fun
Try to focus on the bright side - now that you've seen behind the curtain, you can more easily avoid the hacks and shysters. They will try to cast the "ML/AI" spell on you and it won't take.