Every "AI" related business idea I've seen prop up recently is people just hooking up a textbox to ChatGPT's API and pretending they're doing something novel or impressive, presumably to cash in on VC money ASAP. The Notion AI is an absolute fucking joke of epic proportions in its uselessness yet they keep pushing it in every newsletter
And a funny personal anecdote: a colleague of mine tried to use GPT-4 when answering a customer question (they work in support). The customer instantly knew it was AI-generated and was quite pissed about it, so the support team now has an unofficial rule not to do that any more.
Science aside, it currently feels like the same hype as crypto a couple of years ago.
I'm sure pretty much every accountant in the world got up and went to work that day exactly like they'd been doing all of their career. It probably didn't feel different for many of them. Most of them had probably never used a computer then, and a lot of them probably didn't feel any particular need to, at least until they tried it. There were probably more than a few near the end of their careers who managed to keep doing their job the old way for another half decade or whole decade, because even when the future moves fast, it's never evenly distributed.
There were probably also more than a few who saw VisiCalc and bought an Apple II to start doing their own books and ended up regretting it. I don't know where we are and how things will pan out, but I think there's a parallel.
ChatGPT made some waves at the end of last year. My in-laws were wanting to talk to (at) me about it at Christmas. There's plenty of awareness outside of tech circles, but most of the discussion (both in and out of the tech world) seems to miss what LLMs actually _are_.
The reason why ChatGPT was impressive to me wasn't the "realism" of the responses... It was how quickly it could classify and chain inputs/outputs. It's super impressive tech, but like... It's not AI. As accurate as it may ever seem, it's simply not actually aware of what it's saying. "Hallucinations" is a fun term, but it's not hallucinating information, it's just guessing at the next token to write because that's all it ever does.
If it was "intelligent" it would be able to recognise a limitation in its knowledge and _not_ hallucinate information. But it can't. Because it doesn't know anything. Correct answers are just as hallucinatory as incorrect answers because it's the exact same mechanism that produces them - there's just better probabilities.
My sister is not technically minded. She has used ChatGPT to create a permit request to City Hall for a backyard deck, a letter to her boss requesting attendance at a conference, and her performance reviews.
Another non-techie and I use ChatGPT to help us learn French and music theory.
My mother in law uses it to create funny rhyming stories for kids.
Meanwhile, my techie friends are... completely ignoring it. My two best friends are a VMware senior architect and a Java developer / tech manager, and I've been urging them for months to try it.
So I personally live in a situation completely opposite to the one you describe :-). Techies are skeptical of the toy and endlessly discuss its limitations and impact. Non-techies are just using it as a tool.
Also, we're all calling it "AI" for some dumbass reason. That infects our thoughts with unwarranted credit toward the technology.
But the more surprising thing is that even after I explained what it could do, they weren’t even slightly impressed. “So it tells you wrong answers? Sounds utterly pointless”
I was flabbergasted. I think we do live in a bubble. I’m sure architects and surgeons have equally exciting advances in their fields that no one else cares about.
I also think the media's sensationalist phrases should be taken with a pinch of salt: “The tech everyone is talking about”, “here’s the news that got everyone buzzing”, etc.
I do think LLMs are a significant event, but that will only be realised by building on top of them and “showing rather than telling”.
Within tech, the biggest constraint I'm seeing is a failure of imagination about how this tech could be used. So far we've limited our interactions to what we've been shown... chat bots, and if that's all we can imagine, then this is definitely a hype cycle.
But when I speak to those outside of tech, who are not constrained to imagine what they've seen, then I see and hear much different things. It's not the second coming, it's not going to make the whole world redundant, but it is a change and for the most part non-tech people seem more eager to get there. At least, I'm surrounded by positive people who seem to hope that the most mundane aspects of our lives will be replaced by AI and will lead to some qualitative improvement of life (ignoring the cost of living crunch presently hitting most of them).
* All the techies I know have heard about and used it, and most have a healthy dose of skepticism paired with some optimism that it can be used to help solve some previously hard-to-solve problems. This alone, I think, makes it clear that LLMs have staying power in a way that blockchain did not: obvious use cases.
* The normies in my life run the whole spectrum from "never heard of it" to "use it at least sometimes." One interesting subset are the folks who have heard a LOT about it but haven't used it. My lawyer said he had already attended panel discussions about the ethical implications of AI usage in law. I asked if he had USED ChatGPT and he said no; I had to direct him to the URL and walk him through signing up so he could see it for himself. And he's pretty tech-savvy as non-techies go.
Burying the lede, now. Here is my unpopular opinion: there is an outsized "wow factor" when you specifically use ChatGPT because of the fact that it outputs the text seemingly in real-time. It makes it look like it's thinking/talking and viscerally our minds are blown. Bing and Bard generate the response in the background and output it all at once like a search result, doesn't hit the same way.
She said she's heard a lot about it, but it all sounds like marketing hype, tabloid sensationalism, or open alarmism from people who don't seem to know what they're talking about.
She said some of her students had used ChatGPT for creating revision materials for their exams, and that they'd found it useful for that, but she found the assertion that mass unemployment is 6-12 months away 'pure speculation'.
In my work I've yet to find a use for NNs, but maybe for the writing of a lot of templates in one go it could be useful.
I think there is certainly more talk about it inside tech circles, but I feel like it's more of a generational divide than anything else.
This is the real world. Manufacturing. Logistics. Managing people. Building tools for streamlining one tiny part of a workflow and getting people to use them effectively.
This latest client has opened up my eyes to the fact that I've been living in a bubble. I could not be more glad, as I was beginning to suspect that OpenAI is taking over the world.
I don't think AI is the second coming of Christ like some pretend (it is over-hyped outside of tech and somewhat under-hyped inside of tech IMHO) but it's impressive and extremely useful. I've used it many times to help point me in the right direction. I never take what it says as fact, I always "check its work", but it often saves me considerable amounts of time by getting me on the right track and then I can refine what it gave me. I don't use it for writing "real" code (aside from maybe a few small algorithms) but I do use it to help with certain debugging tasks if I think it will be useful. Also for things like "spit me out a shell script that takes a CSV and does X, Y, Z" it's incredibly useful. These are normally one-off tasks that I can do by hand or code if needed but ChatGPT makes it way easier.
When it comes to writing, I'm often faced with "Blank Screen Syndrome" or a similar type of feeling and so getting something on the screen that I can then edit/revise/improve/fix is a huge boon to my productivity.
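As a concrete illustration of the "takes a CSV and does X, Y, Z" class of one-off task, here is a minimal sketch, in Python rather than shell (the column names, sample data, and threshold are all invented for illustration):

```python
import csv
import io

def filter_csv(text, column, threshold):
    """Keep only rows where `column` parses as a number >= threshold."""
    rows = list(csv.DictReader(io.StringIO(text)))
    kept = [r for r in rows if float(r[column]) >= threshold]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(kept)
    return out.getvalue()

# Hypothetical one-off: drop every row under 50.
sample = "name,amount\nalice,120\nbob,40\ncarol,90\n"
result = filter_csv(sample, "amount", 50)
```

The point isn't this particular script; it's that a chatbot can produce a throwaway like this in seconds, and for a one-off task you can verify the output faster than you could have written the code.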
Question: Do you know about ChatGPT? If yes, have you used it?
(n = 1056 in both March and June)
March 2023
Know about it and have used it: 4.8%
Know about it but haven’t used it: 25.5%
Don’t know about it: 69.7%
June 2023
Know about it and have used it: 15.2%
Know about it but haven’t used it: 55.8%
Don’t know about it: 29.1%
The survey also suggests that awareness is higher among younger people and that usage is higher among males.
Search results at Amazon Japan and Amazon USA show that many books are being published about ChatGPT in both Japanese and English [2, 3]. Quite a few Japanese magazines have had cover stories about it in recent months [4].
[1] https://markezine.jp/article/detail/42509
[2] https://www.amazon.co.jp/s?k=ChatGPT
[3] https://www.amazon.com/s?k=ChatGPT&ref=nav_bb_sb
[4] https://www.amazon.co.jp/s?k=ChatGPT&rh=n%3A13384021&__mk_ja...
Note that every "AI summer" prior to this one has produced something useful, just never all that world-changing compared to people's expectations. Most people think that if it can BS (excuse me, generate convincing text), then it can do lots of other jobs. Well, in previous AI summers, people thought that if it can play chess, or answer Jeopardy questions, it could do many other things that it turned out, it could not do (or could not do well enough).
For that matter, the ability to do math, at one time, was thought of as a sign of great intelligence. But, it turned out that computers could do math, long before they could do anything else. Our intuition about, "if it can do this, soon it will be able to do that", is not very good.
I have heard ChatGPT described as a better autosuggest, which sounds about right. It's not that autosuggest isn't useful, it can be very useful, but it's not a thing that is going to change the world, and the jobs which it will automate are neither numerous, nor very well paid even now.
If you're trying to pump that VC hype machine for $$, though, cryptocurrency is not going to work anymore, so they need something.
I mean, I'm in "tech", and I don't anticipate it happening soon. People want something they can trust, and current solutions are nowhere near that.
To expand and to tell of a very recent story: For work I used ChatGPT to help me write a 500 line bash script to automate a bunch of stuff. It took me around 1 day versus the 5 it would have taken me if I would go the google/ddg/stack overflow route of slowly crawling through outdated content and SEO noise to find the signals.
It worked, completely!
Solely from that experience, I'm convinced that it's not just farts and kool-aid. To go one step further, even, I'd say that anybody who doesn't at least have a cursory awareness of AI is in fact the one in the "echo chamber", isolated from the possible.
It's not the do-all solution, of course, but in certain scenarios, it's quite obviously, trivially demonstrably, revolutionary.
My spouse works in early childhood education and they use ChatGPT routinely for low-value boilerplate stuff (social media posts that no one reads, etc).
A relative is in commercial real-estate management, and they also use ChatGPT routinely (in fact they started using it before I did).
So I don't think it's an echo chamber.
But AI tools are mainstream now. You can hear them mentioned on TV, written about online and in traditional media, see them discussed on social media in non-tech circles, etc.
So I'd say your friend is likely not well informed. AI tools are in the same arena as cryptocurrencies, perhaps slightly less widely known. Most informed people have heard about them, but not many outside of tech circles actually have any experience with them, and even fewer understand how they work.
This is the natural progression of technology[1]. We've seen it happen with computers, the internet, the web, cell phones, etc.
[1]: https://en.wikipedia.org/wiki/Technology_adoption_life_cycle
People are unavoidably ignorant of vast swathes of “probably relevant to them” things. No one has time, inclination, or actual ability to keep up with it all. That reinforces the above but I am again doubtful that it is an “echo chamber” per se.
Hacker News regulars, especially those engaged with comments, are operating within an echo chamber, sure.
Outside of a couple of 70+ folks and some bikers, everyone I know is at least conversationally aware of recent “AI” developments so it’s most likely a function of your particular uselessly small sample size. :)
By applying the label AI, they bring in the connotations of all the stories we have told ourselves about djinn, golems, robots, HAL-9000, Skynet, and (not frequently enough) the Sirius Cybernetics Corporation.
This will affect us all, in tech or outside tech.
>Overall, 18% of U.S. adults have heard a lot about ChatGPT, while 39% have heard a little and 42% have heard nothing at all.
>However, few U.S. adults have themselves used ChatGPT for any purpose. Just 14% of all U.S. adults say they have used it for entertainment, to learn something new, or for their work.
https://www.pewresearch.org/short-reads/2023/05/24/a-majorit...
I wrote up my thoughts about it last month https://kyledrake.com/writings/ai
Christopher Nolan just finished a film about J. Robert Oppenheimer, has presumably spent a lot of time thinking about nuclear weapons, and has similar (if lesser) concerns about AI: https://www.wired.com/story/christopher-nolan-oppenheimer-ai...
Two anecdotes:
1. I saw a posting for a prompt engineer which had virtually no requirements beyond some passing familiarity with LLMs, whose job it was to think up clever prompts and archive them in a library. Salary: $350k+.
2. I heard a real conversation between two highly trained technical folks about using an LLM to do a simple data transform from one wire format to another. Yes, let's use a cluster of GPUs and some faulty, hacked-together prompt to transform one well-written structured format into another at speeds that approach molasses. Nobody had a clue how much the runtime costs of this would be. The solution to it being slow? Add more clusters. -- absolute idiocy.
We're burning money on this stuff like it's mid-80s Japan spending money on slightly different variations of pocket calculators and American real estate. Meanwhile, we're exposing Kenyan workers to some of the worst filth humanity can produce in an effort to keep one of these AIs from producing child gore porn, because it's illegal to pay first-world people $2/hr to do the same job -- and there's not a psychotherapist to be found anywhere in the chain.
And then it's being pushed at the regular consumer as if it's some kind of knowledge oracle to replace the "horrors" of the search box -- one that supposedly:
a) isn't limited to knowing the state of the world as of its training cutoff two years ago
b) won't produce a worthless hallucinated answer that could send somebody off to take a poison for a cold
This shit is terrible and I just used it to give me advice on updating my resume a few weeks ago for a job in the field.
Fuck it.
Do you think this doesn't get 100% better in the next few years with the billions of dollars that are pouring in? Because a GPT-4 that is 100% better at generating useful content is a game changer. Now what if instead of a 2x improvement we see a 3,4,5 or even 10x improvement in the ability of the technology?
All the people comparing the hype around AI with crypto have good pattern matching without any judgement. I personally never got into crypto. I'm very into AI.
I can't stop shaking my head whenever I read any article on AI written by non-tech journalists (and even many by tech journalists). AI is vastly more dangerous and more urgent than climate change, than Russia and China, than literally any other hot topic today, and it's being treated like a combination of a science fiction story and an entertainment tool.
Also, the high school student I am mentoring has told me that he knows people feeding ChatGPT their essay-writing styles and asking it to write about other topics in the same style.
This is in the Midwest.
My friend is a librarian at a high school. She tells me teachers are worried about what ChatGPT means for take-home essays. (I'm guessing 2023 is the year these stop getting assigned)
Everyone in the world with internet now has access to a personal research assistant. This thing gives better medical advice than doctors!! Give it 3 years and let's see who hasn't heard of ChatGPT.
Not saying that it's still not overhyped, but maybe the AI skeptics among the tech community are in their own echo chamber too?
People who aren't in tech or aren't informed don't know about the latest development in tech, same reason you don't know about the latest development in healthcare or construction...
But, and I'll only speak for myself, GPT allowed me to add more features to my side project in 2 months than I did in 3 years... If you know how to use it then it becomes a great tool, the 16k context 3.5 or the 32k tokens context in GPT 4 (when necessary because expensive) are really good and can do a lot and infer context and assume things about your project etc.
You shouldn't be a contrarian just to be a contrarian. The same thing happened with blockchain: HN is still shitting on it in 2023, but I just transferred some crypto to my sister in another country instantly, where doing it through the usual financial system would involve a lot more headache and usually take a couple of days... and all of this on the Ethereum proof-of-stake blockchain, aka not as power-hungry as it was. This is objective value it adds.
Now the contrarians usually see posts written by 90 IQ "journalists" (bots) that say "AI will make humans irrelevant" or "blockchain will kill all banks", and they start responding to this but miss the actual value these things add. Of course ChatGPT will not build you a house; that's obvious, and only another bot would argue for or against it. But it can help you become a lot more efficient.
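For reference, the context-window choice mentioned above comes down to a model-name parameter in the chat completions API. A rough sketch of the request payload as of mid-2023 (the prompts are invented, and the actual HTTP call and authentication are omitted):

```python
# Build a chat-completions request payload. The context window is
# selected purely by model name, e.g. the 16k variant of gpt-3.5-turbo.
def build_payload(model, system_prompt, user_prompt, temperature=0.2):
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_payload(
    "gpt-3.5-turbo-16k",
    "You are a coding assistant for a hobby side project.",
    "Suggest a data model for user sessions.",
)
# This dict would be POSTed as JSON to the /v1/chat/completions endpoint.
```

Larger-context models cost more per token, which is why it makes sense to reserve the 32k GPT-4 variant for requests that actually need it.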
My friends: very, very much in tech, we have a channel in our discord where we just laugh at AI/ML/Musk/Crypto because it's so stupid. It can't even fucking add two numbers. It doesn't actually help with internal, brownfield projects with complex business logic and custom internal integrations. It can't summarize legal documents or answer technical questions without completely making crap up. It's just sparkling auto-complete.
My work: I've been put on an LLM project. The purpose? Dunno. The goal? Dunno. Our VPs are on the hype train and I'm along for the ride as long as my paycheck keeps clearing (and interviewing in the meantime). They're literally taking tech and trying to find a use for it. It's just the blockchain hype all over again.
But how active are they? Have they had 100M users briefly dally with it and ask a few banal questions to pick apart the answers, or have they got 100M dedicated repeat users who are heavily using it and keeping at it?
It will take longer than some "evangelists" anticipate, but unlike a lot of tech hype cycles (like web3), this one is different for 3 reasons:
a) It's firmly in the public mindspace, including mass media
b) It has already proven its usefulness and applicability to real-world problems
c) Politicians are taking it very seriously
There are people who don't use it, for various reasons, but it is increasingly hard to never have heard of it. A large part of that drive is due to simple accessibility: making ChatGPT a convenient and simple webapp that appeals to non-techies was a brilliant move, and Microsoft driving integration into its product suite will further adoption as well.
I don't think this is unusual. Adoption of iPhones and GUI-based computers was that way too. People who had those things were by definition more active on them, but not necessarily getting more done.
There could be some self selection going on here. Someone who was really fluent at the old way might not see such a boon from the new way.
I confess to being among the ambivalent, but I see it being used around me, chatted with people about it etc., so I'm not ignorant about it.
I've been in the process of moving so I haven't read through many of the responses, but it seems as though there was plenty of interaction between others which is cool!
I'll definitely review everything I can over the next week and follow up, I'm genuinely interested in understanding how others perceive this situation as well.
I talked to a blue-collar worker (has his own business) the other day and he was in some Telegram group that "leveraged AI for marketing". You ever wondered who the target audience for these thin bullshit AI marketer wrappers on top of ChatGPT are? Or the courses and "mastermind" groups on how to write marketing prompts? Apparently, blue collar workers with a small business who want to save on actual marketing and don't have the expertise to realize the downsides.
For instance, hallucinations. They're a function of LLMs. But they're also something that everyday people have to deal with from each other in the miasma of the post-truth world.
What an opportunity, within the artificial intelligence field, to think of these tools not just as automation and fact-finding, but as a way of teaching how to avoid hallucinations in your own life?
Personal finance
Grammar, rhetoric, logic, reason
Information literacy
Media literacy
The list goes on...
I don't think so. Anecdotally, I have had numerous non-tech friends ask me about AI. The first thing they always seem to ask is if it will harm people, to which I explain it's just a tool, more specifically a big-data chat bot that can be manipulated just like the social media algorithms but gives more confident and realistic-sounding answers by using language models to simulate human mimicry. The risk would be based on what entities tune it and the fact it can't or won't show its work. A tool will do what a tool can do. The intentions of the tool operators and users determine what the tool will be doing.
Some of my non-tech friends are trying to find ways the tool can make them money, and I have no doubt they will come up with clever uses for it until such time that the walled garden around these tools becomes too expensive for the average person to afford. After the masses have helped tune the tools, I suspect they will be exclusively for large corporations and government entities to rent. I predict that prior to the walls going up, the public interfaces may appear to have lower-quality results so that there isn't an uproar when the interface becomes cost-prohibitive. This is why I warn them not to base an entire business on this service but rather to augment something.
Sadly I think you've contradicted yourself a small bit here, while also stumbling across the exact reason AI mainstream adoption will go quicker than anticipated: it'll happen without mainstream awareness.
I work in a security role and we're currently trying to do risk assessments of AI adoption by tech workers, and integration of AI APIs/services into our products, and it's not even a case of "how bad will it be", it's much more "how bad is it". AI is being used a lot by individuals in the workplace with very little discourse or auditing of that usage.
Even outside of tech, someone somewhere in IT administration was already seeing enough daily AI usage to warrant the drafting of this official guidance, for quite a non-tech-industry audience: https://twitter.com/marcidale/status/1645972869393047552
The echo chamber in tech is people thinking they're in an echo chamber. I mean, come on: 90% of people in tech have never built an LLM in their lives. They just read some layman's article on it like everyone else and harp on how it's not AI and it's just a "token predictor" and other generic arguments like that.
What I'm saying is that MOST people in tech don't do AI, and therefore their perspective on LLMs is equivalent to someone completely outside of tech. Every typical software engineer thinks that taking a small course on ML or reading a little bit about ChatGPT puts them on a pedestal above the average non-tech worker... well, I hate to break it to you: the non-tech worker can look up those articles too.
They can also play with ChatGPT extensively, which doesn't require any tech skills, and they can literally see for themselves what a game changer the technology is.
There are regular articles on it in mainstream news. Most of the people I know have tried it. Some of them can't see how it will be useful, and some of them already find it useful. Some of them are techies and some aren't. Some techies are in the set that don't think it is useful.
In this case it seems kinda the opposite: the enterprises are first off the bat, with existing products getting easier to use by ingraining LLMs into the workflow.
ChatGPT, to a large extent, also seems to be moving search terms away from Google. I would love to see trends on google.com vs ChatGPT vs Bing over a period of time. Search definitely seems to be changing: we want summaries and content rather than a bunch of sponsored ads and websites where the context has to be derived from the search.
I am also interested in whether the same is happening with StackOverflow -> the search on SO -> ChatGPT as a code generator.
For mainstream adoption, the adoption will be driven by the everyday software and applications we use. We are still in the shovels and picks part of the world, yet to see native AI consumer applications emerge (barring ChatGPT).
Saw an old friend of mine recently. He's smart - majored in math - but spent many years running a landscaping company. He was showing me a spreadsheet he made to show musical scales and the position of notes on a guitar. You could select picking style and it would change the layout. Very complicated cell formulas - no code. I asked how he ever did all that. He said ChatGPT saved him countless hours. He praised it highly for the conversational style vs Google, which just gives links to generic information, and for the ability to refine your request to get better answers. It can also give formulas in some instances, but he tests them. He's taken up using it for other things as well.
To be honest though, he could have been a deeply technical person, but he's only interested in such things to the extent they can solve real-world problems. Not sure how many people fall in that category.
Technically, the answer is yes and no, but with the context here, no, we aren't in an echo chamber. People who are not in tech are routinely using ChatGPT and AI functionality. I know this because people who I know aren't in tech have told me how they are using it, and how many other people in their industry are using it.
The key here is what industry are they in, and can they make use of what ChatGPT provides.
And it's not just ChatGPT, but AI stuff in general.
The difference between this and crypto and web3 is that crypto and web3 required a LOT of explaining on how they could be useful. AI didn't: it simply said here is the input, and here is the output. That's it. Nobody had to suggest the value. The value was easily apparent. It was tangible. Something that was real.
Does this mean every industry is talking about AI? I'm sure the answer is no. But every industry has its own echo chamber. That's immaterial.
> maybe AI mainstream adoption will take longer than we anticipate.
Everything always takes longer than we anticipate. And it's not like AI today is the holy grail yet. It's still slow, expensive, with too many errors, and we are less than a year into the real hype. Legal systems are busy working out the corners and acceptance. We are still in the exploring phase, and this can go on for several years. There will be significant changes in the next years, but the big bang is still ahead of us, and until then it will be a locally fast, but globally slow, change.
So yes, your friend is unusual, especially as a millennial, where even more have heard of it and used it. When you're asking a statistics question, make sure you cite statistics instead of asking people to write comments from their toilet.
Also: "Just 14% of U.S. adults have tried ChatGPT". I think we need to take a step back and consider how much the bar has been raised. "Just" 14% of Americans? This is an incredibly high number.
[1] https://www.pewresearch.org/short-reads/2023/05/24/a-majorit...
My in-laws (a lawyer and a store owner) talk about ChatGPT, my parents (doctors) ask me about ChatGPT, my colleagues ask me about it re: its impact on film production, my siblings talk about it. Hell, the WGA is talking about ChatGPT as part of the strike.
Again, I obviously have my own little social bubble of somewhat like-minded people, but I feel like I’m talking about AI and ChatGPT every other day. I see articles in mainstream publications at least weekly. It’s basically as well known as Bitcoin - if not more so - as far as I can tell. Very surprising to me that this person still hasn’t heard about it!
I am not aware of that!
I am definitely inside the tech echo chamber but I do not use ChatGPT, will not do so, don't care about it, and frankly don't respect people who do. It is not a useful thing, and all the "AI" (it isn't) garbage that's popping up right now is equally useless.
I say this on HN often and people come back with "but it reached 1M users so fast!" I don't care. Millions of people smoke cigarettes, that doesn't prove they're useful.
I am going to sit this hype cycle out. Maybe it will eventually be useful for something. I am clearly not the one to try to imagine what that will be.
Interestingly, there's a big age component in this poll, and rather than being a smooth gradient like most tech, there's a very strong divide at 45yo—those younger than 45 are over 4x as likely to report they often use AI tools as those over 45.
https://docs.cdn.yougov.com/ifywkae5dt/econTabReport.pdf#pag...
We already had AI in our daily lives, such as translation, object recognition, voice assistant, spam filter, fraud detection systems and a lot more.
Most of these are old and boring and yet deeply embedded in our daily life.
The hype bubble is on the new LLM and generative art AI tech, which are backed by a lot of money, so a lot of hype is generated. Without hype, corporations can’t sell us on novelty but once their pockets are filled, they exit and these eventually cool down when something new comes along.
However, with each novelty we are usually left with some bits and pieces of useful residual tech that give us a little improvement in our daily life, so every hype cycle leaves us with something useful.
My own personal opinion is that depending on your particular area of expertise, you will have various levels of resolution of knowledge. We might be in an echo chamber on AI here, but many of us aren’t even real specialists on the subject. So, we fall prey to various cognitive biases [1]
[1] https://en.m.wikipedia.org/wiki/Curse_of_knowledge
Regarding your friend - a few technological revolutions occurred without the general population really knowing anything about how they work. New tools show up, people use them.
There's a lot of overhype, A LOT. And many startups are reinventing the wheel in a more expensive way just to ride the wave. But unlike crypto, LLMs are tools that can solve "dev/user"-level problems. Crypto was supposed to solve a "global financial system" problem, and since that's too hard to grasp, people used it for investment instead, because that's closer to their real lives and needs.
I think no. I think most people will use the products without ever hearing about GPT or LLMs. There will be products based on LLMs, but they will have nice, easy-to-use UIs, not much different from any other usual UI, just with more advanced capabilities. People will just use them without realizing what is behind those UIs.
I think, because there is no need for user to know what is behind UI adoption of new LLM based AI will be much faster than other new technologies that we seen so far.
An echo chamber would mean everyone having the same opinions, and that's not the case here, at least speaking to tech people broadly.
So this case where you are talking to people outside the bubble, and they don't realize what is going on, IS THE PROBLEM.
Outside the bubble, people will be seeing images, video, text, voice, that is all fake and manipulated, and THEY ARE NOT PAYING ATTENTION, they don't know it is fake. Inside the bubble we are noticing it, outside the bubble, large chunks of population don't know it is happening.
This tech is in its infancy, and the whole "prompt engineering" thing even though fun and all, will eventually be unnecessary, and _then_ it'll be widespread.
I've even spoken to friends IN the tech industry who are not using it, at least not to the extent a lot of people are using it.
And even I am not using it as much as I would like, but then I don't have enough free time.
All in all, just give it time. In my case, I hardly use Google these days.
I think we just witnessed something akin to the birth of personal computers which puts us around the ‘60-‘70s. It’ll take a while to (vastly) improve and eventually disseminate into society. I’d say a decade at least.
A lot of the surrounding “infrastructure” is still missing, like in the olden days. Techies are scarce and needed for other things. Energy and thus compute is expensive and society still has some important open questions left to answer about its own stability if it wants to survive this next wave of evolution.
Especially college students
The worst part is the current state though, where all the crypto hype bros have come out of the woodwork and are rebranding as AI hype bros now. My Linkedin feed is sadly now full of idiots calling themselves "AI Expert" and similar such terms with made up resumes trying to sell their 'expertise'.
… so my take is that it takes a bit of time to leverage these new tools popping into existence only a few months ago
Common reactions to ChatGPT and a lot of the fear are definitely overemphasized in the tech field, but that makes sense.
I do think the integrations available in many of the tools I encounter are probably a better signal of potential and impact. We are in year one. I suspect LLMs will continue to evolve and still have market legs in a decade.
Please share products that exist today that leverage ChatGPT and are useful. A product that could not exist without ChatGPT. A product with actual users and a productive use of ChatGPT.
ChatGPT is nothing more than a tech preview or fun experiment that will pave the way for the future. However, it's still confidently wrong, easily confused and incredibly filtered.
I'd love to see a product with tangible results.
I would say it will mostly depend not on the public hype around it but on the pace at which it will acquire capabilities. If it stays at GPT 4 level, it will be gradually used to automate various tasks over the years.
If on the other hand it keeps the progression GPT 2 -> 3 -> 4, then by the time we are at GPT 6, it will be able to easily replace a wide range of jobs.
It's going to be terrible in a few years time when you try to cancel a subscription or raise a complaint with a company and they throw an AI in front of you. Most call centres are designed to waste your time so you go away, we are going to see impressive new frontiers of absolute bullshit form.
AI, though, struck a nerve because people intuit a killer app. Or more precisely, AI could potentially be applied in many different places. The last time we had something that felt like this was probably the web browser, at least for me.
I think it is because we are now really close to what the non-technical mainstream public thinks computing should be — “do what I mean, not what I say”.
Anyway, call it AI, ML, statistics, the truth is: all these algorithms are helping us get rid of the boring stuff in tech.
e.g., I won't waste my time reading regex documentation to learn how to remove every white space after a combination of characters, because I'll completely forget it two days later. And the examples go on and on.
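For what it's worth, here's a minimal sketch of the kind of throwaway regex task that comment describes, using Python's `re` module. The "->" combination is a made-up example, since the comment doesn't name one:

```python
import re

# Hypothetical example: strip every run of whitespace that follows
# a given combination of characters (here "->").
def strip_space_after(text: str, combo: str) -> str:
    # Match the combo plus any trailing whitespace, keep only the combo.
    return re.sub(re.escape(combo) + r"\s+", combo, text)

print(strip_space_after("a ->   b ->  c", "->"))  # a ->b ->c
```

Which is exactly the sort of two-line answer that's faster to ask a model for than to re-learn every time.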
Maybe, but like, ChatGPT has broken outside the echo chamber. NPR and Marketplace have had plenty of stories on it. My parents probably haven't heard of it, but they're retired and veg out on soap operas and game shows, and avoid anything that sounds like news.
My sister in law (an orthodontist) had chatGPT draft a job ad when hiring an assistant.
My wife just graduated with a non-tech master's degree. At her graduation, the head of the program made humorous references to ChatGPT and cited it in a few topics in her speech.
To be honest it's a no-brainer. It's helpful tech with a low barrier to entry, and free. People will use it.
I give it some restrictions and a few preferences and ask it to suggest 20 things for me to cook. It's very helpful.
So, are LLMs as popular outside of HN and similar communities - no. Are people outside of them interested - definitely yes.
Some family have heard of ChatGPT and have misunderstood what it can do, so I’ve had fun showing them the limitations of it.
If I have to anthropomorphise the GPT-3.5 prompt, "it" is an average-intelligence intern who can only google up to the 31st of March 2021 and has several insecurities and character flaws. It is unable to follow a plan, needs guidance at every step, repeats itself constantly and sometimes makes things up as it goes.
If you can export a UI with defined behaviors from Figma to React / NextJS, and AI can figure out how to code it, then Front End Developers become a thing of the past.
If there's a similar modelling tool for Back End and AI can code the exported design, then BE devs become obsolete.
It is irrelevant if lay people have heard of it or not - we're still out of a job.
She uses AI more than me, who’s hunting for deals on gpu’s to train my own DRL model.
I was honestly shocked when I found out. I had figured that it was a tech echo chamber too.
For my own experience I use it every day, but the only thing it really solves is saving me time. I've seen lots of neat demos, hacks, agents etc and still haven't figured out what business problem LLMs solve besides time-saving.
Crypto was more mixed from my recollection, a bit less ivory tower as the tech isn't that complex, and very prominent people in the tech itself weren't so readily making huge claims.
I’m seeing tech leaders saying things that make me concerned, like engineers saying Bard was sentient.
Plenty of them.
And I showed it to a few and they were immediately impressed by it.
I also overheard people talking about it on the street.
So in my opinion: no.
And chatgpt has over 100 million users. People can use it in bing every day.
Adobe added it to their products too.
People already benefit from ai.
My company is also working on introducing more features it's just the time required to do it.
It has the potential to be a Google killer as it’s much more effective at sussing out specific information. But, the ideological guard rails OpenAI insist upon are super annoying. You can never fully trust that you’re getting a non-politically biased output.
Practically every mainstream comedy show I hear in the UK mentions ChatGPT; it's been constantly in the news over the last 6 months. Every high school child knows how to use it; hell, there was a South Park episode a couple of months ago about ChatGPT, written by ChatGPT.
This isn't niche stuff.
The actual developers range from enthusiastic to skeptical. Actual developers take a practical perspective IMO.
I talked to her about prompt engineering for a little, but not sure that I really got through what I was talking about.
Personally? I think we're sleep walking into social disaster and this tech is only going to make people's lives more difficult.
--"You know it's free, right?"
"Yes."
I found this inexplicable and infuriating. Perhaps someone can shed light on the psychology of such people?
My 86 yo father is far, far from Silicon Valley and his main activities are playing piano and talking with other residents of his retirement home, but he uses GPT-4 every day.
So far the recipes have been pretty good!
It’s fascinating. Though it probably isn’t intentional, AI service providers are already hooking kids early to have customers later.
I am bored of ChatGPT, AI this, AI that. I scarcely talk about ChatGPT with my peers. While I am certainly interested by AI and even dabbled with it in my MSc years, I found it tiresome to only read about it.
I kept thinking that the entire conversation sounded like two vocal transcription bots reading through r/AI and HN with a sprinkling of Twitter.
Yesterday I was having a drink with some friends, one of whom is like me in tech. The others in shipping, finance and sales. They all had experimented with ChatGPT and/or Midjourney.
Some people get it, some people don’t.
And that’s OK by me.
I have an AI startup with thousands of active paying monthly users and they are all marketing people and not highly technical
…like what you’re doing right now.
Its making headlines left and right, and businesses are all trying to figure out what this stuff does, but if you're not watching much news and not in the tech side of business you probably don't know or care?
The things it is good for are being obscured by promises that a waterfall of tasks performed entirely by AI will end up in something usable. If you tell me you have a neat programme that does exactly what you expect 99.4% of the time, yet you have no knowledge of why the 0.6% fail, you will have to demonstrate that it has some very desirable properties not offered by other solutions. It's a hammer looking for nuts, and what's nuts is people taking £100 mill to wrap some langchain... the burn rate to get noticed is going to be half your spend; the other half is subsidising loss-making operations on the idea that there is some pie to win here.
It's a novel database. Fun, creative, unpredictable in interesting ways, like my cousin.
Generative AI is a huge topic of discussion amongst virtually all creatives (painters, graphic artists, musicians, authors, journalists), in business (at all levels, largely through business process management, finance, and business intelligence), amongst media (journalism and entertainment), amongst governments (regulation, electoral politics, international relations, military / strategic risks, intelligence, competitiveness, impacts on general employment and social stability), amongst the technological boomer and doomer communities (impacts on future technological development will likely be profound, though agreement on sign bits differs), and more.
It's true that the general person on the street likely has little sense of the potential and risks, but that is virtually always the case with new technological developments. Potential impacts are always hard to see, and the discussion about these almost always tends toward various elites (technological, business, government, academic, religious). And that's the case now.
But the conversation and concern is not limited to the information technology elite, by any measure.
It’s now just another tool in life’s toolbox.
( No no no nooo noooOooo noooOooOoooo )
I also think adoption will depend on "industry." ChatGPT in education for example:
* BestColleges [1] (n=1000) found 43% of college students have used ChatGPT, and 22% have said they used it to help complete assignments or exams.
* Study.com [2] (n=1100) had some crazier numbers. "Over 89% of students have used ChatGPT to help with a homework assignment. ... 48% of students admitted to using ChatGPT for an at-home test or quiz, 53% had it write an essay, and 22% had it write an outline for a paper."
* Interestingly, in K-12, adoption appears to be higher by teachers than students [3]: "Within two months of its introduction, a 51% majority of teachers reported using ChatGPT, with 40% using it at least once a week, and 53% expecting to use it more this year. Just 22% of students said they use the technology on a weekly basis or more."
* In Japan, a recent survey [4] (n=4000) of undergraduate students conducted by Tohoku University showed 32.4% have used ChatGPT. This is compared to about 7% of office workers in Japan using ChatGPT on the job [5] (n=13814) in a recent poll by MM Research Institute.
Of course, education isn't the only industry with an outsized impact (although it's interesting in the sense that it's a good temperature check for the upcoming generation, especially if it's something that is so prevalent at colleges/universities).
But there are other industries as well. a16z Games has been doing a game development survey on Generative AI use in games, and their preliminary results [6] are in-line with my personal experience/view into the game industry - that it has already been completely re-aligning/disrupting the production pipeline: "We heard from 243 game studios - large and small - the results were astonishing ... 87% of studios use an AI tool or model in their studio TODAY. 99% of studios PLAN to use the technology in the future."
I think it's worth noting that, while the sources are dodgy, even if the 100M user number is accurate for ChatGPT, that's only ~2% of global internet users, and only around 1% of the world population. You could probably confidently say that 99% of the world population has not directly used a generative "AI" product yet (obviously anyone who's used a mobile phone or the internet has been interacting with ML for years). I do think this is going to change very rapidly, but no matter how fast it goes, it won't be overnight.
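As a back-of-envelope check on those percentages (the ~5.1B internet users and ~8B world population figures are my assumptions, rough 2023 estimates, not from the comment):

```python
# Sanity-check the "~2% of internet users" claim.
chatgpt_users = 100_000_000
internet_users = 5_100_000_000      # assumed rough 2023 figure
world_population = 8_000_000_000    # assumed rough 2023 figure

share_of_internet = chatgpt_users / internet_users
share_of_world = chatgpt_users / world_population

print(f"{share_of_internet:.1%} of internet users")   # ~2.0%
print(f"{share_of_world:.1%} of world population")    # ~1.2%
```

So "98% of internet users haven't touched it" is roughly right under these assumptions, and the world-population share lands a hair above 1%.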
[1] https://www.bestcolleges.com/research/college-students-ai-to...
[2] https://study.com/resources/perceptions-of-chatgpt-in-school...
[3] https://www.waltonfamilyfoundation.org/chatgpt-used-by-teach...
[4] https://www.asahi.com/ajw/articles/14927968
[5] https://asia.nikkei.com/Business/Technology/Use-ChatGPT-at-w...
[6] https://www.linkedin.com/posts/troykirwin_ai-x-game-developm...
Yes.
Here's how the adoption of this technology is going to go (this is the way all AI technology adoption has gone for 60 years):
1) Papers will come out showing how by creating a more effective way to leverage compute + data to make a system self-improving, performance at some task looks way better than previous AI systems, almost human-like. (This already happened: "Attention Is All You Need")
2) The first generally available implementations of the technology, in a pretty raw form, will be released. People will be completely amazed at how this machine can do something that was thought to be a hallmark of humans! And by just doing $SIMPLE_THING (search, token prediction), which isn't "really" "thinking"! (This will amaze some people but also form the basis of a lot of negative commentary) (Also already happened: ChatGPT, etc.)
3) There will be a huge influx of speculative investment capital into the space and a bunch of startups will appear to take advantage of this. At the same time, big old tech companies will start putting stickers on their existing products that say they're powered by LLMs. (Also already happened)
4) There will be a wave of press, first in academia, then in technology circles, then in the mainstream, about What This Means. "AGI" is just over the horizon, all human jobs are about to be gone, society totally transformed. (We are currently here at step 4)
5) After a while, the limits of the technology will start to become clear. A lot of the startups will figure out that they don't really have a business, but a few will be massively successful and either build real ongoing businesses that use LLMs to solve problems for people, or get acquired. It will turn out that LLMs are massively, massively useful for some work previously thought to be nearly impossible, or at least contingent on solving the general AI problem: something like intent extraction, Grammarly-type writing assistants, Intellisense on steroids, building natural chat interfaces to APIs in products like Siri or Alexa that understand "turn on the light" and "turn on the lights" mean the same thing. I have no idea what the things will actually be; if I was good at that sort of thing I'd be rich.
6) There will be a bunch of "LLMs are useless!" press. Because LLMs don't have Rosie-from-the-Jetsons-level human-like intelligence, they will be considered "a failure" at the general AI problem, once people get accustomed to whatever actually amazing things LLMs get used for, things that seemed "impossible" in 2021. Startups will fail. Enrollments in AI courses in school will drop, VCs will pull back from the category, AI in general (not just LLMs) will be considered a doomed investment category for a few years. This entire time, LLMs will be used every day by huge numbers of people to do super helpful things. But it will turn out that no one wants to see a movie where the screenplay is written by AI. The LLM won't be able to drive a car. All the media websites that are spending money to have LLMs write articles will find out that LLM-generated content is a completely terrible way to get people to come to your site, read some stuff and look at ads, with terrible economics, and these people will lose at least hundreds of millions of dollars, probably low billions, collectively.
7) At this trough point where LLMs have "failed" and AI as a sector is toxic to VCs, what LLMs do will somehow be thought of as 'not AI'. "It's just predicting the next token" or something will become the accepted common thinking that disqualifies it as 'Artificial Intelligence'. LLMs and LLM engineering will be considered useful and necessary, but they will be considered a part of mainstream software engineering and not really 'AI' per se. People will generally forget that whatever workaday things LLMs turn into a trivial service call or library function used to be massively difficult problems that people thought would require human-like general intelligence to solve (for instance, making an Alexa-like voice assistant that can tell 'hey can you kill the lights', 'yo shutoff the overhead light please?', 'alright shut the lights', 'close the light' all mean the same thing). This will happen really fast. https://xkcd.com/1425/
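To see why that "all these phrasings mean the same thing" problem was genuinely hard pre-LLM, here's a toy rule-based intent matcher of the kind voice assistants used to rely on. The keyword sets are purely illustrative assumptions:

```python
# A toy rule-based intent matcher: it needs an explicit keyword list
# for every phrasing, and still misses paraphrases it has never seen.
# (Hypothetical keyword sets, for illustration only.)
OFF_WORDS = {"kill", "shutoff", "shut", "close", "off"}
LIGHT_WORDS = {"light", "lights"}

def is_lights_off_intent(utterance: str) -> bool:
    # Crude tokenization: lowercase, strip question marks, split on spaces.
    words = set(utterance.lower().replace("?", "").split())
    # Fire the intent only if both an "off" word and a "light" word appear.
    return bool(words & OFF_WORDS) and bool(words & LIGHT_WORDS)

for phrase in ["hey can you kill the lights",
               "yo shutoff the overhead light please?",
               "alright shut the lights",
               "close the light"]:
    assert is_lights_off_intent(phrase)

# And here the rules fall over: a trivial paraphrase slips through.
assert not is_lights_off_intent("make it dark in here")
```

Every new paraphrase means another hand-written rule; an LLM collapses the whole problem into one prompt, which is exactly the kind of thing that will quietly stop counting as 'AI'.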
Sometimes when you see an amazing magic show, if you later learn how the trick was done, it seems a lot less 'magical'. Most magic tricks exploit weird human perceptual phenomena and, most of all, the magicians willingness to master incredibly tedious technique and do incredibly tedious work. Even though we 'know' this at some level when we see magicians perform, it's still deflating to learn the details. For some reason, AI technology is subject to the same phenomenon.
What do you think?
(meaning "yes")