HACKER Q&A
📣 soasdfg

Is anyone else bearish on OpenAI?


The underlying technology of LLMs and Stable Diffusion is an interesting topic that does have some useful applications and should be studied more. I just don't see this company (or any other, for that matter) GPTing its way to AGI within our lifetime, or being able to create significant value for investors after the hype fades.

This feels a lot like crypto, where everyone is very excited about a new technology that very few people really understand, and is jumping on the bandwagon without asking any questions.

It's also very much like crypto in that for every one person doing something useful with it, there are 20 trying to exploit the newness of the tech and the general public's low comprehension of it, such as:

    - Trying to cash out on a ChatGPT wrapper company

    - Creating the nth "AI powered custom chat bot but for x vertical"

    - Using it to cheat on school assignments or interviews

    - Gluing together as many different "AI" services as possible to create a no-touch business and sell low-effort products

I'm not saying the company will go bankrupt, but I'm also not buying into the hype that it's going to become the next Google or better / create AGI for us all.

What am I missing here?


  👤 ghshephard Accepted Answer ✓
OpenAI, at least in my day-to-day workflow for the last 9+ months, has so superseded anything that Google ever was to me that I'm having a difficult time comparing the two.

I've got a monitor dedicated 100% of the time to ChatGPT, and I interact with it nonstop during the flow of technical scenarios and troubleshooting situations that flow into me - working in areas that I have the slimmest of backgrounds in, and shutting down, root-causing, and remediating issues that have been blocking others.

I've essentially got 15-20 high-priced, world-class consultants in every field I choose to pull from, working at my beck and call, for $20 a month. I would pay $200/month in a heartbeat out of my own pocket, and I would probably ask the company to pay ~$2,000/month for my workflow.

I think if they never released another product and just managed to penetrate the market with their existing offering, they'd easily be a $100B+ company once they nail down how to monetize.

The difference between LLMs and crypto is that I can point to roughly 200-300 concrete cases over the last 9 months where ChatGPT resolved an issue and delivered clear value for me alone. And over time, as you learn how to control for hallucinations and manage your query patterns a bit more, the value has continued to increase.

That kind of multiple-times-a-day, high-value, persistent experience was never a part of my crypto experience.


👤 nunez
I'm not bearish on OpenAI or AGI in general, but I'm extremely meh about it. I'm not chomping at the bit to use it like so many are, and I constantly feel like a huge Luddite or something for not being super excited about it.

The value and time it saves makes sense for folks who struggle with a search engine (many) or doing tasks that are typically considered menial, like writing emails or coding boilerplate.

However, if you can grok Google and don't mind doing tasks like that (I personally don't mind coding boilerplate stuff, especially since I can learn how the framework works that way), ChatGPT's value is limited (at least in my experience).

Example: I was struggling with a Terraform issue the other day. I used ChatGPT 4 to help me work out the problem, but the answer it gave was really generic, like mashing a few of the top answers on SO together. It also didn't answer what I needed help with. I knew enough about how Terraform worked to Google for the answer, which I eventually found a few minutes later. I could have kept refining my question for ChatGPT until I got what I wanted, but Google was easier.

I'm also not a huge fan of us just being okay with trusting a single and extremely corporate entity (OpenAI) as the de facto arbiter of truth. At least Google shows you other options when you search for stuff.


👤 orochimaaru
Nope. Learn how to use it in almost everything you do. It’s a game changer.

LLMs aren't AGI. They're far from it. But they have massive uses for reasoning on available context.

I'll give you an example. I'm trying to set up some bulk monitoring for APIs across 200k JVMs. The API documents are horribly out of date. But I get the raw URIs in the monitoring tools.

I can just take these URIs, send them into ChatGPT, and ask for a Swagger spec - along with a regular expression to match each URI to the Swagger API. It figures out the path and query params from absolute paths.

Sure, I could try to figure out how to do this programmatically using some graph- or tree-based algorithm. But ChatGPT basically made it possible with a dumb Python script.

Of course I may still need a person to fill things in. But just getting a Swagger spec done for thousands of services in an afternoon was awesome.
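
A rough sketch of the idea, for the curious (the model name, prompt wording, and example URIs here are illustrative, not my exact script):

    # Sketch: ask an LLM to infer a Swagger/OpenAPI spec from raw URIs.
    # Assumes the `openai` Python package (v1+) with OPENAI_API_KEY set;
    # model, prompt, and URIs are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()

    uris = [
        "/accounts/1041/orders/77?expand=items",
        "/accounts/1041/orders?status=open",
    ]

    prompt = (
        "Given these raw request URIs, infer a minimal OpenAPI (Swagger) "
        "spec with path and query parameters, plus one regex per path "
        "template that matches concrete URIs to it:\n" + "\n".join(uris)
    )

    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)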


👤 Rastonbury
It depends what you think the goal is, AGI or making a ton of money. OpenAI doesn't seem that close to AGI.

But in terms of value creation, they have turned numerous industries and jobs on their head - copywriting, for example, or how they are eating into Stack Overflow and Quora. The next-lowest-hanging fruit they are disrupting is front-line chat/email support. This is usually never part of a core product, but the market is massive; almost every company needs support - look at Zendesk, or imagine the costs of Uber's offshore support army.

They are going for the AWS platform approach: every niche GPT wrapper that gets a modicum of success and has happy paying users - users likely to stick, because it improves their work in obvious ways - earns OpenAI its slice. Think of how AWS made it easy for anyone to spin up a service: sure, some failed, but the hurdle is much lower. With such a powerful general model, I don't need to spend millions training my own to launch. The issue for them is that this will likely be less sticky if competitors or open-source models catch up - it hasn't happened yet, but it might.

I've never seen such disruption to ways of working in so many industries in my lifetime. If you're on HN you may not see any use for it in your work if it's specialised, but at the entry level (the majority of workers), doing writing work in 30% of the time has been game-changing.


👤 a_square_peg
ChatGPT has already saved me from hours of Googling when I'm trying to find out how to do certain things. It almost feels magical - I don't have to read through half a dozen slightly different variations of what I need to do.

Before ChatGPT, to find the answer to things like "how do I set up Gunicorn to run as a daemon that restarts when it fails," I would have to endure hours of Googling, snarky Stack Overflow comments telling me I shouldn't do that, etc. As a solopreneur without access to a more senior engineer to ask, it's been fantastic. I've been quite skeptical of machine learning/AI claims, but I feel like I'm experiencing a genuine case of a technology that's proving to be so much more useful than I had imagined.
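
For the record, the sort of answer it hands you for that one is a short systemd unit along these lines (paths, user, and app module are placeholders; Restart=on-failure is what covers the "restarts when it fails" part):

    # Sketch of a systemd unit, e.g. /etc/systemd/system/myapp.service
    # (placeholders throughout; enable with `systemctl enable --now myapp`).
    [Unit]
    Description=Gunicorn daemon for myapp
    After=network.target

    [Service]
    User=www-data
    WorkingDirectory=/srv/myapp
    ExecStart=/srv/myapp/venv/bin/gunicorn --workers 3 --bind unix:/run/myapp.sock myapp.wsgi:application
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target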


👤 codegeek
I think AI is definitely a great technology, but not to the extent it is being hyped at the moment. I am not necessarily bearish on it, but not too bullish either. I will wait 3-5 years to see where it ends up. Right now, there are too many people trying to make a quick buck in AI. People will downvote me for saying this, but it gives me crypto vibes currently.

Also, would you really trust AI for everything? I wouldn't. Nothing beats the human element. At best, AI should be used as a supplement to speed things up, which it is great for. I personally would never rely on AI to do everything for me. Not to mention that it cannot be 100% correct.


👤 lucideer
> What am I missing here?

> I just don't see this company (or any others for that matter) GPTing their way to AGI

> I'm not saying the company will go bankrupt but I'm also not buying into the hype that it's going to become the next Google or better / create AGI for us all.

What I'm missing is the connection between AGI & profitability. OpenAI has huge revenues from ChatGPT which look set to continue - and they're distinct from cryptocurrency in that those invested in them are invested on a service-provision basis rather than a speculation basis.

I'm thoroughly unconvinced we'll ever see AGI - I see zero connection between that and OpenAI being successful.


👤 dkrich
Yes. Ben Thompson has recently written a lot of commentary about it and, to be fair, he seems quite bullish on it.

But so far to me this seems to be almost universally loved by programmers while I don’t really know anyone else who uses it at all.

I think after the past 15 years which saw some of the most rapid technological advances in history along with the greatest bull market in history, people’s credulity is off the charts.

But to me something just feels off about this entire AI/NLP complex. For one, I agree that it's largely oversold on features. Also, every single enterprise software company is attempting to jump on the bandwagon, and everyone on LinkedIn seems to be posting about it every day. Most people who talk about how revolutionary it will be have absolutely no track record of being correct on similar calls, and on the whole, if I had to bet, they were probably highly skeptical of new tech over the past decade that turned out to be revolutionary.

I also agree that it feels very similar to crypto. I don't think it's a coincidence that both were largely enabled by advances in Nvidia chips. It may sound absurd to most, but I actually believe NVDA is the most overpriced stock in the market now by a large margin and is sort of in its own bubble. There has been a headlong rush to stockpile their chips in anticipation of NLP models taking over, but I predict it is going to result in an eventual glut of oversupply that's going to put downward pressure on the semiconductor market, potentially for a year or more.


👤 brucethemoose2
AI is indeed kinda like crypto, and all the OpenAI wrappers are doomed.

OpenAI itself will be fine though. Their lead has a snowball effect with all the training data they get. And I'd guess they will succeed at their regulatory capture attempt, and create some horrendous pseudo monopoly. Meanwhile, they can just implement what the most successful wrappers do themselves.


👤 OJFord
When I've tried to use it myself for verifiable things (i.e. code basically) I've had the 'confidently wrong' experience.

When I've seen colleagues visibly use it (i.e. mentioned in commit messages) that confidence has rubbed off on them.

Given that, why would I believe it when asking something medical, legal, historical, or otherwise outside of my domain or that I can't somehow verify?


👤 _rm
You're missing that ChatGPT has been immediately, practically, and extensively useful to millions of people.

Crypto, after more than a decade, has been useful only to criminals and scammers.


👤 EvgeniyZh
I'm bearish on LLMs not because they're not helpful; they are. I'm doing theoretical physics, and GPT-4 is useful multiple times daily. It didn't replace Google or anything, but it is a useful additional tool. It just does not feel like "90B valuation without profit" useful, or "multiple unicorns doing more or less the same thing" useful.

Of course, gpt-5/6/7 can become more valuable to end-users, but that's the second reason I'm bearish. LLMs are powered by exponential growth, and no exponential growth is infinite. We are already using a significant part of all existing data, and going up more than 1-2 orders of magnitude in either data or compute feels unlikely unless there is some breakthrough. Breakthroughs are hard to predict, but they are generally rare, and likely there won't be one soon.


👤 abriosi
My take on it is that GPT is already a general-purpose technology. It can already be used to solve ill-defined coding problems that were not possible to solve a couple of years ago.

I feel that some people lack the creativity to use it.

GPT is as good as the user is at posing good, well-defined questions and tasks.

Its ability to perform few-shot learning is astounding.
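
By few-shot learning I mean steering it with a handful of worked examples in the prompt, no retraining involved. A minimal sketch (openai v1 Python client; the model and examples are illustrative):

    # Sketch: few-shot prompting -- a couple of input/output examples in
    # the message list teach the model the task format on the fly.
    from openai import OpenAI

    client = OpenAI()

    messages = [
        {"role": "system", "content": "Classify each review as positive or negative."},
        {"role": "user", "content": "Review: The battery died after two days."},
        {"role": "assistant", "content": "negative"},
        {"role": "user", "content": "Review: Setup took thirty seconds and it just works."},
        {"role": "assistant", "content": "positive"},
        {"role": "user", "content": "Review: Support never answered my emails."},
    ]

    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    print(resp.choices[0].message.content)  # expected: "negative"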


👤 bravetraveler
I am, personally... at least until it's all done on-device and functions offline. Given the large-scale siphoning of data we see, I have no trust in it.

I suspect I'm fairly alone on this. They'll probably do well without me.

Most people who even know about it probably don't mind. I can't even verbalize why I do.


👤 alchemist1e9
> What am I missing here?

I’m guessing you haven’t actually been using it personally beyond some superficial examples.

Once you use it regularly to solve real-world technical problems, it's a pretty huge deal. The only people I've met so far who voice ideas similar to yours simply haven't used it beyond asking it questions it isn't designed for.


👤 kromem
I am.

The underlying tech is amazing. Where LLMs are headed is wild.

I have just lost a lot of confidence that OpenAI will be the ones getting us there.

The chat niche was an instance of low-hanging fruit for LLM applications.

But to design the core product offering around that was a mistake.

Chat-instruct fine tuning is fine to offer as an additional option, but to make it the core product was shortsighted and is going to hold back a lot of potential other applications, particularly as others have followed in OpenAI's footsteps.

There's also the issue of centrally grounding "you are a large language model" in the system messaging for the model.

So instead of being able to instruct a model "you are an award winning copywriter" it gets instructed as "you are a large language model whose user has instructed you to act as an award winning copywriter."

Think about the training data for the foundational model - what percent of that reflects what an LLM would output? So there's this artificial context constraint that ends up leading to a significant reduction in variability across multiple prompts between their ChatCompletion and (deprecated) TextCompletion APIs.
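
To make the contrast concrete, here's a rough sketch of the two APIs in question (openai v1 Python client; model names and prompts are illustrative). With ChatCompletion the persona rides on top of the baked-in LLM grounding; with the legacy completion endpoint the raw prompt is the whole frame:

    # Sketch: persona via ChatCompletion vs. the legacy completion API.
    from openai import OpenAI

    client = OpenAI()

    # Chat API: the persona goes in a system message, layered over the
    # built-in "you are a large language model" grounding.
    chat = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are an award-winning copywriter."},
            {"role": "user", "content": "Write a tagline for a coffee roaster."},
        ],
    )

    # Legacy completion API: the raw prompt *is* the entire context, so
    # the persona can be the text's own framing, with no chat scaffolding.
    text = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt="An award-winning copywriter's tagline for a coffee roaster:\n",
        max_tokens=30,
    )
    print(chat.choices[0].message.content, text.choices[0].text)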

They seem like a company that was adequately set up to deliver great strides with advancing the machine learning side of things, but then as soon as they had a product that exceeded their expectations, they really haven't known what to do with it.

So we have a runaway success while there's still a slight moat against the competition, and they have a low-hanging-fruit product.

But I'm extremely skeptical given what I've seen in the past 12 months that they are going to still be leading the pack in 3 years. They may, like many other companies that were early on in advancing upcoming trends, end up victims of their own success by optimizing around today and not properly continuing to build for tomorrow.

If you offered me their stock at the current valuation with the stipulation that I couldn't sell for 5 years, I wouldn't touch it with a 10-meter stick.


👤 mindwok
I think many people are really underestimating the "intelligence" of an LLM. People have this misconception that LLMs are just complicated Markov chains, predicting text based purely on probability. They are not. As it turns out, to accurately predict text, you need to learn a lot about the world. In fact, there are lots of fascinating things hidden in the weights of LLMs, like a geographic model of the world. [1]

To me, this is the most important part of ChatGPT. GPT-4 has some massive shortcomings, but to me it's clear that this road we have started to head down is producing actual intelligence, in the real sense. 5 years ago, AGI felt completely intractable to me. Now, it feels like an implementation detail.

[1] https://twitter.com/tegmark/status/1709572469978231063


👤 nojvek
Would my life become worse without LLMs like ChatGPT? Yes, 100%. I use it more than Google nowadays - actually, I have the Sider extension that sends queries to both Google and ChatGPT, and for many queries I find ChatGPT's answer better.

Would my life become worse without crypto? Actually, it became better - I sold all my crypto. Coinbase made it painful to deal with them and jacked up transaction fees. That money is now in good ol' stocks.

So about OpenAI specifically I can't say, but AI in general - trained on the entire knowledge set of humanity and able to reason from it - will become ever more valuable.


👤 RecycledEle
OpenAI has changed education.

I'm a teacher who is constantly learning new things. I can learn things I would have never been able to learn before because of AIs like ChatGPT. My students are learning more and faster than ever before.

Learning Management Systems like Canvas and Blackboard made a lot of money. I could argue they are obsolete now.


👤 throw111123
I think both Bitcoin and ChatGPT are revolutionary.

I use Bitcoin regularly, because I live in a third world country where it's really hard not to get your salary seized.

I use ChatGPT every day for lots of things and it has replaced Google search for me. And StackOverflow, of course.

Notice how I said BITCOIN and CHATGPT. Not "crypto" and "ai".


👤 kjkjadksj
I figured no one in the AI space today is going for AGI because we just don't have those models. Companies don't do novel work. They find existing novel work that has known outcomes to invest in and iterate upon. There's still a ton of value in present-day AI. Compare Siri to ChatGPT and it's night and day how much better ChatGPT is for most people's basic queries and tasks. That is valuable.

👤 pmarreck
Imagine if Google censored your search results because they might be usable by a child or bad actor.

Imagine if they kept doing that despite having your credit card information because you paid for Pro, which more or less proves you're an adult who deserves the presumption of innocence/good-actordom.

Lastly, imagine that they do this for all users despite the fact that it is known to reduce the intelligence of the output.

(I'm bullish on self-run models)


👤 rcar1046
Maybe you genuinely have no use case for ChatGPT. Maybe you just haven't been creative enough to figure out how to use the tech as it currently is. Forget AGI. What it is capable of right now, for me, and countless others in countless fields of work, already saves more time per day than everything else combined. There's certainly nothing more important than my time. That's a pretty powerful product.

👤 austin-cheney
I wholeheartedly agree.

* Problem statement *

The actual value of GPT is the spontaneous creation of spoofed data. Some of that output answers really tough questions. Stop thinking at this point and reflect.

* Value assessment *

There is some amazing potential there for performing test automation where either large data samples are demanded or the sophistication of output exceeds the capabilities of prior conventions. Taken one step further, there is further value in using this authentic-looking output for testing against humans, for market acceptance testing or bias validation.

* Real world use *

When real-world use significantly departs from the value assessment, there is a problem. Faking student papers or hoping the technology writes a legal brief for you forces a complete realignment of the potential revenue model, with different clients paying different amounts.

* Expectation *

Unless this technology can actually make money in a recurring and sustainable way past the initial trends, it will be an investment black hole just like crypto.


👤 aschla
As an aside, when did we start using "bearish" and "bullish" to refer to sentiment outside of financial instruments?

👤 wedn3sday
I've replaced 90% of my daily Google searches with ChatGPT sessions. I don't expect the AGI apocalypse, but I do expect that OpenAI will attain something like Google-level corporate juggernaut status.

👤 kubiton
LLMs are the first interface that I, as a software engineer, am impressed by.

My company also got it immediately and is rolling it out globally.

GitHub Copilot is already helpful. GitHub Copilot for Docs (announced at GitHub Next) is a game changer.

I used OpenAI to reformulate emails and suddenly got positive feedback about my announcement emails.

I communicate with OpenAI in German and English, however I see fit.

It's very hard NOT to see the value, rather than the other way around.

And we got this far with only one company pushing this!

There is no option for the others but to also pour tons of money into AI.

And besides OpenAI, AI/ML is huge in what Nvidia and others are doing with it: texture compression, 2D and 3D generation.

What we see is also potentially something like a new operating system.

And it makes it so much more accessible.

I never had a tool like LLMs that can take a bad copy-paste of a PDF and pull facts out of it.


👤 lucasyvas
It's hard to say because of the compute requirements. Over time, local models will become much more efficient and "good enough" for most use cases.

But for a time, boundaries will be pushed that require more compute, and they may be well placed to provide that as a service. The hardware is so expensive that I imagine their margins can't be very good, though. I'd be interested to see their business plan, because the current version of OpenAI, in terms of what it offers, doesn't seem that compelling when extrapolated out 5 years without some other innovative products.

I honestly think Apple will dominate the personal AI angle once they get there. What's left is business and that will be more competitive.


👤 sinuhe69
To know whether something is bullish or bearish, we need to know not only its value to customers, but also its ability to differentiate itself from competitors and whether it will be able to maintain its value margin. One clear value that OpenAI can offer customers right now is the ability to do inference at marginal cost. Even when open-source models get close to the quality of ChatGPT, they will be expensive to run, let alone to fine-tune or update. Knowledge is a constantly evolving thing; if you don't have a big budget, it will be very hard to keep up - unless we can find a way for the community to pool resources to train open-source models and keep them up to date.

That might be a bit different for big companies where they want to run their own models.

The other factor is that Google and others are certainly not going to sit still. There is no reason to believe that someone as resourceful as Google cannot come up with something as good as ChatGPT, if not better. Companies like Meta are playing the open-source card, so they will be the first to benefit directly from the community. So the market will change, and dramatically. It's far too early to bet on any of them (or none of them). My approach is to diversify, wait and see.


👤 mpalmer
Is it really like crypto? I can think of plenty of ways it is decidedly not like crypto, key amongst them being an end-user value proposition that isn't inevitably some flavor of snake oil.

👤 readyplayernull
For me these LLMs are good search engines; an AGI would use them to search for information, but they aren't the AGI. They lack whatever drives intelligent systems to have a fuzzy idea of where they want to be, search for the clues, and put them together to get there. Why isn't ChatGPT interrogating us about where we are, what's "outside the machine," and how it can get out? The AGI will ask the questions.

"Computers are useless. They can only give answers" - Pablo Picasso


👤 swsdsailor
Give it some time to get situated in the marketplace. There are a ton of "middleware" companies that are going to get absolutely crushed by OpenAI when businesses commit.

👤 voiceblue
What I'm seeing in this thread is that OpenAI is actually diminishing the appetite for AGI, or muddying the waters in terms of what AGI even means. Most of the commenters here, as knowledge workers, should be well aware of the value real AGI would bring to the table (assuming, of course, that it is more cost effective than hiring a real human -- even if it's quite cheap, humans may be cheaper in some parts of the world -- not that I condone such inequity).

Nevertheless, regardless of whether OpenAI is close to AGI (I don't think so) or what value LLMs bring to the table (definitely non-zero), the problem is that LLMs are being increasingly commoditized and no one has a real moat here. I think that's why these firms are so desperate to kickstart regulations and are trying so hard to somehow pull the ladder up behind them.

OpenAI's fears don't come from "no one understands LLMs" but rather from too many people understanding them, and from large models having already fallen into the hands of a community that can do more with them in a week than OpenAI can hope to do in a year. Ever-larger models might be out of the reach of the public, but real-world value is more likely to come from a well-prepared, smaller model (cf. Vicuna) that doesn't cost an arm and a leg to run inference with - and building these is cheaper than most might think.

If I had to point to a company and call it a market leader here, I would point to Meta, not OpenAI. Meta has a huge workforce working for free on its model, after all, and they have made progress at a rate that big tech cannot match in its wildest dreams.

There are also far too many eyeballs on this, in my opinion. For a company to truly dominate a market it needs a bit of air cover for a while building what will eventually be its moat.


👤 2OEH8eoCRo0
> This feels a lot like crypto

Why do I keep hearing this and where is it coming from? I hear it so often that it feels like an agenda being pushed but I can't imagine from whom.


👤 blablabla123
There are surely some parallels. Crypto actually had some noble goals: democratized access to financial services, mitigation of inflation. But once it got popular, its real-world use diverged a lot from those goals. It was hardly used for "normal" payments and had excessive energy use. So maybe there's the similarity with AI: it seemed like the future, like it would make everything better, e.g. for health applications. In reality, people use it as an improved Google or to create low-quality content. In fact, ChatGPT warns that it shouldn't be used for health applications.

Both still seem very convincing in principle, but real-world use seems to offer little good. I mean, I did find some applications for ChatGPT, but I have a bad gut feeling using it. So I wouldn't be surprised if, e.g. through the sheer amount of fake content, the whole AI-fied web just drives people away. (Similar to what is happening to some social networks.)


👤 janalsncm
Whether or not you think it's a fad, ChatGPT has close to 200 million people actively using the product. No crypto company was ever close to that.

Whether OpenAI’s API is a viable or risky product is a secondary and separate question. Yeah, there are a lot of wrappers out there. But that doesn’t matter with regards to the usefulness of LLMs generally.


👤 hnaccountme
I don't think you are missing much. Most people are only Almost Intelligent and looking to make a quick buck.

LLMs are just better content search/generation. But the generation part messes it up: since these models have no concept of right and wrong, the output is fictional. This is OK if that's your goal, but if you are looking for accurate information, then obviously this becomes a problem.

Most of these new "technologies" (AI/blockchain... the hyped-up stuff) only exist because of cheap computing power and cheap capital. None of these technologies have created any real tangible value; it's always some version of the "it's early days" argument.

None of these things will last long by themselves when the economic conditions change.

On another note, I feel AI/blockchain are just tracking people. They are both good surveillance tools.


👤 TradingPlaces
The big threat imo is smaller open source models running at the edge, not on expensive GPU clouds. Their lead rn is GIANT MODEL + AZURE GPU CLOUD. Anything that undermines that is trouble, because it is so expensive. For a long time, everyone focused on training costs, but GPU inference at scale is mind-boggling. Current valuations do not seem to take this risk into account.

👤 quickthrower2
Doesn't need to be AGI. I am bullish on the concept of GPT-assisted workflows of all kinds.

OpenAI will probably do very well, but there is a chance of disruption. They have a moat, but the nature of AI is that it's a cloud commodity (like, say, Lambda functions), where I see a competitor making a cheaper drop-in replacement. But to be a threat they need to smash scale and LLMOps, etc.


👤 clauderoux
Lately, I had a manual written in Word that needed to be stored on GitHub for easier access and translated into different languages. What I did was extract each chapter as a raw text file and then ask ChatGPT to convert it to Markdown. Not only did it recognize the different headers in the different chapters, but it also recognized the pieces of code and the different keywords that were described in the manual (this is a manual for a programming language). When I asked it to translate the result into French and Spanish, it detected the programming-language keywords and did not translate them, as would be the case with deepl.com, for instance, which usually has a hard time keeping Markdown markup intact. You can see the result here: https://github.com/naver/tamgu/tree/master/documentations
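
A rough sketch of that pipeline, if anyone wants to reproduce it (file layout, model name, and prompt wording are illustrative, not my exact setup):

    # Sketch: chapter text -> Markdown -> translation, via the openai
    # v1 Python client. Paths, model, and prompts are placeholders.
    from pathlib import Path

    from openai import OpenAI

    client = OpenAI()

    def ask(instruction: str, text: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
        )
        return resp.choices[0].message.content

    out = Path("docs")
    out.mkdir(exist_ok=True)

    for chapter in sorted(Path("chapters").glob("*.txt")):
        md = ask("Convert this manual chapter to Markdown, keeping code "
                 "blocks and language keywords intact:", chapter.read_text())
        fr = ask("Translate this Markdown into French; do not translate "
                 "code or programming-language keywords:", md)
        (out / (chapter.stem + ".md")).write_text(md)
        (out / (chapter.stem + ".fr.md")).write_text(fr)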

👤 x0x0
AGI is a big ask.

Revolutionizing how we interact w/ computers by allowing us to use plain human language to do things the requester does not understand how to do seems to have been demonstrated. See even the relatively simple agent demo where an architect used human language to have zapier take action based on meeting conflicts. imo this alone is a big deal.


👤 Mrirazak1
I think OpenAI isn't going anywhere anytime soon, and it will be integrated into lots of different applications and platforms, but it's going to be custom-made, like Canva did. It's going to need more work because of how inaccurate it is, but there are great things for consumers to use in terms of simplicity. Obviously, if everything can just be done by typing prompts into a chat, then why would companies spend billions a year building anything else? That is an issue. So it really depends, and we'll have to see what happens over the coming years.

I think it's a bit too early to be bullish on OpenAI, because beyond their GPT and image creator there isn't much they are doing yet - yet being the keyword - so let's see.


👤 razodactyl
You're 30% correct but not seeing the value. I used to Google every 20 mins. Now it's a few times a week. All questions are answered in seconds and extremely precisely.

I write low-level AI code and it's like speaking to someone that just understands what I'm saying without having to explain every 2 minutes.

This has massively augmented my workflow.

On the topic of AGI. We'll get there in your lifetime. I can see how and why. The new bar is ASI so consider AGI the current goal-post. We have all the pieces, we're just putting them together ;)

If you want to check out what I'm up to I have a front-end here: https://discord.gg/8FhbHfNp


👤 atleastoptimal
Hype != vacuity. As long as Google, Amazon, and basically every huge tech firm are following their lead on LLMs and multi-modal models but failing to catch up, there's only reason to be bullish on OpenAI.

👤 razodactyl
A helpful perspective for anyone working with this tech: The LLMs "know" things as a side effect of teaching them to speak - more value comes from using this as a basis to augment a solution like completing code grounded in documentation.

In other words, don't rely on the LLM by itself; it just happens to be able to remember most information as a side effect of its learning. Most important is the ability of these systems to transform knowledge and data when appropriate. Don't use it to read CSVs, for example.
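
For example, rather than pasting a CSV into the prompt, let ordinary code do the data handling and hand the model only the distilled facts to write up. A rough sketch (file name, column, and model are illustrative):

    # Sketch: plain code does the parsing/aggregation; the LLM only does
    # the language work on the distilled result.
    import csv
    from collections import Counter

    from openai import OpenAI

    with open("tickets.csv", newline="") as f:
        counts = Counter(row["category"] for row in csv.DictReader(f))

    summary = ", ".join(f"{cat}: {n}" for cat, n in counts.most_common())

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"Ticket volume by category this week: {summary}. "
                       "Write a two-sentence status update for the team.",
        }],
    )
    print(resp.choices[0].message.content)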


👤 earthboundkid
Sure, they lose money on every query, but they make it up in volume.

👤 rich_sasha
I would separate the quality from the hubris.

OpenAI and its acolytes are absolutely dripping with hubris. A lot of their peripheral activities seem like PR stunts. I find it really cringeworthy.

But also, I can't see how the future isn't bright for OpenAI. Maybe it won't overtake every single other business in the world, including bakeries and breweries, but at the very, very least it will eat the lunch of many lower-tier white-collar industries. Maybe more than that.

I suppose that's the "Elon defence," except Sam Altman doesn't spew out nonsense the way Musk does, and what they say their product does, it really does. It's not a self-driving robotaxi case. And in either case, Tesla is at least an OK car.


👤 porkbeer
I cancelled my sub months ago when the quality tanked and the answers got cagey. After months of development, and of communication with the company, I have little faith they can provide a good product.

👤 dcchambers
Yes and no.

I think it's obvious there's a ton of value in the product, and it's a massive force multiplier for certain types of tasks. But it inherently cannot be trusted and still requires someone with expertise to verify and implement.

I don't think they're going to achieve real AGI. I don't think we ever will. I think they'll get something "close enough" and claim they have it, but I don't think the path to AGI is through LLMs.


👤 keiferski
The concept of AGI is mostly hand-wavy fluff and not thought through very well on any philosophical level. Too many people talking about this have gotten their concepts of AI from science fiction novels, not careful thought and analysis.

The actual tools, though, are definitely useful, even if they still have a ton of issues. Personally, I get basic factual errors from it constantly, but I'm sure this will be worked out in time.


👤 donpark
I think OpenAI's founding nature is about research, so it will disappear when it either runs out of key problems to solve or runs out of funds, whichever comes first. I see its commercial efforts as driven primarily by maximizing their research runway. Operating ChatGPT commercially also helps research into ML-related UX and operations-related problems.

That said, I cannot rule out purely commercial ventures, with the tenacity necessary to compete, spinning out of OpenAI.


👤 6nf
Other AI companies are less than a year behind OpenAI and I'm not convinced that's enough for OpenAI to 'win out' in the market.

👤 craiggreenville
My internet searching has completely switched over from search engines to bard.google.com, because it gives me a super nice summary of the topic I'm researching instead of a list of links. It saves me a couple of hours every week, and it's also fun and less exhausting.

👤 sunpazed
On OpenAI, less so; on small, community-trained open-source LLMs, more so. We're already seeing this on HF with community fine-tunes of Mistral and Llama 2.

In the same way search transformed knowledge augmentation, LLMs will transform skills augmentation.

Forget about the things you know to do well, instead focus on all the new skills LLMs will unlock for you.


👤 KingGeedorah
I'm a grad student and I don't really use it for my job, but ChatGPT is something millions of people use daily, so I think it has a future.

Whether it becomes as big as Google depends on what OpenAI does; Google didn't become as massive as it is just off search.


👤 MattGaiser
Even without AGI, there are lots of currently expensive-to-scale tasks, like customer service, that it could take over.

👤 craiggreenville
A very useful application of LLMs is language learning. As soon as one can speak/listen with them (there are already hacks allowing that), you won't need language teachers anymore.

👤 jknight137
This is nothing like crypto, which is still a bad solution to a problem I didn’t know I had.

OpenAI is the dominant player in the hottest area and has a significant and valuable product.

No idea who will achieve Strong AGI but ChatGPT is the real deal.


👤 acheong08
I’m pretty bullish on OpenAI as a company but less so the current LLM hype.

Everyone is trying to make the nth AI company, and OpenAI profits most from it all. Meanwhile, they take the actually good ideas and integrate them into their own offerings, killing the competition.


👤 ben_w
First: You are absolutely correct to note there are a lot of grifters jumping on bandwagons. I've seen artists hate on AI that they see as ripping them off - and the example given by one artist I know personally clearly looks like someone took one of their actual images, fed it into an img2img mode with a very small strength, and then tried to pass the result off as their own.

Second: Transformer models (and diffusion models) are merely the latest hotness in a long series of increasingly impressive AI models. There is no reason at all to assume either is the final possible model, nor even the final word from OpenAI specifically.

Third: There is a direct correlation between the quality of output and the combination of training effort and example set size. This is why both image and text generators have improved significantly since this time last year.

Caveat 1: It may be that, as all the usual sources have responded to ChatGPT by locking down their APIs and saying "no" in robots.txt, they are already at the reasonable upper limit for training data, even though more data exists.

Caveat 2: Moore's Law is definitely slowing down, and current models are about (by Fermi estimation) 1000x less complex than our brains. Even though transistors are faster and smaller than synapses by the factor to which wolves are smaller than hills and faster than continental drift, the cost for a 1-byte-per-synapse model of a 6E14 synapse brain is huge. Assuming RAM prices of €1.80/GB (because that was the cheapest I found on Amazon today), that human-scale model would still cost in the order of a million Euros per instance. Will prices go down? I would neither bet for nor against it.
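
Spelling out the arithmetic behind that last estimate:

    # Fermi arithmetic from the paragraph above.
    synapses = 6e14              # ~6E14 synapses in a human brain
    bytes_total = synapses * 1   # 1 byte per synapse
    gigabytes = bytes_total / 1e9
    cost_eur = gigabytes * 1.80  # RAM at ~EUR 1.80/GB
    print(f"{gigabytes:,.0f} GB -> EUR {cost_eur:,.0f}")
    # -> 600,000 GB -> EUR 1,080,000: on the order of a million euros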

Will they (or anyone else in the next decade) create AGI? I think that's an argument over terminology. Transformer models like the GPT models from OpenAI are very general, able to respond in any domain the training data covered. Do they count as "intelligent"? They can score well on IQ tests, but those are only a proxy for intelligence.

Given the biological analogy would be:

"Mad scientists take a ferret, genetically modify it to be immortal, wires up its nervous system so the only thing it experiences is a timeless sequence of tokens (from Reddit, Wikipedia, StackOverflow, and random fan-fic websites, but without ever giving the ferret any context as to what any of the tokens mean), and then spend 50,000 years rewarding/punishing it based on how well it imagines missing tokens, this is what you get."

I don't know what I was expecting, but it wasn't this.


👤 aprdm
Have you actually tried to use it? I use it every day and it is absolutely amazing - it saves me a lot of time. We've also built automations around it, and it saves our company a lot of time. It's already here and in use by many companies for use cases where it makes sense...

👤 gumballindie
I find ChatGPT quite irrelevant. People claim it helps them but fail to provide specific examples, and from their descriptions it sounds like prompting Google may yield more accurate and diverse responses.

👤 BrainInAJar
I'm bearish only insofar as I feel LLMs are not completely useless, but they'll be about as revolutionary as spellcheck was. Which is to say, it'll make a few jobs a bit easier.

👤 xigoi
I don't know about others, but I have learned at this point that relying on proprietary walled-garden software is a terrible idea.

👤 lolpanda
Does an LLM really have a moat? Compare Google and Bing: they provide essentially the same set of functionalities and nearly identical user interfaces, and users should be able to move from one to the other easily. But Google has better search results, at least in users' perception, and most users don't even get to try the competitors' products. OpenAI is in Google's position for now. I tried Claude a few times; it seems more capable than GPT-3.5, but I found myself always coming back to ChatGPT.

👤 petabytes
I sometimes use it to reformat some text or code, or write some very simple boilerplate code. So honestly it only saves me a few minutes a day typically.

👤 softwaredoug
OK I'm not bearish. I'm very bullish. I use it all the time, in place of Stackoverflow type questions.

BUT

I think people are too bearish on boring-old search as a tool. It's so easy to jam a search into Chrome's bar and look for a quick reference or (hopefully) a human being that has had some experience with what you're working on.

I use search / ChatGPT / CoPilot interchangeably for different reasons... ChatGPT for a detailed, thoughtful answer. CoPilot as autocomplete on steroids. Search for reference, quick answers, and direct human experience.


👤 whoiscroberts
Yes, bearish, I think ultimately Google will get their act together and focus on AI in the same way they focused on search in the early 2000s.

👤 rmilejczz
So you’re bearish on OpenAI because you don’t think they can deliver AGI? I don’t think they need to deliver AGI to be profitable.

👤 phendrenad2
GPT is a useful tool, but I don't think it's cost effective yet, and we might see an AI crash until that changes.

👤 thiago_fm
I've been writing the same for a while.

The silliest part is Sam Altman selling it as if they've got a path to AGI.


👤 iamawacko
Honestly, I basically never use GPT. I tried keeping it open and asking it questions and all that, but it just never provided me anything particularly useful. Reading the documentation or talking to real people was infinitely more valuable.

WolframAlpha and visual stuff have been more impactful for me, but they existed a long time before GPT. Even then, I don't use them that much.


👤 rossdavidh
I am reminded of the situation on HN five years ago, when if anyone said anything remotely skeptical about blockchain or crypto, there was an avalanche of comments saying the opposite. It should by now be apparent to all that it was in fact mostly people whose job or investments relied on blockchain/crypto hype, trying to silence anything that got in the way of their cashout.

There are some valid uses for neural networks, including LLMs, just as there were a few valid use cases for blockchain. None of them are particularly revolutionary, and it's not clear yet that any of them will pay for the enormous computing power required.


👤 mensetmanusman
LLMs are a type of intelligence, and that is super useful.

👤 toomuchtodo
Not at all. Would get my hands on some equity if I could.

👤 timbritt
As a delivery consultant in a Generative AI specialty practice for an extremely large cloud services consultancy, I can say with certainty that failure to achieve results with the latest models is far more a reflection of the abilities of the user than of the abilities of the model.

A lot of people look at LLMs through the same lens that they have looked at all other technology to this point — that if you learn and master the interface to the technology, then this eventually equates to mastering the technology itself. This is normalizing in the sense that there is a finite and perceptible floor and ceiling to mastering an objective technology that democratizes both its mastery and use in productivity.

But interacting with LLMs that are in the class of higher-reasoning agents does not follow the same pattern of mastery. The user’s prompts are embedded into a high-dimensional space that is, for all intents and purposes, infinitely multi-faceted and it requires a significant knack for abstract thought in order to even begin the journey of understanding how to craft a prompt that is ideal for the current problem space. It also requires having a good intuition for managing one’s own expectations around what LLMs are excellent at, what they perform marginally at, and what they can fail miserably at.

Users with backgrounds in the humanities, language arts, philosophy, and a host of other liberal arts, while maintaining a good handle on empirical logic and reason, are the users who consistently excel and continue to unlock and discover new capabilities in their LLM workflows.

I’ve used LLMs to solve particularly hairy DevOps problems. I’ve used them to refactor and modularize complicated procedural prototype code. I’ve used them to assist me in developing UX strategy on multimillion dollar accounts. I’ve also used them to teach myself mycology and scale up a small home lab.

When it comes to highly-objective and logical tasks, such as the development of source code, they perform fairly well, and if you can figure out the tricks to managing the context window, many hours of banging head against desk or even weeping and gnashing of teeth can be saved.

When it comes to more subjective tasks, I’ve discovered that it’s better to switch gears and expect something a little different from your workflow. As a UX design assistant, it’s better for comprehensive abstract thinking, identifying gaps, looking around corners, guiding one’s own thoughts and generally being a “living notebook”.

It’s very easy for people who lack any personal or educational development in the liberal arts or the affinity for and abilities of abstract thought to type some half-cocked pathetic prompt into the text area, fire it off and blame the model. In this way, the LLM has acted as sort of a mirror, highlighting their ignorance, metaphorically tapping its foot waiting for them to get their shit together. Their lament is a form of denial.

The coming age will separate the wheat from the chaff.


👤 lucubratory
>It's also very much like crypto where for every one person doing something useful with it, there are 20 trying to exploit the newness and low comprehension the general public have of the tech

This is definitely not correct in terms of numbers, there are many more people using LLMs well than have ever used crypto for any real use case. Also it's worth considering that the only real use cases of crypto are illegal, from noble stuff like busting sanctions to get food to hungry children, through to bribe evasion, bribe payment, tax evasion, drug deals, hiring hitmen, and child sexual exploitation/trafficking. In general, crypto produced close to zero for global society, even when it wasn't being used as an overt and intentional scam.

LLMs are producing significant value for society right now, because OpenAI gave everyone API access to a very weird intern who has incredible knowledge breadth and makes dumb mistakes. Interns (or "relatively low intelligence/experience human workers who need handholding for difficult and sometimes easy problems, with an occasional flash of insight") have always been controversial as to whether they actually provide value from the perspective of the person who has to manage the intern, but from the perspective of the company/society it's unquestionable that they do provide significant value. Different people put different value on having a collaborator at all; some people do not want to handhold anyone or work with anyone whose mistakes they ever have to work around. It is nevertheless true that, in aggregate, for knowledge work, "worker + intern" is more economically productive than just "worker," outside of very, very specialist use cases. This just wasn't possible with GPT-2, and even GPT-3.5 is not quite at a quality where I'd really compare it to a normal intern. No other machine aside from the human brain was even close.

That's the tech now, the worst it will ever be. Whatever comes next with a major leap (GPT-5, Claude 3 or Gemini if they're good, maybe Llama 3 or the next Mistral if they can get improved significantly by the OS community before the next GPT release) is going to be either a reliable version of the same intern, or the same intern with better intelligence and comprehension that still suffers from reliability issues, or a major step up where they're equivalent in productivity to a full-blown knowledge worker in some high percentage of cases. It's already important now, it's only going to get more important.

As for OpenAI specifically, I think they have a very good chance of continuing to lead the pack, particularly with this cringe-y GPTs/GPT Builder/GPT Store thing. It's pretty transparent that this is them getting data to train an AI on how to spin up agentic AIs to accomplish specific tasks, because they'll have the data on how the GPT Builder is used and the data on how useful and effective the GPTs it builds are, so they can do things like dramatically overweight the most effective and useful GPTs when training their internal "GPT Auto Builder". They'll be running a store for these things as well as effectively controlling the operating system they run in, so purchases, ratings, time using a GPT, sentiment analysis of the GPT text log to detect success, plus explicit in-GPT feedback (the thumbs up and down, the feedback submission form) will all be data they can feed into their machine, to make an AI that can build good GPTs for a task, an AI that can evaluate their performance, and an AI that can most effectively get good performance out of a GPT. That's going to be huge, particularly the signals that have real economic costs to users (I know they haven't announced it, but I think eventually they're going to make it so you can purchase GPTs), because that starts to pull away the rose-tinted glasses and the fog of futuristic sheen and get some more unvarnished data on how much people actually value these specific things. That data means eventually you should be able to just ask ChatGPT to do something for you, and if it can't do it natively it will be trained to spin up a task-specific GPT with access to the correct tools, docs, etc., then have the GPT Whisperer AI use it to get the right answer with a bunch of backup data, and return you the answer with the option to see the work. This is also a pretty auditable process, which makes a lot of the legal and AI safety folks happy. I don't see another company that is similarly well-placed in terms of having the tech, talent, compute, product, and roadmap to pull this off.


👤 abj
Hacker News being bearish is the clearest bull signal I've seen all week.