HACKER Q&A
📣 behnamoh

OpenAI does have a moat. Why do people think it doesn't?


I have yet to see any model (closed or open source) come even close to GPT-4. OS models can at best approach GPT-3. Bard is a joke. PaLM and PaLM 2 are not as powerful as Google advertises them to be. Claude is terrible at coding.

OpenAI seems to be doing something special w/ GPT-4, like a secret sauce. It seems like they do have a moat after all. Why do people keep saying they don't?

On top of that, OpenAI has been pushing certain standards in the industry, like the ChatML message format (with its `<|im_start|>`/`<|im_end|>` special tokens, which the page seems to have swallowed), function calling, plugins, etc.
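For reference, here is a rough sketch of what the ChatML framing looks like, based on the format OpenAI published; the helper function below is illustrative, not an official API:

```python
# Illustrative only: render a chat history in the ChatML text format,
# where <|im_start|> and <|im_end|> special tokens delimit each message
# and the role (system/user/assistant) follows the start token.
def to_chatml(messages):
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    return "\n".join(parts)

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
print(to_chatml(history))
```

In practice the API accepts the structured message list directly and does this serialization server-side; the point is that the delimiter convention itself became a de facto standard others copied.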


  👤 crazygringo Accepted Answer ✓
Because "moat" in the business world doesn't mean what you seem to think it means.

Simply having a better product isn't a moat at all. A moat, in reality, is a defensive obstacle that is strategically difficult/impossible to cross.

If competitors can keep improving their products until they eventually catch up to yours, there's no moat, because there's no obstacle. To use the castle analogy that "moat" originates from, it's just ordinary terrain to be crossed.

A classic example of a moat is network effects: no matter how superior a competitor's product is, they can't get people to switch, because the value lies in everyone else already being on yours. That's a strategic obstacle.

Other common moats include things like switching costs (cloud computing lock-in), or economies of scale that are impossible for competitors to achieve (Amazon doing its own deliveries, or Wal-Mart's distribution network).

For more: https://en.wikipedia.org/wiki/Economic_moat


👤 paddw
I think OpenAI's moat is a large corpus of well-cleaned training data. There is no brilliant insight behind GPT-4's model architecture, but rather hundreds of small incremental improvements on previous model iterations. Eventually, large companies will be able to catch up. Open source should also be able to, in theory, although it will be hard for all the pieces to come together.

👤 cornercasechase
OpenAI’s models were a joke too, until they weren’t. It’s just a matter of time until the open source models catch up.

👤 andrewmcwatters
Because ironically, people in software undervalue software quality itself as a moat. Look at longstanding decade-plus projects, proprietary and open source.

Some projects will never, ever catch up to competitors because of engineering labor availability, but something widely overlooked is that project philosophy can dampen market-share goals more than any amount of extra labor can compensate for.

You can't toss more (engineer) monkeys at a problem and expect a better solution. There will always be better engineers elsewhere.

In fact, I think software as a differentiator is one of the most undervalued moats we have in the industry.

As an example I was thinking about recently: the Oculus platform. This is a VR computer strapped to your face, with more compute power than decades of past hardware, and it could arguably be more useful today than the desktop hardware of yesteryear.

Yet, if you Bluetooth-pair a mouse to it, mouse scrolling doesn't work. You can't build meaningful VR apps for it outside of games; or rather, the environment itself isn't conducive to attracting the people who will. I can't open a terminal on it. There's no text editor on it.

You can have all the raw power in the world, and if you have no sophistication of implementation, no nuance in software, no good experience, you're a brute wielding a hammer.

Software quality is a moat.


👤 weare138
Because open-source models keep catching up faster than companies like OpenAI can pull ahead. They're generally staying ahead of the curve by virtue of being able to throw money and resources at the problem, but they're quickly maxing out how many parameters they can possibly incorporate into solutions like GPT.

And that's really the 'moat' companies like OpenAI have right now. It's not a technological moat but a resource moat. There's not much OpenAI can do to stave off competing AI solutions, but there's still only a handful of companies that can currently run this stuff at scale.

If companies like OpenAI were smart, they would switch to a service-provider model, running these AI systems at scale rather than centering everything on a single AI model/system.


👤 seydor
Do they make profit from GPT4?

Do they have IP that makes it impossible for someone with money (e.g. a bored Saudi 'businessman') to train 8 220B LLaMA models and do some RLHF?

Do they have exclusive content to feed their model?

Everything seems to point to no: transformers are a common architecture, data is still available, and the limiting factor seems to be consumer-facing GPU time.

Plus, there is no indication that GPT is the "final" model for language; this is still an active research field.


👤 austin-cheney
A moat refers to protections from competition.

There is also the opposite of a moat: an ultimate siege weapon. If a moat is a defensive obstacle, then the opposite is any new technology that both ignores the current obstacles and is simultaneously too expensive to fight. Examples would be the Gutenberg printing press, or railroads in the 19th century (industrialization). Those technologies simply walked past existing approaches like they weren't there and produced output cheaper than the prior approach could possibly dream of. Worse, competing with such ultimate siege weapons is more expensive than ignoring them until they kill you.

What's surprising is that such ultimate business siege weapons are typically well known for years: they develop slowly until they actually figure out how to work, and are ignored due to bias until it's too late.


👤 softwaredoug
For most companies, just being competent at building a software product is a moat.

Somehow most companies suck at it despite years of being in this business. There are dozens of clones of common products, from web search to chat to ???, yet how often do they just break at the one thing you absolutely need? The product is usually an OK-ish 80/20 implementation of something, but the missing last 20% is often the differentiating polish.

For ChatGPT, that polish is the extensive RLHF grunt work to really fine-tune these models into being relatively helpful. It's the extensive backend data-prep work, the tokenizers, orchestrating a massive cloud to train LLMs, and creating a good enough user experience that it "just works".

Simple made easy is never easy :)


👤 endisneigh
I have yet to see a groundbreaking use case for any of these LLMs. Have you seen any?

👤 TheAlchemist
Moat ?

I think we talk about moat when discussing established companies with a product making big $$$.

As far as I know (I may be totally wrong here), OpenAI is not in that category at all - it's more of a proof-of-concept product.


👤 b20000
AI right now is just advanced pattern matching and remixing; there is no "intelligence".