OpenAI seems to be doing something special w/ GPT-4, like a secret sauce. It seems like they do have a moat after all. Why do people keep saying they don't?
On top of that, OpenAI has been pushing certain standards in the industry, like the
Simply having a better product isn't a moat at all. A moat, in reality, is a defensive obstacle that is strategically difficult/impossible to cross.
If competitors can keep improving their products and eventually catch up to yours, there's no moat because there's no obstacle. To use the castle analogy that "moat" originates from, it's just ordinary terrain to cross.
A classic example of a moat is network effects, because no matter how superior your product is, you can't get people to switch to yours. That's a strategic obstacle.
Other common moats include things like switching costs (cloud computing lock-in), or economies of scale that are impossible for competitors to achieve (Amazon doing its own deliveries, or Wal-Mart's distribution network).
Some projects will never, ever catch up to competitors because of engineering labor availability, but something widely overlooked is that a project's philosophy can dampen its market-share goals more than throwing extra labor at the product can boost them.
You can't toss more (engineer) monkeys at a problem and expect a better solution; there will always be better engineers.
In fact, I think software as a differentiator is one of the most undervalued moats we have in the industry.
As an example I was thinking about recently: the Oculus platform. It's a VR computer strapped to your face, with more compute power than decades' worth of past hardware, and it could arguably be more useful today than the desktop hardware of yesteryear.
Yet, if you Bluetooth-pair a mouse to it, mouse scrolling doesn't work. You can't build meaningful VR apps for it outside of games; or rather, the environment itself isn't conducive to attracting the people who would. I can't open a terminal on it. There's no text editor on it.
You can have all the raw power in the world, and if you have no sophistication of implementation, no nuance in software, no good experience, you're a brute wielding a hammer.
Software quality is a moat.
And that's really the 'moat' companies like OpenAI have right now. It's not a technological moat but a resource moat. There's not much OpenAI can do to stave off competing AI solutions, but there's still only a handful of companies that can currently run this stuff at scale.
If companies like OpenAI were smart, they would switch to a service-provider model, running these AI systems at scale rather than centering on a single AI model/system.
Do they have IP that makes it impossible for someone with money (e.g. a bored Saudi 'businessman') to train eight 220B LLaMA-style models and do some RLHF?
Do they have exclusive content to feed their model?
Everything seems to point to no: transformers are a common architecture, data is still available, and the limiting factor seems to be consumer-facing GPU time.
Plus, there is no indication that GPT is the "final" model for language; this is still an active research field.
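To put rough numbers on that GPU-time point: a common back-of-envelope rule is that training a dense transformer takes about 6 FLOPs per parameter per training token. Here's a minimal sketch of that arithmetic (the token count, per-GPU throughput, and hourly price are assumptions I picked for illustration, not anyone's actual figures):

    # Back-of-envelope training cost for one 220B-parameter dense transformer.
    # Every input below is an illustrative assumption.
    params = 220e9                     # model parameters
    tokens = 2e12                      # training tokens (assumed)
    train_flops = 6 * params * tokens  # ~6 FLOPs per parameter per token

    gpu_flops = 300e12                 # assumed sustained throughput per GPU (FLOP/s)
    gpu_hours = train_flops / gpu_flops / 3600
    usd_per_gpu_hour = 2.0             # assumed cloud price

    print(f"GPU-hours: {gpu_hours:,.0f}")                       # ~2.4 million
    print(f"Rough cost: ${gpu_hours * usd_per_gpu_hour:,.0f}")  # ~$5 million

Under those assumptions each model is a few million GPU-hours and single-digit millions of dollars, so eight of them is expensive but hardly out of reach for a well-funded actor; the harder part is securing that many accelerators at once.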
There is also the opposite of a moat: an ultimate siege weapon. If a moat is a defensive obstacle, then the opposite is any new technology that both ignores the current obstacles and is simultaneously too expensive to fight. Examples would be the Gutenberg printing press, or railroads in the 18th/19th centuries (industrialization). Those technologies simply walked past existing approaches like they weren’t there and produced output cheaper than the prior approaches could possibly have dreamed of. Worse, competing with such ultimate siege weapons is more expensive than ignoring them until they kill you.
What’s surprising is that such ultimate business siege weapons are typically well known for years while they slowly develop and figure out how to actually work, yet they are ignored out of bias until it’s too late.
Somehow most companies suck at it despite years of being in this business. Dozens of clones of common products, from web search to chat to ???, yet how often do they just break at that one thing you absolutely need? The product is usually an OKish 80/20 implementation of something, but the missing last 20% is often the differentiating polish.
For ChatGPT* that polish is the extensive RLHF grunt work to really fine-tune these models to be relatively helpful. It’s the extensive backend data-prep work, the tokenizers, orchestrating a massive cloud to train LLMs, and creating a good enough user experience that it “just works”.
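To make that RLHF grunt work a bit more concrete, here's a minimal sketch of its first stage: training a reward model on pairwise human preferences with a Bradley-Terry-style loss. The tiny pooled-embedding "model" and the random preference batch are placeholders of my own, not ChatGPT's actual pipeline:

    import torch
    import torch.nn as nn

    class RewardModel(nn.Module):
        """Toy stand-in for an LM backbone with a scalar reward head."""
        def __init__(self, vocab_size=50_000, dim=256):
            super().__init__()
            self.encoder = nn.EmbeddingBag(vocab_size, dim)  # crude mean-pooled encoder
            self.head = nn.Linear(dim, 1)

        def forward(self, token_ids):
            return self.head(self.encoder(token_ids)).squeeze(-1)  # one score per sequence

    model = RewardModel()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # Fake preference batch: for each prompt, annotators preferred one completion over another.
    chosen   = torch.randint(0, 50_000, (8, 128))  # token ids of preferred completions
    rejected = torch.randint(0, 50_000, (8, 128))  # token ids of rejected completions

    # Bradley-Terry pairwise loss: push chosen scores above rejected scores.
    loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
    loss.backward()
    opt.step()

The "polish" is everything around this loop: collecting clean preference data at scale, the tokenizers and data pipelines feeding it, and the later stage that fine-tunes the chat model against this reward model.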
Simple made easy is never easy :)
I think we talk about a moat when discussing established companies with a product making big $$$.
As far as I know (I may be totally wrong here), OpenAI is not in that category at all - it's more of a POC product.