There is a global race on to acquire vector math hardware (GPUs, TPUs, etc.) and the capacity to manufacture it. LLMs at their best are intimate and deserve first-class isolation, and running them on hardware you own is the best way to guarantee that.
OpenAI enjoys a market-leading position and may for some time, but this technology is the last thing that makes sense to centralize in the long term. For now, it's impractical for most people to run a 1.7-trillion-parameter model on hardware they own, but OpenAI is catalyzing a movement: in just the past few months, hundreds of thousands of people have been able to run fine-tuned 13-billion-parameter models with usable results on consumer hardware.
I don't worry much about an OpenAI rug pull. Nor do I worry much about the AI revolution happening too quickly, not at this pace. Should I?
This coincided with the update that made v4 the default option for new chats for Plus subscribers (of which I am one), drastically increased its speed, and removed the hourly message limits.
ChatGPT has the lion's share of name recognition and media buzz, but every competitor in this space is likely just waiting for OpenAI to misstep so it can step into the spotlight.