Anyways, it's very good. Possibly the best. It writes really well, much better than GPT-4. Maybe it's just my own taste, but its output looks less LLM-ish.
I'm just not sure it's worth the $200/month. Claude is cheaper and works really well in Cursor.
OpenAI's tiered pricing looks very weak. They have the best product, but it isn't worth $200, sorry.
For example, I pasted my HN profile, where my email address is ROT13-encoded, and asked it to extract the email in JSON format ({"email": ""}). I would expect any HN reader to be able to do this in seconds with an online ROT13 calculator, and certainly a "PhD-level reasoning model" should get it right. Claude output the correct answer in seconds. o1 Pro thought for two minutes and eventually produced an email address that was invalid.
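For anyone curious, the whole task boils down to a one-line ROT13 decode plus JSON output. A minimal sketch in Python (the encoded address here is made up, not my real one):

```python
import codecs
import json

# Hypothetical ROT13-encoded address, as it might appear in an HN "about" field
encoded = "wbua.qbr@rknzcyr.pbz"

# ROT13 is its own inverse, so a single codecs call decodes it
email = codecs.decode(encoded, "rot13")

print(json.dumps({"email": email}))
# {"email": "john.doe@example.com"}
```

That's the entire "reasoning" problem o1 Pro spent two minutes on and still got wrong.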
I'm a little surprised that there isn't more discussion about this on HN, as it seems highly relevant to the recent pivot from training-compute to inference-compute.