If you’ve used either — or ideally both — I’d love to hear your insights. Answering the following questions will provide some context when you respond:
- What are the strengths & weaknesses of each from your experience?
- Any tips/tricks or prompting techniques you use to get the most from these models?
- How do you typically use them? (via native apps like ChatGPT and Claude, or via Cursor, GitHub Copilot, etc.)
- What programming language(s) do you primarily use them with?
Hopefully this thread provides a useful summary and some additional tips for readers.
(I’ll start with mine in the comments)
O1 is higher quality, more nuanced, and has deeper understanding; the biggest downsides right now are the significantly higher latency (both from the thinking time, and because continue.dev doesn't currently support o1 streaming, so you wait until the whole response is done) and the higher cost.
In terms of tools: either VS Code with continue.dev / Cline, or Cursor
Languages: Node.js / JavaScript, and lately C# / .NET / Unity
o1:
- better when the response has to address many subgoals coherently
- usually won't undo bugfix progress made earlier in the conversation, whereas with Claude, in extremely long conversations I've noticed it reintroducing bugs it had already fixed much later on
Claude:
- image inputs are very complementary for debugging, especially for anything visual (e.g. to debug why a GUI framework rendered your UI in an unexpected way, just include a screenshot)
- surprisingly good at taking descriptions of algorithmic or mathematical procedures and producing captioned SVG illustrations, then taking screenshots of those SVGs plus user feedback to improve the next version of the illustrations
- more recent knowledge cutoff, so it's generally somewhat less likely to deny that newer APIs or models exist (e.g. o1 told me tokenizer.apply_chat_template and meta-llama/Llama-3.2-1B-Instruct both did not exist and removed them from the code I was feeding it; both are real, see the snippet below)
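For the record, both of those exist; a minimal sketch with Hugging Face transformers (note the Llama 3.2 repo is gated, so you need to have accepted the license on Hugging Face first):

```python
from transformers import AutoTokenizer

# apply_chat_template has shipped with transformers since ~v4.34
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

messages = [{"role": "user", "content": "Hello!"}]
prompt = tok.apply_chat_template(
    messages,
    tokenize=False,              # return a string rather than token ids
    add_generation_prompt=True,  # append the assistant header for generation
)
print(prompt)
```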
- Sonnet 3.5 seems good with code generation and o1-preview seems good with debugging
- Sonnet 3.5 struggles with long contexts, whereas o1-preview seems good at identifying interdependencies between files in a code repo when answering complex questions
- Breaking the problem into small steps seems to yield better results with Sonnet (see the sketch after this list)
- I’m using them primarily in Cursor/GitHub Copilot, and with Python
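One way to do that step-by-step prompting outside an IDE is to script it against the API directly. A minimal sketch with the anthropic Python SDK; the decomposition steps and model name are just illustrative:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical decomposition of one task into small, ordered prompts
steps = [
    "Write the signature and docstring for a CSV-parsing function.",
    "Now implement the parsing logic.",
    "Now add error handling for malformed rows.",
]

messages = []
for step in steps:
    messages.append({"role": "user", "content": step})
    resp = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        messages=messages,  # full history so each step builds on the last
    )
    messages.append({"role": "assistant", "content": resp.content[0].text})

print(messages[-1]["content"])  # final, incrementally built result
```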
Also, https://aider.chat/docs/scripting.html offers some nice possibilities.
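Per those docs, aider can be driven from the command line or from Python, roughly like this (a sketch; the model name and file are placeholders, and the docs note the scripting API may change between releases):

```python
from aider.coders import Coder
from aider.models import Model

# Equivalent one-shot CLI usage:
#   aider --message "add type hints" greeting.py

model = Model("claude-3-5-sonnet-20240620")  # any model aider supports
coder = Coder.create(main_model=model, fnames=["greeting.py"])

# Each run() sends one request; aider applies the edits to the files in place
coder.run("add type hints to greeting.py")
```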
https://github.com/StephanSchmidt/ai-coding-comparison/
(no comparison there yet, just some code to play around with)
Sonnet 3.5 if you can provide context (e.g. with Cursor).
gpt-4o for UI design. Also for solving interview questions from screenshots.