HACKER Q&A
📣 upmind

Do you think China will produce a SOTA model in the next 2 years?


Recent models like Kimi, Qwen, GLM, DeepSeek, etc. seem to do well on benchmarks, but not when you actually use them in practice. Do you think there'll be an actual SOTA model from them in the next 2 years? Why/why not?

NOTE: referring to text models


  👤 A_D_E_P_T Accepted Answer ✓
Dude what are you even talking about?

Kimi-K2.5 is probably the top text model for writing and any form of artistic or creative pursuit, and its "Deep Research" mode is better than GPT-5.2's. I use it alongside Opus 4.6, and I don't feel that one is "worse" than the other; they have different strengths.


👤 kevin061
"State of the art" literally just means the latest and most capable, compared with its peers.

By that standard, when China launched DeepSeek R1 it kind of broke the web, because it was pretty good compared with OpenAI's models, but also fully self-hostable. The self-hostable OpenAI and Meta models just aren't very good, Grok has nothing self-hostable, and I think Google has only released a small model.

Meanwhile, China has the best self-hostable models, up there with Mistral.

So yes, Chinese AI is SOTA. Maybe not better than the American cloud-based models, but definitely SOTA for self-hostable ones.

Also, I think you're wrong about "actual practice". Chinese AIs work great. They're not perfect, but OpenAI's Codex also messes up a lot.