To me it seems like their goal of developing common-sense reasoning has been at least partly subsumed by the common-sense knowledge contained in LLMs like ChatGPT, the Llama models, Mistral, Anthropic's Claude, etc. The knowledge in LLMs, trained on what I think of as a ‘shadow’ representation of the world (the shadow being text), seems adequate for building some form of representation of the real world.
If the LLMs cannot demonstrate the connections and knowledge base that Cyc has, then the training has probably gone wrong somewhere.