I’ve heard a fair amount of agreement among professionals that yes, we are. Things like the recent papers showing Chain of Thought isn’t a silver bullet call into question how valuable models like o1 really are, and that tilts my thinking a bit as well.
What seems to be the consensus here?
We've seen a trend toward distilling models, seemingly at the cost of the more nuanced ability to iterate and arrive at correct results.
I'm convinced LLMs can go much further than what we've achieved so far, but I'd welcome newer techniques that improve accuracy, efficiency, and adaptability.