Don't get me wrong. I fully believe in the potential of current-gen AI. I am myself employed in the field. But to me it seems pretty obvious that these models are, for the most part, just memorizing and interpolating at huge scale, with limited ability to generalize. I just don't see how we get from current-gen models to AGI without a huge paradigm shift. Yet the statements these CEOs make imply that no such shift is necessary.
I just don't get what sort of strategy they are following. Wouldn't these embellishments eventually come back to haunt them when their promises are not realized? This seems like Tesla FSD all over again. Back then, too, it was claimed that scale alone was enough to achieve the objective.
Is Tesla "haunted" by their repeated failure to deliver Full Self-Driving? I haven't been following, but it doesn't really seem like it.
Having something concrete to promise seems like a winning tactic: you don't want to actually deliver the thing, because then you'd have to train the public to expect something else (and that takes more creativity). Whether it's AGI, FSD, the Second Coming, a border wall, universal Medicare, peace in the Middle East... there doesn't seem to be any particular downside to promising the public something that repeatedly fails to happen. After all, if it didn't happen yesterday, that means it could still happen tomorrow — so you'd better make sure you're prepared!