Many people have been urging caution for a long time. As more features and capabilities arrive, those people will ride on the publicity of those achievements to amplify their warnings. This will draw more interest, because the threat seems closer or more real as capabilities increase (or at least appear to, thanks to company hype).
Then there are many others who are excited and still pressing ahead.
So this is basically different interest groups competing with each other. Usually the more technically advanced ones win in the end, but the more important question is how long the other groups can slow things down.
Under the current socio-economic world order, any large-scale tech advance such as AGI will almost surely bring destruction to a whole generation of workers.
You think AGI, nuclear fusion and such will bring prosperity to us? Too naive, mon ami.
Keep on going man, the future shall arrive, and not everyone will be ready.
We must continue this debate about our human competence, the relevance of public narratives, and the true range of possible outcomes. All the while, the future presses upon us.
Humans are still by far the greatest threat to our humanity.
We have a few very specialised "AI"s, and after 30 years and trillions of dollars of investment we still struggle to get an AI to simply drive a car (in all conditions, not just when everything is easy).
The step from ChatGPT to a real AGI is much bigger than the step from going to the moon to colonising other planets that are light-years away.
Neither AGI nor colonising planets light-years away will happen in our lifetime.
That being said, once we really do reach the AGI level, I suspect it will feel exactly like this: very sudden.
In any case, I think overhype plays a role here as well. There is a lot of money to be made, so motivations run high, which tends to make people polarize their opinions.
So someone wrote a letter. That doesn't mean anything. It doesn't even mean that the average opinion has changed substantially.
There is no consensus on anything here. Beware of so-called "experts" on this subject. There are no "AGI experts". There are AI experts, but speculation by humans on what a hypothetical superhuman intelligence might be like will never be more than, well, speculation. It's like ants trying to wrap their minds around what they imagine human philosophy to be.
Your opinion on AGI is as good as mine, or as Elon Musk's, or as Eliezer Yudkowsky's. Nobody knows anything. Get comfortable with uncertainty, and don't jump to the conclusion that anything has happened just because somebody somewhere said something.