According to this government-commissioned report, AI at its current SOTA frontier represents an unprecedented existential risk to humanity. The best course of action, according to its authors, is for the United States to restrict the compute used to train models and to punish open-sourcing model weights with jail time.
> The new AI agency should require AI companies on the “frontier” of the industry to obtain government permission to train and deploy new models above a certain lower threshold, the report adds.
> Authorities should also “urgently” consider outlawing the publication of the “weights,” or inner workings, of powerful AI models, for example under open-source licenses, with violations possibly punishable by jail time, the report says.
> And the government should further tighten controls on the manufacture and export of AI chips, and channel federal funding toward “alignment” research that seeks to make advanced AI safer, it recommends
Do you believe, if the risks of AI are true, that this is the best course of action? Would this still allow AI development to reach a state of usefulness that has been promised for years, or will it completely destroy the momentum and lead to an AI winter?
Astounding. IMHO the biggest achievement wasn't technical at all--it was how they were all able to do this gigantic end-run around copyright, trademark, and patent protections.
Essentially, there is no such thing as intellectual property anymore.
- Power gets consolidated into a small number of companies that can lobby and spend well. This leads to market capture and oligopoly, and bars smaller players from entering the market, which results in poor conditions for consumers, similar to the aviation industry, cellular networks, and the broadband market.
- Innovation stops and growth stagnates. Consumers and developers are left with poorer, more expensive products, and the development of the technology stalls.
- Presumably the government would also monitor the development of weapons, etc. China and Russia won't impose similar restrictions, and will get ahead.
I feel like I can guess their stance on the topic without even knowing what it is.
It is better that the ban start now, because no one has a method for determining which frontier models will become dangerously capable, just as no one at OpenAI had any way of determining at the start of training that GPT-4 would be able to score a 92 on the bar exam. That is a pity, because the benefits of creating more capable models are considerable: they could've made us all wealthier and saved lives through medical advances.
But anyway, there's no way to enforce a ban on running certain programs without some kind of Orwellian surveillance baked into hardware, which I would abhor more than Roko's basilisk, and it would turn the people at the forefront of the technology into criminals and geopolitical adversaries. It would be an ultra-bonehead move. As always, technology and law don't mix well.
Even if China or some other country does not impose similar restrictions, a ban in the US and Great Britain would buy humanity a decade or maybe even two in which we might find a way out of the terrible situation we are in. (A decade or two is about how far the US and Great Britain are ahead in frontier AI research.) Also, buying an extra decade or two of life for everyone on Earth is a valuable goal in its own right, even if after that Chinese AI researchers kill everyone.
If humanity can survive somehow for a few more centuries, our prospects start improving, because in a few centuries the smartest among us might be smart enough to create powerful AI that doesn't quickly kill us all. It is not impossible that the current generation of humans can do it, but the probability is low enough (3.5 percent would be my current estimate) that I'm going to do anything I can to slow down AI research, or preferably stop it altogether -- ideally for at least a few centuries.
I have not read the report. What reasons do they give for recommending such drastic measures?