HACKER Q&A
📣 atleastoptimal

Should the US pass sweeping restrictions on AI development?


https://time.com/6898967/ai-extinction-national-security-risks-report/

According to this government-commissioned report, AI at its current SOTA frontier represents an unprecedented existential risk to humanity. The best course of action, according to its authors, is for the United States to restrict the compute available for training models and to punish the open-sourcing of weights with jail time.

> The new AI agency should require AI companies on the “frontier” of the industry to obtain government permission to train and deploy new models above a certain lower threshold, the report adds.

> Authorities should also “urgently” consider outlawing the publication of the “weights,” or inner workings, of powerful AI models, for example under open-source licenses, with violations possibly punishable by jail time, the report says.

> And the government should further tighten controls on the manufacture and export of AI chips, and channel federal funding toward “alignment” research that seeks to make advanced AI safer, it recommends.

Do you believe, if the risks of AI are true, that this is the best course of action? Would this still allow AI development to reach a state of usefulness that has been promised for years, or will it completely destroy the momentum and lead to an AI winter?


  👤 rhelz Accepted Answer ✓
The more we are getting ourselves all worked up about how dangerous AI is, the less we are thinking about the fact that LLMs are sucking up all the intellectual property we created in the information age, and then selling it back to us.

Astounding. IMHO the biggest achievement wasn't technical at all--it was how they were all able to do this gigantic end-run around copyright, trademark, and patent protections.

Essentially, there is no such thing as intellectual property anymore.


👤 xeckr
Absolutely not. One has to wonder whether, if the cultural climate during the golden age of microchips had been what it is today, there would have been calls to impose a maximum transistor count per unit area, especially if people could have foreseen how much power this technology gives to those who master it.

👤 __rito__
What that would genuinely achieve:

- Power consolidated into a small number of companies that can lobby and spend well. This leads to market capture and oligopoly, and bars smaller players from entering the market. The result is poor conditions for consumers, similar to the aviation industry, cellular networks, and the broadband market.

- Innovation stops and growth stagnates. Consumers and developers are left with poorer, more expensive products, and the development of the technology itself stalls.

- I guess the government will also monitor the development of weapons, etc. China and Russia won't have similar restrictions, and will get ahead.


👤 smoldesu
> According to this government-commissioned report

I feel like I can guess their stance on the topic without even knowing what it is.


👤 BriggyDwiggs42
“Outlaw the publication of weights”: is this regulatory capture, or just the usual government stupidity?

👤 hollerith
>Would this still allow AI development to reach a state of usefulness that has been promised for years

It is better that the ban start now, because no one has a method for determining which frontier models will become dangerously capable, just as no one at OpenAI had any way of determining at the start of training that GPT-4 would be able to get a 92 on the bar exam. That is a pity, because the benefits of creating more capable models are considerable: they could've made us all wealthier and saved lives through medical advances.


👤 shrimp_emoji
I think we're probably heading toward another AI winter anyway, so this would be like the government covering it up until things thaw again.

But anyway, there's no way to enforce a ban on running certain programs without some kind of Orwellian surveillance baked into hardware, which I would abhor more than Roko's basilisk, and it would make the people at the forefront of the technology criminals and geopolitical adversaries. It would be an ultra-bonehead move. As always, technology and law don't mix well.


👤 artninja1988
It's just a bunch of cranks who did the report. I hope no important politician takes this "report" seriously.

👤 hollerith
Yes. The US government should ban the training of new foundational models and should do whatever it can to slow down progress in GPUs because the way things are going now, we are very probably doomed.

Even if China or some other country does not impose similar restrictions, a ban in the US and Great Britain would buy humanity a decade or maybe even 2 decades in which we might find a way out of the terrible situation we are in. (A decade or 2 is about how far the US and Great Britain are ahead in frontier AI research.) Also, buying an extra decade or 2 of life for everyone on Earth is a valuable goal in its own right even if after that Chinese AI researchers kill everyone.

If humanity can survive somehow for a few more centuries, our prospects start improving because in a few centuries, the smartest among us might be smart enough to create powerful AI that doesn't quickly kill us all. It is not impossible that the current generation of humans can do it, but the probability is low enough (3.5 percent would be my current estimate) that I'm going to do anything I can to slow down AI research or preferably stop it altogether -- preferably for at least a few centuries.


👤 lambdaba
Even following all of this very closely, it's hard to tell where the ceiling is; there are constant improvements, and in a year or so we've seen at least three generations of significant performance upgrades.

I have not read the report, what are the reasons they recommend such drastic measures?


👤 romanhn
The genie is out of the bottle; you can't just undo the last year of progress and knowledge. Even if the US took this drastic step, other countries would push forward. Or private interests. Or well-funded terrorist organizations. Etc, etc, you get the point. Much better, in my opinion, to stay ahead of the curve and tackle whatever issues arise head-on rather than burying your head in the sand, hoping for the best, and pulling out the surprised Pikachu face when someone else inevitably pushes the state of the art forward.

👤 thoughtstheseus
Throwing large amounts of compute, power, and learning cycles at these models is producing fascinating outcomes. That alone means strategic competitors should keep their work secret.

👤 animal_spirits
I genuinely can't understand the doom. Can someone provide me with a realistic step-by-step scenario where language models destroy human civilization? How can one counter each individual step we take to correct/secure our computer systems? I feel like I'm lost here. There are beautiful mental gymnastics we have to perform to conclude that AI will cause the civilization-destroying dangers that we all think (want?) will happen. The only thing I can think of is a language model somehow accidentally launching all the nukes in the world. But that assumes that literally every step of the process can be actioned by typing text into a terminal. There are many armed people and physical keys standing between GPT-4 and the launch codes...

👤 aeternum
No, in an arms race either you win or your laws don't matter.

👤 lulznews
That’s quite lulzy. But never underestimate the stupidity of the USG …

👤 anon373839
God, no. The United States can’t possibly put that genie back in the bottle, but it can cement the incumbents’ status as government-backed monopolies in the next era of computing. That’s what all this doomer nonsense is really about.