HACKER Q&A
📣 foadm

Why is OpenAI pushing for regulation so much?


The achievements of Google and OpenAI over the past few years, especially with transformers, have been truly remarkable. However, I feel that the dangers of AI are ridiculously overstated. Can anyone give an example, or a sequence of actions, that would lead to something going horribly wrong? Sam Altman has said that AI could destroy humanity. Doesn't that seem completely wacky? Or do I just lack imagination?

Let's say you give it access to the internet and tell it to "hack into the Pentagon". Well, surely, if it succeeded, that would mean Chinese and Russian hackers did so long ago. If you told it to "destroy humanity", then again, surely, it would not be able to do that even without the restrictions imposed by OpenAI.

So how exactly would AI lead to something horrible? That is the first question.

And the second one is: why is OpenAI pushing for regulation so much? Do you think they are being honest? What alternative motives could they have for pushing this agenda?

Could it be that Microsoft and Google are doing what top corporations have always tried to do: make deals with governments and impose restrictions so that competition is unlikely to emerge?


  👤 Peritract Accepted Answer ✓
Massively overstating the risk of AI, and focusing on fictionalised risks rather than practical ones, has a host of benefits for them.

OpenAI pushes for regulation because:

- It drives excitement, which helps their business

- It will make it harder for competitors to succeed

- It aims the regulation in the wrong direction

- It lets them dismiss criticism from other sources

None of the robot god stuff is real; it's a honey trap for the credulous, laid by profiteers. LLMs are really exciting, but they aren't actually an existential threat, and the people who trumpet that are doing their own (otherwise interesting and valuable) work a massive disservice.


👤 ftxbro
One reason is regulatory capture.

Another could be that they see that it will be regulated anyway, so they are getting ahead of the narrative and building some goodwill with the regulators at the same time.


👤 rvz
Because:

1. Open source AI research is a threat to O̶p̶e̶n̶AI.com's business [0], and they cannot win the race to zero if the participants (Stability, Apple, etc.) are already at the finish line.

2. Scapegoating 'AI safety' and spreading doomsday tales to governments so that they can wipe out open source AI models and set up 'licensed' and 'compliant' AI models (whilst O̶p̶e̶n̶AI.com open-sources their own compliant LLM) [1], i.e. regulatory capture.

3. Promoting Worldcoin crypto snake-oil as an 'antidote' against everything getting faked by AI, to 'verify' human eyeballs once their doomsday tales become true. [2]

Sources:

[0] https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...

[1] https://www.reuters.com/technology/openai-readies-new-open-s...

[2] https://worldcoin.org/blog/engineering/humanness-in-the-age-...


👤 cratermoon
> why is OpenAI pushing for regulation so much?

To raise legal and regulatory barriers to entry for competitors. The regulations they are pushing for are the sort that will enshrine their business model in law, locking out newcomers to the technology.


👤 Bostonian
As Adam Smith, author of "The Wealth of Nations" (1776), wrote: "People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices." The same is often true when businessmen talk to politicians.

That is a cynical take, and one I cannot prove, but I think it is part of the answer.