HACKER Q&A
📣 behnamoh

LLMs and Linux are just tools. Why must one be "safe" and restricted?


Karpathy's tweet [0] made me think about this question. If LLMs and Linux are both tools that let humans do more with their computers, why is there so much rage about the "safety" concerns of LLMs? Linux can be as dangerous as you want it to be. Why do people treat Linux as a tool, but when it comes to LLMs suddenly say "it can be dangerous"? I mean, more dangerous than Linux?

[0]: https://x.com/karpathy/status/1746609206889951642?s=20


  👤 thesuperbigfrog Accepted Answer ✓
>> Why do people treat Linux as a tool, but when it comes to LLMs suddenly say "it can be dangerous"? I mean, more dangerous than Linux?

It comes down to both determinism and how well people understand the technology.

LLMs do not work in a way we can reason about deterministically. We just put the components together, feed the model a lot of data, and then apply an input in the hope that a suitable output comes out. The individual components of the model are understood, but the behavior of the whole is much vaguer. When we ask how LLMs work at a macro level, the honest answer is that we are not really sure. If one part of the model is changed, what will the result be? If some of the training data is changed, what will the result be? Why do LLMs "hallucinate"?

Linux is just an operating system kernel. What it does and how it does it are well understood. We can reason about how it works, and when we swap out individual components, we understand what the results will be. It is easy to reason about how the pieces work and what they do as a whole.

People are fundamentally afraid of things that they do not understand. LLMs can be even more scary to non-technical people because of the way that artificial intelligence and robots have been portrayed in science fiction for decades.

"safe linux" as given in the tweet seems like a ridiculous idea. It quickly brings to mind HAL 9000 in "2001: A Space Odyssey" saying "I'm sorry, Dave. I can't do that." Maybe I'm just old, but I don't want AI or LLMs between me and my computing task unless I choose it to be there. Get off my virtual lawn and let me see the files.


👤 illuminant
Ignorance, confusion, incompetence, neglect.

Most of those are "your fault." We all make jokes about "stupid users" because users are responsible for thinking for themselves.

An LLM, however, literally produces and conveys formed "thoughts". Given the variance of minds and the nature of consumer society, some consider a bad LLM negligence on behalf of its creator.

The disagreement is philosophical. One side (which I agree with, Randian and all) says stupid is as stupid does: individuals are wholly responsible for what they do based on any input (an LLM, an encouraging friend, or otherwise). The other side believes civilization only stands where parties influencing one another (particularly for commercial purposes) bear some responsibility for accuracy and socially stable influence.

Who's right? The real question is: which world do you want to live in? One that rewards Darwinian competence, or one that enforces social standards?

Linux is only a tool that does what you tell it. An LLM is a tool whose users may do what it tells them.


👤 solardev
You can run uncensored models on your own computer all you want. Companies typically don't offer that because they want to be business-friendly and avoid controversy. Advertisers and users pull back otherwise (as with Twitter), unless they're specifically looking for controversy (like the chans) or porn (like Civitai).
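(For anyone who wants to try it, here is a minimal sketch using the Hugging Face transformers library. It assumes you have Python and an open-weights checkpoint available; the model name below is just a placeholder, not a recommendation.)

    # Minimal sketch: run an open-weights model locally, with no
    # provider-side filtering between you and the output.
    # The model name is a placeholder; any downloaded checkpoint works.
    from transformers import pipeline

    generate = pipeline("text-generation", model="mistralai/Mistral-7B-v0.1")
    print(generate("Linux is", max_new_tokens=40)[0]["generated_text"])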

A lot of US businesses still abide by old-fashioned Puritanical values and don't want anything remotely edgy. And after the Trump years, they probably want to stay away from politics and wrongthink too.

LLMs can communicate with people in a way that a bare utility like Linux can't. Every sentence and image is propaganda shaped by the desires of the trainers and the dataset. They are the public face of the company that trained them. When they're insufficiently reined in (like Microsoft's AI chatbot before they lobotomized her), they can embarrass the company.

I doubt uncensored AIs are a risk to much of anything aside from a company's public image and bottom line. The alignment folks worried about Skynet are a different crowd; their concern is not the same thing as the self-censorship these companies are doing right now.