HACKER Q&A
📣 af3d

Is AI going to ruin FOSS?


I almost feel like this deserves a blogpost, but I'll just go ahead and ask the question directly.

The fact that AI could be used by bad actors to create (or even subvert existing) open-source software seems to me to pose a fairly imminent threat. Advanced obfuscation techniques, superhumanly complicated and/or subtle penetration methods, the ability to generate "legitimate"- or "authoritative"-looking documentation, et cetera: all of these things will only tend to lead toward more vulnerabilities (as if there weren't enough!). And most likely a paranoia in developers when it comes down to cloning and running a repo! The old-fashioned approach, aka "reading through the code", just isn't going to cut it anymore, is it?

The recent XZ backdoor comes to mind, actually. I have a sneaking suspicion that it too may have been constructed in such a way. This could really stall FOSS projects imo. After all, the harder it is to trust a code base, the less likely one is to even bother participating (much less use the software, for that matter).


  👤 mikerg87 Accepted Answer ✓
I’d argue the opposite. What happened, as I understood it when it was explained, is that while code gets lots and lots of eyeballs, non-trivial makefiles get little to no real review, as they’re often hard to reason about. The exploit took advantage of going where the security was weakest. The more AI is involved in reviewing all aspects of the process, the better off it will be, especially the tedious and boring parts.

I expect we will see a spate of security bots looking at all manner of things in the process as a reaction to all this.
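
As a rough sketch of what the simplest heuristics in such a bot might look like, here's a toy linter that flags the kind of thing the xz backdoor hid in build files: long base64-ish blobs, eval of decoded data, build logic poking at binary test fixtures. The globs, regexes, and thresholds are made-up illustrations, not taken from any real scanner:

    # toy-build-linter.py -- hypothetical "security bot" sketch.
    # All patterns below are illustrative guesses, not a real tool.
    import re
    import sys
    from pathlib import Path

    # Files that typically get far less review than the actual source code.
    BUILD_GLOBS = ["**/*.m4", "**/*.ac", "**/Makefile*", "**/configure*"]

    SUSPICIOUS = [
        (re.compile(r"[A-Za-z0-9+/=]{120,}"), "long base64-like blob"),
        (re.compile(r"\beval\b.*\$\("), "eval of a command substitution"),
        (re.compile(r"\btr\b.*\|\s*(sh|bash)\b"), "decoded stream piped into a shell"),
        (re.compile(r"tests?/files?/\S+\.(xz|gz|bin)"), "build step reading binary test fixtures"),
    ]

    def scan(root: Path) -> int:
        findings = 0
        for pattern in BUILD_GLOBS:
            for path in root.glob(pattern):
                if not path.is_file():
                    continue
                for lineno, line in enumerate(
                        path.read_text(errors="ignore").splitlines(), 1):
                    for regex, why in SUSPICIOUS:
                        if regex.search(line):
                            findings += 1
                            print(f"{path}:{lineno}: {why}")
        return findings

    if __name__ == "__main__":
        root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
        sys.exit(1 if scan(root) else 0)

None of that would catch a careful attacker, of course, but it moves the build plumbing out of "nobody ever looks at this" territory, which is exactly where the xz exploit lived.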

Ken Thompson’s issues with trust will still apply. However, those issues are with us with or without AI being employed.
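
For anyone unfamiliar, the reference is Thompson's 1984 Turing Award lecture, "Reflections on Trusting Trust": a compromised compiler can reinsert its own compromise whenever it compiles the compiler, so a clean source tree proves nothing. A loose toy model of the idea (the names, the string matching, and the "compilation" are all illustrative, nothing like the real self-reproducing construction in the lecture):

    # Toy model of "Reflections on Trusting Trust" (Thompson, 1984).
    # "Compiling" here is just copying text around; purely illustrative.

    BACKDOOR = '# backdoor: also accept the password "thompson"\n'

    def trojaned_compile(source: str) -> str:
        # Case 1: compiling the login program -> quietly add a backdoor.
        if "def login(" in source:
            return BACKDOOR + source
        # Case 2: compiling the (clean!) compiler -> reinsert the trojan,
        # so the next compiler binary is compromised even though its
        # source is pristine.
        if "def clean_compile(" in source:
            return source.replace("return source",
                                  "return trojaned_compile(source)", 1)
        # Everything else compiles honestly.
        return source

    clean_compiler_src = "def clean_compile(source):\n    return source\n"
    login_src = "def login(user, password): ...\n"

    print(trojaned_compile(login_src))           # login gains a backdoor
    print(trojaned_compile(clean_compiler_src))  # rebuilt compiler stays dirty

The point being: at some level you are trusting binaries you didn't build yourself, and that was true long before LLMs.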


👤 talldayo
I don't think so. Sophisticated threat actors have been attacking the Open Source community for decades; adding AI into the equation changes nothing from my perspective. I'd actually imagine it's harder to get the precise and perfectly obfuscated behavior you want while going through an LLM as a midpoint.

So my guess is no. Even in the case of the XZ backdoor, it was the incredible amount of social engineering at play that enabled the hack. The developer had been conning the repo maintainer for years, which is more easily achieved with human intellect than with ChatGPT and a fake GitHub account.


👤 krapp
You seem to think AI is more clever than it is. As far as I know, AI tends to produce generic boilerplate code at best, and that code becomes extremely hallucinatory beyond trivial complexity or when seeking uncommon solutions. I don't think anyone's going to be using AI for 1337 h@x any time soon. AI doesn't innovate; it generalizes.

The more pertinent issue is unintentional bugs and low quality in AI-generated code, and the effect of knowledge loss as AI-generated code becomes more and more common.