The fact that AI could be used by bad actors to create (or even subvert existing) open-source software seems to me to pose a fairly imminent threat. Advanced obfuscation techniques, superhumanly complicated and/or subtle penetration methods, the ability to give a project "legitimate" or "authoritative" looking documentation, et cetera: all of these things will only tend to lead toward more vulnerabilities (as if there weren't enough!), and most likely to paranoia among developers when it comes to cloning and running a repo. The old-fashioned approach of reading through the code just isn't going to cut it anymore, is it?
The recent XZ backdoor comes to mind, actually. I have a sneaking suspicion that it too may have been constructed in such a way. This could really stall FOSS projects imo. After all, the harder it is to trust a code base, the less likely one is to even bother participating (much less use the software, for that matter).
I expect we will see, as a reaction to all this, a spate of security bots scrutinizing all manner of things throughout the process.
Ken Thompson's "Reflections on Trusting Trust" concerns will still apply. However, those issues are with us with or without AI being employed.
So my guess is no. Even in the case of the XZ backdoor, it was the incredible amount of social engineering at play that enabled the hack. The attacker had been conning the repo maintainer for years, which is more easily achieved with human intellect than with ChatGPT and a fake GitHub account.
The more pertinent issues are unintentional bugs and low quality in AI-generated code, and the knowledge loss that follows as AI-generated code becomes more and more common.