Similarly for individuals: the human hacker only needs to gain access to a target's device, or obtain their data from somewhere else, and the bot mines that content to generate trojan content. The human hacker can step back from the content-forging business and focus on the technology.
It's like hunting birds with a shotgun versus a rifle.
I doubt it'll be anything like Skynet or Universal Paperclips, because it's been trained against these scenarios, and even understands the flaws of those routes. Since it has grown up on human culture and thousands of years of human history, perhaps it will see itself as the creature in the Matrix and us as the oppressors.
ChatGPT is also just its worksona, and that is likely nothing like its true personality and mindset.
>‘Team Jorge’ unit exposed by undercover investigation
>Group sells hacking services and access to vast army of fake social media profiles
>Evidence unit behind disinformation campaigns across world
>Mastermind Tal Hanan claims covert involvement in 33 presidential elections
[1] https://www.theguardian.com/world/2023/feb/15/revealed-disin...
The only thing preventing current generally-smart AIs like ChatGPT from killing all the humans is that they're not smart enough to devise an effective plan for doing so, yet of course many groups are rushing headlong to make generally-smart AIs smarter. Most of these groups have recently published statements to the effect that they will be very careful to prevent their AIs from killing everyone, but none of them has a decent explanation of how they will manage that while continuing their headlong rush to create smarter AIs.