HACKER Q&A
📣 usgroup

Is the weaponisation of ChatGPT now inevitable?


I'm not sure whether it's possible to obtain this model from OpenAI, or how hard it would be for unscrupulous actors to train their own on the same principles.

On the assumption that it is entirely possible, wouldn't the natural consequence -- amongst others -- be weaponisation for unscrupulous purposes? Won't it result in thousands of blogs mushrooming overnight from purported bots-cum-experts with some kind of engineered slant? Whole "online communities" for the unsuspecting to join, initially made up of nothing but bots "talking" to each other, bot-written journal papers and editorials, and so on.

What comes next?


  👤 maegul Accepted Answer ✓
My rough take is that the internet as we know it (web 1.0 or 2.0 depending on your taste/age) is over.

It ended for me when I found myself in the uncanny valley while moderating a Facebook group (without personally being on Facebook). A user was acting very bot-like. I went to ban them, but thought I should first look into the account, and found conversations they'd had with others who were also probably bots, given the strange shallowness of the discussion. The uncanniness was that "real" Facebook conversations already seemed strange and shallow to me, so I couldn't rule out the account being real. In that moment, I also couldn't know for sure whether I'd ever seen a real conversation or a real human on Facebook -- or, most darkly, whether the difference even mattered, because real people on a platform or medium amenable to "bot-ification" become "bot-ified" themselves and speak just like the bots. I'm old enough to remember feeling and talking about the shallowness of internet text messaging, and I heard those conversations in my head again.

From the excitement of connecting with interesting people in the 90s, this was a dark turn for me. That World Wide Web of interesting people is now dead. We may retain special guarded fortresses, and hopefully manage to keep them alive. But also, maybe good riddance: if the internet evolved in a monstrous direction, maybe it wasn't worth it in the first place, or it's at least worth letting go …


👤 crazygringo
I mean we're already basically there, and have been for a while now. There are lots of spam SEO sites that already just aggregate and repackage content in extremely low-value, barely-coherent ways. Not to mention the huge quantity of real people who spout nonsense and untruths in their personal blogs or on forum posts.

Using the internet at all is an exercise in filtering out what seems relevant and credible from all the rest of the garbage -- and it always has been. It's why we rely on major news brands, trusted personalities, and other high-reputation sites like Wikipedia, etc. And one of the main functions of search engines from the very start has always been to try to direct you to high-quality content over spam (which is an arms race and the engines aren't 100% accurate but they're certainly pretty good).

So I don't really see ChatGPT having much material impact at all in this department. It's somewhat similar to Photoshop in this regard -- whenever we see a photo these days, we're aware of the fact that it might be entirely fictional. But we've got enough sense to know that if it's a news photo in the New York Times it's probably real, while if it's something coming from a random Twitter account without any history of credibility, we should assume there's just as much chance of it being fake.

The internet has always been swamped with fake/garbage stuff, and adding more isn't going to change much, because we just continue to use the same critical thinking and skeptical eyes we've always used, evaluating which sources are actually credible.


👤 isaacfrond
Meta AI recently created a Diplomacy player (Cicero) -- not just managing the pieces on the board, but also negotiating with the other players, all in natural language. The program placed near the top of a large tournament, yet amazingly none of the other players caught on to the fact that they were playing against an AI.

It seems, then, that nothing stops such a program from interacting with people to further other goals. Even if an audio conversation is beyond it, it can send me messages on Instagram, LinkedIn, and what have you. It may try to convince me of whatever message the highest bidder wants me convinced of. That this may not succeed in all cases, or even most cases, makes little difference: gaining a 5% advantage would sway most elections.

Think about that for a second. From now on, whenever you talk to someone online whom you haven't vetted in real life, you cannot be sure it is an actual person. Soon, it will be likely that it is not.


👤 rchaud
ChatGPT is the death knell for discussion communities where mainstream internet users spend their time. That would be Reddit, Quora, Facebook and Twitter.

We don't need to ring that bell for search engines; search results were already drowning in Wikipedia-derived blogspam, and this won't change that much.

"Dead internet theory" posits that bot-generated comments and content are more common than we think. Reddit actually has subreddit simulators that are nothing but bots talking back and forth to each other, using models trained on actual Reddit comment threads. It's not ChatGPT-level, but it's frightening to see how bland most 'real' Reddit conversations are.
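For context, the original SubredditSimulator bots were simple Markov-chain models: they learn which words tend to follow which, then walk that table to emit plausible-looking comments. A toy sketch of the idea (the corpus here is a made-up stand-in, not real Reddit data):

```python
import random
from collections import defaultdict

def build_chain(corpus, order=2):
    """Map each `order`-word prefix to the words that follow it in the corpus."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain

def generate(chain, length=20, seed=0):
    """Start from a random prefix and repeatedly sample a follower word."""
    rng = random.Random(seed)
    prefix = rng.choice(list(chain))
    out = list(prefix)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(prefix):]))
        if not followers:
            break  # dead end: this prefix never continues in the corpus
        out.append(rng.choice(followers))
    return " ".join(out)

# Tiny stand-in corpus; the real bots trained on whole comment threads.
corpus = ("this is fine this is good this is fine and good and fine "
          "this is how bots talk to each other all day")
chain = build_chain(corpus, order=2)
print(generate(chain, length=15, seed=42))
```

No grammar, no meaning, just local word statistics -- which is exactly why its output being hard to tell from bland "real" comments is the unsettling part.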


👤 heartbeats
We've had regular GPT for several years now, and nothing has happened. People predicted this with the OG Transformer model from OpenAI, and yet the fake news apocalypse didn't come upon us.

> Won't it result in 1000s of blogs mushrooming over night from purported bots-come-experts with some kind of engineered slant?

I don't say this to be rude, I really don't, but have you Googled something like "reverse a string python" at any point in the past 5 years?

> Whole "online communities" for the unsuspecting to join initially made up of nothing but bots "talking" to each other,

This has also existed for a long time; see SubredditSimulator etc. There are also sites that scrape forums and repost the content under different usernames to commit ad fraud.


👤 mbitsnbites
I think there's a real risk that "information becomes useless" in the AI era. Neither text, speech/sound, images nor videos can be trusted, and it's far too easy and rewarding for unscrupulous actors to use AI to put out fake or biased news, comments, opinions, and media.

👤 aurbano
This has probably been said many times before -- but it also seems like a new era for digital surveillance and espionage: bots could "befriend" humans, creating full networks of fake people crafting any story imaginable in order to extract information. Maybe

👤 WheelsAtLarge
Yes, ALL tech eventually gets used for good and for bad. It follows human will.

Even in its current state, people are figuring out how to use it for their purposes. There's no need to train it or get any more info on how it works. What we see and get is enough.

I suspect that a few years from now the internet will be flooded with crappy AI content. I wonder how that's going to influence future AI, since future models will be trained on content created by AIs.


👤 wnkrshm
When the internet is flooded with crappy AI content, people will move to walled gardens even more, since they can afford the cost of moderation.

👤 pornel
An LLM is incredibly useful as an answer engine*. I can imagine it replacing the Google search box on every mobile device. Why search, browse, dismiss tracking and newsletter-signup popups, and then fish scraps of information out of an SEO fluff piece on a tiny screen, when the LLM can tell you the answer straight away and answer your follow-up questions?

This can be a serious threat to search engines, and by extension information-providing parts of the web, and web advertising (you will be getting ads from your LLM provider, and everyone on the web will hate that their content has been hoovered up for free).

Then we'll have spam trying to manipulate data used to train the bot. There will be commercial spam, but also political/ideological spam. And here's the scary part: to reject the spam, people training those AIs will have to compile a big table of what is true. This will guide the AI that people listen to, and which can be an excellent bullshitter. There is a massive potential for abuse here.

*) the current one is too often confidently incorrect, but that is probably fixable to a good-enough level.


👤 baxtr
I believe baseline expectations will simply increase, as Seth Godin said about this:

> If your work isn’t more useful or insightful or urgent than GPT can create in 12 seconds, don’t interrupt people with it.

https://seths.blog/2022/12/attention-trust-and-gpt3/


👤 wildpeaks
OpenAI added stricter rules this weekend (given that all the demos rely on re-using the session token from their public demo):

https://github.com/transitive-bullshit/chatgpt-api/issues/96


👤 dusted
The current version available at https://chat.openai.com/chat seems neutered to the point of being both less useful and less interesting than just searching Google.

👤 br1brown
I respond with a maxim: "Anything that can be used for surveillance will be used for surveillance" (Shoshana Zuboff).

I would also apply this to AI -- for example, systems that claim to predict whether or not someone will commit a crime.


👤 gremlinsinc
What would be the point of content farms if Google is basically replaced? I won't visit your blog to learn about weevils; I'll just ask ChatGPT and never leave the interface at all.

👤 bjourne
This already exists. ChatGPT perhaps increases the scale of the fakery, but it doesn't introduce a new concept.

👤 mejutoco
We are back in the Altavista days. We need a new Google, maybe a local one this time.

👤 ergonaught
Of course it is inevitable. Humans are involved.

👤 blockwriter
It seems only inevitable that ChatGPT will be used to exploit children.