On the assumption that it is entirely possible, wouldn't the natural consequence -- amongst others -- be weaponisation for unscrupulous purposes? Won't it result in thousands of blogs mushrooming overnight from purported bots-cum-experts with some kind of engineered slant? Whole "online communities" for the unsuspecting to join, initially made up of nothing but bots "talking" to each other; bot-written journal papers and editorials; and so on.
What comes next?
It ended for me when I found myself in the uncanny valley while moderating a Facebook group (without personally being on Facebook). A user was acting very bot-like. I went to ban them, but thought I should first look into what the account was, and found conversations they had had with others who were also probably bots, given the strange shallowness of the discussion. The uncanniness was that "real" Facebook conversations already seemed strange and shallow to me, so I couldn't rule out this account being real. In that moment, though, I also couldn't know for sure whether I'd ever seen a real conversation or a real human on Facebook -- or, most darkly, whether the difference even mattered, because real people on a platform or medium amenable to "bot-ification" become "bot-ified" themselves and speak just like the bots. I'm old enough to remember feeling and talking about the shallowness of internet text messaging, and I heard those conversations in my head again.
From the excitement of connecting with interesting people in the 90s, this was a dark turn for me. And now that World Wide Web of interesting people is dead. We may retain special guarded fortresses and hopefully manage to keep them alive. But also, maybe good riddance: if the internet evolved in a monstrous direction, maybe it wasn't worth it in the first place, or it's at least worth letting go …
Using the internet at all is an exercise in filtering what seems relevant and credible out of all the rest of the garbage -- and it always has been. It's why we rely on major news brands, trusted personalities, and other high-reputation sites like Wikipedia. And one of the main functions of search engines from the very start has been to direct you to high-quality content over spam (it's an arms race, and the engines aren't 100% accurate, but they're certainly pretty good).
So I don't really see ChatGPT having much material impact in this department. It's somewhat similar to Photoshop in this regard -- whenever we see a photo these days, we're aware that it might be entirely fictional. But we've got enough sense to know that if it's a news photo in the New York Times it's probably real, while if it comes from a random Twitter account with no history of credibility, we should assume there's just as much chance of it being fake.
The internet has always been swamped with fake/garbage stuff, and adding more isn't going to change much, because we just continue to use the same critical thinking and skeptical eyes we've always used, evaluating which sources are actually credible.
It seems, then, that nothing is to stop such a program from interacting with people to further other goals. Even if an audio conversation is beyond it, it can send me messages on Instagram, LinkedIn, and what have you. It may try to convince me of whatever message the highest bidder wants me convinced of. That this may not succeed in all cases, or even most cases, makes little difference: gaining a 5% advantage will sway most elections.
Think about that for a second. From now on, whenever you talk to someone online whom you haven't vetted in real life, you cannot be sure they are an actual person. Soon, it will be likely that they are not.
We don't need to ring that bell for search engines; search results were already drowning in blogspam reworded from Wikipedia, and this won't change that much.
"Dead internet theory" posits that bot-generated comments and content is more common than we think. Reddit actually has subreddit simulators that are nothing but bots talking back and forth to each other, using training models based on actual Reddit comment threads. It's not chatGPT level, but it's frightening to see how bland most 'real' Reddit conversations are.
> Won't it result in thousands of blogs mushrooming overnight from purported bots-cum-experts with some kind of engineered slant?
I don't say this to be rude, I really don't, but have you Googled something like "reverse a string python" at any point in the past 5 years?
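For the record, the entire useful payload buried under each of those thousand-word SEO posts is one line of Python:

    s = "hello"
    print(s[::-1])  # slicing with step -1 reverses the string -> "olleh"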
> Whole "online communities" for the unsuspecting to join initially made up of nothing but bots "talking" to each other,
This has also existed for a long time; see SubredditSimulator, etc. There are also sites that scrape forums and replace the usernames to do ad fraud.
Even in its current state, people are figuring out how to use it for their purposes. There's no need to train it or get any more info on how it works. What we see and get is enough.
I suspect that a few years from now the internet will be flooded with crappy AI content. I wonder how that's going to influence future AI, since future models will be trained on content created by AIs.
This can be a serious threat to search engines, and by extension to the information-providing parts of the web and to web advertising (you will be getting ads from your LLM provider, and everyone on the web will hate that their content has been hoovered up for free).
Then we'll have spam trying to manipulate the data used to train the bot. There will be commercial spam, but also political/ideological spam. And here's the scary part: to reject the spam, the people training those AIs will have to compile a big table of what is true. That table will guide an AI that people listen to, and which can be an excellent bullshitter. There is massive potential for abuse here (see the sketch below).
*) the current one is too often confidently incorrect, but that is probably fixable to a good-enough level.
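To make that abuse surface concrete, here is a deliberately crude, purely hypothetical sketch of gating training data on a curated truth table. None of this reflects how any real LLM pipeline works; the table, claims, and function names are all invented for illustration:

    # Hypothetical curated table: claim -> whether curators deem it true.
    CURATED_CLAIMS = {
        "the earth is flat": False,
        "water boils at 100 c at sea level": True,
    }

    def keep_for_training(text: str) -> bool:
        # Drop any sample containing a claim the curators marked false.
        lowered = text.lower()
        return not any(
            claim in lowered and not ok
            for claim, ok in CURATED_CLAIMS.items()
        )

    samples = [
        "Scientists agree water boils at 100 C at sea level.",
        "Wake up: the earth is flat and they hide it.",
    ]
    training_set = [s for s in samples if keep_for_training(s)]
    # Whoever maintains CURATED_CLAIMS decides what the model ever sees --
    # that table, not the model, is where the manipulation pressure lands.

The point of the sketch: once such a table exists, capturing it (commercially or politically) is far cheaper than attacking the model itself.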
> If your work isn’t more useful or insightful or urgent than GPT can create in 12 seconds, don’t interrupt people with it.
https://github.com/transitive-bullshit/chatgpt-api/issues/96
I would also apply this to the case of AI systems, such as those meant to predict whether or not someone will commit a crime.