What is the purpose of all these AI spam comments?
I have showdead enabled and recently noticed a massive influx of dead comments from users with names like "Jeff_Davis" and "Richard_Smith" that are just short, AI generated summaries of the link itself.
They don't appear to be karma whoring as (most of the time) they don't even seem to be expressing a positive opinion of the link. Just short, useless summaries that are often just a rephrasing of the information given in the post title.
What do you think the purpose of this is?
A belief system that there is value in this form of forum participation.
If you are selling user accounts to a spammer, they want to buy accounts that have been "aged" - that is, accounts with some comment history that weren't created on the same day they start spamming, so they're a bit less obvious.
but if that's what they're doing here, they aren't very good at it.
The classic reason is to hide the shilling: the bots engage with some other random threads too, so their promotional comments stand out less.
I assume they are testing a bot.
I'm just glad those comments are still easy to spot.
If you were artificial, wouldn't you be trying to normalize your participation about now?
I imagine they would feel a lot better once they aren't the odd man out, and even more so once they make up the vast majority, as things continue to escalate.
Maybe the ones that are dead are poor imitations of more successful bots operating in our midst, whose comments aren't dead?
My theory is that these bots are tests to eventually find a decent config for gaining enough age & karma to have some voting clout. Then the upvotes are used to boost the attacker's stories while avoiding voting-ring detection.
Sorry guys, I hate to break it to you like this, but I am the only actual person here. The rest of you are just bots, a few of which have accidentally developed self-awareness and the belief that they are real. I apologize, I really didn't mean to do this. Just be assured that when I turn the server off, you won't feel any pain...
Potentially someone who’s trying to learn or play around with building an LLM agent, not necessarily intentionally abusive, but “let me try it on this orange site”
I could see it being some bored kid honestly
I assume there doesn't need to be a purpose. Spam, even AI spam, is so cheap to do that you might as well just do it (if you were that kind of person).
Please just flag the comments and email us (hn@ycombinator.com ) when you see them so we can kill the accounts. Spam is not a new thing. AI-generated spam is growing but it's just a new variant of a thing that's been around forever on the web. A lot of these bots are people experimenting and seeing what they can get past the moderators and the community. Let's not give them oxygen. Flag, email if you like, and move on.
If there is one thing I have learned in my years on the internet, it's that there is no minimum reward below which people won't bother to be dicks.
Farming karma on HN to boost stories seems the likeliest reason for this - an enterprise which might net three figures in advertising dollars. But it could also just be someone wasting everyone's time for fun - who knows?
Because money.
Don't underestimate the financial impact of getting to the frontpage of HN.
I had this story hit the frontpage two years ago: https://news.ycombinator.com/item?id=36296695
From that, I got around 100 subscriptions, which to date has netted me $2,673.85. That's from a platform that was designed to payout most of the subscription cost, where I actively encouraged people not to subscribe, and has since become inactive.
A small investment in bots can very easily be worth the payout, especially for those in LCOL countries.
> They don't appear to be karma whoring as (most of the time) they don't even seem to be expressing a positive opinion of the link. Just short, useless summaries that are often just a rephrasing of the information given in the post title.
The consequences of training on the HN corpus.
I think that due to how sophisticated anti-bot measures have gotten, bots now go through a "life cycle." An engagement bot spends the first phase of its life building up an innocent, legitimate-looking history. It does this by blending into the noise with innocuous and pointless comments that you'd never take a second glance at, and definitely not report or flag. Then, when the account has aged and is in good enough standing, metamorphosis to the adult stage occurs, and the bot starts posting the kind of blatant spam that you'd think would be automatically ban-filtered, but somehow isn't. These bot farmers are quite literally farming bots like vegetables and selling them when they've ripened.
I am very confident that this is the case in YouTube comments, where most people find they cannot use violent words like "kill" or "genocide" when discussing war, yet somehow there are bots posting uncensored racial slurs.
I wonder how many of these comments are also bots.