Is it time to discourage such posts in the HN rules?
Note: this is a different problem from people posting generated content for astroturfing purposes, etc. The author is not hiding the fact that ChatGPT has been used.
Note 2: please don’t ask ChatGPT for me.
- People who declare that they used ChatGPT et al. for commenting.
- People who hide that they used ChatGPT et al. for commenting.
You expressly want to go after the former group with new rules that would punish them for those declarations, essentially forcing them into the second group? And you think this will somehow improve the site?
I don't understand this at all. You're suggesting punishing honesty and rewarding people who misattribute their contributions. Talk about a perverse incentive.
I don't know how many would agree with me, but I would equate "I consulted an LLM and it said/spewed..." with a low-effort post by a human who didn't use an intermediary.
ChatGPT answers confidently 100% of the time, but once you account for that, it is an amazing tool for getting work done. It can give you 80% of a job instantly: say you are writing a script or an email, it will do that for you.
Every once in a while it will output a nonsensical answer; if you question it, it will either fix it or admit the answer was nonsense.
For example, it once told me I should connect modules together to create a graph to import, which was clearly wrong; you have to be able to spot that kind of nonsense in its output.
Otherwise, it is amazing for outlining ideas, getting started on projects, and revising text.
I don't know, probably not. It's just like AI-generated images: they were everywhere for a while, and now they have a negative impact on the reputation of anyone using them.
> Is it time to discourage such posts in the HN rules?
No, because sometimes they can be useful. If you don’t like a particular comment, downvote it or ignore it.