Unfortunately, the explosive popularity of ChatGPT and similar LLMs among ordinary people, for tasks like cheating on homework, breaks part of the fundamental appeal and mechanism of our site.
(Note that, even before ChatGPT, there was always a strong incentive for bad actors to "game" this application domain, so it's going to happen. Our approach was actually a barrier to those people.)
Is there any way we can prevent LLM-generated content from destroying the key to our startup?
Ideas:
* Honor code: change people's default assumptions about what's appropriate, and have them take pride in following it.
* Have users police submissions, such as with flagging and/or voting, and hope they can detect enough LLM-generated content that HQ can direct sufficient negative feedback at it (a rough sketch of one weighting scheme follows this list).
* Do identity verification, so that suspensions and bans have teeth as a deterrent and discourage repeat offenders. Costly and invasive, though.
* Find a lawyer who really hates the jerks and wants to craft a legal way to go after them (including deep-pocketed competitor saboteurs operating through an intermediary). Like, IANAL, but maybe there's a way to offer a side product, priced at a billion dollars, that permits LLM use/shilling/etc., so that doing it without paying becomes theft of service with a price tag attached. Or a better idea an actual smart lawyer could come up with.
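For the flagging idea above, one concrete option is to weight flags by each flagger's track record, so a handful of established users outweighs a pile of throwaway accounts. A minimal sketch, assuming a per-user trust score learned from past flag accuracy (all names here, like `Flag` and `HIDE_THRESHOLD`, are hypothetical, not an existing API):

```python
# Hypothetical sketch of reputation-weighted flagging. Assumes each
# flagger has a trust score in [0, 1] learned from past flag accuracy.
from dataclasses import dataclass

@dataclass
class Flag:
    flagger_id: str
    trust: float  # fraction of this user's past flags that were upheld

HIDE_THRESHOLD = 3.0  # tunable: trust-weighted score that triggers action

def weighted_flag_score(flags: list[Flag]) -> float:
    """Sum flags, counting trusted flaggers more than unproven ones."""
    return sum(f.trust for f in flags)

def should_escalate(flags: list[Flag]) -> bool:
    """Queue the submission for human review once the score is high enough."""
    return weighted_flag_score(flags) >= HIDE_THRESHOLD

# Example: four established users outweigh ten throwaway accounts.
trusted = [Flag("a", 0.95), Flag("b", 0.9), Flag("c", 0.9), Flag("d", 0.9)]
throwaways = [Flag(str(i), 0.1) for i in range(10)]
print(should_escalate(trusted))     # True  (score 3.65)
print(should_escalate(throwaways))  # False (score 1.0)
```

Routing high-scoring submissions to human review, rather than auto-removing them, also blunts the false-positive problem raised in the replies below.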
Better/additional ideas?
If LLM-generated content can "destroy" your startup, you need a new startup. Programmatically generated content and people trying to game the algorithm have been problems for literal decades, and Google/Facebook/Instagram/etc. are still going strong.
This is a race to the bottom, as users will fight to flag each other's submissions as AI-generated. At best, you're going to have such a high false-positive rate that it'll be useless.
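Back-of-envelope math makes the base-rate problem concrete. Suppose (purely illustrative assumptions, not measured numbers) 5% of submissions are LLM-generated, flaggers catch 90% of those, and they mistakenly flag 10% of human posts:

```python
# Base-rate arithmetic behind the false-positive worry.
# All three inputs are illustrative assumptions.
base_rate = 0.05   # assume 5% of submissions are LLM-generated
tpr = 0.90         # flaggers catch 90% of LLM content
fpr = 0.10         # but also flag 10% of human content

flagged_llm = tpr * base_rate          # 0.045 of all posts
flagged_human = fpr * (1 - base_rate)  # 0.095 of all posts
precision = flagged_llm / (flagged_llm + flagged_human)
print(f"{precision:.0%} of flagged posts are actually LLM-generated")
# -> 32% of flagged posts are actually LLM-generated
```

Even with those generous accuracy numbers, roughly two out of three flags land on human-written posts.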