HACKER Q&A
📣 hackyhacky

Long-term approach for distinguishing AI content?


The way AI is going, it seems inevitable that the internet will soon be awash in untold quantities of AI-generated content: low in value, yet impossible to distinguish from human-generated content. This will cause problems for anyone who wants to restrict their attention to authentic content. For example, curl [0] has been receiving bogus AI-generated bug reports, which uselessly consume maintainer time. And that's to say nothing of social network sites, already full of low-value content, whose signal-to-noise ratio will drop even further as effective filtering becomes impossible.

So I turn to the HN community: what's the plan? Five years from now, how will the internet still be usable?

- Will we need to have locked-down hardware platforms that cryptographically sign all "authentic" content?

- Will the promise of the open internet be lost as we restrict communication to only those sources we know personally?

- Will we rely on "trustworthy" third parties to mediate content via opaque means?

It seems like something big has to change and I'd love your perspective.

[0] https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-for-intelligence/


  👤 soueuls Accepted Answer ✓
We will just change our mindset: instead of filtering out crap, we will filter in genuine content.

You can browse Twitter using only custom lists, for example.
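The "filter in" idea amounts to an explicit allowlist: instead of trying to detect machine-generated posts, keep only posts from authors you already trust. A minimal sketch (the `Post` type and author names are made up for illustration):

```go
package main

import "fmt"

// Post is a hypothetical content item tagged with its author.
type Post struct {
	Author, Body string
}

// filterIn keeps only posts whose author is on an explicit allowlist --
// the inverse of spam filtering: nothing gets through by default.
func filterIn(posts []Post, trusted map[string]bool) []Post {
	var kept []Post
	for _, p := range posts {
		if trusted[p.Author] {
			kept = append(kept, p)
		}
	}
	return kept
}

func main() {
	feed := []Post{
		{"alice", "hand-written trip report"},
		{"spambot", "10 Incredible Facts..."},
	}
	trusted := map[string]bool{"alice": true}
	fmt.Println(filterIn(feed, trusted)) // only alice's post survives
}
```

The trade-off is the one the original question worries about: an allowlist scales by personal trust, not by openness, so discovery of new genuine sources becomes the hard problem.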


👤 bell-cot
The internet already is awash in untold quantities of crap-quality content, generated by humans and non-AI programs. And that's been the case for quite a few years now.

Emotionally, "AI-generated content" makes a good boogeyman.

But rationally, "you can have your free cake (internet content), and trust it too" was always a techno-utopian/libertarian delusion. If that's your "promise of the open internet", then it was always a lost cause.