HACKER Q&A
📣 tsevis

Are AI filters becoming stricter than society itself?


While experimenting with digital art and AI tools, I noticed how aggressively filters block historical, political, or artistic imagery. I wrote about how this impacts art, research, and cultural memory. Curious how others here see this balance between safety and censorship. https://tsevis.com/censorship-ai-and-the-war-on-context


  👤 Nextgrid Accepted Answer ✓
It’s the diversity and inclusion slippery slope again.

The initial idea was good and very much needed: eliminate (or at least heavily reduce) long-established racism and bigotry.

But the problem is that a lot of people started to abuse it as a virtue-signalling mechanism and/or a way to justify their jobs, leading to insanities like renaming the Git “master” branch.

I suspect AI safety is the same. There’s a grain of truth and usefulness to it, but no AI safety person will ever declare “we figured out how to make models safe, my job here is done”, so they keep pushing the envelope, even to ridiculous levels.


👤 sdotdev
I think in some places AIs aren’t strict enough; it’s all an imbalance.

👤 armchairhacker
AI isn’t smart enough to permit edge cases like artistic nudity, especially when people will find and abuse any exemption carved out for them. Because AI is unreliable, its censors are broad, to minimize the rare failures or bugs (“unintentional exemptions”) that people would otherwise find and exploit.

Despite this, AIs still get fooled. There are still jailbreaks for GPT-5, and nudity and piracy still get through on YouTube.

The only way to distinguish “good” uses from “bad” ones is competent judgment, which has never existed at scale.


👤 bjourne
Yes. For example, YouTube channels self-censor to an insane degree to avoid getting demonetized. Streamers don’t dare say “fuck” or “shit” in case the AI hears them.