HACKER Q&A
📣 bjourne

Thoughts on censorship resistance vs. offensive content?


Suppose someone creates a peer-to-peer, distributed YouTube clone. The selling point would be that no single party would be able to kick people off the platform for publishing videos they don't like. There would also be no recommendation algorithms drawing viewers towards privileged channels (or perhaps users would select their own recommendation algorithms, as sketched below).
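
Just to make the "users select their own recommendation algorithms" idea concrete, here's a rough sketch (not from any real project; all names are made up): the client treats ranking as a pluggable, purely local function, so which algorithm to apply is the viewer's choice rather than the platform's.

    # Hypothetical sketch: client-side, user-selected ranking.
    # Video, chronological, most_replicated, build_feed are all invented names.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Video:
        title: str
        published: float     # unix timestamp
        peer_seeders: int    # how many peers currently host the video

    RankFn = Callable[[List[Video]], List[Video]]

    def chronological(videos: List[Video]) -> List[Video]:
        return sorted(videos, key=lambda v: v.published, reverse=True)

    def most_replicated(videos: List[Video]) -> List[Video]:
        return sorted(videos, key=lambda v: v.peer_seeders, reverse=True)

    # The client, not the network, decides which ranking function to apply.
    def build_feed(videos: List[Video], rank: RankFn = chronological) -> List[Video]:
        return rank(videos)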

Technically I think it's solvable, but how do you deal with content that "should" be taken down when there is no single controlling party?

Freenet is a platform like that, but from their help page:

"I don't want my node to be used to harbor child porn, offensive content, or terrorism. What can I do?

This is a problem that sadly any censorship-resistance tool faces. If the capacity to remove content existed, it might only be used to remove things one finds offensive, but it could be used to remove anything. From a technological point of view one cannot have censorship-resistance with exceptions. Freenet is merely a tool that by itself doesn't do anything to promote offensive content. How people choose to use the tool is their sole responsibility."

What they're saying is "it's impossible," so most people don't use Freenet because they don't want to help child porn distributors. For the same reason, people wouldn't use a distributed YouTube. People clearly want some moral norms/regulations/policies, but not so much that they get in the way of free speech.

Is the problem impossible to solve? What if there is no anonymity in the network? I have no idea how to solve it myself, but I'm thinking that with blockchain tech, machine learning, and crowd-sourcing it could be solvable. Any thoughts are very welcome.
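
One purely hypothetical illustration of the "no anonymity / crowd-sourcing" direction: if takedown flags are weighted by some scarce, verified credential (or stake) rather than by raw account count, then minting extra identities buys an attacker nothing. The names, weights, and numbers below are invented for the example.

    # Hypothetical sketch: flag votes weighted by a scarce credential.
    from typing import Dict

    def weighted_flag_score(flags: Dict[str, bool], weight: Dict[str, float]) -> float:
        """Return the fraction of total voting weight that flagged the content."""
        total = sum(weight.get(voter, 0.0) for voter in flags)
        if total == 0:
            return 0.0
        flagged = sum(weight.get(voter, 0.0) for voter, f in flags.items() if f)
        return flagged / total

    # Identities without a verified credential carry zero weight.
    weight = {"alice": 3.0, "bob": 2.0, "sybil-1": 0.0, "sybil-2": 0.0}
    flags = {"alice": False, "bob": False, "sybil-1": True, "sybil-2": True}
    print(weighted_flag_score(flags, weight))  # 0.0 -- fake identities don't count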


  👤 tombert Accepted Answer ✓
I don't think the risk of child porn is why Freenet hasn't really taken off; I think it's because it's slow and its content is incredibly limited, largely catering to hobbyists.

I'm not 100% sure how you'd be able to evict offensive/illegal content in a fully distributed and decentralized way, especially while keeping free-speech guarantees. If you used some kind of consensus/voting system, you would risk a Sybil attack (or something similar) from entities that simply disagree with an opinion posted. A toy example of that failure mode follows.
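
To make the Sybil risk concrete (this is a toy example, not any real protocol): with one-identity-one-vote takedown polls and free identities, an attacker who dislikes a video just mints enough fake voters to flip the result.

    # Toy illustration of why naive per-identity voting is Sybil-vulnerable.
    import uuid

    def takedown_vote(votes: dict) -> bool:
        """votes maps identity -> True (remove) / False (keep)."""
        remove = sum(1 for v in votes.values() if v)
        return remove > len(votes) / 2

    honest = {f"user-{i}": False for i in range(10)}        # 10 real peers say "keep"
    sybils = {str(uuid.uuid4()): True for _ in range(11)}   # 11 freshly minted identities

    print(takedown_vote(honest))                 # False: content stays
    print(takedown_vote({**honest, **sybils}))   # True: the attacker censors it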

I'm not saying it's impossible, but virtually every system for handling this (thus far) has required an arbiter in the middle to avoid such problems.

EDIT: I haven't read the paper yet, but with regard to Sybil attacks in particular, one of the creators of the Kademlia algorithm wrote a follow-up called 5ttt, or Tonika, to help mitigate these issues. [0] I don't know if it will help with what you want to do, but it's probably worth reading.

[0] https://arxiv.org/pdf/0909.2859.pdf