There is plenty of literature on this, from fiction like https://en.m.wikipedia.org/wiki/In_a_Grove
through pop science like https://www.goodreads.com/book/show/789727.How_Real_Is_Real_
all the way to the deepest epistemological problems.
The main reason is similar to why AGI cannot be achieved — it is impossible to achieve something that you can’t or haven’t clearly defined first.
Humans have been lying or sharing incorrect information (whether they knew it was false or not) for thousands of years. The only difference now is that the volume of information being thrown at us has grown exponentially, and our brains haven't evolved to handle that.
So my thinking is that we shouldn't be trying to reel in the "bad" information; we should just be trying to reel in the speed at which information is coming at us.
People who spread misinformation aren't evil. They subscribe to a certain set of ideals and feel comfortable within their social circles when those ideals get reinforced. The main desire is to be validated as a person; the issue is that the internet has allowed those social circles to leak out and attract other people looking for validation.
All you need to do is give those social circles a natural way to organize through a technical solution, where people can get validation while having their ideas reinforced, except those ideas would be factually correct.
And on a hierarchy of needs, things like money, food, shelter, and health come well before ideology, so you just need financial incentives of that form to get people to organize. E.g. something like: "If you post a Wikipedia-style article that is peer reviewed and found to be true, you get tax breaks or payouts; if you post things that are factually false, you pay a fee."
There are multiple ways to solve this technically. All of the solutions, however, would require a government funded/run public key registry associated with your unique username, with a corresponding private key protected by a password plus biometrics, and every piece of social media content would need to be signed with that private key.
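As a rough illustration, here is a minimal sketch of what that signing and verification flow might look like, assuming Ed25519 key pairs and treating the registry as a plain in-memory mapping. The usernames and the `register`/`sign_post`/`verify_post` helpers are hypothetical, not any existing system; in practice the private key would be derived from and protected by the password + biometrics rather than returned directly.

```python
# Illustrative sketch only: registry structure, usernames, and helpers are assumptions.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# Hypothetical public key registry: username -> public key.
# In the proposal this would be government run; here it is just a dict.
registry: dict[str, Ed25519PublicKey] = {}

def register(username: str) -> Ed25519PrivateKey:
    """Generate a key pair and publish the public half under the username."""
    private_key = Ed25519PrivateKey.generate()
    registry[username] = private_key.public_key()
    return private_key  # real systems would protect this with password + biometrics

def sign_post(private_key: Ed25519PrivateKey, post: str) -> bytes:
    """Sign the post content so platforms can attribute it to the key holder."""
    return private_key.sign(post.encode("utf-8"))

def verify_post(username: str, post: str, signature: bytes) -> bool:
    """Check a post's signature against the registered public key."""
    try:
        registry[username].verify(signature, post.encode("utf-8"))
        return True
    except (KeyError, InvalidSignature):
        return False

# Usage
key = register("alice")
sig = sign_post(key, "example post")
print(verify_post("alice", "example post", sig))   # True
print(verify_post("alice", "tampered post", sig))  # False
```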
LLMs might help a little, if we could find a way to control for their bias.
Another strategy is comparing both extremes of an opinion. GroundNews has employed this strategy. Whether or not it's effective at preventing misinformation, I don't know. But it's an interesting idea.
The last and most important preventative technique is to teach people how to avoid confirmation bias. I think that is where most misinformation takes hold: in the gullibility of the general population.
https://soundcloud.com/future-hindsight/the-truth-sandwich-g...
One aspect of successful misinformation is that it establishes a desired framing of an issue. Merely repeating the framing, even to point out that it is incorrect, reinforces it. The way around this is to never repeat the framing and to use correct framing instead. Political speech nowadays is built on extremely weaponized framing. Once you hear this, you can't unhear it.
The only way to prevent people from sharing their experiences is called censorship.
What you're really asking is "can we stop people from lying?"
The answer is no.
> the code and cost to generate misinformation is essentially free
It's always been pretty low. People also want to believe things which feel good, especially when they're blatantly fake. That's not a technological problem, but a social one.
It's like the quote about people having trouble understanding ideas which threaten their income[1], but the payoff is emotional instead of financial.
[1]: https://www.goodreads.com/quotes/21810-it-is-difficult-to-ge...
Every shady AI marketing ad is misinformation.
It's absurd that Biden's administration went after individuals sharing bad takes on COVID, but not classic fraudulent advertising.
Very troubling to see such a huge mismatch between stated policy and government activity.