So, what are the best measures we can take to mitigate the problem, or at least pick off some of the low-hanging fruit?
I agree with Durov’s approach.
> Every few decades there is a drastic change in how we view important societal and scientific problems. What used to be a ridiculous idea yesterday can become the predominant opinion today, only to turn into an obsolete notion tomorrow. In the history of human beliefs, change is the only constant. Most people living in 1921 shared views that today are considered quaint at best, and dangerous at worst. The chances that our present-day convictions will remain relevant by 2121 are slim. In fact, we won't even have to wait 100 years, as the speed of change is accelerating.
>
> A good example is how quickly humanity changed its mind over the origins of Covid. Just a year ago, the idea that the virus originated from a Wuhan Lab was dismissed as a conspiracy theory. Facebook, Twitter and other social media platforms blocked posts promoting the lab leak theory. Today, however, this theory is on its way to becoming the mainstream scientific view of how the virus originated. Such instances make combating fake news and misinformation particularly challenging. They can also fundamentally undermine people's trust in the neutrality of social media platforms, and jeopardize future efforts to fight misinformation.
>
> Telegram never blocked posts discussing the lab leak theory, because we didn't think it's our role to decide for our users what they should believe. At the same time, we felt that our users had the right to be informed about Covid by official sources that reflected scientific consensus. That's why we worked with 19 governments to help them reach out to every Telegram user in their countries with up-to-date information on the pandemic. Today we call upon more governments to join the Telegram anti-Covid initiative to make sure more people around the world get access to critical knowledge that can save lives.
>
> In my 20 years of managing discussion platforms, I noticed that conspiracy theories only strengthen each time their content is removed by moderators. Instead of putting an end to wrong ideas, censorship often makes it harder to fight them. That's why spreading the truth will always be a more efficient strategy than engaging in censorship.
https://bettermorningmessages.com/
Education via funny memes that remind people not to spread untrusted content is probably the best idea that doesn't require the owners of those apps to change anything.
In my opinion, one change that would be great across all social media platforms is a process for getting an account listed as a "trusted source," where only trusted sources can have content go viral/global. Enforcement is simple: if a trusted source is reported for spreading BS, it gets banned. Everyone else's content can only travel two degrees of separation from the poster, which is basically friend of a friend. It wouldn't eliminate the problem, but it could cut down on how far misinformation spreads.
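The two-hop rule described above is easy to state precisely. Here's a minimal sketch of how a platform might implement it; all the names (`User`, `can_see_post`, etc.) are hypothetical and just illustrate the idea, not any real platform's API:

```python
# Hypothetical sketch of the "trusted source + two degrees of separation"
# visibility rule proposed above. Not any real platform's API.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    trusted: bool = False                       # vetted "trusted source" account
    friends: set = field(default_factory=set)   # names of direct friends


def reachable_within(users, start, max_hops):
    """Return the set of user names within max_hops friend links of start."""
    seen = {start}
    frontier = {start}
    for _ in range(max_hops):
        # Expand one hop outward along friendship edges.
        frontier = {f for u in frontier for f in users[u].friends} - seen
        seen |= frontier
    return seen


def can_see_post(users, author, viewer):
    """Trusted sources reach everyone; other posts travel at most 2 hops."""
    if users[author].trusted:
        return True
    return viewer in reachable_within(users, author, max_hops=2)
```

For example, in a friendship chain a–b–c–d, a non-trusted post by `a` would reach `c` (friend of a friend) but not `d`; flagging `a` as trusted would let it reach everyone.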
This seems like a new level of assuming you’re smarter than other people and should be able to tell them what to think.
Most of the time a meme's premise is already wrong in order to make its point. It's important to check, and to show people with weaker media skills how to check, seemingly innocent funny political memes for truth. Most of them simply lie, misinterpret statistics, or use some other rhetorical trick to brainwash, meme by meme by meme.
It's important to make this visible to our Facebook-scrolling boomer parents, for example, so they think before they share.
It's interesting how deeply rooted the misinformation sometimes already is. In one case I lost a friend who believed the "Covid is a hoax" lie, because I repeatedly tried to convince him in our sports chat. It's sad. Intentional misinformation should be a crime. This has nothing to do with free speech.