Nowadays the best way to communicate is through the web, which offers an unlimited amount of information from all over the world and from endless sources. But not every piece of information available on the web is reliable; in fact, a high percentage of its content is false. Anyone can easily create an article on any topic containing just what they want other users to believe, and users can also edit others' posts on pages such as Wikipedia. The worst part is that users believe everything there is on the web and go day after day sharing, commenting, and publishing all their "knowledge" about it, which helps create a big wave of misleading content.
The influence and impact of fake news reside in the users. They are the ones who share and spread any type of information, including fake news. According to Nicolas A. Glavin: "the bypassing of traditional media outlets has led to a flooding of the marketplace of ideas, where misinformation, extremist content and state-sponsored disinformation have proliferated". In other words, the excess of freedom that social media usually gives users to publish anything they want, without even a simple review of whether the information is accurate, has created a big increase in biased or exaggerated information, as well as false posts based on those types of articles. It gets to the point where users either believe everything on the web or no longer know what to believe.
Also, according to a report by J.M. Berger for George Washington University's Program on Extremism, the Twitter accounts of white nationalists and neo-Nazis have grown by 600 percent since 2012. Those types of accounts are, potentially, a bomb of hate speech and extremist content waiting to explode. Indeed, they may have a negative effect on users, especially at a time when people believe everything there is on the web. As a chain effect, those users will start publishing in reaction to the posts from the malevolent accounts. Later, other users will read those reaction posts and spread the information further, and still others will write articles about it, which will be read by common users; in the end, there will be a society full of misinformation and probably full of hate. This clearly shows how easy it is to influence a whole society with a few extremist or biased posts, and how users lack the critical thinking to analyze the information given to them and recognize it as coming from an untrustworthy source.
One possible solution would be the use of algorithms similar to those that filter inappropriate content. According to Annemanie Pooling, "Anyone with one of the popular email providers has likely scanned his/her spam folder for a confirmation email or response from a friend that was banished to this folder accidentally. That's because each email provider has been trained on the domains and headline formats that most commonly indicate spam to mark it as such before users even see it". Social networks could incorporate similar algorithms that would warn users about dangerous or untrustworthy content. Social networks could also make an extra effort to get rid of that content entirely; for example, Facebook has been more aggressive in taking down pro-ISIS content, which suggests that this strategy can work.
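The spam-filter idea described above can be sketched in a few lines. What follows is a hypothetical, minimal illustration, not any email provider's or social network's actual system: a tiny word-frequency scorer that learns which words show up more often in known-bad examples and flags new posts that lean that way. The function names and the training data are invented for the example.

```python
from collections import Counter

def train(labeled_posts):
    """Count word frequencies per label ('spam' or 'ok')."""
    counts = {"spam": Counter(), "ok": Counter()}
    for text, label in labeled_posts:
        counts[label].update(text.lower().split())
    return counts

def score(counts, text):
    """Positive score means the post's words lean spam-like."""
    total = 0
    for word in text.lower().split():
        total += counts["spam"][word] - counts["ok"][word]
    return total

def is_suspicious(counts, text):
    """Flag a post whose words appear more often in spam examples."""
    return score(counts, text) > 0

# Example training data (invented for illustration).
counts = train([
    ("you won a free prize click now", "spam"),
    ("free money click here", "spam"),
    ("meeting notes from class today", "ok"),
    ("see you at lunch today", "ok"),
])
```

Real systems are far more sophisticated, but the principle is the same: the filter acts before users ever see the content, exactly as the quoted description of email spam folders suggests.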
Another possible solution relies on governments. According to Nicolas A. Glavin, "governments have a role in policing fake news as well: Turkey has been flagging pro-Kurdish material, Germany has pressured Facebook to target xenophobic racist content and abusive statements from social media". This demonstrates that governments have some responsibility and influence in regulating the content shown to their citizens, or in avoiding malevolent content. Another recent example is Venezuela: the president of Venezuela has recently prohibited the broadcast of the CNN channel and any related content because, in his view, it portrays the country in a wrong and inappropriate way.
According to Annemanie Pooling, "It is no secret by now that most social networks rely on the emotional responses of users to share and spread content. But if users are trained to identify fake or incendiary posts, including that which isn't really news, it might be possible for platforms to break this cycle." This is, in fact, the main solution to the problem, and it can be made even stronger by combining all the possible solutions and applying them together. So if users learn to distinguish between reliable and unreliable content; if they use their critical thinking to analyze whether the content in front of them is biased or from an untrustworthy source; if governments accept their responsibility to protect their people from hate content; if companies take a step forward to apply filters for false content and make it easy for users to report such content; if we all follow this plan, we will get rid of fake news once and for all.