Fake News is a problem, but what’s the answer?

Rumours, misinformation and disinformation have always been part of human societies and politics, conventionally spread through the established media. Recently, however, they have been overshadowed by blatant, outrageous lies and fabricated conspiracies: the easily debunked ‘Fake News’, now spread over social media to manipulate citizens and voters.

 

But how exactly did Fake News come about, and how might it be stopped?

 

Well, the phenomenon of Fake News illustrates how modern social media technology, specifically Facebook, where the overwhelming majority of fake news stories are spread, interplays with societal structures and human nature to produce this almost unpredictable outcome.

 

People want to see things that interest them. That includes their friends and family, but also their beliefs and political orientation. At the same time, Facebook wants to appeal to as many people as possible. Hence an algorithm, drawing on previously collected data about your past activity, likes, friends and even previously visited sites, shows you what Facebook thinks you want to see. That’s how the personalized ‘news feed’ you’re familiar with is made just for you.

Although this sounds lovely at first, it effectively sets up an echo chamber, in which the news articles and ‘facts’ you are shown are largely ones you probably agree with. Your pre-existing beliefs are reinforced, because as humans we tend to believe what we want to believe, what is repeated and what feels familiar. This gives you the sensation that you’re right, which makes you feel good: a phenomenon called cognitive ease.

The fact that Facebook and other social media allow instant sharing among friends means that potentially false articles are further echoed in groups of like-minded people. Now factor in that this affects a vast number of people: over 60% of adults in the US get news from social media, especially Facebook.

At the same time, because all articles are displayed in the same manner and the icon and name of the distributor are relatively small compared to the headline, it is really difficult to judge whether a source is legitimate, and in turn it becomes easier to just go with whatever is echoed in your bubble.

Now enter: Fake News sites, which have an incentive to craft purposefully insidious, false, outrageous headlines and statements to get as many clicks and shares in those filter bubbles as possible and make ad revenue. And then every single click amplifies the preference for the article in the algorithm used to create your ‘news feed’, and around and around it goes…
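
To make this loop concrete, here is a minimal, hypothetical Python sketch of an engagement-driven feed: articles matching your inferred interests rank higher, and every click pushes that topic’s weight up, so the next feed skews even further the same way. The topic labels, starting weights and the 1.2 boost are illustrative assumptions, not Facebook’s actual algorithm.

```python
from collections import defaultdict

# Interest weights inferred from likes, friends and past activity (assumed values).
interest_weights = defaultdict(lambda: 1.0)


def rank_feed(articles):
    """Order articles by how well they match the user's current interest weights."""
    return sorted(articles, key=lambda a: interest_weights[a["topic"]], reverse=True)


def register_click(article):
    """Each click amplifies the preference for that topic in future rankings."""
    interest_weights[article["topic"]] *= 1.2


articles = [
    {"headline": "Outrageous claim X", "topic": "conspiracy"},
    {"headline": "Sober policy analysis", "topic": "policy"},
]

# A few browsing sessions: the top, most agreeable item gets the click each time,
# so its topic keeps climbing and the feed narrows into a filter bubble.
for _ in range(3):
    feed = rank_feed(articles)
    register_click(feed[0])

print(dict(interest_weights))  # e.g. {'conspiracy': 1.73, 'policy': 1.0}
```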

All of the above factors interplay decisively and add up to the hardly foreseeable spread of heinous falsehoods: Fake News, a true problem, is born.

 

In the months before the 2016 US election, it was found that more fake news stories, such as claims that the Pope had endorsed Trump or that Clinton was running a child trafficking ring from a pizzeria, were shared than true ones. It has come to a point where Fake News not only has real consequences on a global scale, as it manipulates voters, arguably against their best interest, but also leads to specific, dangerous outcomes. One example: an armed man entered said pizzeria and fired his weapon, trying to ‘self-investigate’ that story.

 

So now we have to solve this insanity, right?

At least the current attention to Fake News shows that a lot of people are concerned, and several different approaches have been proposed.

 

After initially denying that Facebook’s enabling of Fake News had significant consequences, Mark Zuckerberg seems to have recognized the problem and Facebook’s role as a global publisher to about 1.86 billion people. Therefore, some measures have recently been implemented on Facebook. It’s now easier to report suspected fake news stories, which will be flagged but not censored if third-party fact checks dispute their legitimacy. Also, users are warned about, but not prohibited from sharing, those disputed stories.

These seem to be sensible ways to improve the technology platform: reporting fake news draws on the knowledge of crowds and, coupled with different fact-checking services, distributes the power of deciding what is true, thus democratizing the process. Merely informing and warning users about potentially false information preserves choice and is a technological fix that will not generate significant opposition, potentially making it more effective than banning certain news.
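
As a rough illustration of how such a flag-and-warn mechanism could look in code (a sketch under assumed names and thresholds, not Facebook’s actual implementation), disputed stories trigger a warning, but the user still decides:

```python
from dataclasses import dataclass, field


@dataclass
class Story:
    headline: str
    reports: int = 0                                  # crowd reports of suspected fake news
    disputed_by: list = field(default_factory=list)   # third-party fact checkers disputing it


def share(story: Story) -> str:
    """Warn about disputed stories, but leave the final sharing decision to the user."""
    if story.reports >= 100 and story.disputed_by:
        checkers = ", ".join(story.disputed_by)
        return f"Warning: '{story.headline}' is disputed by {checkers}. Share anyway?"
    return f"Shared: '{story.headline}'"


print(share(Story("Pope endorses Trump", reports=250, disputed_by=["Snopes", "PolitiFact"])))
print(share(Story("Local council approves new bike lanes")))  # shared without any warning
```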

 

In contrast to these, some more radical and questionable ‘solutions’ have been put forward.

The argument that human fact checking is no longer enough, even though the number of fact-checking sites has been growing rapidly, has led to the suggestion of, and heavy work on, automated, algorithmic fact checking. This technological fix, although in theory a good idea for accurately informing people, might have some inherent downfalls.

As the outcome of implementing a technology is always at least somewhat unpredictable, one has to consider that this could replace the individual judgement of multiple independent human fact checkers with a single technology and algorithm that determines single-handedly what is regarded as true. This could concentrate considerable power in the hands of the one entity in control of the technology. Combine this with the fact that the effectiveness of a technological fix in practice depends strongly on the entity applying it, the manner in which it is applied and the intentions behind it, and it becomes obvious that it could be misused.
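
To picture why this is worrying, here is a deliberately simplified, hypothetical sketch of automated fact checking: every claim is matched against one central database of verdicts, so whoever curates that database single-handedly decides what gets flagged. The example claims, the keyword-overlap rule and the 0.5 threshold are illustrative assumptions only.

```python
# One central database of verdicts: whoever curates it decides what gets flagged.
verdicts = {
    "pope endorses trump": "false",
    "clinton runs child trafficking ring from pizzeria": "false",
}


def check_claim(claim: str) -> str:
    """Flag a claim by crude keyword overlap with the verdict database."""
    words = set(claim.lower().split())
    for known, verdict in verdicts.items():
        known_words = known.split()
        overlap = len(words & set(known_words)) / len(known_words)
        if overlap >= 0.5:  # arbitrary similarity threshold
            return f"disputed ({verdict})"
    return "unverified"


print(check_claim("BREAKING: Pope endorses Trump for president"))  # disputed (false)
print(check_claim("Local council approves new bike lanes"))        # unverified
```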

 

This seems especially concerning in combination with recent, open calls from Clinton and others to ‘boost government response’ through legislative proposals censoring what social media can and can’t show users. In Germany, where fake news is also seen as an urgent issue, legislation imposing huge fines of up to €500,000 on social media sites, such as Facebook, that carry fake news is also being considered.

But who determines what is and isn’t legitimate news in these cases?

Obviously, this is a very slippery slope, as the proposals effectively give a small, often partisan group in government the authority to determine what counts as a permitted news source and to shut down dissenting voices. Speaking of political figures who love to shut down any objectively factual news that does not play in their favour, have you noticed the most powerful man on earth maligning any disagreeing media outlet on a daily basis?

It’s obvious that handing this power over from the people, who could self-regulate via, for example, the improvements made by Facebook, to a few people in government is dangerous!

 

So, what might be some viable solutions?

  • Supply information using independent, democratic fact-checking sites and the new Facebook implementations to help people discern fake news and possibly change their stance through reason.
  • Teach the new generation to understand their minds’ inherent biases and to evaluate what they read on the web. Some proposals for this have been made by California lawmakers.
  • Allow more rather than less free speech, and let objectively true ideas, with which there is no way to argue, win on a battleground of ideas.

 

 

[WORD COUNT Blog 2: 1151]
