Scoring FB and Google's Fake News Solutions And A Proposal

This article first appeared in Tech in Asia on 19th April 2017.

Over a week ago, I made a prediction about how Google and Facebook would respond to the ongoing controversy over fake news, and outlined the steps they should take. In my previous piece, I advocated a three-point strategy for dealing with fake news: Step 1, whitelist authoritative news sources; Step 2, flag potential fake news and limit its reach; Step 3, use journalists as moderators with the power to overrule Step 2. Let's check out the scorecard.

Google has done the Googly thing by utilizing the work of third parties like Politifact and Snopes.com. The new feature gives a truthfulness rating to each Politifact or Snopes search result on hot-button news topics; the rating appears beneath the meta description and ranges from True to False.

My suggested solutions focused on Google News, so I'm surprised that Google has instead moved to label questionable news items in search results with the conclusions of fact-checkers. Google has evidently decided that this is an enormous problem. Unfortunately, they have only taken a timid half-step forward. By outsourcing the fight against fake news to Politifact and Snopes, Google has replaced one problem with another: they are now associated with the outcomes of the fact-checkers' work while having no control over the conclusions. It is ultimately a weak response.

Facebook, on the other hand, is instituting exactly the measures I predicted. So far, Facebook has reassured the public that they will enforce existing ad policies more aggressively, preventing fake news publishers from advertising their tall tales. Stories flagged as fake news by users cannot be promoted via ads either. Flagged stories may also be sent to partner news organizations to be investigated, and if a story is judged to be fake, its reach will again be dampened. These steps are not 100% sufficient, but they are the right first steps.

Facebook also launched the Facebook Journalism Project and the News Integrity Initiative, both of which aim to combat fake news by researching new products, running public service announcements, and educating the public to increase trust in news. As anyone who has seen overzealous signs regulating public behaviour will tell you, this sort of response is frequently necessary, but it often does not work well. What is needed is a way to nudge users in the right direction.

The Problem

The main problem with how Google and Facebook are approaching fake news is that they perceive it as a rational problem with a technical solution. This misunderstands the nature of fake news.

People seek news that validates their opinions. Studies show that when presented with evidence disproving a belief, people actually harden that belief. Against that backdrop, Google's inclusion of fact-checker ratings only entrenches people on both sides of an issue. Ultimately, nothing is solved, and everybody on the opposite side of Politifact and Snopes will still be angry.

I think Facebook did a better job than Google, though it is surprising that these measures weren't put in place long ago. It seems natural that Facebook would want to moderate the spread of news that elicits an overwhelmingly negative reaction. However, when it comes to the currency of engagement, Facebook opts for the money. As former Facebook product manager Bobby Goodlatte said, "Unfortunately News Feed optimizes for engagement… and BS is highly engaging".

I disagree with Facebook that there is a technological solution to identifying fake news. The signals Facebook relies on, such as a low sharing rate or a high rate of negative comments, look the same for fake news and for real news that people simply dislike.

Relying on people to discern the difference between real and fake news so that you can build an algorithm to tell them apart is putting the cart before the horse. The incentives and signals are all wrong. Users are prone to be duped by fake news that they agree with, and likely to judge news that they disagree with as fake. We are asking algorithms to learn black from white by observing how angry we get at colours.

The Solution

In my view, the best solution has to make use of behavioral insights. A Yale study found that a high level of scientific curiosity correlated with a high level of flexibility in beliefs, even for controversial topics.

An unfair consequence of the fake news controversy is that people expect Google and Facebook to 'correct the record' for everyone who disagrees with them. A clever solution would place the initiative back with each user and encourage them to discover perspectives on issues they had not considered before. In this approach, the truth of a claim is less important than encouraging users to consider different viewpoints.

Let me elaborate. Google and Facebook have already experimented with showing users additional stories after they click on a link in search results or the News Feed. Rather than showing closely-related stories, users could be shown stories that are only loosely related, but emotionally opposite to the original story.

Say, for instance, that a world leader is quoted as saying he hates animals. When a user clicks on a news story about this heinous outrage, an old news report would appear beneath it, describing his phobia of dogs and how, despite this fear, he once sat through a state visit with dogs present (a true story, by the way; guess the world leader). The opposite story does not address the original story directly, but it does something even better: it invites users to try on a different emotional response.
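To make the mechanics concrete, here is a minimal sketch of how such a recommender might pick an 'emotionally opposite' story. Everything in it, the Story fields, the scoring heuristics, and the thresholds, is an illustrative assumption of mine, not a description of anything Google or Facebook has built.

```python
# Hypothetical sketch: given the story a user just clicked, pick a candidate
# that is only loosely related by topic but carries the opposite emotional tone.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    topics: set[str]      # e.g. {"world-leader-x", "animals"} (assumed labels)
    sentiment: float      # -1.0 (negative) .. +1.0 (positive), assumed to exist

def topic_overlap(a: Story, b: Story) -> float:
    """Jaccard overlap between the two stories' topic sets."""
    if not a.topics or not b.topics:
        return 0.0
    return len(a.topics & b.topics) / len(a.topics | b.topics)

def pick_opposite_story(clicked: Story, candidates: list[Story]) -> Story | None:
    """Prefer candidates that share some topic but flip the emotional tone."""
    best, best_score = None, 0.0
    for c in candidates:
        overlap = topic_overlap(clicked, c)
        if not 0.0 < overlap < 0.8:          # loosely related, not a near-duplicate
            continue
        tone_flip = max(0.0, -clicked.sentiment * c.sentiment)  # opposite signs score high
        score = tone_flip * (1.0 - overlap)
        if score > best_score:
            best, best_score = c, score
    return best

# Example mirroring the anecdote above:
clicked = Story("Leader quoted saying he hates animals",
                {"world-leader-x", "animals"}, sentiment=-0.9)
candidates = [
    Story("Leader sat through a state visit with dogs despite his phobia",
          {"world-leader-x", "dogs", "diplomacy"}, sentiment=0.7),
    Story("Leader hates animals, sources confirm",
          {"world-leader-x", "animals"}, sentiment=-0.8),
]
print(pick_opposite_story(clicked, candidates).title)
```

The point of the sketch is the shape of the objective: reward an opposite emotional tone and penalise near-duplicates, rather than trying to judge truth at all.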

In a face-to-face social setting, there is too much personal reputation at stake for someone to perform an about-face on something highly personal. Social media, however, is perfect for this: someone who is open to an opposing viewpoint doesn't have to bow to peer pressure to conform.

At first, users will be annoyed that Google and Facebook are polluting their pure and righteous filter bubble with the opposite side. We'll let these users opt out with a click. For everyone else, Google and Facebook have plenty of time to collect and analyse data on what motivates people to open their minds. As they get better at it, we enjoy the benefits of a more open-minded, civilised society.

This works on a PR level as well. Google and Facebook can’t be accused of evading responsibility, for they are sacrificing real dollars that could have come via clicks and engagement on stories that matched the user’s beliefs. However, even that has a solution.

Presenting opposing viewpoints can even be a profitable feature. Openness to experience is one of the domains in the famous Five Factor Model of personality, and how users respond to opposing viewpoints is a signal for it. Both companies could offer advertisers openness as an additional targeting option: people with a low score might see ads for jobs that require strict adherence to operational detail, while people with a high score might see ads for creative positions. This is an ad product that has never been seen before. Everybody wins.
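As a toy illustration of that targeting option (the field names and threshold below are my own assumptions, not any real ad API), one could derive a rough openness signal from how often a user engages with the opposing-viewpoint stories they are shown:

```python
# Hypothetical sketch: infer a coarse "openness" bucket from engagement with
# opposing-viewpoint stories, which an advertiser could then target.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class UserHistory:
    opposing_stories_shown: int
    opposing_stories_clicked: int

def openness_score(history: UserHistory) -> float:
    """Fraction of opposing-viewpoint stories the user chose to read."""
    if history.opposing_stories_shown == 0:
        return 0.0
    return history.opposing_stories_clicked / history.opposing_stories_shown

def targeting_bucket(history: UserHistory, threshold: float = 0.3) -> str:
    """Coarse bucket, per the example in the text (threshold is invented)."""
    if openness_score(history) >= threshold:
        return "high-openness"    # e.g. ads for creative positions
    return "low-openness"         # e.g. ads for detail-oriented operational roles

print(targeting_bucket(UserHistory(opposing_stories_shown=20, opposing_stories_clicked=9)))
```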

My original three-point strategy focused on the immediate steps that both companies could take to reduce the spread of fake news. The final and lasting solution has to be ambitious enough to ask for change from users themselves, so that instead of indulging the worst of our human impulses, our technology helps us grow and become the best versions of ourselves.


