Facebook is doubling down on its efforts to prevent the spread of misinformation on its platform.

The social media giant, in an expansive (close to 2,000 words) blog post, unveiled a slew of new policies that the company will put into place to clamp down on fake news stories, images and videos.

The plan, titled ‘remove, reduce and inform,’ addresses one of the major criticisms levelled against Facebook: the continued presence of harassment, hate speech and false content on its platform.

Click Gap signal

The most notable among the initiatives is the use of what the company calls a “click-gap” signal, which will essentially down-rank links to purported news articles that receive large amounts of traffic from Facebook but aren’t linked to from other parts of the web.

Facebook says it hopes that the new signal will decrease the prevalence of “low-quality content,” like disinformation and clickbait.

Tessa Lyons, head of News Feed integrity at Facebook, said the company’s research has shown that where a site’s web traffic comes from is a signal of how authoritative it is. Search engines use similar signals to determine the quality of websites. As part of the drive, Facebook convened close to two dozen journalists at its headquarters to explain the changes.

“Click-Gap looks for domains with a disproportionate number of outbound Facebook clicks compared to their place in the web graph,” said the blog post.
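The blog post only sketches the idea in words, but the underlying check can be illustrated with a toy calculation. The sketch below is purely hypothetical: the function names, the threshold and the use of a simple ratio are all assumptions for illustration, since Facebook has not published its actual Click-Gap formula.

```python
# Hypothetical sketch of the "Click-Gap" idea: flag domains whose share of
# Facebook outbound clicks is disproportionate to their footprint in the
# wider web graph. All names and thresholds here are invented.

def click_gap_score(fb_clicks: int, inbound_web_links: int) -> float:
    """Ratio of Facebook click volume to web-graph presence (higher = more suspect)."""
    # Add 1 to avoid division by zero for domains with no inbound links.
    return fb_clicks / (inbound_web_links + 1)

def should_down_rank(fb_clicks: int, inbound_web_links: int,
                     threshold: float = 1000.0) -> bool:
    """Down-rank a domain whose click gap exceeds an (assumed) threshold."""
    return click_gap_score(fb_clicks, inbound_web_links) > threshold

# A site with heavy Facebook traffic but almost no links elsewhere on the web
print(should_down_rank(fb_clicks=500_000, inbound_web_links=3))       # True
# An established site whose traffic is proportionate to its inbound links
print(should_down_rank(fb_clicks=500_000, inbound_web_links=20_000))  # False
```

The real system would presumably use far richer web-graph data than a single inbound-link count, but the shape of the comparison (Facebook clicks versus presence on the rest of the web) is what the post describes.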

Outlining what is allowed and what isn’t

A new section on Facebook’s Community Standards site is also being launched where people can track updates made by the company every month. This new section aims at informing people about policy changes and outlining what is and isn’t allowed on Facebook.

There’s also a new feature called ‘Group Quality’ being introduced, which offers admins an overview of content removed and flagged for violations, including a section for false news found in the group. Facebook’s goal here is to give admins a clearer view into how and when Facebook deals with content that goes against its rules.

Fact checking stories

On a related note, Facebook is also making some minor changes around fact-checking stories. The Associated Press is now going to start fact-checking videos for the social media giant, and Facebook will start including “Trust Indicators” that users can click to view the credibility of a publication. The indicators will be generated by ‘The Trust Project’, a consortium of news organisations that determines whether a publication is trustworthy or not.

The company is also expanding the ‘Context Button’ to images, revealing more background information about the publishers and articles users see in News Feed.

The other notable change is to groups. Groups on Facebook which “repeatedly share misinformation” will now be visible to fewer people in the News Feed. This change is an important one because frequently-visible group pages were used to distribute propaganda and misinformation in the run-up to the 2016 US presidential elections.

Certain tools designed to reduce misinformation that were introduced on WhatsApp are also coming to Messenger. Facebook has apparently already begun rolling out the ‘forward’ indicators, which let people know when a message has been forwarded to them.

Clear history feature delayed, yet again

The company also announced that it will deploy a “clear history” feature, which will allow users to wipe their accounts clean of both the content they have posted on the service and the ad preferences the company has accumulated on them over the lifetime of the account.

This particular feature was first announced back in May 2018, but its launch has been delayed. Facebook’s VP of site integrity, Guy Rosen, said launching the feature has taken longer than expected because the company has been re-engineering how data is processed. Clear History is expected to roll out later this fall.

Algorithms will play a huge role

The social media giant did say, however, that even with this drive to curb misinformation, much of the work will rest on algorithms. Facebook acknowledged that it will never be able to hire enough fact-checkers and moderators to keep an eye on all the content posted on its site, and that ultimately it will be up to users to spot false content and flag it accordingly.

Meanwhile, the firm will continue to consult academics, journalists and other parties to develop new ways to tackle misinformation.
