Facebook Takes More Proactive Steps to Alert Users Who’ve Engaged with Misinformation

Facebook is taking a new approach[1] to help keep users better informed within the app by sending out specific notifications to people who’ve engaged with posts which have later been identified as including misinformation, with an initial focus on COVID-19 updates.

As reported by Fast Company[2]:

“[Facebook] will now send notifications to anyone who has liked, commented, or shared a piece of misinformation that’s been taken down for violating the platform’s terms of service. It will then connect users with trustworthy sources in an effort to correct the record.”

As you can see in these example screenshots, the new notifications will include more specific wording to help users understand the purpose of the notification:

“We removed a post you liked that had false, potentially harmful information about COVID-19.”

The notifications also include details on the removal, and an explanation of why the content was removed.

The new approach comes after recent research[3] found that Facebook’s current process for labeling misinformation isn’t always working as intended.

As reported by Platformer[4]:

“For the interview study, eight of 15 participants said that platforms have a responsibility to label misinformation, and were glad to see it. The remaining seven took a hostile attitude towards labeling, viewing the practice as ‘judgemental, paternalistic and against the platform ethos.’”

Indeed, one study participant noted that[5]:

“I thought the manipulated media label meant to tell me the media is manipulating me.”

Questions around censorship and manipulation by the media in general have been fueled by US President Donald Trump, who has repeatedly labeled anything critical of his administration as ‘fake news’, and branded reporters as the ‘lamestream’ media, peddling lies for their own benefit. That narrative has prompted more people to question all news stories they see, which is partly why Facebook’s labels are seen by many as another exercise in control, as opposed to being informational updates.

The new labels likely won’t counter this, but they will provide more specific context to each user, which could get more people clicking through and re-thinking their sharing habits. Previous research[6] has shown that flagging false news does have an impact on subsequent distribution.

But then again, it could also have unwanted side-effects. 
