The Upshot: Some measures it has taken to counter fake news may not work, research shows.
Since the 2016 presidential campaign, Facebook has taken a number of actions to prevent the continued distribution of false news articles on its platform, most notably by labeling articles rated as false or misleading by fact checkers as "disputed." But how effective are those measures?
To date, Facebook has offered little information to the public other than a recent email to fact checkers asserting that labeled articles receive 80 percent fewer impressions.
But more data is needed to determine the success of these efforts, and research suggests the need to carefully evaluate the effectiveness of Facebook's interventions.
Gordon Pennycook and David Rand, both of the psychology department at Yale University, offer two principal warnings:
First, the effects of exposure to false information are not easily countered by labeling, as they find in a paper they wrote with Tyrone D. Cannon.
False information we have previously encountered feels more familiar, producing a feeling of fluency that causes us to rate it as more accurate than information we have not seen before.
This effect persists even when Facebook-style warnings label a headline as "disputed."
We should therefore be cautious about assuming that labels tagging articles as false are enough to prevent misinformation on social media from affecting people's beliefs.
In a second paper, Mr. Pennycook and Mr. Rand find that the presence of "disputed" labels causes study participants to rate unlabeled false stories as slightly more accurate, an "implied truth" effect.
If Facebook is seen as taking responsibility for the accuracy of information in its news feed through labeling, readers could start assuming that unlabeled stories have survived scrutiny from fact checkers (which is rarely correct; there are far too many stories for humans to check everything).