Posts, links and videos that have been flagged as false will be marked as such to users, and people will be warned if a post they are about to share has been found to be false – but they will not be stopped from sharing or reading any content, false or not. However, Facebook’s newsfeed algorithm does intervene to demote false content, ensuring that it reaches fewer people than it otherwise would.

“People don’t want to see false news on Facebook, and nor do we,” said Sarah Brown, a training and news literacy manager for Facebook. “We’re delighted to be working with an organisation as reputable and respected as Full Fact to tackle this issue. By combining technology with the expertise of our fact-checking partners, we’re working continuously to reduce the spread of misinformation on our platform.”

Since its launch in the US, Facebook’s fact-checking programme has received mixed reviews. It has been praised for attempting to tackle the spread of misinformation on the platform, and particularly for Facebook’s decision to give fact-checkers’ findings real weight in its algorithmic promotion. But the social network has also been criticised for its unwillingness to pay for fact-checking: the programme relies on users to flag content to third parties, who then check the veracity of factual claims.
Jan 18, 2019