As Facebook faces new critiques, it starts to rate its own users’ reputations
Facebook Inc. has come under more fire after a recent study from the University of Warwick revealed that where the platform is used more, there are more hate crimes.
As reported in the New York Times Tuesday, the study looked at 3,335 anti-refugee attacks in Germany over a two-year period. Researchers scrutinized each community where the attacks took place, looking at things such as economic status, demographics, newspaper sales, far-right support, how many protests there had been there and how many hate crimes had happened there.
The results showed something similar in every case. Whether the community was affluent or poor, mainly liberal or aligned to the far right, the more Facebook was used, the more attacks seemed to happen in those communities.
“Their reams of data converged on a breathtaking statistic: Wherever per-person Facebook use rose to one standard deviation above the national average, attacks on refugees increased by about 50 percent,” wrote the Times. The researchers said Facebook, used as a digital nexus to inflame hatred, was responsible for a 10th of all anti-refugee violence.
Hate crimes, or possibly just the feeling of hate, may at times be inspired by exaggerated news or plain lies that often appear on social media platforms. Facebook has already taken a number of steps to reduce the spread of such news to avoid being used as a “propagation mechanism” for hate speech, the expression used by the British researchers.
One of those steps, revealed Tuesday by The Washington Post, is to rate its users regarding their trustworthiness. In an interview, Tessa Lyons, the product manager in charge of assessing Facebook users’ reputations, said users were being rated as either trustworthy or not. It seems there’s no middle ground.
Lyons explained that when “fake news” is reported by a number of users, the story is then scrutinized by Facebook. If certain users often flag stories as containing erroneous facts and then Facebook finds that to be true, that user will gain a trust rating. If it turns out to be the opposite, that user will be deemed untrustworthy.
“People often report things that they just disagree with,” explained Lyons, so users are being assessed for their credibility. A trust rating may strike some users as invasive, but in a later email to Gizmodo, Facebook clarified what it was doing in a manner that may leave users more at ease.
“The idea that we have a centralized ‘reputation’ score for people that use Facebook is just plain wrong,” said Facebook. “What we’re actually doing: We developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system. The reason we do this is to make sure that our fight against misinformation is as effective as possible.”
Image: Thought Catalog via Flickr