Facebook announces plan to fight misinformation campaigns

    Apr 28, 2017

    Facebook made its most direct statements about how the platform has been used to spread misinformation in a report released today by its security team. The report acknowledges how actors created a coordinated campaign on the platform to spread misinformation during the 2016 U.S. election, and explains measures Facebook is taking to combat it.

    “Our mission is to give people the power to share and make the world more open and connected,” the report’s authors — Facebook CSO Alex Stamos and Threat Intelligence team members Jen Weedon and William Nuland — wrote. “The reality is that not everyone shares our vision, and some will seek to undermine it — but we are in a position to help constructively shape the emerging information ecosystem by ensuring our platform remains a safe and secure environment for authentic civic engagement.”

    Facebook calls the campaigns “information operations” and says the goals of such campaigns are usually to distort or manipulate political sentiment. Ordinary users can get caught up in the operations and take part in the spread of misinformation, Facebook said.

    The company’s response includes collaboration with other organizations to educate users, undermining campaigns that have a financial motivation, creating new products that slow down the spread of fake news and informing users when they encounter untrustworthy information.

    Facebook explains that information operations on the platform often manifest in three ways: targeted data collection, content creation, and false amplification. Stealing and publishing data allows actors to control public discourse, the company said, and that data can then be amplified across fake Facebook profiles.

    These tactics allow operations to sway public opinion about specific issues, sow distrust in political institutions, and spread confusion. This kind of behavior is often attributed to bots, but Facebook claims that most of the activity it sees on its network isn’t automated.

    “In the case of Facebook, we have observed that most false amplification in the context of information operations is not driven by automated processes, but by coordinated people who are dedicated to operating inauthentic accounts,” Facebook said. The company added that specific language skills and knowledge of regional political context indicated that those involved in the misinformation campaigns were humans, not bots.

    To fight back, Facebook is stepping up its efforts to detect false amplification. It is working to block the creation of fake accounts and using machine learning to detect abuse. The company says the new measures are proving effective in France, where an election is currently underway.

    “In France, for example, as of April 13, these improvements recently enabled us to take action against over 30,000 fake accounts,” the report says.

    Facebook used the recent U.S. election of Donald Trump as a case study into misinformation on its platform. The company concluded that a coordinated campaign existed, “with the intent of harming the reputation of specific political targets.” The campaign included inauthentic Facebook accounts that were used to amplify certain themes and information, the report notes, adding:

    These incidents employed a relatively straightforward yet deliberate series of actions:

    • Private and/or proprietary information was accessed and stolen from systems and services (outside of Facebook);
    • Dedicated sites hosting this data were registered;
    • Fake personas were created on Facebook and elsewhere to point to and amplify awareness of this data;
    • Social media accounts and pages were created to amplify news accounts of and direct people to the stolen data;
    • From there, organic proliferation of the messaging and data through authentic peer groups and networks was inevitable.

    Although Facebook admitted it was the unwitting host of a disinformation campaign during the election, the company said that the reach of this operation was “statistically very small” in comparison with overall political activity and engagement.

    Facebook also said it did not have enough data to definitively attribute the campaign to its creators. It did, however, nod to a report published by the Director of National Intelligence that attributed hacking campaigns during the election season to Russian operatives, noting that Facebook’s own data does not contradict those findings.

    Putting the responsibility for fighting misinformation under the purview of its security team is an interesting move for Facebook, indicating that the company views the problem as a security risk similar to hacking or fraud.

    The company said it would continue to work directly with politicians and campaigns to make sure they use the social network securely.

    “Our dedicated teams focus daily on account integrity, user safety, and security, and we have implemented additional measures to protect vulnerable people in times of heightened cyber activity such as elections periods, times of conflict or political turmoil, and other high profile events.”

     

    Source: TechCrunch


    Copyright © 2017, G.T. Internet Information Co., Ltd. All Rights Reserved.