Thursday, September 19, 2019

Facebook, Twitter, Google Detail Efforts Against Online Extremism to Lawmakers

https://ift.tt/2Nn0DVE

In a hearing Wednesday examining the spread of extremism online and the effectiveness of measures taken to prevent violent content, leaders from Facebook, Twitter and Google faced tough questions from U.S. lawmakers, accentuating the positive steps their companies have taken while acknowledging the work that remains.

Policy representatives from the social media giants told members of the Senate Committee on Commerce, Science and Transportation that their companies had made significant progress in curbing bigotry and extremist content on their platforms.

Senators suggested the companies could do much more, however, as part of their "digital responsibility" to prevent terrorists and extremists from using the internet to encourage violence.

Sen. Roger Wicker, R-Miss., takes the stage during a rally in Tupelo, Miss., Nov. 26, 2018.

"No matter how great the benefits to society these platforms provide, it is important to consider how they can be used for evil at home and abroad," Sen. Roger Wicker, a Mississippi Republican, said in an opening statement, citing incidents in which white nationalists and Islamic State sympathizers used social media to radicalize and post their crimes.

The role of social media companies has come under scrutiny in recent months in the aftermath of a series of high-profile mass shootings that were posted online.

Following a shooting at a Walmart in El Paso, Texas, in August that killed 22 people, police reported the suspect posted an anti-immigrant manifesto on a website called 8chan just 27 minutes prior to the shooting.

In a separate incident in March in Christchurch, New Zealand, a self-avowed white supremacist opened fire at two mosques, killing 51 people while live-streaming his actions on Facebook.

Representatives of the three tech giants said they were working to remove extremist content from their platforms more quickly by continuing to develop their artificial intelligence capabilities and by improving their human moderation expertise. At the same time, they noted they are building partnerships with other companies, civil society and governments.

"There is always room for improvement, but we remove millions of pieces of content every year, much of it before any user reports it," Monika Bickert, head of Facebook's Global Policy Management, told lawmakers.

FILE - Monika Bickert, Facebook's head of global policy management, checks her mobile phone before attending a content summit at Facebook's headquarters in Paris, France, May 15, 2018.

Quicker action

Bickert said her company has reduced the average time it takes its machine detection systems to find extremist content in live video streams to 12 seconds. Additionally, she said the company has hired thousands of people to review content, including a team of 350 people whose primary job is dealing with terrorists and other dangerous organizations.

Twitter said it has taken drastic action against terrorism-related content, particularly propaganda related to IS.

Nick Pickles, Twitter's public policy director, told senators his company has "decimated" IS propaganda on its platform and suspended more than 1.5 million accounts that promoted terrorism between August and December 2018.

"We have a zero-tolerance policy and take swift action on ban evaders and other forms of behavior used by terrorist entities and their affiliates. In the majority of cases, we take action at the account creation stage — before the account even tweets," Pickles said.

He noted that the company, since introducing a policy against violent extremists in December 2017, has taken action against 186 groups and permanently suspended 2,217 unique accounts, many related to extremist white supremacist ideology.

Redirect method

Google said its video-sharing website YouTube continues to employ what it calls the "Redirect Method," which uses targeted advertising technology to disrupt online radicalization by sending anti-terror and anti-extremist messages to people who seek out such content.

Google Director of Information Policy Derek Slater testifies before a U.S. Senate Commerce Committee hearing on the dissemination of mass violence and extremism on social media platforms, in Washington, Sept. 18, 2019.

Derek Slater, Google's global director of information policy, said his company spends hundreds of millions of dollars annually, and now has more than 10,000 people working to address content that might violate its policies, including those promoting violence and terrorism.

While recognizing the progress made, U.S. lawmakers pressed the major technology companies to develop ways to detect violent content in a more timely manner.

Republican Sen. Deb Fischer of Nebraska said there was "tension" between the way Facebook's algorithms boost content and the gaps that still exist in detecting extremist content in time.

"I think we need to realize when social media platforms fail to block extremism content online. This content doesn't just slip through the cracks. It is amplified to a wider audience. We saw those effects during the Christchurch shooting. The New Zealand terrorist Facebook broadcast was up for an hour before it was removed ... and it gained thousands of views during that time frame," Fischer said.

The Anti-Defamation League, an advocacy group founded to combat anti-Semitism, said the companies also need to become more transparent by sharing data.

FILE - George Selim, senior vice president of programs for the Anti-Defamation League, testifies during a House Oversight subcommittee hearing on Capitol Hill, May 15, 2019.

George Selim, senior vice president of programs for the ADL, said the companies must provide metrics verified by trusted third parties so that the problem of hate and extremism on social media platforms can be assessed.

"Meaningful transparency will allow stakeholders to answer questions such as: How significant is the problem of white supremacy on this platform? Is this platform safe for people who belong to my community? Have the actions taken by this company to improve the problem of hate and extremism on their platform had the desired impact?

"Until tech platforms take the collective actions to come to the table with external parties and meaningfully address these kinds of questions through their transparency efforts, our ability to understand the extent of the problem of hate and extremism online, or how to meaningfully and systematically address it, will be extremely limited," Selim said.

