U.S. social media firms say they are removing violent content faster

By David Shepardson

WASHINGTON (Reuters) – Major U.S. social media firms told a Senate panel Wednesday they are doing more to remove violent or extremist content from online platforms in the wake of several high-profile incidents, relying on more technological tools to act faster.

Critics say too many violent videos, and posts backing extremist groups that support terrorism, are not immediately removed from social media websites.

Senator Richard Blumenthal, a Democrat, said social media firms need to do more to prevent violent content.

Facebook’s head of global policy management, Monika Bickert, told the Senate Commerce Committee its software detection systems have “reduced the average time it takes for our AI to find a violation on Facebook Live to 12 seconds, a 90% reduction in our average detection time from a few months ago.”

In May, Facebook Inc said it would temporarily block users who break its rules from broadcasting live video. That followed an international outcry after a gunman killed 51 people in New Zealand and streamed the attack live on his page.

Bickert said Facebook asked law enforcement agencies to help it access “videos that could be helpful training tools” to improve its machine learning to detect violent videos.

Earlier this month, the owner of 8chan, an online message board linked to several recent mass shootings, gave a deposition on Capitol Hill after police in Texas said they were “reasonably confident” the man who shot and killed 22 people at a Walmart in El Paso, Texas, had posted a manifesto on the site before the attack.

Facebook banned links to violent content that appeared on 8chan.

Twitter Inc public policy director Nick Pickles said the website suspended more than 1.5 million accounts for terrorism promotion violations between August 2015 and the end of 2018, adding that “more than 90% of these accounts are suspended through our proactive measures.”

Senator Rick Scott asked Twitter why it allows Venezuelan President Nicolas Maduro to keep an account given what he called a series of brazen human rights violations. “If we remove that person’s account it will not change facts on the ground,” said Pickles, who added that Maduro’s account had not broken Twitter’s rules.

Alphabet Inc unit Google’s global director of information policy, Derek Slater, said the answer is “a combination of technology and people. Technology can get better and better at identifying patterns. People can help deal with the right nuances.”

Of the 9 million videos YouTube removed in a three-month period this year, 87% were flagged by artificial intelligence.

(Reporting by David Shepardson; Editing by Nick Zieminski)

U.S. social media firms to testify on violent, extremist online content

By David Shepardson

WASHINGTON (Reuters) – Alphabet Inc’s Google, Facebook Inc and Twitter Inc will testify next week before a U.S. Senate panel on efforts by social media firms to remove violent content from online platforms, the panel said in a statement on Wednesday.

The Sept. 18 hearing of the Senate Commerce Committee follows growing concern in Congress about the use of social media by people committing mass shootings and other violent acts. Last week, the owner of 8chan, an online message board linked to several recent mass shootings, gave a deposition on Capitol Hill.

The hearing “will examine the proliferation of extremism online and explore the effectiveness of industry efforts to remove violent content from online platforms. Witnesses will discuss how technology companies are working with law enforcement when violent or threatening content is identified and the processes for removal of such content,” the committee said.

Facebook’s head of global policy management Monika Bickert, Twitter public policy director Nick Pickles and Google’s global director of information policy Derek Slater are due to testify.

Facebook and Google both confirmed they will participate but declined to comment further. Twitter did not immediately comment.

In May, Facebook said it would temporarily block users who break its rules from broadcasting live video. That followed an international outcry after a gunman killed 51 people in New Zealand and streamed the attack live on his page.

Facebook said it was introducing a “one-strike” policy for use of Facebook Live, a service that lets users broadcast live video. Those who broke the company’s most serious rules anywhere on its site would have their access to live broadcasts temporarily restricted.

Facebook has come under intense scrutiny in recent years over hate speech, privacy lapses and its dominant market position in social media. The company is trying to address those concerns while averting more strenuous action from regulators.

(Reporting by David Shepardson, Editing by Rosalba O’Brien and Tom Brown)