Facebook, Instagram experience outage on Thanksgiving Day

(Reuters) – Facebook Inc’s family of apps, including Instagram, experienced a major outage on Thanksgiving Day, prompting a flurry of tweets from affected users.

“We’re aware that some people are currently having trouble accessing Facebook’s family of apps, including Instagram. We’re working to get things back to normal as quickly as possible. #InstagramDown,” Instagram said in a tweet.

According to outage monitoring website DownDetector, about 8,000 Facebook users were affected in various parts of the world including the United States and Britain.

Several users reported being unable to post pictures and videos to their main feeds, while others saw an error message reading “Facebook Will Be Back Soon” when attempting to log in.

Facebook could not immediately be reached for comment.

(Reporting by Mekhla Raina in Bengaluru; editing by Diane Craft)

Facial recognition at Indian cafe chain sparks calls for data protection law

A visitor drinks coffee at the 'International Coffee Festival 2007' in the southern Indian city of Bangalore, February 25, 2007. REUTERS/Jagadeesh Nv

By Rina Chandran

BANGKOK (Thomson Reuters Foundation) – The use of facial recognition technology at a popular Indian cafe chain triggered a backlash among customers and led to calls from human rights advocates on Monday for the government to speed up the introduction of laws to protect privacy.

Customers at Chaayos took to social media over the past week to complain about the camera technology, which they said captured images of them without their consent, with no information on what the data would be used for and no option to opt out.

While the technology is marketed as a convenience, the lack of legislative safeguards to protect against the misuse of data can lead to “breaches of privacy, misidentification and even profiling of individuals”, said Joanne D’Cunha, associate counsel at Internet Freedom Foundation, a digital rights group.

“Until India introduces a comprehensive data protection law that provides such guarantees, there needs to be a moratorium on any technology that would infringe upon an individual’s right to privacy and other rights that stem from it,” she told the Thomson Reuters Foundation from New Delhi.

A statement from Chaayos said the technology was being tested in select cafes and was aimed at reducing purchase times for customers.

The data was encrypted, would not be shared, and customers could choose to opt out, it added.

“We are extremely conscious about our customers’ data security and privacy and are committed to protecting it,” the statement said.

A Personal Data Protection Bill is scheduled to be introduced by lawmakers in the current parliamentary session, which runs until Dec. 13.

The draft of the bill proposed strict conditions for requiring and storing personal data, and hefty penalties for misuse of such data.

But digital rights activists have criticised a recent consultation on the bill, which they said was “secret and selective”.

The ministry for information technology did not respond to a request for comment.

Worldwide, the rise of cloud computing and artificial intelligence technologies has popularised the use of facial recognition for a range of applications, from tracking criminals to catching truant students.

In India, facial recognition technology was installed in several airports this year, and the government plans to roll out a nationwide system to stop criminals and find missing children.

But digital rights experts say it could breach privacy and lead to increased surveillance.

India’s Supreme Court, in a landmark ruling in 2017 on the national biometric identity card programme Aadhaar, said individual privacy is a fundamental right.

There is a growing backlash elsewhere: San Francisco and Oakland have banned the use of facial recognition technology, and “anti-surveillance fashion” is becoming popular.

(Reporting by Rina Chandran @rinachandran; Editing by Michael Taylor. Please credit the Thomson Reuters Foundation, the charitable arm of Thomson Reuters, that covers humanitarian news, women’s and LGBT+ rights, human trafficking, property rights, and climate change. Visit http://news.trust.org)

Factbox: How social media sites handle political ads

By Elizabeth Culliford

(Reuters) – Online platforms including Facebook and Alphabet Inc.’s Google face growing pressure to stop carrying political ads that contain false or misleading claims ahead of the U.S. presidential election.

In the United States, the Communications Act prevents broadcast stations from rejecting or censoring ads from candidates for federal office once they have accepted advertising for that political race. The rule does not apply to cable networks like CNN or to social media sites, where leading presidential candidates are spending millions to target voters in the run-up to the November 2020 election.

The following is how social media platforms have decided to handle false or misleading claims in political ads:

FACEBOOK

Facebook exempts politicians from its third-party fact-checking program, allowing them to run ads with false claims.

The policy has been attacked by regulators and lawmakers, who say it could spread misinformation and cause voter suppression. Critics, including Democratic presidential candidate Elizabeth Warren, have also run intentionally false Facebook ads to highlight the issue.

Facebook’s chief executive Mark Zuckerberg has defended the company’s stance, arguing that it does not want to stifle political speech, but he also said the company was considering ways to refine the policy.

Facebook does fact-check content from political groups. The company also says it fact-checks politicians if they share previously debunked content and does not allow this content in ads.

TWITTER

Twitter Inc has banned political ads. On Friday it said this will include ads that reference a political candidate, party, election or legislation, among other limits.

The company also said it will not allow ads that advocate for a specific outcome on political or social causes.

“We believe political message reach should be earned, not bought,” said Twitter CEO Jack Dorsey in a statement last month.

Some lawmakers praised the ban, but critics said Twitter’s decision would benefit incumbents and hurt less well-known candidates.

Officials from the Trump campaign, which is out-spending its Democratic rivals on Facebook and Google ads, called the ban “dumb” but also said it would have little effect on the president’s strategy.

The overall political ad spend for the 2018 U.S. midterm elections on Twitter was less than $3 million, Twitter’s Chief Financial Officer Ned Segal said.

“Twitter from an advertising perspective is not a player at all. Facebook and Google are the giants in political ads,” said Steve Passwaiter, vice president of the Campaign Media Analysis Group at Kantar Media.

GOOGLE

Google and its video-streaming service YouTube prohibit certain kinds of misrepresentation in ads, such as misinformation about public voting procedures or incorrect claims that a public figure has died.

However, Google does not have a wholesale ban on politicians running false or misleading ads.

In October, when former Vice President Joe Biden’s campaign asked the company to take down a Trump campaign ad that it said contained false claims, a Google spokeswoman told Reuters it did not violate the site’s policies.

YouTube has started adding links and information from Wikipedia to give users more information around sensitive content such as conspiracy theory videos, but a spokeswoman said this program does not relate to ads.

SNAP

Snap Inc allows political advertising unless the ads are misleading, deceptive or violate the terms of service on its disappearing message app Snapchat.

The company, which recently joined Facebook, Twitter and Google in launching a public database of its political ads, defines political ads as including election-related, advocacy and issue ads.

Snap does not ban “attack” ads in general, but its policy does prohibit attacks relating to a candidate’s personal life.

TIKTOK

The Chinese-owned video app popular with U.S. teenagers does not permit political advertising on the platform.

In an October blog post, TikTok said it wants to make sure the platform continues to feel “light-hearted and irreverent.”

“The nature of paid political ads is not something we believe fits the TikTok platform experience,” wrote Blake Chandlee, TikTok’s vice president of global business solutions.

The app, which is owned by Beijing-based tech giant ByteDance, has recently come under scrutiny from U.S. lawmakers over concerns that the company may be censoring politically sensitive content and over questions about how it stores personal data.

REDDIT

Social network Reddit allows ads related to political issues and it allows ads from political candidates at the federal level, but not for state or local elections.

It also does not allow ads about political issues, elections or candidates outside of the United States.

The company says all political ads must abide by its policies that forbid “deceptive, untrue or misleading advertising” and that prohibit “content that depicts intolerant or overly contentious political or cultural topics or views.”

LINKEDIN

LinkedIn, which is owned by Microsoft Corp, banned political ads last year. It defines political ads as including “ads advocating for or against a particular candidate or ballot proposition or otherwise intended to influence an election outcome.”

Search engine Bing, which is also owned by Microsoft, does not allow ads with political or election-related content.

PINTEREST

Photo-sharing site Pinterest Inc also banned political campaign ads last year.

This includes advertising for political candidates, political action committees (PACs), legislation, or political issues with the intent to influence an election, according to the site’s ads policy.

“We want to create a positive, welcoming environment for our Pinners and political campaign ads are divisive by nature,” said Pinterest spokeswoman Jamie Favazza, who told Reuters the decision was also part of the company’s strategy to address misinformation.

TWITCH

A spokeswoman for Twitch told Reuters the live-streaming gaming network does not allow political advertising.

The site does not strictly ban all issue-based advertising but the company considers whether an ad could be seen as “political” when it is reviewed, the spokeswoman said.

Twitch, which is owned by Amazon.com Inc, is primarily a video gaming platform but also has channels focused on sports, music and politics. In recent months, political candidates such as U.S. President Donald Trump and Senator Bernie Sanders have joined the platform ahead of the 2020 election.

(Reporting by Elizabeth Culliford; additional reporting by Sheila Dang; Editing by Robert Birsel and Bill Berkrot)

Russian operatives sacrifice followers to stay under cover on Facebook

By Jack Stubbs

LONDON (Reuters) – Efforts by Russian influence campaigns to stay undetected on social media ahead of next year’s U.S. elections are undermining their ability to gain followers and spread divisive political messages, a senior Facebook <FB.O> executive told Reuters.

Social media users need to stand out from the crowd to gain traction online, but that type of behavior also helps Facebook and other platforms identify suspicious activity to then analyze for signs of foreign involvement, said Nathaniel Gleicher, Facebook’s head of cyber security policy.

“If you are very, very loud, if you go viral very, very fast that’s exactly the sort of thing that our automated systems will detect and flag,” he said. “So when actors have really diligent, deliberate and effective operational security it weakens their ability to build an audience.”

Facebook on Monday suspended a network of Instagram accounts it said targeted U.S. users ahead of next year’s presidential poll and were linked to Russia’s Internet Research Agency (IRA), an organization Washington says Moscow used to meddle in the 2016 U.S. election.

The latest Russian campaign posted on both sides of sensitive topics such as the environment and sexual equality but struggled to attract followers due to the operators’ attempts to stop the accounts being caught and disabled, said Gleicher.

Those efforts included sharing memes and screenshots of other users’ social media posts instead of producing original content in English, likely to avoid making language errors typical of non-native speakers, according to a report https://graphika.com/uploads/Graphika%20Report%20-%20CopyPasta.pdf by social media analytics firm Graphika.

This technique “gave each asset less of a discernible personality and therefore may have reduced the (campaign’s) ability to build audiences,” Graphika said.

The IRA-linked network of 50 Instagram accounts had around 246,000 followers, about 60 percent of which were in the United States, Facebook said, without providing a breakdown for each account.

That compares with charges by U.S. special counsel Robert Mueller that the IRA has previously run social media accounts with hundreds of thousands of followers each. Facebook says up to 126 million Americans may have seen Russian-linked posts aimed at the 2016 election.

Russian catering tycoon Evgeny Prigozhin, accused by U.S. prosecutors of orchestrating the IRA’s activities through Concord Management and Consulting LLC, did not respond to questions sent by Reuters.

Attorneys for Concord Management and Consulting LLC did not respond to a request for comment but have previously denied any wrongdoing.

PAYING IN ROUBLES

Facebook, Twitter <TWTR.N> and Google <GOOGL.O> have vowed to step up the fight against political manipulation of their platforms after facing fierce criticism for failing to counter alleged Russian interference in 2016.

Despite the increased scrutiny, U.S. officials have repeatedly warned of the threat posed by Russia and other countries such as Iran, which they say may still attempt to sway the result of next year’s vote.

Addressing U.S. lawmakers this week, FBI Deputy Assistant Director Nikki Floris said the bureau’s foreign influence task force was briefing candidates and running a series of public information videos to help safeguard the election.

The Department of Homeland Security did not immediately respond to a request for comment.

Moscow and Tehran have repeatedly denied allegations of election interference. The Kremlin and Russia’s foreign ministry did not immediately respond to requests for comment.

Russian efforts to avoid detection by the platforms’ security teams have been increasing since the IRA’s alleged efforts in 2016 were first exposed, said Ben Nimmo, who has helped Facebook analyze influence operations and currently runs investigations at Graphika.

A campaign exposed by the Atlantic Council’s Digital Forensic Research Lab in June, which attempted to seed false narratives online such as a bogus plot to assassinate British Prime Minister Boris Johnson, created a new account for almost every single post.

This made it harder to track connections between the accounts, Nimmo said, but also meant the posts only reached a small number of people.

Announcing the takedown of a network in July last year, which it said showed “some connections” to previously-identified IRA accounts, Facebook noted that “bad actors have been more careful to cover their tracks.”

The company said operators were using virtual private networks and internet phone services to obscure an account user’s location, and paying for advertising via third parties.

In contrast, previous campaigns linked to the IRA used Russian phone numbers and IP addresses to register their accounts, as well as paying for Facebook adverts in Russian roubles, raising suspicions about Russian involvement.

“The original IRA activity threw operational security to the wind,” Nimmo said.

(Additional reporting by Christopher Bing and Raphael Satter in Washington; Editing by Carmel Crimmins)

Martin Luther King’s daughter tells Facebook disinformation helped kill civil rights leader

SAN FRANCISCO (Reuters) – Disinformation campaigns helped lead to the assassination of Martin Luther King, the daughter of the U.S. civil rights champion said on Thursday, after the head of Facebook said social media should not fact-check political advertisements.

The comments come as Facebook Inc is under fire for its approach to political advertisements and speech, which Chief Executive Mark Zuckerberg defended on Thursday in a major speech that twice referenced King, known by his initials MLK.

King’s daughter, Bernice, tweeted that she had heard the speech. “I’d like to help Facebook better understand the challenges #MLK faced from disinformation campaigns launched by politicians. These campaigns created an atmosphere for his assassination,” she wrote from the handle @BerniceKing.

King died of an assassin’s bullet in Memphis, Tennessee, on April 4, 1968.

Zuckerberg argued that his company should give voice to minority views and said that court protection for free speech stemmed in part from a case involving a partially inaccurate advertisement by King supporters. The U.S. Supreme Court protected the supporters from a lawsuit.

“People should decide what is credible, not tech companies,” Zuckerberg said.

“We very much appreciate Ms. King’s offer to meet with us. Her perspective is invaluable and one we deeply respect. We look forward to continuing this important dialogue with her in Menlo Park next week,” a Facebook spokesperson said.

(Reporting by Peter Henderson; Editing by Lisa Shumaker)

Facebook’s Zuckerberg hits pause on China, defends political ads policy

By David Shepardson and Katie Paul

WASHINGTON (Reuters) – Facebook Inc <FB.O> Chief Executive Mark Zuckerberg on Thursday defended the social media company’s political advertising policies and said it was unable to overcome China’s strict censorship, attempting to position his company as a defender of free speech.

“I wanted our services in China because I believe in connecting the whole world, and I thought maybe we could help create a more open society,” Zuckerberg said, addressing students at Georgetown University.

“I worked hard on this for a long time, but we could never come to agreement on what it would take for us to operate there,” he said. “They never let us in.”

He did not address what conditions or assurances he would need to enter the Chinese market.

Facebook tried for years to break into China, one of the last great obstacles to Zuckerberg’s vision of connecting the world’s entire population on the company’s apps.

Zuckerberg met with Chinese President Xi Jinping in Beijing and welcomed the country’s top internet regulator to Facebook’s campus. He also learned Mandarin and posted a photo of himself running through Tiananmen Square, which drew a sharp reaction from critics of the country’s restrictive policies.

The company briefly won a license to open an “innovation hub” in Hangzhou last year, but it was later revoked.

Zuckerberg effectively closed that door in March, when he announced his plan to pivot Facebook toward more private forms of communication and pledged not to build data centers in countries “that have a track record of violating human rights like privacy or freedom of expression.”

He repeated his concern about data centers on Thursday, this time specifically naming China.

Zuckerberg also defended the company’s political advertising policies on similar grounds, saying Facebook had at one time considered banning all political ads but decided against it, erring on the side of greater expression.

Facebook has been under fire over its advertising policies, particularly from U.S. Senator Elizabeth Warren, a leading contender for the Democratic presidential nomination.

The company exempts politicians’ ads from fact-checking standards applied to other content on the social network. Zuckerberg said political advertising does not contribute much to the company’s revenues, but that he believed it would be inappropriate for a tech company to censor public figures.

Reuters reported in October 2018, citing sources, that Facebook executives briefly debated banning all political ads, which produce less than 5% of the company’s revenue.

The company rejected that because product managers were loath to leave advertising dollars on the table and policy staffers argued that blocking political ads would favor incumbents and wealthy campaigners who can better afford television and print ads, the sources said.

Facebook has been under scrutiny in recent years for its lax approach to fake news reports and disinformation campaigns, which many believe affected the outcome of the 2016 U.S. presidential election, won by Donald Trump.

Trump has disputed claims that Russia has attempted to interfere in U.S. elections. Russian President Vladimir Putin has denied it.

Warren’s Democratic presidential campaign recently challenged Facebook’s policy that exempts politicians’ ads from fact-checking, running ads on the social media platform containing the false claim that Zuckerberg endorsed Trump’s re-election bid.

(Reporting by David Shepardson; Writing by Katie Paul; Editing by Lisa Shumaker)

U.S. social media firms say they are removing violent content faster

By David Shepardson

WASHINGTON (Reuters) – Major U.S. social media firms told a Senate panel on Wednesday that they are doing more to remove violent or extremist content from their online platforms in the wake of several high-profile incidents, focusing on technological tools to act faster.

Critics say too many violent videos or posts that back extremist groups supporting terrorism are not immediately removed from social media websites.

Senator Richard Blumenthal, a Democrat, said social media firms need to do more to prevent violent content.

Facebook’s head of global policy management, Monika Bickert, told the Senate Commerce Committee its software detection systems have “reduced the average time it takes for our AI to find a violation on Facebook Live to 12 seconds, a 90% reduction in our average detection time from a few months ago.”

In May, Facebook Inc said it would temporarily block users who break its rules from broadcasting live video. That followed an international outcry after a gunman killed 51 people in New Zealand and streamed the attack live on his page.

Bickert said Facebook asked law enforcement agencies to help it access “videos that could be helpful training tools” to improve its machine learning to detect violent videos.

Earlier this month, the owner of 8chan, an online message board linked to several recent mass shootings, gave a deposition on Capitol Hill after police said they were “reasonably confident” that the man who shot and killed 22 people at a Walmart in El Paso, Texas, had posted a manifesto on the site before the attack.

Facebook banned links to violent content that appeared on 8chan.

Twitter Inc public policy director Nick Pickles said the website suspended more than 1.5 million accounts for terrorism promotion violations between August 2015 and the end of 2018, adding that “more than 90% of these accounts are suspended through our proactive measures.”

Twitter was asked by Senator Rick Scott why the site allows Venezuelan President Nicolas Maduro to have an account given what he said were a series of brazen human rights violations. “If we remove that person’s account it will not change facts on the ground,” said Pickles, who added that Maduro’s account has not broken Twitter’s rules.

Alphabet Inc unit Google’s global director of information policy, Derek Slater, said the answer is “a combination of technology and people. Technology can get better and better at identifying patterns. People can help deal with the right nuances.”

Of 9 million videos removed in a three-month period this year by YouTube, 87% were flagged by artificial intelligence.

(Reporting by David Shepardson; Editing by Nick Zieminski)

U.S. social media firms to testify on violent, extremist online content

By David Shepardson

WASHINGTON (Reuters) – Alphabet Inc’s Google, Facebook Inc and Twitter Inc will testify next week before a U.S. Senate panel on efforts by social media firms to remove violent content from online platforms, the panel said in a statement on Wednesday.

The Sept. 18 hearing of the Senate Commerce Committee follows growing concern in Congress about the use of social media by people committing mass shootings and other violent acts. Last week, the owner of 8chan, an online message board linked to several recent mass shootings, gave a deposition on Capitol Hill.

The hearing “will examine the proliferation of extremism online and explore the effectiveness of industry efforts to remove violent content from online platforms. Witnesses will discuss how technology companies are working with law enforcement when violent or threatening content is identified and the processes for removal of such content,” the committee said.

Facebook’s head of global policy management Monika Bickert, Twitter public policy director Nick Pickles and Google’s global director of information policy Derek Slater are due to testify.

Facebook and Google both confirmed they will participate but declined to comment further. Twitter did not immediately comment.

In May, Facebook said it would temporarily block users who break its rules from broadcasting live video. That followed an international outcry after a gunman killed 51 people in New Zealand and streamed the attack live on his page.

Facebook said it was introducing a “one-strike” policy for use of Facebook Live, a service which lets users broadcast live video. Those who broke the company’s most serious rules anywhere on its site would have their access to make live broadcasts temporarily restricted.

Facebook has come under intense scrutiny in recent years over hate speech, privacy lapses and its dominant market position in social media. The company is trying to address those concerns while averting more strenuous action from regulators.

(Reporting by David Shepardson, Editing by Rosalba O’Brien and Tom Brown)

Facebook tightens rules for U.S. political advertisers ahead of 2020 election

FILE PHOTO: A 3D-printed Facebook Like symbol is displayed in front of a U.S. flag in this illustration taken March 18, 2018. REUTERS/Dado Ruvic/Illustration/File Photo

By Elizabeth Culliford

(Reuters) – Facebook Inc is tightening its political ad rules in the United States, it said on Wednesday, requiring new disclosures for its site and photo-sharing platform Instagram ahead of the U.S. presidential election in November 2020.

The social media giant is introducing a “confirmed organization” label for U.S. political advertisers who show government-issued credentials to demonstrate their legitimacy.

All advertisers running ads on politics or social issues will also have to post their contact information, even if they are not seeking the official label.

Advertisers must comply by mid-October or risk having their ads cut off.

Under scrutiny from regulators since Russia used social media platforms to meddle in the 2016 U.S. presidential election, Facebook has been rolling out ad transparency tools country by country since last year.

Since May 2018, Facebook has required political advertisers in the United States to put a “paid for by” disclaimer on their ads. But the company said some had used misleading disclaimers or tried to register as organizations that did not exist.

“In 2018 we did see evidence of misuse in these disclaimers and so this is our effort to strengthen the process,” said Sarah Schiff, product manager at Facebook.

Last year, Vice News journalists managed to place ads on behalf of figures and groups including U.S. Vice President Mike Pence and “Islamic State.” Just last week, Facebook banned conservative news outlet the Epoch Times from advertising on the platform after it used different pages to push ads in support of President Donald Trump.

Paid Facebook ads have become a major tool for political campaigns and other organizations to target voters.

The re-election campaign for Trump, a Republican, has spent about $9.6 million this year on ads on the site, making him the top spender among 2020 candidates, according to Bully Pulpit Interactive, a Democratic firm that tracks digital ad spending.

After the announcement, the Trump campaign told Reuters it thought there was a “glaring omission” in Facebook’s political ads policy because news media were not held to the same standards as campaigns for buying ads.

Facebook does not apply its ad authorization policies to certain news sources that it determines to have a good track record for avoiding misinformation, have a minimum number of visitors and have ads with the primary purpose of reporting on news and current events.

Last year, Facebook began requiring political advertisers to submit a U.S. mailing address and identity document. Under the new rules, they will also have to supply a phone number, business email and website.

To get a “confirmed organization” label, advertisers must submit a Federal Election Commission ID number, tax-registered organization ID number, or government website domain matching an official email.

Facebook has continuously revamped its policies around political advertising, which differ by country.

In 2018, it launched an online library of political ads, although the database has been criticized by researchers for being poorly maintained and failing to provide useful ad targeting information.

(Reporting by Elizabeth Culliford in San Francisco; Additional reporting from Ginger Gibson in Washington; Editing by Lisa Shumaker and Matthew Lewis)

Twitter, Facebook accuse China of using fake accounts to undermine Hong Kong protests

FILE PHOTO: A 3-D printed Facebook logo is seen in front of displayed binary code in this illustration picture, June 18, 2019. REUTERS/Dado Ruvic/Illustration/File Photo

By Katie Paul and Elizabeth Culliford

(Reuters) – Twitter Inc and Facebook Inc said on Monday they had dismantled a state-backed information operation originating in mainland China that sought to undermine protests in Hong Kong.

Twitter said it suspended 936 accounts and the operations appeared to be a coordinated state-backed effort originating in China. It said these accounts were just the most active portions of this campaign and that a “larger, spammy network” of approximately 200,000 accounts had been proactively suspended before they were substantially active.

Facebook said it had removed accounts and pages from a small network after a tip from Twitter. It said that its investigation found links to individuals associated with the Chinese government.

Social media companies are under pressure to stem illicit political influence campaigns online ahead of the U.S. election in November 2020. A 22-month U.S. investigation concluded Russia interfered in a “sweeping and systematic fashion” in the 2016 U.S. election to help Donald Trump win the presidency.

The Chinese embassy in Washington and the U.S. State Department were not immediately available to comment.

The Hong Kong protests, which have presented one of the biggest challenges for Chinese President Xi Jinping since he came to power in 2012, began in June as opposition to a now-suspended bill that would allow suspects to be extradited to mainland China for trial in Communist Party-controlled courts. They have since swelled into wider calls for democracy.

Twitter in a blog post said the accounts undermined the legitimacy and political positions of the protest movement in Hong Kong.

Examples of posts provided by Twitter included a tweet from a user with photos of protesters storming Hong Kong’s Legislative Council building, which asked: “Are these people who smashed the Legco crazy or taking benefits from the bad guys? It’s a complete violent behavior, we don’t want you radical people in Hong Kong. Just get out of here!”

In examples provided by Facebook, one post called the protesters “Hong Kong cockroaches” and claimed that they “refused to show their faces.”

In a separate statement, Twitter said it was updating its advertising policy and would not accept advertising from state-controlled news media entities going forward.

Alphabet Inc’s YouTube video service told Reuters in June that state-owned media companies maintained the same privileges as any other user, including the ability to run ads in accordance with its rules. YouTube did not immediately respond to a request for comment on Monday on whether it had detected inauthentic content related to protests in Hong Kong.

(Reporting by Katie Paul in Aspen, Colorado, and Elizabeth Culliford in San Francisco; Additional reporting by Sayanti Chakraborty in Bengaluru; Editing by Lisa Shumaker)