U.S. Justice Department to propose changes to internet platforms’ immunity: source

By David Shepardson and Ayanti Bera

WASHINGTON (Reuters) – The U.S. Justice Department will unveil later on Wednesday a proposal that seeks to limit the legal protections internet platforms receive for how they manage content, a person briefed on the matter confirmed.

The proposal, which takes aim at Facebook Inc, Twitter Inc and Alphabet Inc’s Google, would need congressional approval and is not likely to see action until next year at the earliest.

President Donald Trump in May signed an executive order that seeks new regulatory oversight of tech firms’ content moderation decisions and backed legislation to scrap or weaken the relevant provision in the 1996 Communications Decency Act, Section 230.

Trump will meet on Wednesday with a group of state attorneys general amid his criticism of social media companies. Twitter has repeatedly placed warning labels on Trump tweets, saying they have included potentially misleading information about mail-in voting.

Trump will meet with state attorneys general from Texas, Arizona, Utah, Louisiana, Arkansas, Mississippi, South Carolina and Missouri – like Trump, all Republicans – according to a person briefed on the matter.

“Online censorship goes far beyond the issue of free speech, it’s also one of protecting consumers and ensuring they are informed of their rights and resources to fight back under the law,” White House spokesman Judd Deere said on Monday.

Trump directed the Commerce Department to file a petition asking the Federal Communications Commission to limit protections under Section 230 after Twitter warned readers in May to fact-check his posts about unsubstantiated claims of fraud in mail-in voting. The petition is still pending.

A group representing major internet companies including Facebook, Amazon.com Inc and Google urged the FCC to reject the petition, saying it was “misguided, lacks grounding in law, and poses serious public policy concerns.”

The Wall Street Journal reported the planned Justice Department proposal earlier.

Police debunk social media misinformation linking Oregon wildfires to activists

By Elizabeth Culliford

(Reuters) – Several Oregon police departments have sought this week to debunk misinformation, spreading on social media platforms including Facebook Inc and Twitter Inc, that blames leftist and right-wing groups for wildfires raging in the state.

“Rumors spread just like wildfire and now our 9-1-1 dispatchers and professional staff are being overrun with requests for information and inquiries on an UNTRUE rumor that 6 Antifa members have been arrested for setting fires in DOUGLAS COUNTY, OREGON,” read a Facebook post from the Douglas County Sheriff’s Office in Oregon on Thursday. “THIS IS NOT TRUE!”

PolitiFact, one of Facebook’s third-party fact-checking partners, wrote on Thursday on its website that dozens of posts blaming Antifa for the wildfires had been flagged by the social media company’s systems, and that collectively the posts had been shared thousands of times.

Antifa, which stands for anti-fascist, is a largely unstructured, far-left movement whose followers broadly aim to confront those they view as authoritarian or racist. U.S. President Donald Trump and some fellow Republicans have in recent months sought to blame the movement for violence at anti-racism protests, but have presented little evidence.

A Wednesday tweet from a self-described representative for conservative youth group Turning Point USA, which has been shared about 2,900 times, said the fires were “allegedly linked to Antifa and the Riots.”

Around half a million people in Oregon were evacuated as dozens of extreme, wind-driven wildfires scorched the U.S. West Coast states on Friday, destroying hundreds of homes and killing at least 16 people, state and local authorities said.

Earlier this week, Medford police in Oregon also debunked a false post using the police department’s logo and name suggesting that five members of the Proud Boys had been arrested for arson.

The men-only, far-right Proud Boys group describes itself as a fraternal club of “Western chauvinists.”

“This is a made up graphic and story. We did not arrest this person for arson, nor anyone affiliated with Antifa or ‘Proud Boys’ as we’ve heard throughout the day,” the police department wrote in a Facebook post.

The Jackson County Sheriff’s Office in Oregon also posted on Thursday: “We are inundated with questions about things that are FAKE stories. One example is a story circulating that varies about what group is involved as to setting fires and arrests being made.”

Climate scientists say global warming has contributed to greater extremes in wet and dry seasons, causing vegetation to flourish and then dry out in the U.S. West, creating fuel for fires.

Police have opened a criminal arson investigation into at least one Oregon blaze, the Almeda Fire, Ashland Police Chief Tighe O’Meara said.

A Facebook spokeswoman said the company had attached warning labels to, and reduced the distribution of, posts about the fires’ origins that were rated false by its fact-checking partners.

A Twitter spokeswoman said the rumors did not appear to violate the social media site’s rules, adding in a statement: “As we have said before we will not be able to take enforcement action on every Tweet that contains incomplete or disputed information.”

(Reporting by Elizabeth Culliford in Birmingham, England, additional reporting by Katie Paul in San Francisco; Editing by Tom Brown)

Facebook removes seven million posts for sharing false information on coronavirus

(Reuters) – Facebook Inc said on Tuesday it removed 7 million posts in the second quarter for sharing false information about the novel coronavirus, including content that promoted fake preventative measures and exaggerated cures.

Facebook released the data as part of its sixth Community Standards Enforcement Report, which it introduced in 2018 along with more stringent decorum rules in response to a backlash over its lax approach to policing content on its platforms.

The company said it would invite external experts to independently audit the metrics used in the report, beginning in 2021.

The world’s biggest social media company removed about 22.5 million posts containing hate speech on its flagship app in the second quarter, up from 9.6 million in the first quarter. It also deleted 8.7 million posts connected to extremist organizations, compared with 6.3 million in the prior period.

Facebook said it relied more heavily on automation technology for reviewing content during the months of April, May and June as it had fewer reviewers at its offices due to the COVID-19 pandemic.

That resulted in the company taking action on fewer pieces of content related to suicide and self-injury, child nudity and sexual exploitation on its platforms, Facebook said in a blog post.

The company said it was expanding its hate speech policy to include “content depicting blackface, or stereotypes about Jewish people controlling the world.”

Some U.S. politicians and public figures have caused controversies by donning blackface, a practice that dates back to 19th-century minstrel shows that caricatured slaves. It has long been used to demean African-Americans.

(Reporting by Katie Paul in San Francisco and Munsif Vengattil in Bengaluru; Additional Reporting by Bart Meijer; Editing by Shinjini Ganguli and Anil D’Silva)

U.S. tech giants face hard choices under Hong Kong’s new security law

By Brenda Goh and Pei Li

SHANGHAI/HONG KONG (Reuters) – U.S. tech giants face a reckoning over how Hong Kong’s security law will reshape their businesses, with their suspension of processing government requests for user data a stop-gap measure as they weigh options, people close to the industry say.

While Hong Kong is not a significant market for firms such as Facebook, Google and Twitter, they have used it as a perch to reach deep-pocketed advertisers in mainland China, where many of their services are blocked. But the companies are now in the cross hairs of a national security law that gives China authority to demand that they turn over user data or censor content seen to violate the law – even when posted from abroad.

“These companies have to totally reassess the liability of having a presence in Hong Kong,” Charles Mok, a legislator who represents the technology industry in Hong Kong, told Reuters.

If they refuse to cooperate with government requests, he said, authorities “could go after them and take them to court and fine them, or imprison their principals in Hong Kong”.

Facebook, Google and Twitter have suspended processing government requests for user data in Hong Kong, they said on Monday, following China’s imposition of the new national security law on the semi-autonomous city.

Facebook, which started operating in Hong Kong in 2010, last year opened a big new office in the city.

It sells more than $5 billion a year worth of ad space to Chinese businesses and government agencies looking to promote messages abroad, Reuters reported in January. That makes China Facebook’s biggest country for revenue after the United States.

The U.S. internet firms are no strangers to government demands regarding content and user information, and generally say they are bound by local laws.

The companies have often used a technique known as “geo-blocking” to restrict content in a particular country without removing it altogether.

But the sweeping language of Hong Kong’s new law could mean such measures won’t be enough. Authorities will no longer need to get court orders before requesting assistance or information, analysts said.

Requests for data about overseas users would put the companies in an especially tough spot.

“It’s a global law … if they comply with national security law in Hong Kong then there is the problem that they may violate laws in other countries,” said Francis Fong Po-kiu, honorary president of Hong Kong’s Information Technology Federation.

CONTENT QUESTION

While the U.S. social media services are blocked in mainland China, they have operated freely in Hong Kong.

Other U.S. internet platforms are also rich with content that is banned in mainland China and may now be judged illegal in Hong Kong.

U.S. video streaming site Netflix, for example, carries “Joshua: Teenager vs. Superpower”, a 2017 documentary on activist Joshua Wong, whose books were removed from Hong Kong public libraries last week.

“Ten Years”, a 2015 film that has been criticized by Chinese state media for portraying a dystopian future Hong Kong under Chinese Communist Party control, is also available on its platform.

Netflix declined to comment.

Google’s YouTube is a popular platform for critics of Beijing. New York-based fugitive tycoon Guo Wengui has regularly voiced support for Hong Kong protesters in his videos. Google did not immediately respond to a request for comment.

None of these companies has yet said how they will handle requests from Hong Kong to block or remove content, and the risk of being caught in political crossfire looms large.

“The foreign content players have to rethink what they display in Hong Kong,” said Duncan Clark, chairman at consultancy BDA China.

“The downside is very big if they get U.S. senators on their backs for accommodating. Any move they make will be heavily scrutinized.”

(Reporting by Brenda Goh and Pei Li; Additional reporting by Cate Cadell in Beijing and Anne Marie Roantree in Hong Kong; Editing by Jonathan Weber and Robert Birsel)

Facebook, Twitter suspend processing of government data requests in Hong Kong

By Katie Paul

(Reuters) – Facebook Inc and Twitter Inc have suspended processing government requests for user data in Hong Kong, they said on Monday, following China’s establishment of a new national security law for the semi-autonomous city.

Facebook, which also owns WhatsApp and Instagram, is “pausing” reviews for all of its services “pending further assessment of the National Security Law,” it said in a statement.

Twitter said it had suspended all information requests from Hong Kong authorities immediately after the law went into effect last week, citing “grave concerns” about its implications.

The companies did not specify whether the suspensions would also apply to government requests for removals of user-generated content from their services in Hong Kong.

Social networks often apply localized restrictions to posts that violate local laws but not their own rules for acceptable speech. Facebook restricted 394 such pieces of content in Hong Kong in the second half of 2019, up from eight restrictions in the first half of the year.

Tech companies have long operated freely in Hong Kong, a regional financial hub where internet access has been unaffected by restrictions imposed in mainland China, which blocks Google, Twitter and Facebook.

Last week, China’s parliament passed sweeping new national security legislation for the semi-autonomous city, setting the stage for the most radical changes to the former British colony’s way of life since it returned to Chinese rule 23 years ago.

Some Hong Kong residents said they were reviewing their previous posts on social media related to pro-democracy protests and the security law, and proactively deleting ones they thought would be viewed as sensitive.

The legislation pushed China further along a collision course with the United States, with which it is already in disputes over trade, the South China Sea and the coronavirus.

(Reporting by Katie Paul in San Francisco and Akanksha Rana in Bengaluru; Editing by Krishna Chandra Eluri and Richard Chang)

Facebook, Snapchat join chorus of companies condemning George Floyd death, racism

(Reuters) – Facebook Inc and Snap Inc became the latest U.S. companies to condemn racial inequality in the United States as violent protests flared up across major cities over the death of George Floyd, an unarmed black man who died in police custody in Minneapolis last week.

The two tech companies joined Intel Corp, Netflix Inc and Nike Inc in taking a public stance on Floyd’s death, voicing concerns about discrimination against African-Americans.

“We stand with the Black community – and all those working towards justice in honor of George Floyd, Breonna Taylor, Ahmaud Arbery and far too many others whose names will not be forgotten,” Facebook’s Chief Executive Officer Mark Zuckerberg said in a Facebook post late Sunday.

He said the social network would commit $10 million to organizations working on racial justice.

The arrest of Floyd, 46, was captured by an onlooker’s cell phone video that went viral and showed a police officer restraining him while pressing his knee on Floyd’s neck as he moaned: “Please, I can’t breathe.”

His death caused yet another round of outrage across the nation over the treatment of African-Americans by police officers, polarizing the country politically and racially as states began to ease lockdowns during the COVID-19 pandemic.

“I am heartbroken and enraged by the treatment of black people and people of color in America,” Snapchat Chief Executive Officer Evan Spiegel said in an internal memo.

“We must begin a process to ensure that America’s black community is heard throughout the country.”

On Friday, Nike flipped its iconic slogan to raise awareness about racism.

“For Once, Don’t Do It. Don’t pretend there’s not a problem in America. Don’t turn your back on racism,” the company said in a video that has over six million views and was shared by celebrities and rival Adidas AG.

(Reporting by Neha Malara in Bengaluru; Additional reporting by Uday Sampath; Editing by Sweta Singh, Bernard Orr)

Facebook names first members of oversight board that can overrule Zuckerberg

By Elizabeth Culliford

(Reuters) – Facebook Inc’s new content oversight board will include a former prime minister, a Nobel Peace Prize laureate and several constitutional law experts and rights advocates among its first 20 members, the company announced on Wednesday.

The independent board, which some have dubbed Facebook’s “Supreme Court,” will be able to overturn decisions by the company and Chief Executive Mark Zuckerberg on whether individual pieces of content should be allowed on Facebook and Instagram.

Facebook has long faced criticism for high-profile content moderation issues. They range from temporarily removing a famous Vietnam-era war photo of a naked girl fleeing a napalm attack, to failing to combat hate speech in Myanmar against the Rohingya and other Muslims.

The oversight board will focus on a small slice of challenging content issues, including hate speech, harassment and people’s safety.

Facebook said the board’s members have lived in 27 countries and speak at least 29 languages, though a quarter of the group and two of the four co-chairs are from the United States, where the company is headquartered.

The co-chairs, who selected the other members jointly with Facebook, are former U.S. federal circuit judge and religious freedom expert Michael McConnell, constitutional law expert Jamal Greene, Colombian attorney Catalina Botero-Marino and former Danish Prime Minister Helle Thorning-Schmidt.

Among the initial cohort are: former European Court of Human Rights judge András Sajó, Internet Sans Frontières Executive Director Julie Owono, Yemeni activist and Nobel Peace Prize laureate Tawakkol Karman, former editor-in-chief of the Guardian Alan Rusbridger, and Pakistani digital rights advocate Nighat Dad.

Nick Clegg, Facebook’s head of global affairs, told Reuters in a Skype interview the board’s composition was important but that its credibility would be earned over time.

“I don’t expect people to say, ‘Oh hallelujah, these are great people, this is going to be a great success’ – there’s no reason anyone should believe that this is going to be a great success until it really starts hearing difficult cases in the months and indeed years to come,” he said.

The board will start work immediately and Clegg said it would begin hearing cases this summer.

The board, which will grow to about 40 members and which Facebook has pledged $130 million to fund for at least six years, will make public, binding decisions on controversial cases where users have exhausted Facebook’s usual appeals process.

The company can also refer significant decisions to the board, including on ads or on Facebook groups. The board can make policy recommendations to Facebook based on case decisions, to which the company will publicly respond.

Initially, the board will focus on cases where content was removed and Facebook expects it to take on only “dozens” of cases to start, a small percentage of the thousands it expects will be brought to the board.

“We are not the internet police, don’t think of us as sort of a fast-action group that’s going to swoop in and deal with rapidly moving problems,” co-chair McConnell said on a conference call.

The board’s case decisions must be made and implemented within 90 days, though Facebook can ask for a 30-day review for exceptional cases.

“We’re not working for Facebook, we’re trying to pressure Facebook to improve its policies and its processes to better respect human rights. That’s the job,” board member and internet governance researcher Nicolas Suzor told Reuters. “I’m not so naive that I think that that’s going to be a very easy job.”

He said board members had differing views on freedom of expression and when it can legitimately be curtailed.

John Samples, vice president of the libertarian Cato Institute, has praised Facebook’s decision not to remove a doctored video of U.S. House Speaker Nancy Pelosi. Sajó has cautioned against allowing the “offended” to have too much influence in the debate around online expression.

Some free speech and internet governance experts told Reuters they thought the board’s first members were a diverse, impressive group, though some were concerned it was too heavy on U.S. members. Facebook said one reason for that was that some of its hardest decisions or appeals in recent years had begun in America.

“I don’t feel like they made any daring choices,” said Jillian C. York, the Electronic Frontier Foundation’s director of international freedom of expression.

Jes Kaliebe Petersen, CEO of Myanmar tech-focused civil society organization Phandeeyar, said he hoped the board would apply more “depth” to moderation issues, compared with Facebook’s universal set of community standards.

David Kaye, U.N. special rapporteur on freedom of opinion and expression, said the board’s efficacy would be shown when it started hearing cases.

“The big question,” he said, “will be, are they taking questions that might result in decisions, or judgments as this is a court, that go against Facebook’s business interests?”

(Reporting by Elizabeth Culliford in Birmingham, England; Editing by Tom Brown and Matthew Lewis)

Under Europe’s virus lockdown, social media proves a lifeline

By Luke Baker

LONDON (Reuters) – Hundreds of millions of Europeans are getting to grips with weeks of a massively contracted existence under lockdown.

The goal is clear and very serious — reduce the spread of a deadly virus, keep critical medical resources and hospital beds free for the most vulnerable, save lives.

But behind that sobering objective lies a new challenge for many: hours inside the same four walls, no office chatter, no social contact, kids to entertain (if you have them and they are not in school), the lure of the fridge.

The reality of this new existence is that social media has become a near-essential resource. Whether for news, shared experiences, comic relief or a heated discussion, Twitter, Facebook and Instagram have become a lifeline for many.

In Italy, tenors and the less tuneful have taken to singing from their balconies to cheer up neighbors and build solidarity, and videos of the performances have entertained millions far beyond Italy on social media.

Chris Martin, lead singer of the band Coldplay, took to Facebook on Monday to put on a live gig for people self-isolating, tagging it #TogetherAtHome. Singer John Legend took up the baton and said he would do the same on Tuesday.

For anyone tracking the ins and outs of the virus, whether infection rates, epidemiological research, or the infection lag between Italy and its neighbors, Twitter is a constant source of information (and, be warned, misinformation).

European leaders have been holding news conferences and delivering televised addresses, but these come once a day at best.

Online, there is a constant stream of news, commentary from experts, graphs analyzing the virus, and videos from people in Italy (which is 10-14 days ahead in terms of the infection spread) recounting what they wished they had known 10 days ago.

As working from home (#WFH on Twitter) becomes the norm, there are tips on how best to do it, where to set up a desk, how to stay focused, and if you don’t have a desk, how an ironing board can double as an excellent, adjustable alternative.

Among the tips for those doing conference calls from home are the obvious: get out of your pyjamas and brush your hair, even if trousers are optional when you are sitting behind a desk.

On Facebook, home workouts have proved popular, with people posting the best ways of staying fit while confined to a room. One popular video involves a woman doing a routine around a load of toilet rolls, which have been the object of hoarding by consumers worried about the impact of the virus.

As always, pets have proved a hit. Alongside WFH advice, many have been posting pictures of their cats and dogs, some of which look surprised by all the sudden unexpected attention.

For many, the surge in social media use in recent years has been an awful contradiction — rather than making people more friendly, it has tended to cut them off, cause division and fuel anger and resentment, not sociability.

But as Europe adjusts to the reality of self-isolation, there are signs social media can bring out the best in people, not just the boastful or argumentative bits many decry.

On Twitter, alongside advice on working from home or looking after elderly relatives, users are opening their direct messages, allowing anyone to contact them, and inviting those who want to talk or share concerns to get in touch.

(Editing by Alexandra Hudson)

Cover up or be censored: Cambodia orders women not to look sexy on Facebook

By Matt Blomberg

PHNOM PENH (Thomson Reuters Foundation) – A crackdown in Cambodia on women who wear provocative clothing while selling goods via Facebook live streams was slammed by women’s rights groups on Wednesday as dangerous and baseless.

Prime Minister Hun Sen said low-cut tops were an affront to Cambodian culture and ordered authorities to track down Facebook vendors who wear them to sell items like clothes and beauty products – a popular trend in the conservative country.

“Go to their places and order them to stop live-streaming until they change to proper clothes,” the prime minister told the government’s Cambodian National Council for Women on Monday.

“This is a violation of our culture and tradition,” he said, adding that such behaviour contributed to sexual abuse and violence against women.

While Cambodia’s young population is increasingly educated, many expect women to be submissive and quiet, a legacy of Chbap Srey, an oppressive code of conduct for women in the form of a poem that was on primary school curricula until 2007.

The national police posted a video to Facebook on Wednesday, in which a Cambodian woman makes a public apology for sullying the “tradition and honour of Cambodian women” by wearing “extremely short and sexy clothes” in her online sales pitches.

Facebook was not immediately available to comment.

Interior ministry spokesman Khieu Sopheak confirmed on Wednesday that authorities were “taking action” in line with the prime minister’s orders. He referred further questions to a police spokesman who could not be reached immediately.

Amnesty International regional director Nicholas Bequelin said the prime minister’s comments were a “dangerous instance of victim blaming”.

“This rhetoric only serves to perpetuate violence against women and stigmatise survivors of gender-based violence,” he said in a statement on Wednesday.

In a 2013 United Nations survey, one in five Cambodian men said they had raped a woman.

Ros Sopheap, head of the charity Gender and Development for Cambodia, said the government should look at the reasons why women sell goods online instead of dictating what they wear.

“They always talk about culture, culture, culture,” she told the Thomson Reuters Foundation. “What about jobs? What about education? These things are broken in Cambodia. And what about people’s right to make a living?”

Seven Cambodian women’s rights groups pointed out that the women vendors had breached no law.

“There is no evidence-based research that affirms that women’s clothing choice is the root cause of degradation of social morality,” they said in an open letter.

(Editing by Katy Migiro. Please credit Thomson Reuters Foundation, the charitable arm of Thomson Reuters, which covers humanitarian news, women’s rights, trafficking, property rights, and climate change. Visit www.trust.org)

Facebook starts fact-checking partnership with Reuters

(Reuters) – Facebook Inc said on Wednesday it has reached an agreement with news agency Reuters, a unit of Thomson Reuters Corp, to fact-check content posted on the social media platform and its photo-sharing app Instagram.

Under pressure to remove fake news from its platform ahead of the U.S. presidential election, Facebook started a U.S. pilot program in December to detect misinformation faster.

The move came after U.S. intelligence agencies said that social media platforms were used in a Russian cyber-influence campaign aimed at interfering in the 2016 U.S. election – a claim Moscow has denied.

A newly created unit at Reuters will fact-check user-generated photos, videos, headlines and other content for Facebook’s U.S. audience in both English and Spanish, the news agency said in a statement. Financial terms were not disclosed.

Facebook works with seven other fact-checking partners in the United States, including Associated Press and Agence France-Presse.

(Reporting by Supantha Mukherjee in Bengaluru; editing by Edward Tobin)