Facebook’s ‘double-edged sword’ in Thai carnage

By Patpicha Tanakasempipat

NAKHON RATCHASIMA, Thailand (Reuters) – Facebook celebrity doctor Parkphum Dejhutsadin said his phone suddenly started pinging on Saturday – scores of his two million followers in Thailand were desperate and they needed his help.

With nowhere to turn as they cowered in a shopping mall from a rogue soldier who had already killed more than two dozen people, they looked to Facebook and other social media to send their pleas and to try to find escape.

Parkphum could help – and said for the next 16 hours that’s all he did: living up to his panda-eyed Facebook persona as sleepless doctor “Mor Lab Panda”.

“They told me where they were and sent me pictures of their hiding places. Authorities didn’t know where anybody was hiding. But I knew everything,” said Parkphum. “I didn’t sleep a wink. I didn’t want them to die.”

While social media have been accused of exacerbating or even encouraging mass shootings such as last year’s mosque massacre in Christchurch, New Zealand, in Thailand they were also crucial to pulling off a safe and dramatic rescue from the shopping mall in Nakhon Ratchasima city.

Before 32-year-old killer Jakrapanth Thomma was cornered in a basement and shot dead, Thai commandos managed to coordinate raids into the mall to spring hundreds of people to safety.

“We were communicating on Facebook with the people inside to exchange information,” Pongpipat Siripornwiwat, deputy commander of Nakhon Ratchasima police, told Reuters. “Without it, our work would’ve been very difficult and we wouldn’t have had any idea how many were trapped and what was going on inside.”

FACEBOOK LIFE

The tragedy underscored the extent to which Facebook is the communication platform for daily life in Thailand, a country of 69 million with about 56 million monthly active users, where the average person spends three hours a day on social media, mostly on mobile phones.

And it was on Facebook that the killer, apparently angered by a property deal gone sour, first signaled his intentions.

“Do they think they can spend the money in hell?” his post ended, roughly three hours before he opened fire at a house, then moved to an army camp, a temple and then the shopping mall – leaving a trail of murder behind him.

At one point he posted a selfie in front of a fire.

His last message before his Facebook account was shut down – “Should I give up?” – came nearly four hours after the first shot.

But after facing criticism for being slow to take down the Christchurch shooter’s livestream, and for a 2017 video in which a Thai father murdered his child on Facebook Live, the world’s biggest social media company moved faster once it heard what was happening.

It shut his Facebook and Instagram accounts and then worked to remove anything he had posted that was being shared by others – including by spoof accounts apparently set up in his name after his own was blocked.

“There is no place on Facebook for people who commit this kind of atrocity, nor do we allow people to praise or support this attack,” a Facebook representative said in a statement, adding that it worked closely with Thai authorities to take down content that violated its policies.

“We also responded to emergency requests from the Royal Thai Police to share information related to the shooter to prevent further harm,” it said, without giving further details.

Twitter, where graphic videos of the incident were circulated, said it also took action – a company representative said it monitored its platform to remove video content of the attack and to shield graphic content from view.

But police said the shooter, who killed at least 29 people and wounded 57 before he was stopped, had not only used social media to publicize what he was doing but also to track police movements through online news sites.

“Social media was a double-edged sword. It helped police rescue people, but it also helped him keep up with our movements,” said Pongpipat.

“PANDA EYES”

Parkphum, a medical technologist working for Thailand’s National Blood Center, is so famous he even has his own set of stickers for social media messaging apps with his trademark “panda eyes” and white coat.

“Every message from the people about where they were hiding and how many were with them all turned out to be true when police got there. People were hiding in (fashion store) H&M, Eveandboy (a cosmetics shop), a gym. I now know the entire floor plan of the mall,” he said.

Other Facebook celebrities with millions of followers also stepped in to coordinate and reassure.

“I told them to stay as quiet as possible and mute their phones, to send their locations and phone numbers,” said Witawat Siriprachai, 36, known by Thais as the “Sergeant” of the social commentary page “Drama-addict”.

“I warned them not to livestream from their locations, because the shooter was also using Facebook during the rampage,” said Witawat, who is not a sergeant in real life.

At the shopping mall, 42-year-old Pat said she had just finished a meal when she heard the first shots and ran to hide in a mobile phone store. She said she was still traumatized and did not want to give her full name.

For five hours, she said, she scrolled through her Facebook news feed to keep up with what was happening. Afraid to make the slightest noise, she messaged friends, who told her how to contact the police.

“I waited in complete darkness, and then the police replied to ask my exact location,” she said.

Police worked with the information she gave to coordinate an escape route and timing for people on that floor – and when they gave clearance that the shooter was three floors down, everybody just sprinted to the fire exit.

At a crouching run, masked commandos led them to safety.

Just before 11 p.m., she posted to friends that she was safe.

(Editing by Matthew Tostevin)

Britain to United States: We want a trade deal and a digital tax

LONDON (Reuters) – Britain wants a trade deal with the United States but will impose a digital service tax on the revenue of companies such as Google, Facebook and Amazon, business minister Andrea Leadsom said on Thursday.

“The United States and the United Kingdom are committed to entering into a trade deal with each other and we have a very strong relationship that goes back centuries so some of the disagreements that we might have over particular issues don’t in any way damage the excellent and strong and deep relationship between the U.S. and the UK,” Leadsom told Talk Radio.

“There are always tough negotiations and tough talk but I think where the tech tax is concerned it’s absolutely vital that these huge multinationals who are making incredible amounts of income and profit should be taxed, and what we want to do is to work internationally with the rest of the world to come up with a proper regime that ensures that they’re paying their fair share.”

Under the British plan, tech companies that generate at least 500 million pounds ($657 million) a year in global revenue will pay a levy of 2% of the money they make from UK users from April 2020.
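To make the arithmetic concrete, here is a minimal sketch of the levy as described above, assuming a simple threshold check and a flat rate; the actual legislation’s allowances and definitions of in-scope revenue are not modeled:

```python
def uk_digital_services_tax(global_revenue_gbp: float, uk_user_revenue_gbp: float) -> float:
    """Sketch of the levy as reported: firms with at least 500 million
    pounds a year in global revenue pay 2% of the revenue they make
    from UK users. Allowances and scope rules in the real legislation
    are omitted."""
    GLOBAL_THRESHOLD_GBP = 500_000_000  # qualifying threshold, as reported
    RATE = 0.02                         # 2% levy on UK-user revenue

    if global_revenue_gbp < GLOBAL_THRESHOLD_GBP:
        return 0.0
    return uk_user_revenue_gbp * RATE


# A firm with 1 billion pounds of global revenue, 100 million of it
# from UK users, would owe 2 million pounds under this reading.
print(uk_digital_services_tax(1_000_000_000, 100_000_000))  # 2000000.0
```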

(Reporting by Elizabeth Howcroft; writing by Guy Faulconbridge; editing by Kate Holton)

Harvey Weinstein jury selection: bias, big data and ‘likes’

By Tom Hals

(Reuters) – When lawyers in the Harvey Weinstein rape trial question potential jurors on Thursday, they may already know who has used the #MeToo hashtag on Twitter or criticized victims of sexual harassment in a Facebook discussion.

The intersection of big data capabilities and the prevalence of social media has transformed the business of jury research in the United States, which once meant gleaning information about potential jurors from car bumper stickers or the appearance of a home.

Now, consultants scour Facebook, Twitter, Reddit and other social media platforms for hard-to-find comments or “likes” in discussion groups, or even selfies of a juror wearing a T-shirt that hints at bias.

“This is a whole new generation of information than we had in the past,” said Jeffrey Frederick, the director of Jury Research Services at the National Legal Research Group Inc.

The techniques seem tailor-made for the Weinstein trial, which has become a focal point for #MeToo, the social media movement that has exposed sexual misconduct by powerful men in business, politics and entertainment.

Weinstein, 67, has pleaded not guilty to charges of assaulting two women. The once-powerful movie producer faces life in prison if convicted on the most serious charge, predatory sexual assault.

On Thursday, the legal teams will begin questioning potential jurors, a process known as voir dire. More than 100 people passed an initial screening and the identities of many of those people have been known publicly for days, allowing for extensive background research.

Mark Geragos, a defense lawyer, said it is almost malpractice to ignore jurors’ online activity, particularly in high-profile cases.

When Geragos was representing Scott Peterson, who was later found guilty of the 2002 murder of his pregnant wife Laci, it came to light that a woman told an internet chatroom she had duped both legal teams to get on the California jury.

“You just never know if someone is telling the truth,” said Geragos.

Weinstein’s lawyer, Donna Rotunno, told Reuters recently that her team was considering hiring a firm to investigate jurors’ social media use to weed out bias.

The Manhattan District Attorney’s office does not use jury consultants, and office spokesman Danny Frost declined to say whether prosecutors were reviewing potential jurors’ social media.

Frederick’s firm, which has not been involved in the Weinstein case, creates huge databases of online activity relevant to a case, drilling down into interactions that do not appear in a user’s social media timeline. His firm combs through news articles shared on Facebook about a particular case or topic, cataloging every comment, reply and share, as well as emojis or “likes,” in the hope that some were posted by a potential juror.
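In outline, that kind of cataloging might look like the sketch below, which indexes every public interaction by a normalized commenter name and matches the index against a juror list; the records and field names are assumptions for illustration, not the firm’s actual tooling:

```python
from collections import defaultdict

# All names, fields and records below are illustrative assumptions.
interactions = [
    {"user": "Jane Doe", "type": "comment", "text": "He deserves prison"},
    {"user": "John Roe", "type": "reaction", "text": "angry"},
    {"user": "Jane Doe", "type": "share", "text": ""},
]

# Index every public interaction under a normalized commenter name.
catalog = defaultdict(list)
for item in interactions:
    catalog[item["user"].lower()].append((item["type"], item["text"]))

# Match the catalog against a list of potential jurors.
potential_jurors = ["Jane Doe", "A. N. Other"]
for name in potential_jurors:
    hits = catalog.get(name.lower(), [])
    if hits:
        print(f"{name}: {len(hits)} public interaction(s) found: {hits}")
```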

“The social media aspect can be enormously helpful in looking at people’s political motives,” said defense attorney Michael Bachner. He said Weinstein’s team will probably want to know about a potential juror’s ties to women’s causes, with “#MeToo being the obvious one.”

Consultants only use public information and focus on those with extremist views, said Roy Futterman of consulting firm DOAR.

“You’re looking for the worst juror,” he said.

Julieanne Himelstein, a former federal prosecutor, said the best vetting tool remains a lawyer’s questioning of a potential juror in the courtroom.

“That trumps all the sophisticated intelligence gathering anyone can do,” said Himelstein.

But trial veterans said that potential jurors are reluctant to admit unpopular viewpoints during voir dire, such as skepticism about workplace sexual harassment.

During questioning in a trial involving a drug company, consultant Christina Marinakis recalled a potential juror who said he did not have negative feelings toward pharmaceutical companies.

“We found he had a blog where he was just going off on capitalism and Corporate America and pharmaceutical companies especially,” said Marinakis, the director of jury research for Litigation Insights. The juror was dismissed.

Marinakis said the blog was written under a username, and only came to light by digging through the juror’s social media for references to pseudonyms.

Lawyers can reject an unlimited number of potential jurors if they show bias. Each side can typically use “peremptory” challenges to eliminate up to three potential jurors they believe will be unsympathetic, without providing a reason.

In a Canadian civil trial, jury consulting firm Vijilent discovered that a potential juror who appeared to be a stay-at-home mom with no history of social activism had in fact been arrested three times for civil disobedience while promoting the causes of indigenous people.

“Unless you got into her social media, you wouldn’t have known that information,” said Vijilent founder Rosanna Garcia.

(Reporting by Tom Hals; additional reporting by Brendan Pierson and Gabriella Borter in New York; Editing by Noeleen Walder and Rosalba O’Brien)

Facebook and eBay pledge to better tackle fake reviews

LONDON (Reuters) – Facebook and eBay have promised to better identify, probe and respond to fake and misleading reviews, Britain’s Competition and Markets Authority (CMA) said on Wednesday after pressing the online platforms to tackle the issue.

Customer reviews have become an integral part of online shopping on several websites and apps but the regulator has expressed concerns that some comments may not be genuine.

Facebook has removed 188 groups and disabled 24 user accounts whilst eBay has permanently banned 140 users since the summer, according to the CMA.

The CMA has also found examples on photo-sharing app Instagram, which its owner Facebook has promised to investigate.

“Millions of people base their shopping decisions on reviews, and if these are misleading or untrue, then shoppers could end up being misled into buying something that isn’t right for them – leaving businesses who play by the rules missing out,” said CMA Chief Executive Andrea Coscelli.

The CMA said neither company was intentionally allowing such content and both had committed to tackle the problem.

“We maintain zero tolerance for fake or misleading reviews and will continue to take action against any seller that breaches our user policies,” said a spokeswoman at eBay.

Facebook said it was working to stop such fraudulent activity, including exploring the use of automated technology to help remove content before it was seen.

“While we have invested heavily to prevent this kind of activity across our services, we know there is more work to do and are working with the CMA to address this issue.”

(Reporting by Costas Pitas, Editing by Paul Sandle)

Facebook to pilot new fact-checking program with community reviewers

(Reuters) – Facebook Inc said on Tuesday it would ask community reviewers to fact check content in a pilot program in the United States, as the social media platform looks to detect misinformation faster.

The company will work with data services provider Appen to source community reviewers.

The social media giant said community reviewers, hired as contractors, will review content flagged as potentially false by machine learning before it is sent to Facebook’s third-party fact-checking partners. It said data company YouGov had conducted an independent study of community reviewers and Facebook users.

Facebook is under pressure to police misinformation on its platform in the United States ahead of the November 2020 presidential election.

The company recently came under fire for its policy of exempting ads run by politicians from fact checking, drawing ire from Democratic presidential candidates Joe Biden and Elizabeth Warren.

(Reporting by Neha Malara in Bengaluru; Editing by Shinjini Ganguli)

Facebook, Instagram experience outage on Thanksgiving Day

(Reuters) – Facebook Inc’s family of apps, including Instagram, experienced a major outage on Thanksgiving Day, prompting a flurry of tweets about the disruption.

“We’re aware that some people are currently having trouble accessing Facebook’s family of apps, including Instagram. We’re working to get things back to normal as quickly as possible. #InstagramDown,” Instagram said in a tweet.

According to outage monitoring website DownDetector, about 8,000 Facebook users were affected in various parts of the world including the United States and Britain.

Several users reported being unable to post pictures and videos to their main feeds, and an error message saying “Facebook Will Be Back Soon” appeared on login attempts.

Facebook could not immediately be reached for comment.

(Reporting by Mekhla Raina in Bengaluru; editing by Diane Craft)

Facebook suspends Russian Instagram accounts targeting U.S. voters

By Jack Stubbs and Christopher Bing

LONDON/WASHINGTON (Reuters) – Facebook Inc said on Monday it had suspended a network of Instagram accounts operated from Russia that targeted Americans with divisive political messages ahead of next year’s U.S. presidential election, with operators posing as people within the United States.

Facebook said it had also suspended three separate networks operated from Iran. It said the Russian network “showed some links” to Russia’s Internet Research Agency (IRA), an organization Washington says was used by Moscow to meddle in the 2016 U.S. election.

“We see this operation targeting largely U.S. public debate and engaging in the sort of political issues that are challenging and sometimes divisive in the U.S. right now,” said Nathaniel Gleicher, Facebook’s head of cybersecurity policy.

“Whenever you do that, a piece of what you engage on are topics that are going to matter for the election. But I can’t say exactly what their goal was.”

Facebook also announced new steps to fight foreign interference and misinformation ahead of the November 2020 election, including labeling state-controlled media outlets and adding greater protections for elected officials and candidates who may be vulnerable targets for hacking.

U.S. security officials have warned that Russia, Iran and other countries could attempt to sway the result of next year’s presidential vote. Officials say they are on high alert for signs of foreign influence campaigns on social media.

Moscow and Tehran have repeatedly denied the allegations.

Gleicher said the IRA-linked network used 50 Instagram accounts and one Facebook account to gather 246,000 followers, about 60% of which were in the United States.

The earliest accounts dated to January this year and the operation appeared to be “fairly immature in its development,” he said.

“They were pretty focused on audience-building, which is the thing you do first as you’re sort of trying to set up an operation.”

Ben Nimmo, a researcher with Graphika, a social media analysis company commissioned by Facebook, said the flagged accounts shared material that could appeal to Republican and Democratic voters alike.

Most of the messages plagiarized material authored by leading conservative and progressive pundits. This included recycling comments initially shared on Twitter that criticized U.S. congresswoman Alexandria Ocasio-Cortez, Democratic presidential candidate Joe Biden and current President Donald Trump.

“What’s interesting in this set is so much of what they were doing is copying and pasting genuine material from actual Americans,” Nimmo told Reuters. “This may be indicative of an effort to hide linguistic deficiencies, which have made them easier to detect in the past.”

U.S. prosecutors say the IRA’s operations were helped by Concord Management and Consulting LLC, a firm they say is controlled by Russian catering tycoon Evgeny Prigozhin. Attorneys for Concord have denied any wrongdoing.

Gleicher said the separate Iranian network his team identified used more than 100 fake and hacked accounts on Facebook and Instagram to target U.S. users and some French-speaking parts of North Africa. Some accounts also repurposed Iranian state media stories to target users in Latin American countries including Venezuela, Brazil, Argentina, Bolivia, Peru, Ecuador and Mexico.

The activity was connected to an Iranian campaign first identified in August last year, which Reuters showed aimed to direct internet users to a sprawling web of pseudo-news websites which repackaged propaganda from Iranian state media.

The accounts “typically posted about local political news and geopolitics including topics like public figures in the U.S., politics in the U.S. and Israel, support of Palestine and conflict in Yemen,” Facebook said.

(Reporting by Jack Stubbs; Additional reporting by Elizabeth Culliford in San Francisco; Editing by Chris Reese, Tom Brown and David Gregorio)

Martin Luther King’s daughter tells Facebook disinformation helped kill civil rights leader

SAN FRANCISCO (Reuters) – Disinformation campaigns helped lead to the assassination of Martin Luther King, the daughter of the U.S. civil rights champion said on Thursday, after the head of Facebook said social media should not fact-check political advertisements.

The comments come as Facebook Inc is under fire for its approach to political advertisements and speech, which Chief Executive Mark Zuckerberg defended on Thursday in a major speech that twice referenced King, known by his initials MLK.

King’s daughter, Bernice, tweeted that she had heard the speech. “I’d like to help Facebook better understand the challenges #MLK faced from disinformation campaigns launched by politicians. These campaigns created an atmosphere for his assassination,” she wrote from the handle @BerniceKing.

King died of an assassin’s bullet in Memphis, Tennessee, on April 4, 1968.

Zuckerberg argued that his company should give voice to minority views and said that court protection for free speech stemmed in part from a case involving a partially inaccurate advertisement by King supporters. The U.S. Supreme Court protected the supporters from a lawsuit.

“People should decide what is credible, not tech companies,” Zuckerberg said.

“We very much appreciate Ms. King’s offer to meet with us. Her perspective is invaluable and one we deeply respect. We look forward to continuing this important dialogue with her in Menlo Park next week,” a Facebook spokesperson said.

(Reporting by Peter Henderson; Editing by Lisa Shumaker)

Facebook’s Zuckerberg hits pause on China, defends political ads policy

By David Shepardson and Katie Paul

WASHINGTON (Reuters) – Facebook Inc <FB.O> Chief Executive Mark Zuckerberg on Thursday defended the social media company’s political advertising policies and said it was unable to overcome China’s strict censorship, attempting to position his company as a defender of free speech.

“I wanted our services in China because I believe in connecting the whole world, and I thought maybe we could help create a more open society,” Zuckerberg said, addressing students at Georgetown University.

“I worked hard on this for a long time, but we could never come to agreement on what it would take for us to operate there,” he said. “They never let us in.”

He did not address what conditions or assurances he would need to enter the Chinese market.

Facebook tried for years to break into China, one of the last great obstacles to Zuckerberg’s vision of connecting the world’s entire population on the company’s apps.

Zuckerberg met with Chinese President Xi Jinping in Beijing and welcomed the country’s top internet regulator to Facebook’s campus. He also learned Mandarin and posted a photo of himself running through Tiananmen Square, which drew a sharp reaction from critics of the country’s restrictive policies.

The company briefly won a license to open an “innovation hub” in Hangzhou last year, but it was later revoked.

Zuckerberg effectively closed that door in March, when he announced his plan to pivot Facebook toward more private forms of communication and pledged not to build data centers in countries “that have a track record of violating human rights like privacy or freedom of expression.”

He repeated his concern about data centers on Thursday, this time specifically naming China.

Zuckerberg also defended the company’s political advertising policies on similar grounds, saying Facebook had at one time considered banning all political ads but decided against it, erring on the side of greater expression.

Facebook has been under fire over its advertising policies, particularly from U.S. Senator Elizabeth Warren, a leading contender for the Democratic presidential nomination.

The company exempts politicians’ ads from fact-checking standards applied to other content on the social network. Zuckerberg said political advertising does not contribute much to the company’s revenues, but that he believed it would be inappropriate for a tech company to censor public figures.

Reuters reported in October 2018, citing sources, that Facebook executives briefly debated banning all political ads, which produce less than 5% of the company’s revenue.

The company rejected that because product managers were loath to leave advertising dollars on the table and policy staffers argued that blocking political ads would favor incumbents and wealthy campaigners who can better afford television and print ads, the sources said.

Facebook has been under scrutiny in recent years for its lax approach to fake news reports and disinformation campaigns, which many believe affected the outcome of the 2016 U.S. presidential election, won by Donald Trump.

Trump has disputed claims that Russia has attempted to interfere in U.S. elections. Russian President Vladimir Putin has denied it.

Warren’s Democratic presidential campaign recently challenged Facebook’s policy that exempts politicians’ ads from fact-checking, running ads on the social media platform containing the false claim that Zuckerberg endorsed Trump’s re-election bid.

(Reporting by David Shepardson; Writing by Katie Paul; Editing by Lisa Shumaker)

Mass shooting rumor in Facebook Group shows private chats are not risk-free

By Bryan Pietsch

WASHINGTON (Reuters) – Ahead of the annual Blueberry Festival in Marshall County, Indiana, in early September, a woman broadcast a warning to her neighbors on Facebook.

“I just heard there’s supposed to be a mass shooting tonight at the fireworks,” the woman, whose name is being withheld to protect her privacy, said in a post in a private Facebook Group with over 5,000 members. “Probably just a rumor or kids trying to scare people, but everyone keep their eyes open,” she said in the post, which was later deleted.

There was no shooting at the Blueberry Festival that night, and the local police said there was no threat.

But the post sparked fear in the community, with some group members canceling their plans to attend, and showed the power of rumors in Facebook Groups, which are often private or closed to outsiders. Groups allow community members to quickly spread information, and possibly misinformation, to users who trust the word of their neighbors.

These groups and other private features, rather than public feeds, are “the future” of social media, Facebook Inc <FB.O> Chief Executive Mark Zuckerberg said in April, revealing their importance to Facebook’s business model.

The threat of misinformation spreading rapidly in Groups shows a potential vulnerability in a key part of the company’s growth strategy. It could push Facebook to invest in expensive human content monitoring at the risk of limiting the ability to post in real time, a central benefit of Groups and Facebook in general that has attracted millions of users to the platform.

Asked whether Facebook takes responsibility for situations like the one in Indiana, a company spokeswoman said it is committed to maintaining groups as a safe place, and that it encourages people to contact law enforcement if they see a potential threat.

Facebook Groups can also serve as a tool for connecting social communities around the world, such as ethnic groups, university alumni and hobbyists.

Facebook’s WhatsApp messaging platform faced similar but more serious problems in 2018, after false messages about child abductors led to mass beatings of more than a dozen people in India, some of whom died. WhatsApp later limited message forwards and began labeling forwarded messages to quell the risk of fake news.
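In outline, forward limiting of that kind can be sketched as follows; the cap value and the label are assumptions for illustration, not WhatsApp’s actual implementation:

```python
from dataclasses import dataclass

FORWARD_LIMIT = 5  # assumed cap; the real limits have varied over time


@dataclass
class Message:
    text: str
    forward_count: int = 0


def forward(msg: Message) -> Message:
    """Create a forwarded copy: refuse once the cap is reached, and label
    the copy so recipients know the sender did not compose it."""
    if msg.forward_count >= FORWARD_LIMIT:
        raise PermissionError("forwarding limit reached for this message")
    label = "" if msg.text.startswith("[Forwarded] ") else "[Forwarded] "
    return Message(label + msg.text, msg.forward_count + 1)


m = Message("Beware of strangers seen near the school")
for _ in range(FORWARD_LIMIT):
    m = forward(m)  # each hop increments the count and carries the label
# one more forward(m) would now raise PermissionError
```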

FIREWORKS FEAR

The Blueberry Festival post caused chaos in the group, named “Local News Now 2…(Marshall and all surrounding Counties).”

In another post, which garnered over 100 comments of confusion and worry, a different member urged the woman to report the threat to the police. “This isn’t something to joke about or take lightly,” she wrote.

The author of the original post did not respond to repeated requests for comment.

Facebook’s policy is to remove language that “incites or facilitates serious violence,” the company spokeswoman said, adding that it did not remove the post because it did not violate Facebook’s policies, as there “was no threat, praise or support of violence.”

Cheryl Siddall, the founder of the Indiana group, said she would welcome tools from Facebook to give her greater “control” over what people post in the group, such as alerts to page moderators if posts contain certain words or phrases.

But Siddall said, “I’m sorry, but that’s a full-time job to sit and monitor everything that’s going on in the page.”
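A tool of the kind Siddall describes could, in a minimal sketch, match incoming posts against a moderator-defined watchlist and raise an alert on a hit; everything below is an assumption for illustration, not an existing Facebook feature:

```python
import re

# The watchlist and the alerting behavior are illustrative assumptions.
WATCHLIST = ["shooting", "bomb", "gun"]
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, WATCHLIST)) + r")\b", re.IGNORECASE
)


def flag_for_moderators(post_text: str) -> list:
    """Return the watchlist terms found in a post so moderators can be
    alerted to review it."""
    return sorted({m.group(1).lower() for m in PATTERN.finditer(post_text)})


post = "I just heard there's supposed to be a mass shooting tonight"
hits = flag_for_moderators(post)
if hits:
    print(f"Alert moderators: post mentions {hits}")  # ['shooting']
```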

A Facebook spokeswoman said group administrators have the ability to remove a post if it violates the group’s rules, and that administrators can pre-approve individual posts, as well as turn on post approvals for individual group members.

In a post to its blog, Facebook urged administrators to write “great group rules” to “set the tone for your group and help prevent member conflict,” as well as “provide a feeling of safety for group members.”

David Bacon, chief of police for the Plymouth Police Department in Marshall County, said the threat was investigated and traced back to an exaggerated rumor from children. Nonetheless, he said the post to the Facebook group is “what caused the whole problem.”

“One post grows and people see it, and they take it as the gospel, when in actuality you can throw anything you want out there,” Bacon said.

(Reporting by Bryan Pietsch; Editing by Chris Sanders)