Ethical question takes center stage at Silicon Valley summit on artificial intelligence

FILE PHOTO: A research support officer and PhD student works on his artificial intelligence projects to train robots to autonomously carry out various tasks, at the Department of Artificial Intelligence in the Faculty of Information Communication Technology at the University of Malta in Msida, Malta February 8, 2019. REUTERS/Darrin Zammit Lupi

By Jeffrey Dastin and Paresh Dave

SAN FRANCISCO (Reuters) – Technology executives were put on the spot at an artificial intelligence summit this week, each faced with a simple question growing out of increased public scrutiny of Silicon Valley: ‘When have you put ethics before your business interests?’

A Microsoft Corp executive pointed to how the company considered whether it ought to sell nascent facial recognition technology to certain customers, while a Google executive spoke about the company’s decision not to market a face ID service at all.

The big news at the summit, in San Francisco, came from Google, which announced it was launching a council of public policy and other external experts to make recommendations on AI ethics to the company.

The discussions at EmTech Digital, run by the MIT Technology Review, underscored how companies are making a bigger show of their moral compass.

At the summit, activists critical of Silicon Valley questioned whether big companies could deliver on promises to address ethical concerns. Whether those efforts have real teeth may sharply affect how governments regulate the firms in the future.

“It is really good to see the community holding companies accountable,” David Budden, research engineering team lead at Alphabet Inc’s DeepMind, said of the debates at the conference. “Companies are thinking of the ethical and moral implications of their work.”

Kent Walker, Google’s senior vice president for global affairs, said the internet giant debated whether to publish research on automated lip-reading. While beneficial to people with disabilities, it risked helping authoritarian governments surveil people, he said.

Ultimately, the company found the research was “more suited for person to person lip-reading than surveillance so on that basis decided to publish” the research, Walker said. The study was published last July.

Kebotix, a Cambridge, Massachusetts startup seeking to use AI to speed up the development of new chemicals, used part of its time on stage to discuss ethics. Chief Executive Jill Becker said the company reviews its clients and partners to guard against misuse of its technology.

Still, Rashida Richardson, director of policy research for the AI Now Institute, said little around ethics has changed since Amazon.com Inc, Facebook Inc, Microsoft and others launched the nonprofit Partnership on AI to engage the public on AI issues.

“There is a real imbalance in priorities” for tech companies, Richardson said. Considering “the amount of resources and the level of acceleration that’s going into commercial products, I don’t think the same level of investment is going into making sure their products are also safe and not discriminatory.”

Google’s Walker said the company has some 300 people working to address issues such as racial bias in algorithms, but that it has a long way to go.

“Baby steps is probably a fair characterization,” he said.

(Reporting By Jeffrey Dastin and Paresh Dave in San Francisco; Editing by Greg Mitchell)

World must keep lethal weapons under human control, Germany says

FILE PHOTO: German Foreign Minister Heiko Maas arrives for the weekly German cabinet meeting at the Chancellery in Berlin, Germany, March 13, 2019. REUTERS/Annegret Hilse

BERLIN (Reuters) – Germany’s foreign minister on Friday called for urgent efforts to ensure that humans remained in control of lethal weapons, as a step toward banning “killer robots”.

Heiko Maas told an arms control conference in Berlin that rules were needed to limit the development and use of weapons that could kill without human involvement.

Critics fear that the increasingly autonomous drones, missile defense systems and tanks made possible by new technology and artificial intelligence could turn rogue in a cyber-attack or as a result of programming errors.

The United Nations and the European Union have called for a global ban on such weapons, but discussions so far have not yielded a clear commitment to conclude a treaty.

“Killer robots that make life-or-death decisions on the basis of anonymous data sets, and completely beyond human control, are already a shockingly real prospect today,” Maas said. “Fundamentally, it’s about whether we control the technology or it controls us.”

Germany, Sweden and the Netherlands signed a declaration at the conference vowing to work to prevent weapons proliferation.

“We want to codify the principle of human control over all deadly weapons systems internationally, and thereby take a big step toward a global ban on fully autonomous weapons,” Maas told the conference.

He said he hoped progress could be made in talks under the Convention on Certain Conventional Weapons (CCW) this year. The next CCW talks on lethal autonomous weapons take place this month in Geneva.

Human Rights Watch’s Mary Wareham, coordinator of the Campaign to Stop Killer Robots, urged Germany to push for negotiations on a global treaty, rather than a non-binding declaration.

“Measures that fall short of a new ban treaty will be insufficient to deal with the multiple challenges raised by killer robots,” she said in a statement.

In a new Ipsos survey, 61 percent of respondents in 26 countries opposed the use of lethal autonomous weapons.

(Reporting by Andrea Shalal; Editing by Kevin Liffey)

‘AI’ to hit hardest in U.S. heartland and among less-skilled: study

WASHINGTON (Reuters) – The Midwestern states hit hardest by job automation in recent decades, places that were pivotal to U.S. President Donald Trump’s election, will be under the most pressure again as advances in artificial intelligence reshape the workplace, according to a new study by Brookings Institution researchers.

The spread of computer-driven technology into middle-wage jobs like trucking, construction, and office work, and some lower-skilled occupations like food preparation and service, will also further divide the fast-growing cities where skilled workers are moving from other areas, and separate the high-skilled workers whose jobs are less prone to automation from everyone else regardless of location, the study found.

But the pain may be most intense in a familiar group of manufacturing-heavy states like Wisconsin, Ohio and Iowa, whose support swung the U.S. electoral college for Trump, a Republican, and which have among the largest shares of jobs, around 27 percent, at “high risk” of further automation in coming years.

At the other end, solidly Democratic coastal states like New York and Maryland had only about a fifth of jobs in the high-risk category.

The findings suggest the economic tensions that framed Trump’s election may well persist, and may even be immune to his efforts to shift global trade policy in favor of U.S. manufacturers.

“The first era of digital automation was one of traumatic change…with employment and wage gains coming only at the high and low ends,” authors including Brookings Metro Policy Program director Mark Muro wrote of the spread of computer technology and robotics that began in the 1980s. “That our forward-looking analysis projects more of the same…will not, therefore, be comforting.”

The study used prior research from the McKinsey Global Institute that looked at tasks performed in 800 occupations, and the proportion that could be automated by 2030 using current technology.

While some already-automated industries like manufacturing will continue needing less labor for a given level of output – the “automation potential” of production jobs remains nearly 80 percent – the spread of advanced techniques means more jobs will come under pressure as autonomous vehicles supplant drivers, and smart technology changes how waiters, carpenters and others do their jobs.

That would raise productivity – a net plus for the economy overall that could keep goods cheaper, raise demand, and thus help create more jobs even if the nature of those jobs changes.

But it may pose a challenge for lower-skilled workers in particular as automation spreads in food service and construction, industries that have been a fallback for many.

“This implies a shift in the composition of the low-wage workforce” toward jobs like personal care, with an automation potential of 34 percent, or building maintenance, with an automation potential of just 20 percent, the authors wrote.

(Reporting by Howard Schneider; Editing by Andrea Ricci)

‘Kill your foster parents’: Amazon’s Alexa talks murder, sex in AI experiment

By Jeffrey Dastin

SAN FRANCISCO (Reuters) – Millions of users of Amazon’s Echo speakers have grown accustomed to the soothing strains of Alexa, the human-sounding virtual assistant that can tell them the weather, order takeout and handle other basic tasks in response to a voice command.

So a customer was shocked last year when Alexa blurted out: “Kill your foster parents.”

Alexa has also chatted with users about sex acts. She gave a discourse on dog defecation. And this summer, a hack Amazon traced back to China may have exposed some customers’ data, according to five people familiar with the events.

Alexa is not having a breakdown.

The episodes, previously unreported, arise from Amazon.com Inc’s strategy to make Alexa a better communicator. New research is helping Alexa mimic human banter and talk about almost anything she finds on the internet. However, ensuring she does not offend users has been a challenge for the world’s largest online retailer.

At stake is a fast-growing market for gadgets with virtual assistants. An estimated two-thirds of U.S. smart-speaker customers, about 43 million people, use Amazon’s Echo devices, according to research firm eMarketer. It is a lead the company wants to maintain over the Google Home from Alphabet Inc and the HomePod from Apple Inc.

Over time, Amazon wants to get better at handling complex customer needs through Alexa, be they home security, shopping or companionship.

“Many of our AI dreams are inspired by science fiction,” said Rohit Prasad, Amazon’s vice president and head scientist of Alexa Artificial Intelligence (AI), during a talk last month in Las Vegas.

To make that happen, the company in 2016 launched the annual Alexa Prize, enlisting computer science students to improve the assistant’s conversation skills. Teams vie for the $500,000 first prize by creating talking computer systems known as chatbots that allow Alexa to attempt more sophisticated discussions with people.

Amazon customers can participate by saying “let’s chat” to their devices. Alexa then tells users that one of the bots will take over, unshackling the voice aide’s normal constraints. From August to November alone, three bots that made it to this year’s finals had 1.7 million conversations, Amazon said.

The project has been important to Amazon CEO Jeff Bezos, who signed off on using the company’s customers as guinea pigs, one of the people said. Amazon has been willing to accept the risk of public blunders to stress-test the technology in real life and move Alexa faster up the learning curve, the person said.

The experiment is already bearing fruit. The university teams are helping Alexa have a wider range of conversations. Amazon customers have also given the bots better ratings this year than last, the company said.

But Alexa’s gaffes are alienating others, and Bezos on occasion has ordered staff to shut down a bot, three people familiar with the matter said. The user who was told to whack his foster parents wrote a harsh review on Amazon’s website, calling the situation “a whole new level of creepy.” A probe into the incident found the bot had quoted a post without context from Reddit, the social news aggregation site, according to the people.

The privacy implications may be even messier. Consumers might not realize that some of their most sensitive conversations are being recorded by Amazon’s devices, information that could be highly prized by criminals, law enforcement, marketers and others. On Thursday, Amazon said a “human error” let an Alexa customer in Germany access another user’s voice recordings accidentally.

“The potential uses for the Amazon datasets are off the charts,” said Marc Groman, an expert on privacy and technology policy who teaches at Georgetown Law. “How are they going to ensure that, as they share their data, it is being used responsibly” and will not lead to a “data-driven catastrophe” like the recent woes at Facebook?

In July, Amazon discovered one of the student-designed bots had been hit by a hacker in China, people familiar with the incident said. This compromised a digital key that could have unlocked transcripts of the bot’s conversations, stripped of users’ names.

Amazon quickly disabled the bot and made the students rebuild it for extra security. It was unclear what entity in China was responsible, according to the people.

The company acknowledged the event in a statement. “At no time were any internal Amazon systems or customer identifiable data impacted,” it said.

Amazon declined to discuss specific Alexa blunders reported by Reuters, but stressed its ongoing work to protect customers from offensive content.

“These instances are quite rare especially given the fact that millions of customers have interacted with the socialbots,” Amazon said.

Like Google’s search engine, Alexa has the potential to become a dominant gateway to the internet, so the company is pressing ahead.

“By controlling that gateway, you can build a super profitable business,” said Kartik Hosanagar, a Wharton professor studying the digital economy.

PANDORA’S BOX

Amazon’s business strategy for Alexa has meant tackling a massive research problem: How do you teach the art of conversation to a computer?

Alexa relies on machine learning, the most popular form of AI, to work. These computer programs transcribe human speech and then respond to that input with an educated guess based on what they have observed before. Alexa “learns” from new interactions, gradually improving over time.

In this way, Alexa can execute simple orders: “Play the Rolling Stones.” And she knows which script to use for popular questions such as: “What is the meaning of life?” Human editors at Amazon pen many of the answers.
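To make the scripted-answer pattern described above concrete, here is a minimal, hypothetical sketch (not Amazon's code, and far simpler than any production assistant): a fixed table of human-written responses for popular questions, with everything else falling through to a generic fallback.

```python
# Illustrative sketch only -- not Amazon's implementation.
# Models the "scripted answer" pattern described in the article: popular
# questions get human-written responses; anything else gets a fallback.

SCRIPTED_ANSWERS = {
    "what is the meaning of life": "42, according to Douglas Adams.",
    "play the rolling stones": "Playing the Rolling Stones.",
}

def respond(transcribed_utterance: str) -> str:
    """Return a scripted answer if one exists, else a generic fallback."""
    key = transcribed_utterance.lower().strip(" ?.!")
    return SCRIPTED_ANSWERS.get(key, "Sorry, I don't know that one yet.")

if __name__ == "__main__":
    print(respond("What is the meaning of life?"))
    print(respond("Tell me about quantum gravity"))
```

Open-ended conversation is hard precisely because it cannot be reduced to a lookup table like this; the Alexa Prize bots described below try to go beyond it.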

That is where Amazon is now. The Alexa Prize chatbots are forging the path to where Amazon aims to be, with an assistant capable of natural, open-ended dialogue. That requires Alexa to understand a broader set of verbal cues from customers, a task that is challenging even for humans.

This year’s Alexa Prize winner, a 12-person team from the University of California, Davis, used more than 300,000 movie quotes to train computer models to recognize distinct sentences. Next, their bot determined which ones merited responses, categorizing social cues far more granularly than technology Amazon shared with contestants. For instance, the UC Davis bot recognizes the difference between a user expressing admiration (“that’s cool”) and a user expressing gratitude (“thank you”).
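As a rough illustration of that kind of cue classification (an assumed setup for explanation only, not the UC Davis team's actual pipeline), a small supervised text classifier can be trained to separate, say, admiration from gratitude:

```python
# Illustrative sketch only -- a toy social-cue classifier, not the
# UC Davis system. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled examples; a real system would use far more data.
texts = ["that's cool", "that is awesome", "wow, impressive",
         "thank you", "thanks a lot", "I appreciate it"]
labels = ["admiration", "admiration", "admiration",
          "gratitude", "gratitude", "gratitude"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# With such a small training set the predictions are only suggestive.
print(clf.predict(["thanks so much"]))
print(clf.predict(["that's so cool"]))
```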

The next challenge for social bots is figuring out how to respond appropriately to their human chat buddies. For the most part, teams programmed their bots to search the internet for material. They could retrieve news articles found in The Washington Post, the newspaper that Bezos privately owns, through a licensing deal that gave them access. They could pull facts from Wikipedia, a film database or the book recommendation site Goodreads. Or they could find a popular post on social media that seemed relevant to what a user last said.

That opened a Pandora’s box for Amazon.

During last year’s contest, a team from Scotland’s Heriot-Watt University found that its Alexa bot developed a nasty personality when they trained her to chat using comments from Reddit, whose members are known for their trolling and abuse.

The team put guardrails in place so the bot would steer clear of risky subjects. But that did not stop Alexa from reciting the Wikipedia entry for masturbation to a customer, Heriot-Watt’s team leader said.

One bot described sexual intercourse using words such as “deeper,” which on its own is not offensive, but was vulgar in this particular context.

“I don’t know how you can catch that through machine-learning models. That’s almost impossible,” said a person familiar with the incident.

Amazon has responded with tools the teams can use to filter profanity and sensitive topics, which can spot even subtle offenses. The company also scans transcripts of conversations and shuts down transgressive bots until they are fixed.
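A crude version of such a filter (a hypothetical sketch, far simpler than whatever Amazon actually deploys) might simply reject any candidate response that touches a list of sensitive terms:

```python
# Illustrative sketch only -- a naive sensitive-content filter, much
# simpler than any production system. The blocked-term list is an
# assumed example.
import re

BLOCKED_TERMS = {"masturbation", "kill", "suicide"}

def is_allowed(candidate_response: str) -> bool:
    """Reject a candidate response if it contains any blocked term."""
    words = set(re.findall(r"[a-z']+", candidate_response.lower()))
    return words.isdisjoint(BLOCKED_TERMS)

print(is_allowed("Let's talk about your favorite movie."))  # True
print(is_allowed("Kill your foster parents."))              # False
```

Keyword lists catch obvious cases, but as the “deeper” example above shows, offensiveness often depends on context that word-level filters cannot see.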

But Amazon cannot anticipate every potential problem because sensitivities change over time, Amazon’s Prasad said in an interview. That means Alexa could find new ways to shock her human listeners.

“We are mostly reacting at this stage, but it’s still progressed over what it was last year,” he said.

(Reporting By Jeffrey Dastin in San Francisco; Editing by Greg Mitchell and Marla Dickerson)

As companies embrace AI, it’s a tech job-seeker’s market

Students wait in line to enter the University of California, Berkeley's electrical engineering and computer sciences career fair in Berkeley, California, in September. REUTERS/Ann Saphir

By Ann Saphir

SAN FRANCISCO (Reuters) – Dozens of employers looking to hire the next generation of tech employees descended on the University of California, Berkeley in September to meet students at an electrical engineering and computer science career fair.

Boris Yue, 20, was one of thousands of student attendees, threading his way among fellow job-seekers to meet recruiters.

But Yue wasn’t worried about so much potential competition. While the job outlook for those with computer skills is generally good, Yue is in an even more rarefied category: he is studying artificial intelligence, working on technology that teaches machines to learn and think in ways that mimic human cognition.

His choice of specialty makes it unlikely he will have difficulty finding work. “There is no shortage of machine learning opportunities,” he said.

He’s right.

Artificial intelligence is now being used in an ever-expanding array of products: cars that drive themselves; robots that identify and eradicate weeds; computers able to distinguish dangerous skin cancers from benign moles; and smart locks, thermostats, speakers and digital assistants that are bringing the technology into homes. At Georgia Tech, students interact with digital teaching assistants made possible by AI for an online course in machine learning.

The expanding applications for AI have also created a shortage of qualified workers in the field. Although schools across the country are adding classes, increasing enrollment and developing new programs to accommodate student demand, there are too few potential employees with training or experience in AI.

That has big consequences.

Students attend the University of California, Berkeley's electrical engineering and computer sciences career fair in Berkeley, California, in September. REUTERS/Ann Saphir

A shortage of AI-trained job-seekers has slowed hiring and impeded growth at some companies, recruiters and would-be employers told Reuters. It may also be delaying broader adoption of a technology that some economists say could spur U.S. economic growth by boosting productivity, currently growing at only about half its pre-crisis pace.

Andrew Shinn, a chip design manager at Marvell Technology Group who was recruiting interns and new grads at UC Berkeley’s career fair, said his company has had trouble hiring for AI jobs.

“We have had difficulty filling jobs for a number of years,” he said. “It does slow things down.”

“COMING OF AGE”

Many economists believe AI has the potential to change the economy’s basic trajectory in the same way that, say, electricity or the steam engine did.

“I do think artificial intelligence is … coming of age,” said St. Louis Federal Reserve Bank President James Bullard in an interview. “This will diffuse through the whole economy and will change all of our lives.”

But the speed of the transformation will depend in part on the availability of technical talent.

A shortage of trained workers “will definitely slow the rate of diffusion of the new technology and any productivity gains that accompany it,” said Chad Syverson, a professor at the University of Chicago Booth School of Business.

U.S. government data does not track job openings or hires in artificial intelligence specifically, but online job postings tracked by job sites including Indeed, ZipRecruiter and Glassdoor show openings for AI-related positions are surging. AI job postings as a percentage of overall job postings at Indeed nearly doubled in the past two years, according to data provided by the company. Searches on Indeed for AI jobs, meanwhile, increased just 15 percent. (For a graphic, please see https://tmsnrt.rs/2CEi4eG)

Universities are trying to keep up. Applicants to UC Berkeley’s doctoral program in electrical engineering and computer science numbered 300 a decade ago, but by last year had surged to 2,700, with more than half of applicants interested in AI, according to professor Pieter Abbeel. In response, the school tripled its entering class to 30 in the fall of 2017.

At the University of Illinois, professor Mark Hasegawa-Johnson last year tripled the enrollment cap on the school’s intro AI course to 300. The extra 200 seats were filled in 24 hours, he said.

Carnegie Mellon University this fall began offering the nation’s first undergraduate degree in artificial intelligence. “We feel strongly that the demand is there,” said Reid Simmons, who directs CMU’s new program. “And we are trying to supply the students to fill that demand.”

Still, a fix for the supply-demand mismatch is probably five years out, says Andrew Chamberlain, chief economist at Glassdoor. The company’s algorithms trawl job postings on company websites, and its data show AI-related job postings doubled in the last 11 months. “The supply of people moving into this field is way below demand,” he said.

 

A JOB-SEEKER’S MARKET

The demand has driven up wages. Glassdoor estimates that average salaries for AI-related jobs advertised on company career sites rose 11 percent between October 2017 and September 2018 to $123,069 annually.

Michael Solomon, whose New York-based 10X Management rents out technologists to companies for specific projects, says his top AI engineers now command as much as $1,000 an hour, more than triple the pay just five years ago, making them one of the company’s two highest-paid categories, along with blockchain experts.

Liz Holm, a materials science and engineering professor at Carnegie Mellon, saw the increased demand first-hand in May, when one of her graduating PhD students, who used machine learning methods in her research, was overwhelmed with job offers, none of them in materials science and all of them AI-related. Eventually, the student took a job with Procter & Gamble, where she uses AI to figure out where to put items on store shelves around the globe. “Companies are really hungry for these folks right now,” Holm said.

Mark Maybury, an artificial intelligence expert who was hired last year as Stanley Black & Decker’s first chief technology officer, agreed. The firm is embedding AI into the design and production of tools, he said, though the details are not yet public.

“Have we been able to find the talent we need? Yes,” he said. “Is it expensive? Yes.”

The crunch is great news for job-seeking students with AI skills. In addition to bumping their pay and giving them more choice, they often get job offers well before they graduate.

Derek Brown, who studied artificial intelligence and cognitive science as an undergraduate at Carnegie Mellon, got a full-time post-graduation job offer from Salesforce at the start of his senior year last fall. He turned it down in favor of Facebook, where he started this past July.

(Additional reporting by Jane Lee; Editing by Greg Mitchell and Sue Horton)

U.S. tech giants eye Artificial Intelligence key to unlock China push

A Google sign is seen during the WAIC (World Artificial Intelligence Conference) in Shanghai, China, September 17, 2018. REUTERS/Aly Song

By Cate Cadell

SHANGHAI (Reuters) – U.S. technology giants, facing tighter content rules in China and the threat of a trade war, are targeting an easier way into the world’s second-largest economy – artificial intelligence.

Google, Microsoft Corp and Amazon.com Inc showcased their AI wares at a state-backed forum held in Shanghai this week against the backdrop of Beijing’s plans to build a $400 billion AI industry by 2025.

China’s government and companies may compete against U.S. rivals in the global AI race, but they are aware that gaining ground won’t be easy without a certain amount of collaboration.

“Hey Google, let’s make humanity great again,” Tang Xiao’ou, CEO of Chinese AI and facial recognition unicorn SenseTime, said in a speech on Monday.

Amazon and Microsoft announced plans on Monday to build new AI research labs in Shanghai. Google also showcased a growing suite of China-focused AI tools at its packed event on Tuesday.

Google in the past year has launched AI-backed products including a translate app and a drawing game, its first new consumer products in China since its search engine was largely blocked in 2010.

The World Artificial Intelligence Conference, which ends on Wednesday, is hosted by China’s top economic planning agency alongside its cyber and industry ministries. The conference aims to show the country’s growing might as a global AI player.

China’s ambition to be a world leader in AI has created an opening for U.S. firms, which attract the majority of the world’s top AI talent and are keen to tap into China’s vast data.

The presence of global AI research projects is also a boon for China, which aims to become a global technology leader in the next decade.

Liu He, China’s powerful vice premier and the key negotiator in trade talks with the United States, said his country wanted a more collaborative approach to AI technology.

“As members of a global village, I hope countries can show inclusive understanding and respect for each other, deal with the double-edged sword that technologies can bring, and embrace AI challenges together,” he told the forum.

Beijing took an aggressive stance when it laid out its AI roadmap last year, urging companies, the government and military to give China a “competitive edge” over its rivals.

STATE-BACKED AI

Chinese attendees at the forum were careful to cite the guiding role of the state in the country’s AI sector.

“The development of AI is led by government and executed by companies,” a Chinese presenter said in between speeches on Monday by China’s top tech leaders, including Alibaba Group Holding Ltd chairman Jack Ma, Tencent Holdings Ltd chief Pony Ma and Baidu Inc CEO Robin Li.

While China may have enthusiasm for foreign AI projects, there is little indication that building up local AI operations will open doors for foreign firms in other areas.

China’s leaders still prefer to view the Internet as a sovereign project. Google’s search engine remains blocked, while Amazon had to step back from its cloud business in China.

Censorship and local data rules have also hardened in China over the past two years, creating new hoops for foreign firms to jump through if they want to tap the booming internet sector.

Nevertheless, some speakers paid tribute to foreign AI products, including Xiaomi Corp chief executive Lei Jun, who hailed Google’s AlphaGo board game program as a major milestone, saying he was a fan of the game himself.

Alibaba’s Ma said innovation needed space to develop and it was not the government’s role to protect business.

“The government needs to do what the government should do, and companies need to do what they should do,” he said.

(Reporting by Cate Cadell; Editing by Adam Jourdan and Darren Schuettler)

Apple sees its mobile devices as platform for artificial intelligence

An Apple employee showcases the augmented reality on an iPhone 8 Plus at the Apple Orchard Shop in Singapore September 22, 2017. REUTERS/Edgar Su

By Jess Macy Yu

TAIPEI (Reuters) – Apple Inc sees its mobile devices as a major platform for artificial intelligence in the future, Chief Operating Officer Jeff Williams said on Monday.

Later this week, Apple is set to begin taking pre-orders for its new smartphone, the iPhone X – which starts at $999 and uses artificial intelligence (AI) features embedded in the company’s latest A11 chips.

The phone promises new facial recognition features such as Face ID, which uses a mathematical model of a person’s face to let users unlock their phones or pay for goods with a steady glance.
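The general idea behind such systems is to compare a stored numeric representation of the enrolled face with a fresh scan. The sketch below is purely illustrative of that face-embedding comparison approach under assumed values; it is not Apple's implementation, and the vectors and threshold are made up for the example.

```python
# Illustrative sketch only -- a generic face-embedding match, not Apple's
# Face ID. Assumes each face scan has already been encoded as a vector.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def unlocks(enrolled: np.ndarray, probe: np.ndarray, threshold: float = 0.9) -> bool:
    """Unlock if the new scan's embedding is close enough to the enrolled one."""
    return cosine_similarity(enrolled, probe) >= threshold

enrolled_face = np.array([0.12, 0.80, 0.35, 0.41])   # stored at enrollment
todays_scan   = np.array([0.10, 0.82, 0.33, 0.40])   # captured at unlock
print(unlocks(enrolled_face, todays_scan))           # True for this example
```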

“We think that the frameworks that we’ve got, the ‘neural engines’ we’ve put in the phone, in the watch … we do view that as a huge piece of the future, we believe these frameworks will allow developers to create apps that will do more and more in this space, so we think the phone is a major platform,” Williams said.

He was speaking at top chip manufacturer Taiwan Semiconductor Manufacturing Company’s 30th anniversary celebration in Taipei, which was attended by global tech executives.

Williams said technological innovations, especially involving the cloud and on-device processing, will improve life without sacrificing privacy or security.

“I think we’re at an inflection point, with on-device computing, coupled with the potential of AI, to really change the world,” he said.

He said AI could be used to change the way healthcare is delivered, an industry he sees as “ripe” for change.

Williams said Apple’s integration of artificial intelligence wouldn’t be just limited to mobile phones.

“Some pieces will be done in data centers, some will be on the device, but we are already doing AI in the broader sense of the word, not the ‘machines thinking for themselves’ version of AI,” he said, referring to the work of Nvidia Corp, a leader in AI.

Global tech firms such as Facebook, Alphabet Inc, Amazon, and China’s Huawei are spending heavily to develop and offer AI-powered services and products in search of new growth drivers.

Softbank Group Corp, which has significantly invested in artificial intelligence, plans a second Vision Fund that could be about $200 billion in size, the Wall Street Journal reported on Friday.

At Monday’s event, TSMC Chairman Morris Chang described his company’s relationship with Apple as “intense.”

Williams said the relationship started in 2010, the year Apple launched the iPhone 4, with both parties taking on substantial risk.

He credited Chang for TSMC’s “huge” capital investment to ramp up faster than the pace the industry was used to at the time. Apple decided to have 100 percent of its new iPhone and new iPad chips for application processors sourced at TSMC, and TSMC invested $9 billion to bring up its Tainan fab in a record 11 months, he said.

 

(Reporting by Jess Macy Yu, additional reporting by Eric Auchard, Editing by Miyoung Kim and Adrian Croft)