Majority of researchers agree that “AI could soon lead to revolutionary societal change”; better buckle up

Revelation 13:14 “…by the signs that it is allowed to work in the presence of the beast it deceives those who dwell on earth…”

Important Takeaways:

  • According to a survey conducted by Stanford University’s Institute for Human-Centered AI, 36 percent of researchers believe that AI could cause a “nuclear-level catastrophe.”
  • Only 57 percent of researchers think that “recent research progress” is paving the way for artificial general intelligence.
  • Those polled did have one notable point of agreement: 73 percent of researchers “feel that AI could soon lead to revolutionary societal change.”
  • So, whether we’re on the way to a nuclear-level catastrophe, or something entirely different, you might want to buckle up.

Read the original article by clicking here.

ChatGPT gets an upgrade; creator says it’s even more powerful

Revelation 13:14 “…by the signs that it is allowed to work in the presence of the beast it deceives those who dwell on earth…”

Important Takeaways:

  • ChatGPT 2.0: Creator of AI bot that took world by storm launches even more powerful version called ‘GPT-4’ — and admits it’s so advanced it could ‘harm society’
  • It can pass law exams with scores around the 90th percentile of test takers – a huge jump over the earlier model
  • OpenAI said in a blog post: ‘We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning.’
  • ‘GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.’
  • The bot can now accept inputs in the form of images as well as text, but still outputs its answers in text, meaning it can offer detailed descriptions of images.
  • The ability to accept images means that users can now prompt ChatGPT with screenshots and other media.
  • According to analytics firm SimilarWeb, ChatGPT averaged 13 million users per day in January, making it the fastest-growing internet app of all time.

Read the original article by clicking here.

Big-Time Hollywood Director warns of AI

Revelation 13:14 “…by the signs that it is allowed to work in the presence of the beast it deceives those who dwell on earth…”

Important Takeaways:

  • Steven Spielberg warns AI ‘terrifies’ him: ‘It will be the twilight zone’
  • Spielberg brought up the uniqueness of the human soul and how it could not be reproduced by AI.
  • “I think the soul is unimaginable and ineffable,” he explained while on “The Late Show with Stephen Colbert.” “And it cannot be created by any algorithm. It’s just something that exists in all of us.”
  • The filmmaker said he was troubled by the idea of the soul being taken away by robot-made art.
  • “And to lose that because books and movies, and music is being written by machines that we created, and now we’re letting them run with? That terrifies me,” he said.
  • Hollywood and video-game voice actors told tech outlet VICE they oppose the technology being used to replace their jobs in the industry, after they were asked to sign contracts giving away the rights to their voices for use in generative AI.

Read the original article by clicking here.

Fired Google Engineer warns that AI is already being deployed; consequences are not fully understood

Revelation 13:14 “…by the signs that it is allowed to work in the presence of the beast it deceives those who dwell on earth…”

Important Takeaways:

  • Fired Google Engineer Doubles Down on Claim That AI Has Gained Sentience
  • Blake Lemoine — the fired Google engineer who last year went to the press with claims that Google’s Large Language Model (LLM), the Language Model for Dialogue Applications (LaMDA), is actually sentient…
  • According to Webster’s Dictionary, “sentient” means “responsive to or conscious of sense impressions.”
  • He’s contending that a machine’s ability to break from its training as a result of some kind of stressor is reason enough to conclude that the machine has achieved some level of sentience. A machine saying that it’s stressed out is one thing — but acting stressed, he says, is another.
  • …Lemoine on another point. Regardless of sentience, AI is getting both more advanced and more unpredictable: sure, these systems are exciting and impressive, but they are also quite dangerous.
  • “I believe the kinds of AI that are currently being developed are the most powerful technology that has been invented since the atomic bomb,” he said. “In my view, this technology has the ability to reshape the world.”
  • “I can’t tell you specifically what harms will happen,” he said; the worry is what happens when a culture-changing piece of technology is put into the world before the potential consequences of that technology can be fully understood.
  • “I can simply observe that there’s a very powerful technology that I believe has not been sufficiently tested and is not sufficiently well understood, being deployed at a large scale, in a critical role of information dissemination.”

Read the original article by clicking here.

Pandemic opens the door for AI

Daniel 12:4 “But you, Daniel, shut up the words and seal the book, until the time of the end. Many shall run to and fro, and knowledge shall increase.”

Important Takeaways:

  • Troubling Trend: Great Resignation Versus AI, Robotics And Automation
  • Across the country, it’s quitting time. Hiring continues to sputter, with U.S. payrolls rising by only 199,000.
  • The December jobs report shows that unemployment fell to just 3.9%.
  • A record number of people quit their jobs in 2021 – over 40 million
  • “We think we’re going to be in the golden era of robotics adoption for the United States,” said Jay Jacobs, Senior VP for the fund.
  • Dataquest predicts the following trends for robotics in 2022, in an article outlining the “robot revolution”:
    • The Advancement of “Cobots” – according to Dataquest, these robots are designed to work alongside people in a safe and effective manner.
    • AI (Artificial Intelligence) Creating Contact-free Experiences
    • Smart Factories – manned by robots, featuring low-cost sensors and an unstoppable immunity to COVID, Dataquest says “previously impossible things may become extremely realistic.”
    • The Rise of RPA (robotic process automation): “In the healthcare industry, for example, RPA is utilized to automate many activities such as billing, inventory management, and appointment scheduling.”

Read the original article by clicking here.

Amazon to use AI tech in its warehouses to enforce social distancing

(Reuters) – Amazon.com Inc on Tuesday launched an artificial intelligence-based tracking system to enforce social distancing at its offices and warehouses, to help reduce the risk of its workers contracting the new coronavirus.

The unveiling comes as the world’s largest online retailer faces intensifying scrutiny from U.S. lawmakers and unions over whether it is doing enough to protect staff from the pandemic.

Monitors set up in the company’s warehouses will highlight workers keeping a safe distance with green circles, while workers standing too close will be highlighted with red circles, Amazon said.

The system, called Distance Assistant, uses camera footage in Amazon’s buildings to also help identify high-traffic areas.
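Amazon has not published Distance Assistant’s internals, but the behavior described above reduces to a simple rule: estimate each worker’s position from the camera feed, then mark anyone standing within the threshold of someone else. Purely as an illustration (the 1.8 m threshold, the helper name and the floor-plane coordinates are all assumptions here, and the hard parts, detecting people and mapping pixels to floor positions, are omitted), a minimal sketch:

```python
import math
from itertools import combinations

SAFE_DISTANCE_M = 1.8  # assumed ~6 ft threshold; Amazon's actual parameters aren't public here

def circle_colors(positions: list[tuple[float, float]]) -> list[str]:
    """Given each detected worker's position on the floor plane (meters),
    return 'green' for workers keeping a safe distance and 'red' for any
    worker standing too close to another, mirroring the overlay described above."""
    colors = ["green"] * len(positions)
    for i, j in combinations(range(len(positions)), 2):
        if math.dist(positions[i], positions[j]) < SAFE_DISTANCE_M:
            colors[i] = colors[j] = "red"
    return colors

# Two workers close together, one standing apart.
print(circle_colors([(0.0, 0.0), (1.0, 0.5), (5.0, 5.0)]))  # ['red', 'red', 'green']
```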

Amazon, which will open-source the technology behind the system, is not the first company to turn to AI to track compliance with social distancing.

Several firms have told Reuters that AI camera-based software will be crucial to staying open, as it will allow them to show not only workers and customers, but also insurers and regulators, that they are monitoring and enforcing safe practices.

However, privacy activists have raised concerns about increasingly detailed tracking of people and have urged businesses to limit the use of such AI tools to the duration of the pandemic.

The system is live at a handful of buildings, Amazon said on Tuesday, adding that it planned to deploy hundreds of such units over the next few weeks.

(Reporting by Munsif Vengattil in Bengaluru; Editing by Ramakrishnan M. and Sriraj Kalluvila)

Study finds Google system could improve breast cancer detection

By Julie Steenhuysen

CHICAGO (Reuters) – A Google artificial intelligence system proved as good as expert radiologists at predicting which women would develop breast cancer based on screening mammograms and showed promise at reducing errors, researchers in the United States and Britain reported.

The study, published in the journal Nature on Wednesday, is the latest to show that artificial intelligence (AI) has the potential to improve the accuracy of screening for breast cancer, which affects one in eight women globally.

Radiologists miss about 20% of breast cancers in mammograms, the American Cancer Society says, and half of all women who get the screenings over a 10-year period have a false positive result.

The findings of the study, developed with Alphabet’s DeepMind AI unit, which merged with Google Health in September, represent a major advance in the potential for the early detection of breast cancer, said Mozziyar Etemadi, one of its co-authors from Northwestern Medicine in Chicago.

The team, which included researchers at Imperial College London and Britain’s National Health Service, trained the system to identify breast cancers on tens of thousands of mammograms.

They then compared its predictions to the actual results from a set of 25,856 mammograms in the United Kingdom and 3,097 from the United States.

The study showed the AI system could identify cancers with a similar degree of accuracy to expert radiologists, while reducing the number of false positive results by 5.7% in the U.S.-based group and by 1.2% in the British-based group.

It also cut the number of false negatives, where tests are wrongly classified as normal, by 9.4% in the U.S. group, and by 2.7% in the British group.

These differences reflect the ways in which mammograms are read. In the United States, only one radiologist reads the results and the tests are done every one to two years. In Britain, the tests are done every three years, and each is read by two radiologists. When they disagree, a third is consulted.
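To make those figures concrete: a false positive flags a cancer-free scan as suspicious, while a false negative wrongly classifies a cancer as normal. A minimal sketch of how the two rates are computed from predictions and confirmed outcomes (the toy data below is illustrative, not the study’s):

```python
def error_rates(predicted: list[bool], actual: list[bool]) -> tuple[float, float]:
    """Return (false_positive_rate, false_negative_rate).

    A false positive flags a healthy case as cancer; a false negative
    wrongly classifies a cancer as normal, as described above."""
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(a and not p for p, a in zip(predicted, actual))
    positives = sum(actual)              # cases with confirmed cancer
    negatives = len(actual) - positives  # cancer-free cases
    return fp / negatives, fn / positives

# Toy example only; these numbers are not the study's data.
flagged = [True, False, True, False, False, True]
truth   = [True, False, False, False, True, True]
print(error_rates(flagged, truth))  # -> roughly (0.33, 0.33)
```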

‘SUBTLE CUES’

In a separate test, the group pitted the AI system against six radiologists and found it outperformed them at accurately predicting breast cancers.

Connie Lehman, chief of the breast imaging department at Harvard’s Massachusetts General Hospital, said the results are in line with findings from several groups using AI to improve cancer detection in mammograms, including her own work.

The notion of using computers to improve cancer diagnostics is decades old, and computer-aided detection (CAD) systems are commonplace in mammography clinics, yet CAD programs have not improved performance in clinical practice.

The issue, Lehman said, is that current CAD programs were trained to identify things human radiologists can see, whereas with AI, computers learn to spot cancers based on the actual results of thousands of mammograms.

This has the potential to “exceed human capacity to identify subtle cues that the human eye and brain aren’t able to perceive,” Lehman added.

Although computers have not been “super helpful” so far, “what we’ve shown at least in tens of thousands of mammograms is the tool can actually make a very well-informed decision,” Etemadi said.

The study has some limitations. Most of the tests were done using the same type of imaging equipment, and the U.S. group contained a lot of patients with confirmed breast cancers.

More studies will be needed to show that when used by radiologists, the tool improves patient care, and it will require regulatory approval, which could take several years.

(Reporting by Julie Steenhuysen; Editing by Alexander Smith)

Despite robot efficiency, human skills still matter at work

By Caroline Monahan

NEW YORK (Reuters) – Artificial intelligence is approaching critical mass at the office, but humans are still likely to be necessary, according to a new study by executive development firm Future Workplace in partnership with Oracle.

Future Workplace found an 18% jump over last year in the number of workers who use AI in some facet of their jobs, representing more than half of those surveyed.

Reuters spoke with Dan Schawbel, the research director at Future Workplace and bestselling author of “Back to Human,” about the study’s key findings and the future of work.

Q: You found that 64% of people trust a robot more than their manager. What can robots do better than managers and what can managers do better than robots?

A: What managers can do better are soft skills: understanding employees’ feelings, coaching employees, creating a work culture – things that are hard to measure, but affect someone’s workday.

The things robots can do better are hard skills: providing unbiased information, maintaining work schedules, problem solving and maintaining a budget.

Q: Is AI advancing to take over soft skills?

A: Right now, we’re not seeing that. I think the future of work is that human resources is going to be managing the human workforce, whereas information technology is going to be managing the robot workforce. There is no doubt that humans and robots will be working side by side.

Q: Are we properly preparing the next generation to work alongside AI?

A: I think technology is making people more antisocial as they grow up because they’re getting it earlier. Yet the demand right now is for a lot of hard skills that are going to be automated. So eventually, when the hard skills are automated and the soft skills are more in demand, the next generation is in big trouble.

Q: Which countries are using AI the most?

A: India and China, and then Singapore. The countries that are gaining more power and prominence in the world are using AI at work.

Q: If AI does all the easy tasks, will managers be mentally drained with only difficult tasks left to do?

A: I think it’s very possible. I always do tasks that require the most thought in the beginning of my day. After 5 or 6 o’clock, I’m exhausted mentally. But if administrative tasks are automated, potentially, the work day becomes consolidated.

That would free us to do more personal things. We have to see if our workday gets shorter if AI eliminates those tasks. If it doesn’t, the burnout culture will increase dramatically.

Q: Seventy percent of your survey respondents were concerned about AI collecting data on them at work. Is that concern legitimate?

A: Yes. You’re seeing more and more technology vendors enabling companies to monitor employees’ use of their computers.

If we collect data on employees in the workplace and make the employees suffer consequences for not being focused for eight hours a day, that’s going to be a huge problem. No one can focus for that long. It’s going to accelerate our burnout epidemic.

Q: How is AI changing hiring practices?

A: One example is Unilever. The first half of their entry-level recruiting process is really AI-centric. You do a video interview and the AI collects data on you and matches it against successful employees. That lowers the pool of candidates. Then candidates spend a day at Unilever doing interviews, and a percentage get a job offer. That’s machines and humans working side-by-side.
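Unilever has not published how that matching step works. Purely as a toy illustration of the general idea, ranking candidates by similarity to a benchmark profile and keeping only the top slice of the pool, here is a sketch in which every name, feature vector and cutoff is hypothetical:

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def shortlist(candidates: dict[str, list[float]],
              benchmark: list[float],
              keep_fraction: float = 0.3) -> list[str]:
    """Rank candidates by similarity to a benchmark profile (e.g. one built
    from successful employees) and keep only the top fraction of the pool."""
    ranked = sorted(candidates,
                    key=lambda name: cosine(candidates[name], benchmark),
                    reverse=True)
    return ranked[: max(1, round(len(ranked) * keep_fraction))]

# Hypothetical feature vectors, e.g. traits scored from a video interview.
pool = {"ana": [0.9, 0.4, 0.7], "ben": [0.2, 0.8, 0.1], "cara": [0.8, 0.5, 0.6]}
print(shortlist(pool, benchmark=[0.85, 0.45, 0.65]))  # -> ['ana']
```

A real pipeline would involve far more than one similarity score; the point is only the shape of the “match against successful employees and narrow the pool” step described above.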

(Editing by Beth Pinsker and Bernadette Baum)

China’s robot censors crank up as Tiananmen anniversary nears

People take pictures of paramilitary officers marching in formation in Tiananmen Square in Beijing, China May 16, 2019. REUTERS/Thomas Peter

By Cate Cadell

BEIJING (Reuters) – It’s the most sensitive day of the year for China’s internet, the anniversary of the bloody June 4 crackdown on pro-democracy protests at Tiananmen Square, and with under two weeks to go, China’s robot censors are working overtime.

Censors at Chinese internet companies say tools to detect and block content related to the 1989 crackdown have reached unprecedented levels of accuracy, aided by machine learning and voice and image recognition.

“We sometimes say that the artificial intelligence is a scalpel, and a human is a machete,” said one content screening employee at Beijing Bytedance Co Ltd, who asked not to be identified because they are not authorized to speak to media.

Two employees at the firm said censorship of the Tiananmen crackdown, along with other highly sensitive issues including Taiwan and Tibet, is now largely automated.

Posts that allude to dates, images and names associated with the protests are automatically rejected.

“When I first began this kind of work four years ago there was opportunity to remove the images of Tiananmen, but now the artificial intelligence is very accurate,” one of the people said.

Four censors working across Bytedance, Weibo Corp and Baidu Inc apps said they censor between 5,000 and 10,000 pieces of information a day, or five to seven pieces a minute, most of which they said were pornographic or violent content.

Despite advances in AI censorship, current-day tourist snaps in the square are sometimes unintentionally blocked, one of the censors said.

Bytedance and Baidu declined to comment, while Weibo did not respond to a request for comment.

A woman takes pictures in Tiananmen Square in Beijing, China May 16, 2019. REUTERS/Thomas Peter

SENSITIVE PERIOD

The Tiananmen crackdown is a taboo subject in China 30 years after the government sent tanks to quell student-led protests calling for democratic reforms. Beijing has never released a death toll but estimates from human rights groups and witnesses range from several hundred to several thousand.

June 4th itself is marked by a cat-and-mouse game as people use more and more obscure references on social media sites, with obvious allusions blocked immediately. In some years, even the word “today” has been scrubbed.

In 2012, China’s most-watched stock index fell 64.89 points on the anniversary day, echoing the date of the original event in what analysts said was likely a strange coincidence rather than a deliberate reference.

Still, censors blocked access to the term “Shanghai stock market” and to the index numbers themselves on microblogs, along with other obscure references to sensitive issues.

While companies’ censorship tools are becoming more refined, analysts, academics and users say heavy-handed policies mean sensitive periods before anniversaries and political events have become catch-alls for a wide range of sensitive content.

In the lead-up to this year’s Tiananmen Square anniversary, censorship on social media has targeted LGBT groups, labor and environment activists and NGOs, they say.

Upgrades to censorship tech have been urged on by new policies introduced by the Cyberspace Administration of China (CAC). The group was set up – and officially led – by President Xi Jinping, whose tenure has been defined by increasingly strict ideological control of the internet.

The CAC did not respond to a request for comment.

Last November, the CAC introduced new rules aimed at quashing dissent online in China, where “falsifying the history of the Communist Party” on the internet is a punishable offence for both platforms and individuals.

The new rules require assessment reports and site visits for any internet platform that could be used to “socially mobilize” or lead to “major changes in public opinion”, including access to real names, network addresses, times of use, chat logs and call logs.

One official who works for CAC told Reuters the recent boost in online censorship is “very likely” linked to the upcoming anniversary.

“There is constant communication with the companies during this time,” said the official, who declined to talk directly about Tiananmen, instead referring to “the sensitive period in June”.

Companies, which are largely responsible for their own censorship, receive little in the way of directives from the CAC, but are responsible for creating guidelines in their own “internal ethical and party units”, the official said.

SECRET FACTS

With Xi’s tightening grip on the internet, the flow of information has been centralized under the Communist Party’s Propaganda Department and state media network. Censors and company staff say this reduces the pressure of censoring some events, including major political news, natural disasters and diplomatic visits.

“When it comes to news, the rule is simple… If it is not from state media first, it is not authorized, especially regarding the leaders and political items,” said one Baidu staffer.

“We have a basic list of keywords which include the 1989 details, but (AI) can more easily select those.”

Punishment for failing to properly censor content can be severe.

In the past six weeks, popular services including a Netease Inc news app, Tencent Holdings Ltd’s news app TianTian, and Sina Corp have all been hit with suspensions ranging from days to weeks, according to the CAC, meaning services are made temporarily unavailable on app stores and online.

For internet users and activists, penalties can range from fines to jail time for spreading information about sensitive events online.

In China, social media accounts are linked to real names and national ID numbers by law, and companies are legally compelled to offer user information to authorities when requested.

“It has become normal to know things and also understand that they can’t be shared,” said one user, Andrew Hu. “They’re secret facts.”

In 2015, Hu spent three days in detention in his home region of Inner Mongolia after posting a comment about air pollution on an unrelated image that alluded to the Tiananmen crackdown, on the Twitter-like social media site Weibo.

Hu, who declined to use his full Chinese name to avoid further run-ins with the law, said that when police officers came to his parents’ house while he was on leave from his job in Beijing, he was surprised but not frightened.

“The responsible authorities and the internet users are equally confused,” said Hu. “Even if the enforcement is irregular, they know the simple option is to increase pressure.”

(Reporting by Cate Cadell. Editing by Lincoln Feast.)

AI must be accountable, EU says as it sets ethical guidelines

FILE PHOTO: An activist from the Campaign to Stop Killer Robots, a coalition of non-governmental organisations opposing lethal autonomous weapons or so-called 'killer robots', protests at Brandenburg Gate in Berlin, Germany, March 21, 2019. REUTERS/Annegret Hilse/File Photo

By Foo Yun Chee

BRUSSELS (Reuters) – Companies working with artificial intelligence need to install accountability mechanisms to prevent the technology from being misused, the European Commission said on Monday, under new ethical guidelines for a technology open to abuse.

AI projects should be transparent, have human oversight, and use secure and reliable algorithms, and they must be subject to privacy and data protection rules, the commission said, among other recommendations.

The European Union initiative taps into a global debate about when or whether companies should put ethical concerns before business interests, and how tough a line regulators can afford to take on new projects without risking killing off innovation.

“The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies,” the Commission digital chief, Andrus Ansip, said in a statement.

AI can help detect fraud and cybersecurity threats, improve healthcare and financial risk management and cope with climate change. But it can also be used to support unscrupulous business practices and authoritarian governments.

The EU executive last year enlisted 52 experts from academia, industry bodies and companies including Google, SAP, Santander and Bayer to help it draft the principles.

Companies and organizations can sign up to a pilot phase in June, after which the experts will review the results and the Commission will decide on the next steps.

IBM Europe Chairman Martin Jetter, who was part of the group of experts, said the guidelines “set a global standard for efforts to advance AI that is ethical and responsible.”

The guidelines should not hold Europe back, said Achim Berg, president of BITKOM, Germany’s Federal Association of Information Technology, Telecommunications, and New Media.

“We must ensure in Germany and Europe that we do not only discuss AI but also make AI,” he said.

(Reporting by Foo Yun Chee, additional reporting by Georgina Prodhan in London; editing by John Stonestreet, Larry King)