New study by State Department calls for emergency power to control AI to prevent ‘extinction level’ threat


Important Takeaways:

  • A new US State Department-funded study calls for a temporary ban on the creation of advanced AI past a certain threshold of computational power.
  • The tech, its authors claim, poses an ‘extinction-level threat to the human species.’
  • The study, commissioned as part of a $250,000 federal contract, also calls for ‘defining emergency powers’ for the American government’s executive branch ‘to respond to dangerous and fast-moving AI-related incidents’ — like ‘swarm robotics.’
  • The study's authors, a four-person AI consultancy firm called Gladstone AI run by brothers Jérémie and Edouard Harris, told TIME that their earlier presentations on AI risks were frequently heard by government officials who lacked the authority to act.
  • That’s changed with the US State Department, they told the magazine, because its Bureau of International Security and Nonproliferation is specifically tasked with curbing the spread of cataclysmic new weapons.
  • …advanced AI, they write, ‘could potentially be used to design and even execute catastrophic biological, chemical, or cyber-attacks, or enable unprecedented weaponized applications in swarm robotics.’
  • There is, they write, ‘reason to believe that they [weaponized AI] may be uncontrollable if they are developed using current techniques, and could behave adversarially to human beings by default.’
  • In other words, the machines may decide for themselves that humanity (or some subset of humanity) is simply an enemy to be eradicated for good.
  • Gladstone AI’s CEO, Jérémie Harris, presented similarly grave scenarios at a hearing of the Standing Committee on Industry and Technology in Canada’s House of Commons on December 5, 2023.
  • ‘Publicly and privately, frontier AI labs are telling us to expect AI systems to be capable of carrying out catastrophic malware attacks and supporting bioweapon design, among many other alarming capabilities, in the next few years,’ according to IT World Canada’s coverage of his remarks.
  • ‘Our own research,’ he said, ‘suggests this is a reasonable assessment.’
  • A related political action committee, dubbed Americans for AI Safety, launched this Monday with the stated hope of ‘passing AI safety legislation by the end of 2024.’

Read the original article by clicking here.