Artificial general intelligence is advancing fast, maybe too fast, with some insiders calling it a ‘reckless’ race for dominance


Important Takeaways:

  • A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created.
  • The group’s members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is prioritizing profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.
  • They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.
  • The group published an open letter on Tuesday calling for leading A.I. companies, including OpenAI, to establish greater transparency and more protections for whistle-blowers.
  • Daniel Kokotajlo, 31, joined OpenAI in 2022 as a governance researcher and was asked to forecast A.I. progress. He was not, to put it mildly, optimistic.
  • In his previous job at an A.I. safety organization, he predicted that A.G.I. might arrive in 2050. But after seeing how quickly A.I. was improving, he shortened his timelines. Now he believes there is a 50 percent chance that A.G.I. will arrive by 2027 — in just three years.
  • He also believes that the probability that advanced A.I. will destroy or catastrophically harm humanity — a grim statistic often shortened to “p(doom)” in A.I. circles — is 70 percent. (A note on how these two figures relate follows this list.)
  • Eventually, Mr. Kokotajlo said, he became so worried that, last year, he told Mr. Altman that the company should “pivot to safety” and spend more time and resources guarding against A.I.’s risks rather than charging ahead to improve its models. He said that Mr. Altman had claimed to agree with him, but that nothing much changed.
  • In April, he quit. In an email to his team, he said he was leaving because he had “lost confidence that OpenAI will behave responsibly” as its systems approach human-level intelligence.
  • “The world isn’t ready, and we aren’t ready,” Mr. Kokotajlo wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”
  • “There needs to be some sort of democratically accountable, transparent governance structure in charge of this process,” Mr. Kokotajlo said. “Instead of just a couple of different private companies racing with each other, and keeping it all secret.”
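
A note on how those two figures relate, as a back-of-envelope reading of our own rather than anything Mr. Kokotajlo states: the 70 percent “p(doom)” estimate concerns advanced A.I. whenever it arrives, while the 50 percent figure is about timing. Naively multiplying the two, 0.5 × 0.7 = 0.35, would suggest roughly a 35 percent chance of A.G.I.-driven catastrophe getting under way by 2027; the 70 percent figure on its own is not a prediction about the next three years.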

Read the original article by clicking here.

Facebook and OpenAI on the brink of new AI models capable of ‘reasoning and planning’


Important Takeaways:

  • OpenAI and Meta are on the brink of releasing new artificial intelligence models that they say will be capable of reasoning and planning, critical steps towards achieving superhuman cognition in machines.
  • Executives at OpenAI and Meta both signaled this week that they were preparing to launch the next versions of their large language models, the systems that power generative AI applications such as ChatGPT.
  • Meta said it would begin rolling out Llama 3 in the coming weeks, while Microsoft-backed OpenAI indicated that its next model, expected to be called GPT-5, was coming “soon”.
  • Because today’s models struggle to deal with complex questions or to retain information over long periods, they still “make stupid mistakes”, Meta’s chief AI scientist, Yann LeCun, said.
  • Adding reasoning would mean that an AI model “searches over possible answers”, “plans the sequence of actions” and builds a “mental model of what the effect of [its] actions are going to be”, he said. (A toy sketch of that loop follows this list.)
  • This is a “big missing piece that we are working on to get machines to get to the next level of intelligence”, he added.
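
That description maps onto a classic lookahead search: enumerate candidate action sequences, use a “mental model” to predict where each sequence leads, and keep the best plan. Below is a minimal, hypothetical Python sketch of that loop; the predict and score functions are toy stand-ins of our own invention, not anything from Meta’s or OpenAI’s systems.

    from itertools import product

    def predict(state, action):
        # Toy "mental model": predict the state an action leads to.
        # A real system would learn this; here we just record the action.
        return state + (action,)

    def score(state):
        # Toy objective: reward plans that alternate between actions.
        return sum(1 for a, b in zip(state, state[1:]) if a != b)

    def plan(actions, horizon=3):
        # Search over every possible action sequence ("searches over
        # possible answers"), simulate each one step by step ("plans the
        # sequence of actions"), and keep the best-scoring plan.
        best_plan, best_value = None, float("-inf")
        for candidate in product(actions, repeat=horizon):
            state = ()
            for action in candidate:
                state = predict(state, action)
            value = score(state)
            if value > best_value:
                best_plan, best_value = candidate, value
        return best_plan

    print(plan(actions=("left", "right")))  # -> ('left', 'right', 'left')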

Read the original article by clicking here.

From Orbs to OpenAI: CEO Sam Altman is enabling universal access to the global economy by scanning your eyeball

Important Takeaways:

  • Behold, the Orb: I can’t stop thinking about Sam Altman’s dystopian eyeball-scanning device
  • You might think sobbing at a sold-out showing of the “Barbie” movie last weekend was enough proof you’re human. In 2023, people are also apparently getting their irises scanned by shining chrome orbs to prove it.
  • And yes, there are pictures.
  • That’s because OpenAI’s CEO Sam Altman trumpeted the launch of Worldcoin on Monday, a venture he co-founded years earlier with Alex Blania to “enable universal access to the global economy,” according to its website.
  • The orbs, shiny sculptural spheres that scan the eyeballs of new members, seem to have become the company’s dystopian symbol. They help to provide users with “World IDs,” records proving a person signing up is human and not AI, with the goal of moving through the internet more easily and accessing digital currency, according to the company. (A bare-bones sketch of how such an ID check could work follows this list.)
  • Misgivings or not, some two million people have signed up already, according to Worldcoin, which has articulated a goal to address economic inequality through AI. (The company says it plans to give out a “new digital token freely to billions of people.”)
  • It’s not clear yet if the tech can solve yet another complex, systemic problem in the blink of an eye. But the orbs are here now, beckoning to us all.
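
To make the “World ID” idea concrete, here is a bare-bones, hypothetical Python sketch of a proof-of-personhood registry: a biometric template is reduced to an opaque hash so the raw scan never has to be stored, and an ID is issued only if that hash has not been seen before. Worldcoin’s real pipeline (iris codes, zero-knowledge proofs, hardware attestation) is far more elaborate; nothing below comes from the company.

    import hashlib

    registry = set()  # IDs already issued (an in-memory stand-in)

    def world_id(biometric_template: bytes) -> str:
        # Hash the template so the raw biometric never has to be stored.
        return hashlib.sha256(biometric_template).hexdigest()

    def enroll(biometric_template: bytes):
        # Issue an ID only if this template has never been seen before;
        # refusing duplicates is what makes the ID a proof of personhood.
        uid = world_id(biometric_template)
        if uid in registry:
            return None  # duplicate enrollment attempt
        registry.add(uid)
        return uid

    print(enroll(b"iris-template-alice"))  # new ID (a hex digest)
    print(enroll(b"iris-template-alice"))  # None: already enrolled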

Read the original article by clicking here.

CEO of OpenAI and creator of ChatGPT tells Senate committee regulation of artificial intelligence is needed


Revelation 13:14 “…by the signs that it is allowed to work in the presence of the beast it deceives those who dwell on earth…”

Important Takeaways:

  • Sam Altman, CEO of OpenAI, calls for US to regulate artificial intelligence
  • The creator of advanced chatbot ChatGPT has called on US lawmakers to regulate artificial intelligence (AI).
  • Altman said a new agency should be formed to license AI companies.
  • He has not shied away from addressing the ethical questions that AI raises, and has pushed for more regulation.
  • “There will be an impact on jobs. We try to be very clear about that,” he said, adding that the government will “need to figure out how we want to mitigate that”.
  • Altman told legislators he was worried about the potential impact on democracy, and how AI could be used to send targeted misinformation during elections – a prospect he said is among his “areas of greatest concern”.
  • The technology is moving so fast that legislators also wondered whether such an agency would be capable of keeping up.

Read the original article by clicking here.