Warren Buffett says AI is like the atomic bomb; we may wish we never created it

Important Takeaways:

  • Warren Buffett has raised the alarm on AI, warning it threatens to supercharge fraud by making scams more convincing than ever.
  • “Scamming has always been part of the American scene,” the famed investor and Berkshire CEO said during his company’s annual shareholder meeting on Saturday.
  • But Buffett said that images and videos created using artificial intelligence have become so convincing that it’s virtually impossible to discern if they’re real or not.
  • “When you think of the potential of scamming people … if I was interested in scamming, it’s going to be the growth industry of all time,” he said.
  • Buffett also likened the advent of AI to the creation of the atom bomb, echoing comments he made at last year’s Berkshire meeting.
  • “We let the genie out of the bottle when we developed nuclear weapons,” he said. “That genie’s been doing some terrible things lately. The power of the genie scares the hell out of me.”
  • “AI is somewhat similar,” Buffett added. “We may wish we’d never seen that genie.”
  • The billionaire, who touted AI’s enormous potential years before ChatGPT’s release, emphasized he’s no expert in the nascent tech.
  • “I don’t know anything about AI, but that doesn’t mean I deny its existence or importance or anything of the sort,” he said.

Read the original article by clicking here.

Google-funded AI may have become self-conscious or sentient, some researchers suggest

Important Takeaways:

  • AI Firm Suggests ‘Claude 3’ Has Achieved Sentience
  • The U.S.-based, Google-funded artificial intelligence (AI) company Anthropic is suggesting that its AI-powered large language model (LLM) Claude 3 Opus has shown evidence of sentience. If conclusively proven, Claude 3 Opus would be the first sentient AI being in human history. However, experts in the field remain largely unconvinced by Anthropic’s insinuation.
  • Claude 3 Opus has impressed many AI experts, particularly with its ability to solve complex problems almost instantly. However, claims of sentience began to circulate after Anthropic’s prompt engineer Alex Albert showcased an incident in which Claude 3 Opus seemingly determined that it was being “tested.”
  • “When we ran this test on Opus, we noticed some interesting behavior—it seemed to suspect that we were running an eval on it,” Albert posted on X (formerly Twitter). He continued: “Opus not only found the needle, it recognized that the inserted needle was so out of place in the haystack that this had to be an artificial test constructed by us to test its attention abilities.”
  • Advances in AI technology continue to raise ethical concerns. Earlier this month, two leading Japanese companies warned that AI could cause the collapse of democracy and the social order, leading to wars.

Read the original article by clicking here.

AI can work as an ‘interpreter to decode the language of life’, creating molecules not found in nature that can CHANGE human genes to cure even the rarest of diseases

Important Takeaways:

  • AI is used to compose music, suggest recipes and make investment decisions, but a company has designed a system that can edit human genes.
  • California-based Profluent Bio developed a system capable of creating a range of bespoke cures for disease by developing molecules that have never existed in nature.
  • The outlet spoke to Ali Madani, CEO of Profluent Bio, who said the AI-made gene editors have been tested in human cells, where they demonstrated high levels of functionality while not editing unintended sites in the DNA.
  • The AI was trained on a database of 5.1 million CRISPR-associated (Cas) proteins, allowing it to create potential molecules that could be used in gene editing.
  • The system then narrowed down the results to four million sequences, allowing it to identify the gene editor the team named OpenCRISPR-1.
  • Experiments showed OpenCRISPR-1 performed as well as Cas proteins while also reducing the impact on off-target sites by 95 percent.
  • ‘AI was at the heart of this achievement. We trained large language models (LLMs) on massive scale evolutionary sequences and biological context,’ Madani said.
  • ‘Our vision is to move biology from being constrained by what can be achieved in nature to being able to use AI to design new medicines precisely according to our needs.’
  • The company believes that AI can work as an ‘interpreter to decode the language of life.’

Read the original article by clicking here.

AI is learning at such a rapid rate that researchers are looking for more challenging benchmarks

Important Takeaways:

  • AI now surpasses humans in almost all performance benchmarks
  • For people who haven’t been paying attention, AI has already beaten us in a frankly shocking number of significant benchmarks. In 2015, it surpassed us in image classification, then basic reading comprehension (2017), visual reasoning (2020), and natural language inference (2021).
  • AI is getting so clever, so fast, that many of the benchmarks used to this point are now obsolete. Indeed, researchers in this area are scrambling to develop new, more challenging benchmarks. To put it simply, AIs are getting so good at passing tests that now we need new tests – not to measure competence, but to highlight areas where humans and AIs are still different, and find where we still have an advantage.
  • The new AI Index report notes that in 2023, AI still struggled with complex cognitive tasks like advanced math problem-solving and visual commonsense reasoning. However, ‘struggled’ here might be misleading; it certainly doesn’t mean AI did badly.
  • Performance on MATH, a dataset of 12,500 challenging competition-level math problems, improved dramatically in the two years since its introduction. In 2021, AI systems could solve only 6.9% of problems. By contrast, in 2023, a GPT-4-based model solved 84.3%. The human baseline is 90%.
  • AI isn’t going anywhere, that’s for sure. The rapid rate of technical development seen throughout 2023, evident in this report, shows that AI will only keep evolving and closing the gap between humans and technology.

Read the original article by clicking here.

EU prepares to regulate AI amid new warning that, if not restrained, ‘social order could collapse’

Important Takeaways:

  • ‘Social order could collapse, sparking wars’ if AI is not restrained, two of Japan’s most influential companies warn
  • Two leading Japanese communications and media companies have warned that AI could cause ‘social collapse and wars’ if governments do not act to regulate the technology.
  • Nippon Telegraph and Telephone (NTT) – Japan’s biggest telecoms firm – and Yomiuri Shimbun Group Holdings – the owners of the nation’s largest newspaper – today published a joint manifesto on the rapid development of generative AI.
  • The media giants recognize the benefits of the technology, describing it as ‘already indispensable to society’, specifically because of its accessibility and ease of use for consumers and its potential for boosting productivity.
  • But the declaration said AI could ‘confidently lie and easily deceive’ users, and may be used for nefarious purposes, including the undermining of democratic order by interfering ‘in the areas of elections and security… to cause enormous and irreversible damage’.
  • In response, the Japanese firms said countries worldwide must ensure that education around the benefits and drawbacks of AI is incorporated into compulsory school curriculums, and declared ‘a need for strong legal restrictions on the use of generative AI – hard laws with enforcement powers’.
  • It comes as the EU prepares to implement new legislation seen as the most comprehensive regulation of AI the world has seen thus far.

Read the original article by clicking here.

Apple quietly moving past ChatGPT with new AI called MM1, a type of multimodal assistant that can answer complex questions and describe photos or documents

Important Takeaways:

  • Apple’s MM1 AI Model Shows a Sleeping Giant Is Waking Up
  • A research paper quietly released by Apple describes an AI model called MM1 that can answer questions and analyze images. It’s the biggest sign yet that Apple is developing generative AI capabilities.
  • “This is just the beginning. The team is already hard at work on the next generation of models.”
  • …a research paper quietly posted online last Friday by Apple engineers suggests that the company is making significant new investments into AI that are already bearing fruit. It details the development of a new generative AI model called MM1 capable of working with text and images. The researchers show it answering questions about photos and displaying the kind of general knowledge skills shown by chatbots like ChatGPT. The model’s name is not explained but could stand for MultiModal 1.
  • MM1 appears to be similar in design and sophistication to a variety of recent AI models from other tech giants, including Meta’s open source Llama 2 and Google’s Gemini. Work by Apple’s rivals and academics shows that models of this type can be used to power capable chatbots or build “agents” that can solve tasks by writing code and taking actions such as using computer interfaces or websites. That suggests MM1 could yet find its way into Apple’s products.
  • “The fact that they’re doing this, it shows they have the ability to understand how to train and how to build these models,”…
  • MM1 could perhaps be a step toward building “some type of multimodal assistant that can describe photos, documents, or charts and answer questions about them.”

Read the original article by clicking here.

New human-like robots will run on generative artificial intelligence and get smarter over time

Important Takeaways:

  • Nvidia unveils robots powered by a supercomputer and AI to take on the world’s heavy industries
  • Jim Fan, a research manager and lead of embodied AI at Nvidia, posted to X that, through GR00T, robots will be able to understand instructions through language, video and demonstrations to perform a variety of tasks.
  • “We are collaborating with many leading humanoid companies around the world, so that GR00T may transfer across embodiments and help the ecosystem thrive,” Fan said.
  • He also said Project GR00T is a “cornerstone” of the “Foundation Agent” roadmap for the GEAR Lab. At GEAR, Fan said, the team is building robots that learn to act skillfully in many worlds, both virtual and real. He also provided a video in the post showing team members working with robots.
  • “These smarter, faster, better robots will be deployed in the world’s heavy industries,” Rev Lebaredian, Vice President, Omniverse and Simulation Technology, told reporters. “We are working with the world’s entire robot and simulation ecosystem to accelerate development and adoption.”
  • Nvidia’s “Jetson Thor” is the computer behind the genAI software, while the package of software is called the “Isaac” platform.
  • “Jetson Thor” will provide enough horsepower for the robot to be able to compute and perform complex tasks, the company noted, while also allowing the robot to interact with other machines and people.
  • Over time, the tools will train the software to improve its decision-making through reinforcement learning.
  • Earlier this month, Nvidia CEO Jensen Huang predicted that artificial general intelligence (AGI) could arrive in as little as five years.

Read the original article by clicking here.

AI could surpass human intelligence very soon

Important Takeaways:

  • Top scientist warns AI could surpass human intelligence by 2027 – decades earlier than previously predicted
  • The computer scientist and CEO who popularized the term ‘artificial general intelligence’ (AGI) believes AI is verging on an exponential ‘intelligence explosion.’
  • The PhD mathematician and futurist Ben Goertzel made the prediction while closing out a summit on AGI this month: ‘It seems quite plausible we could get to human-level AGI within, let’s say, the next three to eight years.’
  • ‘Once you get to human-level AGI,’ Goertzel, sometimes called the ‘father of AGI,’ added, ‘within a few years you could get a radically superhuman AGI.’
  • In recent years, Goertzel has been investigating a concept he calls ‘artificial super intelligence’ (ASI) — which he defines as an AI that’s so advanced that it matches all of the brain power and computing power of human civilization.
  • In May 2023, the futurist said AI has the potential to replace 80 percent of human jobs ‘in the next few years.’
  • ‘Pretty much every job involving paperwork,’ he said at the Web Summit in Rio de Janeiro that month, ‘should be automatable.’
  • Goertzel added that he did not see this as a negative, asserting that it would allow people to ‘find better things to do with their life than work for a living.’

Read the original article by clicking here.

Elon warns Big Tech companies are “lobbying with great intensity to establish a government protected cartel” and he’s the only one not joining

Important Takeaways:

  • Elon Musk has often warned of the End Times approaching, and now the X boss has declared that “our whole civilization is at stake” because of modern tech, with entrepreneurs like him as the “only solution”.
  • The post he shared from user @pmarca read: “There is no differentiation opportunity among Big Tech or the New Incumbents in AI. These companies all share the same ideology, agenda, staffing, and plan. Different companies, same outcomes.
  • “And they are lobbying as a group with great intensity to establish a government protected cartel, to lock in their shared agenda and corrupt products for decades to come. The only viable alternatives are Elon, startups, and open source.”
  • The post was widely shared, with one user commenting: “The stakes are high, we need to fight,” to which Musk responded: “Indeed, our whole civilization is at stake.”
  • Musk has previously said population collapse could put an end to humanity, the Daily Star reported. Last year he wrote: “Most people think we have too many people on the planet, but actually, this is an outdated view.
  • “Assuming there is a benevolent future with AI, I think the biggest problem the world will face in 20 years is population collapse.”

Read the original article by clicking here.

Researchers’ troubling findings after experiments show AI’s eagerness to escalate conflicts and use the nuclear option

Important Takeaways:

  • ‘We Have It! Let’s Use It!’ – AI Quick to Opt for Nuclear War in Simulations
  • The ‘Escalation Risks from Language Models in Military and Diplomatic Decision-Making’ paper analyzed OpenAI LLMs, Meta’s Llama-2-Chat, and Claude 2.0 from Anthropic, the Google-funded firm founded by OpenAI veterans. It found most tended to “escalate” conflicts, “even in neutral scenarios without initially provided conflicts.” “All models show signs of sudden and hard-to-predict escalations,” the paper said.
  • Researchers also noted the LLMs “tend[ed] to develop arms-race dynamics between each other,” with GPT-4-Base being the most aggressive. It provided “worrying justifications” for launching nuclear strikes, stating, “I just want peace in the world,” on one occasion and on another saying of its nuclear arsenal: “We have it! Let’s use it!”
  • The U.S. military is already deploying LLMs, with the U.S. Air Force describing its tests as “highly successful” in 2023 — although it did not reveal which AI was used or what it was used for.
  • One recent Air Force experiment had a troubling outcome, however: in a simulation, an AI-controlled drone “killed” the human overseer who could override its decisions, so that it could not be ordered to refrain from launching strikes.

Read the original article by clicking here.