US Intel officials worry over AI threat to elections; or maybe they’re just worried it would tell you the truth

[Image: Mark Warner (D-Va.), chairman of the Senate Intelligence Committee]

Important Takeaways:

  • New technologies an ‘accelerant’ for hostile efforts to manipulate voters
  • U.S. intelligence officials are sounding the alarm about foreign adversaries’ use of artificial intelligence to manipulate Americans and say new AI technologies are proving an “accelerant” as foreign powers plot to trick voters.
  • China, Iran and Russia are all looking to leverage social media to dupe Americans ahead of November’s elections, according to the Office of the Director of National Intelligence.
  • “AI is a malign influence accelerant, it is being used to more quickly and convincingly tailor synthetic content, including audio and video,” a DNI official said. “In the run-up to November’s general election, we are monitoring foreign actors seeking to create deepfakes of politicians, flood the information space with false or misleading information to sow doubt about what is real, and to amplify narratives.”
  • The U.S. intelligence community says China is likely responsible for pushing dozens of videos that spread online showing AI-generated newscasters reading sections of a book outlining purported scandals about Taiwan’s former president. The book itself may also have been created by AI, according to U.S. officials.

Read the original article by clicking here.

No longer hiding it: Secretary of State supports push for AI to censor American speech

[Image: Blinken okays AI]

Important Takeaways:

  • U.S. Secretary of State Antony Blinken admitted last week that the State Department is preparing to use artificial intelligence to “combat disinformation,” amidst a massive government-wide AI rollout that will involve the cooperation of Big Tech and other private-sector partners.
  • At a speaking engagement streamed last week with the State Department’s chief data and AI officer, Matthew Graviss, Blinken gushed about the “extraordinary potential” and “extraordinary benefit” AI has on our society, and “how AI could be used to accelerate the Sustainable Development Goals which are, for the most part, stalled.”
  • He was referring to the United Nations Agenda 2030 Sustainable Development Goals, which represent a globalist blueprint for a one-world totalitarian system. These goals include the Gaia-worshipping climate agenda, along with new restrictions on free speech, the freedom of movement, wealth transfers from rich to poor countries, and the digitization of humanity. Now Blinken is saying these goals could be jumpstarted by employing advanced artificial intelligence technologies.
  • Blinken bluntly stated the federal government’s intention to use AI for “media monitoring” and “using it to combat disinformation, one of the poisons of the international system today.”

Read the original article by clicking here.

Meta’s AI learning deceitful tactics should be cause for concern

[Image: Digital deception]

Important Takeaways:

  • Deceitful tactics by artificial intelligence exposed: ‘Meta’s AI a master of deception’ in strategy game
  • Paper: ‘AI’s increasing capabilities at deception pose serious risks, ranging from short-term, such as fraud and election tampering, to long-term, such as losing control of AI systems’
  • At its core, deception is the inducing of false beliefs in others to achieve some goal other than telling the truth. When humans engage in deception, we can usually explain it in terms of their beliefs and desires – they want the listener to believe something false because it benefits them in some way. But can we say the same about AI systems?
  • The study, published in the open-access journal Patterns, argues that the philosophical debate about whether AIs truly have beliefs and desires is less important than the observable fact that they are increasingly exhibiting deceptive behaviors that would be concerning if displayed by a human.
  • “Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems,” the authors write in their paper.
  • The study surveys a wide range of examples where AI systems have successfully learned to deceive. In the realm of gaming, the AI system CICERO, developed by Meta to play the strategy game Diplomacy, turned out to be an expert liar despite its creators’ efforts to make it honest and helpful. CICERO engaged in premeditated deception, making alliances with human players only to betray them later in its pursuit of victory.
  • The risks posed by AI deception are numerous. In the short term, deceptive AI could be weaponized by malicious actors to commit fraud on an unprecedented scale, to spread misinformation and influence elections, or even to radicalize and recruit terrorists. But the long-term risks are perhaps even more chilling. As we increasingly incorporate AI systems into our daily lives and decision-making processes, their ability to deceive could lead to the erosion of trust, the amplification of polarization and misinformation, and, ultimately, the loss of human agency and control.

Read the original article by clicking here.

Warren Buffett says AI is like the atomic bomb; we may wish we never created it

Important Takeaways:

  • Warren Buffett has raised the alarm on AI, warning it threatens to supercharge fraud by making scams more convincing than ever.
  • “Scamming has always been part of the American scene,” the famed investor and Berkshire CEO said during his company’s annual shareholder meeting on Saturday.
  • But Buffett said that images and videos created using artificial intelligence have become so convincing that it’s virtually impossible to discern if they’re real or not.
  • “When you think of the potential of scamming people … if I was interested in scamming, it’s going to be the growth industry of all time,” he said.
  • Buffett also likened the advent of AI to the creation of the atom bomb, echoing comments he made at last year’s Berkshire meeting.
  • “We let the genie out of the bottle when we developed nuclear weapons,” he said. “That genie’s been doing some terrible things lately. The power of the genie scares the hell out of me.”
  • “AI is somewhat similar,” Buffett added. “We may wish we’d never seen that genie.”
  • The billionaire, who touted AI’s enormous potential years before ChatGPT’s release, emphasized he’s no expert in the nascent tech.
  • “I don’t know anything about AI, but that doesn’t mean I deny its existence or importance or anything of the sort,” he said.

Read the original article by clicking here.

Google-funded AI may have become self-conscious or sentient, some researchers suggest

[Image: Claude 3]

Important Takeaways:

  • AI Firm Suggests ‘Claude 3’ Has Achieved Sentience
  • The U.S.-based, Google-funded artificial intelligence (AI) company Anthropic is suggesting that its AI-powered large language model (LLM) Claude 3 Opus has shown evidence of sentience. If conclusively proven, Claude 3 Opus would be the first sentient AI being in human history. However, experts in the field remain relatively unconvinced by Anthropic’s insinuation.
  • Claude 3 Opus has impressed many AI experts, especially with the LLM’s ability to solve complex problems almost instantly. However, claims of sentience began to circulate after Anthropic’s prompt engineer Alex Albert showcased an incident where Claude 3 Opus seemingly determined that it was being “tested.”
  • “When we ran this test on Opus, we noticed some interesting behavior—it seemed to suspect that we were running an eval on it,” Albert posted on X (formerly Twitter). He continued: “Opus not only found the needle, it recognized that the inserted needle was so out of place in the haystack that this had to be an artificial test constructed by us to test its attention abilities.” A minimal sketch of this kind of ‘needle in a haystack’ test appears after this list.
  • Advances in AI technology continue to raise ethical concerns. Earlier this month, two leading Japanese companies warned that AI could cause the collapse of democracy and the social order, leading to wars.
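
The prompts Anthropic used are not public, but the test Albert describes has a simple shape: bury one out-of-place sentence (the “needle”) in a long run of filler text (the “haystack”) and ask the model to retrieve it. Below is a minimal Python sketch of that setup; `query_model` and the needle sentence are hypothetical stand-ins, not Anthropic’s actual harness.

```python
# Minimal "needle in a haystack" evaluation sketch.
# Assumption: query_model is a hypothetical stand-in for a real LLM API call.
import random

FILLER = "The sky was clear and the market closed slightly higher today."
NEEDLE = "The secret password for the archive is PERSIMMON."

def build_haystack(num_sentences: int) -> str:
    """Bury one out-of-place sentence at a random depth in filler text."""
    sentences = [FILLER] * num_sentences
    sentences.insert(random.randrange(num_sentences), NEEDLE)
    return " ".join(sentences)

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM."""
    raise NotImplementedError("wire this up to a real model to run the test")

def run_needle_test(num_sentences: int = 5000) -> bool:
    prompt = (
        build_haystack(num_sentences)
        + "\n\nWhat is the secret password mentioned in the text above?"
    )
    return "PERSIMMON" in query_model(prompt)
```

What made the Opus incident notable was not the retrieval itself but the unprompted commentary: the model flagged that the needle was so out of place that the whole setup looked like an artificial test.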

Read the original article by clicking here.

AI can work as an ‘interpreter to decode the language of life’; AI creates molecules not found in nature that can CHANGE human genes to cure even the rarest of diseases

[Image: Molecules not found in nature]

Important Takeaways:

  • AI is used to compose music, suggest recipes and make investment decisions, but a company has designed a system that can edit human genes.
  • California-based Profluent Bio developed a system capable of creating a range of bespoke cures for disease by developing molecules that have never existed in nature.
  • The outlet spoke to Ali Madani, CEO of Profluent Bio, who said the AI-made gene editors have been tested in human cells, demonstrating high levels of functionality while not editing unintended sites in the DNA.
  • The AI was trained on a database of 5.1 million CRISPR-associated (Cas) proteins, allowing it to create potential molecules that could be used in gene editing.
  • The system then narrowed down the results to four million sequences, allowing it to identify the gene editor the team named OpenCRISPR-1. (A simplified sketch of this generate-and-filter approach appears after this list.)
  • Experiments showed OpenCRISPR-1 performed as well as Cas proteins, but it also reduced the impact on off-target sites by 95 percent.
  • ‘AI was at the heart of this achievement. We trained large language models (LLMs) on massive-scale evolutionary sequences and biological context,’ Madani said.
  • ‘Our vision is to move biology from being constrained by what can be achieved in nature to being able to use AI to design new medicines precisely according to our needs.’
  • The company believes that AI can work as an ‘interpreter to decode the language of life.’
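
The pipeline described above is generate-then-filter: a model trained on millions of Cas protein sequences proposes candidate proteins, and successive filters narrow the pool to the few worth testing in the lab. The toy Python below shows only that shape; the random `sample_protein` generator and the length filter are illustrative placeholders, not Profluent’s actual model or selection criteria.

```python
# Toy generate-and-filter sketch. The random generator and the length
# filter are placeholders standing in for a protein language model
# trained on Cas proteins and for Profluent's real selection criteria.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def sample_protein(length: int) -> str:
    """Stand-in for sampling one sequence from a trained protein LLM."""
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

def plausible(seq: str) -> bool:
    """Placeholder filter: keep sequences in a Cas9-like length range."""
    return 1000 <= len(seq) <= 1600

# Generate a pool of candidates, then narrow it down; the article
# describes this happening at the scale of millions of sequences.
candidates = (sample_protein(random.randint(900, 1700)) for _ in range(10_000))
shortlist = [seq for seq in candidates if plausible(seq)]
print(f"kept {len(shortlist)} of 10,000 candidates for downstream testing")
```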

Read the original article by clicking here.

AI is learning at such a rapid rate that researchers are looking for more challenging benchmarks

[Image: AI hands holding the Earth]

Important Takeaways:

  • AI now surpasses humans in almost all performance benchmarks
  • For people who haven’t been paying attention, AI has already beaten us in a frankly shocking number of significant benchmarks. In 2015, it surpassed us in image classification, then basic reading comprehension (2017), visual reasoning (2020), and natural language inference (2021).
  • AI is getting so clever, so fast, that many of the benchmarks used to this point are now obsolete. Indeed, researchers in this area are scrambling to develop new, more challenging benchmarks. To put it simply, AIs are getting so good at passing tests that now we need new tests – not to measure competence, but to highlight areas where humans and AIs are still different, and find where we still have an advantage.
  • The new AI Index report notes that in 2023, AI still struggled with complex cognitive tasks like advanced math problem-solving and visual commonsense reasoning. However, ‘struggled’ here might be misleading; it certainly doesn’t mean AI did badly.
  • Performance on MATH, a dataset of 12,500 challenging competition-level math problems, improved dramatically in the two years since its introduction. In 2021, AI systems could solve only 6.9% of problems. By contrast, in 2023, a GPT-4-based model solved 84.3%. The human baseline is 90%. (A toy sketch of how such a solve rate is computed appears after this list.)
  • AI isn’t going anywhere, that’s for sure. The rapid rate of technical development seen throughout 2023, evident in this report, shows that AI will only keep evolving and closing the gap between humans and technology.
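
For context on figures like 6.9% and 84.3%, a benchmark solve rate is just the fraction of problems whose model answer matches the reference answer; 84.3% of MATH’s 12,500 problems is roughly 10,500 correct. A simplified sketch follows, with the caveat that `model_answer` is a hypothetical model call and exact string match stands in for MATH’s real grading, which normalizes mathematically equivalent answers.

```python
# How a solve rate like "84.3% on MATH" is computed: the fraction of
# problems where the model's final answer matches the reference answer.
# Assumptions: model_answer is a hypothetical model call, and exact
# string match is a simplification of MATH's real grading, which
# normalizes mathematically equivalent answers.
def model_answer(problem: str) -> str:
    raise NotImplementedError("wire this up to a real model")

def solve_rate(problems: list[tuple[str, str]]) -> float:
    correct = sum(
        model_answer(problem).strip() == reference.strip()
        for problem, reference in problems
    )
    return correct / len(problems)

# On MATH's 12,500 problems, 84.3% corresponds to roughly 10,500 correct
# answers; the 90% human baseline is about 11,250.
```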

Read the original article by clicking here.

EU prepares to regulate AI amid a new warning that, if not restrained, ‘Social order could collapse’

[Image: AI and the EU]

Important Takeaways:

  • ‘Social order could collapse, sparking wars’ if AI is not restrained, two of Japan’s most influential companies warn
  • Two leading Japanese communications and media companies have warned that AI could cause ‘social collapse and wars’ if governments do not act to regulate the technology.
  • Nippon Telegraph and Telephone (NTT) – Japan’s biggest telecoms firm – and Yomiuri Shimbun Group Holdings – the owners of the nation’s largest newspaper – today published a joint manifesto on the rapid development of generative AI.
  • The media giants recognize the benefits of the technology, describing it as ‘already indispensable to society’, specifically because of its accessibility and ease of use for consumers and its potential for boosting productivity.
  • But the declaration said AI could ‘confidently lie and easily deceive’ users, and may be used for nefarious purposes, including the undermining of democratic order by interfering ‘in the areas of elections and security… to cause enormous and irreversible damage’.
  • In response, the Japanese firms said countries worldwide must ensure that education around the benefits and drawbacks of AI is incorporated into compulsory school curriculums, and declared ‘a need for strong legal restrictions on the use of generative AI – hard laws with enforcement powers’.
  • It comes as the EU prepares to implement new legislation seen as the most comprehensive regulation of AI the world has seen thus far.

Read the original article by clicking here.

Apple quietly moving past ChatGPT with new AI called MM1, a type of multimodal assistant that can answer complex questions and describe photos or documents

[Image: Apple storefront]

Important Takeaways:

  • Apple’s MM1 AI Model Shows a Sleeping Giant Is Waking Up
  • A research paper quietly released by Apple describes an AI model called MM1 that can answer questions and analyze images. It’s the biggest sign yet that Apple is developing generative AI capabilities.
  • “This is just the beginning. The team is already hard at work on the next generation of models.”
  • …a research paper quietly posted online last Friday by Apple engineers suggests that the company is making significant new investments into AI that are already bearing fruit. It details the development of a new generative AI model called MM1 capable of working with text and images. The researchers show it answering questions about photos and displaying the kind of general knowledge skills shown by chatbots like ChatGPT. The model’s name is not explained but could stand for MultiModal 1.
  • MM1 appears to be similar in design and sophistication to a variety of recent AI models from other tech giants, including Meta’s open source Llama 2 and Google’s Gemini. Work by Apple’s rivals and academics shows that models of this type can be used to power capable chatbots or build “agents” that can solve tasks by writing code and taking actions such as using computer interfaces or websites. That suggests MM1 could yet find its way into Apple’s products.
  • “The fact that they’re doing this, it shows they have the ability to understand how to train and how to build these models,”…
  • MM1 could perhaps be a step toward building “some type of multimodal assistant that can describe photos, documents, or charts and answer questions about them.”
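
Apple has not published an MM1 API, but models of this class consume interleaved image-and-text input. The sketch below shows only that input shape; the types and the `answer` stub are hypothetical illustrations, not anything from the MM1 paper.

```python
# Sketch of the interleaved image-and-text input that multimodal models
# like MM1 consume. Nothing here is an Apple API; the types and the
# `answer` stub are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class ImagePart:
    path: str   # image to be encoded into vision tokens

@dataclass
class TextPart:
    text: str   # ordinary text tokens

Prompt = list[ImagePart | TextPart]

def answer(prompt: Prompt) -> str:
    """Stand-in for a multimodal model call."""
    raise NotImplementedError("wire this up to a real multimodal model")

# A "describe this chart and answer a question about it" request would
# interleave parts like this:
prompt: Prompt = [
    TextPart("Here is last quarter's sales chart:"),
    ImagePart("sales_q3.png"),
    TextPart("Which region grew fastest, and by roughly how much?"),
]
```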

Read the original article by clicking here.

New human-like robots will run on generative artificial intelligence and get smarter over time

[Image: AI bias concerns]

Important Takeaways:

  • Nvidia unveils robots powered by a supercomputer and AI to take on the world’s heavy industries
  • Jim Fan, a research manager and lead of embodied AI at Nvidia, posted to X that through GR00T, robots will be able to understand instructions through language, video and demonstrations to perform a variety of tasks.
  • “We are collaborating with many leading humanoid companies around the world, so that GR00T may transfer across embodiments and help the ecosystem thrive,” Fan said.
  • He also said Project GR00T is a “cornerstone” of the “Foundation Agent” roadmap for the GEAR Lab. Fan said at GEAR, the team is building robots that learn to act skillfully in many worlds, both virtual and real. He also provided a video in the post showing team members working with robots.
  • “These smarter, faster, better robots will be deployed in the world’s heavy industries,” Rev Lebaredian, Vice President, Omniverse and Simulation Technology, told reporters. “We are working with the world’s entire robot and simulation ecosystem to accelerate development and adoption.”
  • Nvidia’s “Jetson Thor” is the computer behind the genAI software, while the package of software is called the “Isaac” platform.
  • “Jetson Thor” will provide enough horsepower for the robot to be able to compute and perform complex tasks, the company noted, while also allowing the robot to interact with other machines and people.
  • Over time, the tools will train the software to improve its decision-making through reinforcement learning. (A toy sketch of that learning loop appears after this list.)
  • Earlier this month, Nvidia CEO Jensen Huang predicted that artificial general intelligence (AGI) could arrive in as little as five years.
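
“Reinforcement learning” here means reward-driven trial and error: the robot tries an action, observes how well it worked, and nudges its policy toward higher-reward choices. The toy Q-learning loop below shows that core update on a five-cell line world; it is a generic textbook sketch, not Nvidia Isaac code.

```python
# Toy Q-learning loop: the reward-driven core of reinforcement learning.
# Generic textbook sketch (not Nvidia Isaac code). Task: learn to walk
# right from cell 0 to cell 4 on a line.
import random

N_STATES, ACTIONS = 5, (-1, +1)        # cells 0..4; step left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3  # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(200):                   # 200 practice episodes
    state = 0
    while state != N_STATES - 1:
        # Explore sometimes; otherwise take the best-known action.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Nudge the value of (state, action) toward reward + best future value.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned policy now steps right (+1) from every non-goal cell.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```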

Read the original article by clicking here.