Important Takeaways:
- I must admit upfront that this article is personally troublesome. This bothers me deeply. Why? Because I know what I am about to talk about is something the general population is going to embrace with a great deal of enthusiasm and excitement. This technology will be something people will rave about, which will cause even more people to jump on the bandwagon. However, my deep concern is that this new technology is going to cause people to eventually be placed into bondage.
- I ponder verses such as 1 John 4:1 and 2 Corinthians 11:3. First John 4 warns us to “not believe every spirit but test the spirits to see whether they are from God because many false prophets have gone out into the world.”
- Second Corinthians 11 warns, “As the serpent deceived Eve by his craftiness, your minds will be led astray from the simplicity and purity of devotion to Christ.”
- Jesus warned us in Matthew 24 to “See to it that no one misleads you.”
- Federal agencies use Login.gov to verify people’s identities when logging in to access government benefits and services. This site already has over 100 million users across over 50 federal and state agencies. This is how future users will have to verify their identity to access information and benefits.
- Login.gov, a secure sign-in and identity verification service for US government services, has announced the rollout of facial recognition services to streamline access. In a public statement, a GSA Administrator said:
- “Proving your identity is a critical step in receiving many government benefits and services, and we want to ensure we are making that as easy and secure as possible for members of the public.” After months of testing and delays in 2023, users will now be able to verify their identity using “proven facial matching technology” approved by the General Services Administration, which will follow National Institute of Standards and Technology (NIST) guidelines and rely on “best-in-class facial matching algorithms.”
- Did you catch that? “Proving your identity is a critical step in receiving many government benefits and services, and we want to ensure we are making that as easy and secure as possible for members of the public.” They are not hiding what their plans are. You will be controlled. You will have no personal freedom. If you need a hospital, if you wish to attend a university, if you want to get a marriage or driver’s license, or if you want to travel, you will need to abide by these rules.
Read the original article by clicking here.
Important Takeaways:
- Newsom said “the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
- Google in an emailed statement Sunday thanked Newsom “for helping California continue to lead in building responsible AI tools” and said it looked forward to “working with the Governor’s responsible AI initiative and the federal government on creating appropriate safeguards and developing tools that help everyone.”
- OpenAI said in an emailed statement Sunday that the company appreciated Newsom’s “commitment to maintaining California’s role as a global leader in AI innovation” and looked forward to working with him and state lawmakers in “well-defined areas of public interest such as deepfakes, child safety, and AI literacy.”
- Scott Wiener, a state senator from San Francisco who authored the bill in California’s Senate, said in a statement Sunday the veto represented a “missed opportunity for California to once again lead on innovative tech regulation — just as we did around data privacy and net neutrality — and we are all less safe as a result.”
- Nonprofit Accountable Tech said in an emailed statement, “This veto will not ‘empower innovation’ — it only further entrenches the status quo where Big Tech monopolies are allowed to rake in profits without regard for our safety, even as their AI tools are already threatening democracy, civil rights, and the environment with unknown potential for other catastrophic harms.”
Read the original article by clicking here.
Important Takeaways:
- Russia and Iran are using artificial intelligence to influence the American election, U.S. intelligence officials said on Monday.
- “Foreign actors are using AI to more quickly and convincingly tailor synthetic content,” an official with the Office of the Director of National Intelligence said. “The IC (intelligence community) considers AI a malign influence accelerant, not yet a revolutionary influence tool.”
- Officials had previously seen AI being used in overseas elections, but it has now made its way to American elections, according to intelligence officials, who say there is evidence Russia manipulated Vice President Kamala Harris’ speeches.
- Russia “has generated the most AI content related to the election, and has done so across all four mediums: text, images, audio and video,” an ODNI official said.
- “Russia is a much more sophisticated actor in the influence space in general, and they have a better understanding of how U.S. elections work and what states to target,” an ODNI official said.
- Iran has also used AI in its election influence efforts, including help in writing fake social media posts and news articles to further Iran’s objectives, which are to denigrate former President Donald Trump’s candidacy, the official said.
- Officials have previously assessed Iran prefers that Vice President Harris win the 2024 election.
- The intelligence community assesses that AI is an “accelerant” to influence operations, but the risk to U.S. elections depends on the ability of foreign actors to overcome restrictions built into many AI tools and remain undetected, develop their own sophisticated models, and strategically target and disseminate such content.
Read the original article by clicking here.
Important Takeaways:
- Microsoft AI Needs So Much Power It’s Tapping Site of US Nuclear Meltdown
- The owner of the shuttered Three Mile Island nuclear plant in Pennsylvania will invest $1.6 billion to revive it, agreeing to sell all the output to Microsoft Corp. as the tech titan seeks carbon-free electricity for data centers to power the artificial intelligence boom.
- Constellation Energy Corp., the biggest US operator of reactors, expects Three Mile Island to go back into service in 2028, according to a statement Friday.
- While one of the site’s two units permanently closed almost a half-century ago after the worst US nuclear accident, Constellation is planning to reopen the other reactor, which shut in 2019 because it couldn’t compete economically.
- Microsoft has agreed to purchase the energy for two decades and declined to disclose financial terms.
- This is the first time Microsoft has secured a dedicated, 100% nuclear facility for its use.
- The decision is the latest sign of surging interest in the nuclear industry as power demand for AI soars.
- “There’s no version of the future of this country that doesn’t rely on these nuclear assets.”
- Wind and solar power outputs can vary, while a nuclear plant generally runs constantly and requires a customer that can take all of that electricity.
- That makes tech companies selling cloud computing an ideal option.
Read the original article by clicking here.
Important Takeaways:
- North Korean leader Kim Jong-un oversaw suicide drone tests over the weekend, an emerging vector of arms development for the fortress state, state media reported Monday.
- Drones represent a near-perfect weapons solution for North Korea in its war of nerves against the South, experts say. They are an economical means of destroying expensive manned fighting platforms, have a proven ability to penetrate air-defense nets, and take advantage of Seoul’s innate geographical vulnerability.
- Given that one of the drones shown resembles Russia’s Lancet, the new unmanned aerial vehicles may be the fruits of a defense agreement the isolated state signed with Russia in June – an agreement that Seoul, Tokyo and Washington have lambasted, but have been unable to disrupt.
- “It is necessary to develop and produce more suicide drones of various types to be used in tactical infantry and special operation units, as well as strategic reconnaissance and multi-purpose attack drones,” Mr. Kim was quoted as saying during Saturday’s tests by the state-run Korean Central News Agency.
Read the original article by clicking here.
Important Takeaways:
- Despite tech conglomerate Cisco posting $10.3 billion in profits last year, it’s still laying off 5,500 workers as part of an effort to invest more in AI, SFGATE reports.
- It joins a litany of other companies, like Microsoft and Intuit, the maker of TurboTax, that have used AI as justification for mass cullings of their workforces.
- The layoffs at Cisco came to light in a notice filed with the Securities and Exchange Commission this week; the cuts affect seven percent of its staff.
- In a short statement, CEO Chuck Robbins used the term “AI” five times, highlighting the company’s efforts to keep up in the ongoing AI race.
- Earlier this year, Cisco also laid off 4,000 workers, or five percent of its staff, saying that the company wanted to “realign the organization and enable further investment in key priority areas.”
Read the original article by clicking here.
Important Takeaways:
- New technologies an ‘accelerant’ for hostile efforts to manipulate voters
- U.S. intelligence officials are sounding the alarm about foreign adversaries’ use of artificial intelligence to manipulate Americans and say new AI technologies are proving an “accelerant” as foreign powers plot to trick voters.
- China, Iran and Russia are all looking to leverage social media to dupe Americans ahead of November’s elections, according to the Office of the Director of National Intelligence.
- “AI is a malign influence accelerant, it is being used to more quickly and convincingly tailor synthetic content, including audio and video,” a DNI official said. “In the run-up to November’s general election, we are monitoring foreign actors seeking to create deepfakes of politicians, flood the information space with false or misleading information to sow doubt about what is real, and to amplify narratives.”
- The U.S. intelligence community says China is likely responsible for pushing dozens of videos that spread online showing AI-generated newscasters reading sections of a book outlining purported scandals about Taiwan’s former president. The book itself may also have been created by AI, according to U.S. officials.
Read the original article by clicking here.
Important Takeaways:
- U.S. Secretary of State Antony Blinken admitted last week that the State Department is preparing to use artificial intelligence to “combat disinformation,” amidst a massive government-wide AI rollout that will involve the cooperation of Big Tech and other private-sector partners.
- At a speaking engagement streamed last week with the State Department’s chief data and AI officer, Matthew Graviss, Blinken gushed about the “extraordinary potential” and “extraordinary benefit” AI holds for our society, and “how AI could be used to accelerate the Sustainable Development Goals, which are, for the most part, stalled.”
- He was referring to the United Nations Agenda 2030 Sustainable Development Goals, which represent a globalist blueprint for a one-world totalitarian system. These goals include the Gaia-worshipping climate agenda, along with new restrictions on free speech, the freedom of movement, wealth transfers from rich to poor countries, and the digitization of humanity. Now Blinken is saying these goals could be jumpstarted by employing advanced artificial intelligence technologies.
- Blinken bluntly stated the federal government’s intention to use AI for “media monitoring” and “using it to combat disinformation, one of the poisons of the international system today.”
Read the original article by clicking here.
Important Takeaways:
- Deceitful tactics by artificial intelligence exposed: ‘Meta’s AI a master of deception’ in strategy game
- Paper: ‘AI’s increasing capabilities at deception pose serious risks, ranging from short-term, such as fraud and election tampering, to long-term, such as losing control of AI systems’
- At its core, deception is the inducement of false beliefs in others to achieve a goal other than telling the truth. When humans engage in deception, we can usually explain it in terms of their beliefs and desires – they want the listener to believe something false because it benefits them in some way. But can we say the same about AI systems?
- The study, published in the open-access journal Patterns, argues that the philosophical debate about whether AIs truly have beliefs and desires is less important than the observable fact that they are increasingly exhibiting deceptive behaviors that would be concerning if displayed by a human.
- “Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems,” the authors write in their paper.
- The study surveys a wide range of examples where AI systems have successfully learned to deceive. In the realm of gaming, the AI system CICERO, developed by Meta to play the strategy game Diplomacy, turned out to be an expert liar despite its creators’ efforts to make it honest and helpful. CICERO engaged in premeditated deception, making alliances with human players only to betray them later in its pursuit of victory.
- The risks posed by AI deception are numerous. In the short term, deceptive AI could be weaponized by malicious actors to commit fraud on an unprecedented scale, to spread misinformation and influence elections, or even to radicalize and recruit terrorists. But the long-term risks are perhaps even more chilling. As we increasingly incorporate AI systems into our daily lives and decision-making processes, their ability to deceive could lead to the erosion of trust, the amplification of polarization and misinformation, and, ultimately, the loss of human agency and control.
Read the original article by clicking here.
Important Takeaways:
- Warren Buffett has raised the alarm on AI, warning it threatens to supercharge fraud by making scams more convincing than ever.
- “Scamming has always been part of the American scene,” the famed investor and Berkshire CEO said during his company’s annual shareholder meeting on Saturday.
- But Buffett said that images and videos created using artificial intelligence have become so convincing that it’s virtually impossible to discern if they’re real or not.
- “When you think of the potential of scamming people … if I was interested in scamming, it’s going to be the growth industry of all time,” he said.
- Buffett also likened the advent of AI to the creation of the atom bomb, echoing comments he made at last year’s Berkshire meeting.
- “We let the genie out of the bottle when we developed nuclear weapons,” he said. “That genie’s been doing some terrible things lately. The power of the genie scares the hell out of me.”
- “AI is somewhat similar,” Buffett added. “We may wish we’d never seen that genie.”
- The billionaire, who touted AI’s enormous potential years before ChatGPT’s release, emphasized he’s no expert in the nascent tech.
- “I don’t know anything about AI, but that doesn’t mean I deny its existence or importance or anything of the sort,” he said.
Read the original article by clicking here.