(NewsNation) — Google has updated its ethical policies on artificial intelligence, eliminating a pledge not to use AI technology for weapons development and surveillance.
According to a now-archived version of Google’s AI principles, viewed on the Wayback Machine digital archive and reported by NewsNation partner The Hill, a section titled “Applications we will not pursue” covered weapons and other technology designed to injure people, as well as technologies that “gather or use information for surveillance.”
As of Tuesday, the section was no longer listed on Google’s AI principles page.
“Since we first published our AI Principles in 2018, the technology has evolved rapidly. Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organizations and individuals use to build applications. It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself; one with numerous beneficial uses for society and people around the world, supported by a vibrant AI ecosystem of developers,” according to Google’s published 2024 report.
As Washington increasingly embraces the use of AI, some policymakers have expressed concerns that the technology could cause harm in the hands of bad actors.
Late last year, the Defense Department announced a new office focused on accelerating the military’s adoption of AI technology, with the aim of deploying autonomous weapons in the near future.
NewsNation partner The Hill contributed to this report.