Google ending AI arms ban incredibly concerning, campaigners say

2025-02-06 02:17:00

Abstract: Alphabet lifted its AI weapons ban, sparking ethics concerns. Critics cite risks like accountability issues on the battlefield and autonomous weapons.

Human rights organizations have expressed "extreme concern" over Google's parent company, Alphabet, lifting a long-standing ban that restricted the use of artificial intelligence (AI) for developing weapons and surveillance tools. This shift has sparked widespread discussion about the ethics and potential risks of AI applications in the military domain.

Alphabet modified its AI usage guidelines, removing clauses that previously prohibited applications likely to cause harm. Human Rights Watch criticized the decision, telling the BBC that AI could "complicate accountability for battlefield decisions" that "can have life-and-death consequences."

Google defended the change in a blog post, arguing that businesses and democratic governments need to collaborate on AI that "supports national security." Experts say AI could be widely deployed on the battlefield, though its use raises serious concerns, particularly in autonomous weapon systems. "For a global industry leader to abandon a red line it set for itself is a worrying shift at a time when we need responsible leadership in the field of AI more than ever," said Anna Bacciarelli, senior AI researcher at Human Rights Watch.

Bacciarelli added that this "unilateral" decision also shows "why voluntary principles are not an adequate substitute for regulation and binding laws." In its blog post, Alphabet argued that democracies should take the lead in AI development, guided by "core values" such as freedom, equality, and respect for human rights. "We believe that companies, governments and organizations with these values should work together to create AI that protects people, promotes global growth and supports national security."

The blog post, authored by Senior Vice President James Manyika and Sir Demis Hassabis, who leads the AI lab Google DeepMind, stated that the company's AI principles, initially published in 2018, needed updating as the technology has evolved. In recent years, there has been increasing focus on the military potential of AI. In January, UK MPs argued that the conflict in Ukraine showed the technology "offers important military advantages on the battlefield."

As AI becomes more pervasive and sophisticated, it will "change how defence operates, from back office to front line," wrote UK MP Emma Lewell-Buck, who recently chaired a report on the use of AI by the UK military. Beyond the broader debate among AI experts and professionals over how this powerful new technology should be governed, its use in battlefield and surveillance systems is especially contentious.

Chief among the concerns is the potential for AI weapons capable of taking lethal action autonomously, which activists argue urgently need to be controlled. The "Doomsday Clock" cited this concern in its latest assessment of the dangers facing humanity. "In Ukraine and the Middle East, systems incorporating AI for military targeting have already been used, and several nations are working to integrate AI into their militaries," it said. "These efforts raise questions about how much machines will be allowed to make military decisions—even decisions involving possible mass killings."

Long before there was intense interest in the ethics of AI, Google founders Sergey Brin and Larry Page said their company's motto was "Don't be evil." When the company was reorganized under Alphabet Inc. in 2015, the parent company changed it to "Do the right thing." Since then, Google employees have sometimes objected to practices adopted by its executives.

In 2018, after resignations and a petition signed by thousands of employees, the company declined to renew a contract to work with the US Pentagon on AI. Employees were concerned that "Project Maven" was the first step toward using AI for lethal purposes.

The blog post was published ahead of Alphabet's year-end financial report, which revealed weaker-than-expected results and dampened its share price. The results fell short despite a 10% increase in digital advertising revenue, its largest source of income, boosted by US election spending. In its earnings report, the company said it would spend $75 billion (£60 billion) on AI projects this year, 29% more than Wall Street analysts had expected. The investment will go toward the infrastructure to run AI, AI research, and AI-powered applications such as search.