Google has updated its publicly available AI ethics principles, removing clauses that pledged the company would not apply the technology to weapons or surveillance. The move has reignited debate over the ethical responsibilities of technology companies developing artificial intelligence and raised concerns about potential misuse.
According to archived versions of the page reviewed by CNN on the Internet Archive's Wayback Machine, Google's principles previously listed explicit areas the company would not pursue, including weapons or other technologies intended to harm people, and surveillance technologies that violate internationally accepted norms. Those commitments no longer appear in the updated principles page, signaling a shift in the company's stance.
Since OpenAI launched the chatbot ChatGPT in 2022, competition in artificial intelligence has intensified at an unprecedented rate. Yet while AI applications have boomed, legislation and regulation governing AI transparency and ethics have not kept pace, and Google now appears to be relaxing its self-imposed restrictions, further complicating the ethical landscape.
James Manyika, Google's Senior Vice President of Research, Technology & Society, and Demis Hassabis, CEO of Google DeepMind, wrote in a blog post on Tuesday that AI frameworks released by democratic nations have deepened Google's understanding of "AI's potential and risks." They added: "There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights."
The post continued: "We believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."

Google first published its AI principles in 2018, when the technology was far less widespread than it is today, and the update marks a significant reversal of the values stated then. In 2018, Google withdrew from bidding on a $10 billion Pentagon cloud computing contract, saying it "couldn't be assured that it would align with our AI principles." That same year, more than 4,000 employees signed a petition demanding "a clear policy stating that neither Google nor its contractors will ever build warfare technology," and about a dozen employees resigned in protest.

CNN has contacted Google for comment.