Google has removed its explicit commitment to avoiding AI applications for weapons and surveillance in an update to its AI Principles. Previously, the company had pledged not to develop AI technologies likely to cause harm, including those used for weapons or for surveillance that violates international norms.
In the revised policy, announced Tuesday, Google said it remains committed to pursuing AI "responsibly" and in alignment with "widely accepted principles of international law and human rights." The removal of its earlier pledge on weapons and surveillance marks a significant shift in the company's approach to AI ethics.