The Electronic Frontier Foundation’s Peter Eckersley writes: Yesterday, The New York Times reported widespread unrest among Google’s employees about the company’s work on a U.S. military project called “Project Maven.” Google has claimed that its work on Maven is for “non-offensive uses only,” but the company appears to be building computer vision systems to flag objects and people seen by military drones for human review. This may in some cases lead to subsequent targeting by missile strikes. EFF has been mulling the ethical implications of such contracts, and we have some advice for Google and other tech companies that are considering building military AI systems.

The EFF lists several “starting points” that any company, or any worker, considering whether to work with the military on a project with potentially dangerous or risky AI applications should be asking:

1. Is it possible to create strong and binding international institutions or agreements that define acceptable military uses and limitations in the use of AI? While this is not an easy task, the current lack of such structures is troubling. There are serious and potentially destabilizing impacts from deploying AI in any military setting not clearly governed by settled rules of war. The use of AI in potential target identification processes is one clear category of uses that must be governed by law.
2. Is there a robust process for studying and mitigating the safety and geopolitical stability problems that could result from the deployment of military AI? Does this process apply before work commences, along the development pathway, and after deployment? Could it incorporate sufficient expertise to address subtle and complex technical problems? And would those leading the process have sufficient independence and authority to ensure that it can check companies’ and military agencies’ decisions?
3. Are the contracting agencies willing to commit to not using AI for autonomous offensive weapons? Or to ensuring that any defensive autonomous systems are carefully engineered to avoid risks of accidental harm or conflict escalation? Are present testing and formal verification methods adequate for that task?
4. Can there be transparent, accountable oversight from an independently constituted ethics board or similar entity with both the power to veto aspects of the program and the power to bring public transparency to issues where necessary or appropriate? For example, while Alphabet’s AI-focused subsidiary DeepMind has committed to independent ethics review, we are not aware of similar commitments from Google itself. Given this letter, we are concerned that the internal transparency, review, and discussion of Project Maven inside Google was inadequate. Any project review process must be transparent, informed, and independent. While such independence remains difficult to guarantee, without it a project runs a real risk of harm.
Source: Slashdot – EFF: Google Should Not Help the US Military Build Unaccountable AI Systems
