Google Promises Ethical Principles To Guide Development of Military AI
Thursday May 31, 2018, 03:00 PM, from Slashdot
An anonymous reader quotes a report from The Verge: Google is drawing up a set of guidelines that will steer its involvement in developing AI tools for the military, according to a report from The New York Times. What exactly these guidelines will stipulate isn't clear, but Google says they will include a ban on the use of artificial intelligence in weaponry. The principles are expected to be announced in full in the coming weeks. They are a response to the controversy over the company's decision to develop AI tools for the Pentagon that analyze drone surveillance footage.
Internal emails obtained by the Times show that Google was aware of the upset this news might cause. Chief scientist at Google Cloud, Fei-Fei Li, told colleagues that they should 'avoid at ALL COSTS any mention or implication of AI' when announcing the Pentagon contract. 'Weaponized AI is probably one of the most sensitized topics of AI -- if not THE most. This is red meat to the media to find all ways to damage Google,' said Li. But Google never ended up making the announcement, and it has since been on the back foot defending its decision. The company says the technology it's helping to build for the Pentagon simply 'flags images for human review' and is for 'non-offensive uses only.' The contract is also small by industry standards -- worth just $9 million to Google, according to the Times.