The third annual AI Now report: 10 more ways to make AI safe for human flourishing
Thursday, December 6, 2018, 08:27 PM, from BoingBoing
Every year, AI Now, NYU's nonprofit critical activist group, releases a report on the state of AI, with ten recommendations for making machine learning systems equitable, transparent and fail-safe (2016, 2017). This year's report has just been published, written by a fantastic panel including Meredith Whittaker (previously -- one of the leaders of the successful Googler uprising over the company's contract to supply AI tools to the Pentagon's drone project); Kate Crawford (previously -- one of the most incisive critics of AI); Jason Schultz (previously -- a former EFF attorney, now at NYU); and many others.
This year's recommendations come in the wake of a string of worsening scandals for AI tools, including their implication in genocidal violence in Myanmar. They include:

1. sector-by-sector regulation of AI by appropriate regulators;
2. strong regulation of facial recognition;
3. broad, accountable oversight for AI development incorporating a cross-section of stakeholders;
4. limits on trade secrecy and other barriers to auditability and transparency for AI systems that impact public service provision;
5. corporate whistleblower protection for AI researchers in the tech sector;
6. a "truth-in-advertising" standard for AI products;
7. a much deeper approach to inclusivity and diversity in the tech sector;
8. "full stack" evaluations of AI that incorporate everything from labor displacement to energy consumption and beyond;
9. funding for community litigation for AI accountability; and
10. an expansion of university AI programs beyond Computer Science departments.

From the report:

4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector. Vendors and developers who create AI and automated decision systems for use in government should agree to waive any trade secrecy or other legal claim that inhibits full auditing and understanding of their software. Corporate secrecy laws are a barrier to due process: they contribute to the "black box effect" rendering systems opaque and unaccountable, making it hard to assess bias, contest decisions, or remedy errors. Anyone procuring these technologies for use in the public sector should demand that vendors waive these claims before entering into any agreements.

5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers. Organizing and resistance by technology workers has emerged as a force for accountability and ethical decision making. Technology companies need to protect workers' ability to organize, whistleblow, and make ethical choices about what projects they work on. This should include clear policies accommodating and protecting conscientious objectors, ensuring workers the right to know what they are working on, and the ability to abstain from such work without retaliation or retribution. Workers raising ethical concerns must also be protected, as should whistleblowing in the public interest.

10. University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

AI Now Report 2018 [Meredith Whittaker, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, Sarah Myers West, Rashida Richardson, Jason Schultz and Oscar Schwartz/AI Now Institute]

After a Year of Tech Scandals, Our 10 Recommendations for AI [AI Now/Medium]

(Thanks, Meredith!)

(Image: Cryteria, CC-BY)
https://boingboing.net/2018/12/06/no-more-black-boxes.html