Giving Algorithms a Sense of Uncertainty Could Make Them More Ethical
Saturday, January 19, 2019, 01:45 AM, from Slashdot
An anonymous reader quotes a report from MIT Technology Review: Algorithms are increasingly being used to make ethical decisions. They are built to pursue a single mathematical goal, such as maximizing the number of soldiers' lives saved or minimizing the number of civilian deaths. When you start dealing with multiple, often competing, objectives or try to account for intangibles like 'freedom' and 'well-being,' a satisfactory mathematical solution doesn't always exist. 'We as humans want multiple incompatible things,' says Peter Eckersley, the director of research for the Partnership on AI, who recently released a paper that explores this issue. 'There are many high-stakes situations where it's actually inappropriate -- perhaps dangerous -- to program in a single objective function that tries to describe your ethics.' These solutionless dilemmas aren't specific to algorithms. Ethicists have studied them for decades and refer to them as impossibility theorems. So when Eckersley first recognized their applications to artificial intelligence, he borrowed an idea directly from the field of ethics to propose a solution: what if we built uncertainty into our algorithms?
Eckersley puts forth two possible techniques to express this idea mathematically. He begins with the premise that algorithms are typically programmed with clear rules about human preferences. We'd have to tell the algorithm, for example, that we definitely prefer friendly soldiers over friendly civilians, and friendly civilians over enemy soldiers -- even if we weren't actually sure or didn't think that should always be the case. The algorithm's design leaves little room for uncertainty. The first technique, known as partial ordering, introduces just the slightest bit of uncertainty. You could program the algorithm to prefer friendly soldiers over enemy soldiers and friendly civilians over enemy soldiers, but you wouldn't specify a preference between friendly soldiers and friendly civilians. In the second technique, known as uncertain ordering, you have several lists of absolute preferences, but each one has a probability attached to it. Three-quarters of the time you might prefer friendly soldiers over friendly civilians over enemy soldiers. A quarter of the time you might prefer friendly civilians over friendly soldiers over enemy soldiers. The algorithm could handle this uncertainty by computing multiple solutions and then giving humans a menu of options with their associated trade-offs, Eckersley says.
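To make the two techniques concrete, here is a minimal Python sketch. All names (`PARTIAL_ORDER`, `preferred`, `menu_of_options`, the outcome labels) are hypothetical illustrations, not from Eckersley's paper: the first part encodes a partial order that deliberately stays silent on friendly soldiers vs. friendly civilians, and the second encodes two total orderings with attached probabilities and surfaces a "menu" of top choices weighted by how much probability mass backs each one.

```python
# Hypothetical outcome labels for illustration only.
FRIENDLY_SOLDIER = "friendly_soldier"
FRIENDLY_CIVILIAN = "friendly_civilian"
ENEMY_SOLDIER = "enemy_soldier"

# --- Technique 1: partial ordering --------------------------------------
# Only the pairs we are confident about are listed; the preference between
# friendly soldiers and friendly civilians is deliberately left unspecified.
PARTIAL_ORDER = {
    (FRIENDLY_SOLDIER, ENEMY_SOLDIER),
    (FRIENDLY_CIVILIAN, ENEMY_SOLDIER),
}

def preferred(a, b):
    """Return a if a is known to be preferred over b, b for the reverse,
    or None when the partial order is silent on the pair."""
    if (a, b) in PARTIAL_ORDER:
        return a
    if (b, a) in PARTIAL_ORDER:
        return b
    return None

# --- Technique 2: uncertain ordering ------------------------------------
# Several complete preference lists, each with a probability attached,
# mirroring the 75% / 25% split described in the article.
UNCERTAIN_ORDERS = [
    (0.75, [FRIENDLY_SOLDIER, FRIENDLY_CIVILIAN, ENEMY_SOLDIER]),
    (0.25, [FRIENDLY_CIVILIAN, FRIENDLY_SOLDIER, ENEMY_SOLDIER]),
]

def menu_of_options(candidates):
    """For each hypothesized ordering, find the top-ranked candidate and
    accumulate the probability behind it -- a menu of options with their
    trade-offs for a human to choose from."""
    menu = {}
    for prob, order in UNCERTAIN_ORDERS:
        top = min(candidates, key=order.index)
        menu[top] = menu.get(top, 0.0) + prob
    return menu
```

For example, `preferred(FRIENDLY_SOLDIER, FRIENDLY_CIVILIAN)` returns `None` rather than forcing a choice, while `menu_of_options([FRIENDLY_SOLDIER, FRIENDLY_CIVILIAN])` reports both options along with the probability weight behind each, leaving the final call to a human.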
Read more of this story at Slashdot.