Can We Build Ethics Into Automated Decision-Making?
Monday, March 25, 2019, 02:34 AM, from Slashdot
'Machines will need to make ethical decisions, and we will be responsible for those decisions,' argues Mike Loukides, O'Reilly Media's vice president of content strategy:
We are surrounded by systems that make ethical decisions: systems approving loans, trading stocks, forwarding news articles, recommending jail sentences, and much more. They act for us or against us, but almost always without our consent or even our knowledge. In recent articles, I've suggested that the ethics of artificial intelligence itself needs to be automated. But my suggestion ignores the reality that ethics has already been automated... The sheer number of decisions that need to be made means that we can't expect humans to make them. Every time data moves from one site to another, from one context to another, from one intent to another, there is an action that requires some kind of ethical decision...

Ethical problems arise when a company's interest in profit comes before the interests of its users. We see this all the time: in recommendations designed to maximize ad revenue via 'engagement'; in recommendations that steer customers to Amazon's own products rather than to other products on its platform. The customer's interest must always come before the company's. That applies to recommendations in a news feed or on a shopping site, but also to how the customer's data is used and where it's shipped.

Facebook believes deeply that 'bringing the world closer together' is a social good, but, as Mary Gray said on Twitter, when we say that something is a 'social good,' we need to ask: 'good for whom?' Good for advertisers? Stockholders? Or for the people who are being brought together? The answers aren't all the same, and depend deeply on who's connected and how... It's time to start building the systems that will truly assist us in managing our data.

The article argues that spam filters provide a surprisingly good set of first design principles: they work in the background without interfering with users, always allow users to revoke their decisions, and proactively seek out user input in ambiguous or unclear situations.
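Those three spam-filter principles can be read as a small architecture: act silently when confident, ask the user when uncertain, and record every action so it can be undone. A minimal sketch in Python, with all names and thresholds being hypothetical illustrations rather than anything from the article:

```python
# Sketch of the spam-filter design principles: act quietly when confident,
# proactively ask the user when ambiguous, and keep every decision revocable.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Decision:
    item: str
    action: str          # "quarantine" or "deliver"
    confidence: float    # classifier's spam probability for this item
    revoked: bool = False

@dataclass
class BackgroundFilter:
    classify: Callable[[str], float]   # returns spam probability in [0, 1]
    act_threshold: float = 0.95        # act silently above this
    ignore_threshold: float = 0.05     # deliver silently below this
    log: List[Decision] = field(default_factory=list)

    def handle(self, item: str, ask_user: Callable[[str], bool]) -> Decision:
        p = self.classify(item)
        if p >= self.act_threshold:
            d = Decision(item, "quarantine", p)   # confident: act in the background
        elif p <= self.ignore_threshold:
            d = Decision(item, "deliver", p)      # confident: leave the user alone
        else:
            # Ambiguous case: seek user input rather than deciding silently.
            d = Decision(item, "quarantine" if ask_user(item) else "deliver", p)
        self.log.append(d)                        # every decision is recorded...
        return d

    def revoke(self, decision: Decision) -> None:
        decision.revoked = True                   # ...and can be undone later
```

The two thresholds carve out a middle band where the system admits uncertainty; everything it does lands in a log the user can inspect and revoke, which is the property the article says most automated decision-making currently lacks.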
But in the real world beyond our inboxes, 'machines are already making ethical decisions, and often doing so badly. Spam detection is the exception, not the rule.' Read more of this story at Slashdot.
rss.slashdot.org/~r/Slashdot/slashdot/~3/pr-_Rl8Sn3Y/can-we-build-ethics-into-automated-decision-mak