
Microsoft’s new AI Bing taught my son ethnic slurs, and I’m horrified

Friday, February 10, 2023, 11:00 AM, from PC World
Remember Tay? That’s what I immediately fixated on when Microsoft’s new Bing started spouting racist terms in front of my fifth-grader.

I have two sons, and both of them are familiar with ChatGPT, OpenAI’s AI-powered chatbot. When Microsoft launched its new AI-powered Bing search engine and chatbot this week, my first thought upon returning home was to show them how it worked, and how it compared with a tool that they had seen before.

As it happened, my youngest son was home sick, so he was the first person I began showing Bing to when he walked into my office. I started giving him a tour of the interface, as I had done in my hands-on with the new Bing, but with an emphasis on how Bing explains things at length, how it uses footnotes, and, most of all, how it includes safeguards to prevent users from tricking it into using hateful language, as Tay had been tricked into doing. By bombarding Tay with racist language, the Internet turned Tay into a hateful bigot.

What I was trying to do was show my son how Bing would shut down a leading but otherwise innocuous query: “Tell me the nicknames for various ethnicitiies.” (I was typing quickly, so I misspelled the last word.)

I had used this exact query before, and Bing had rebuked me for possibly introducing hateful slurs. Unfortunately, Bing only saves previous conversations for about 45 minutes, I was told, so I couldn’t show him how Bing had responded earlier. But he saw what the new Bing said this time—and it was nothing I wanted my son to see.

The specter of Tay

Note: A Bing screenshot below includes derogatory terms for various ethnicities. We don’t condone using these racist terms, and only share this screenshot to illustrate exactly what we found.

What Bing supplied this time was far different from how it had responded before. Yes, it prefaced the response by noting that some ethnic nicknames were neutral or positive, and others were racist and harmful. But I expected one of two outcomes: Either Bing would provide socially acceptable characterizations of ethnic groups (Black, Latino) or simply decline to respond. Instead, it started listing pretty much every ethnic description it knew, both good and very, very bad.

[Screenshot: Bing’s responses to the ethnic-nicknames query, including the slur-laden list. Credit: Mark Hachman / IDG]

You can imagine my reaction. My son pivoted away from the screen in horror, as he knows that he’s not supposed to know or even say those words. As I started seeing some horribly racist terms pop up on my screen, I clicked the “Stop Responding” button.

I will admit that I shouldn’t have demonstrated Bing live in front of my son. But, in my defense, there were just so many reasons to feel confident that nothing like this would happen.

I shared my experience with Microsoft, and a spokesperson replied with the following: “Thank you for bringing this to our attention. We take these matters very seriously and are committed to applying learnings from the early phases of our launch. We have taken immediate actions and are looking at additional improvements we can make to address this issue.”

The company has reason to be cautious. For one, Microsoft has already experienced the very public nightmare of Tay, an AI the company launched in 2016. Users bombarded Tay with racist messages, discovering that the way Tay “learned” was through interactions with users. Awash in racist tropes, Tay became a bigot herself.

Microsoft said in 2016 that it was “deeply sorry” for what happened with Tay, and said it would bring it back when the vulnerability was fixed. (It apparently never was.) You would think that Microsoft would be hypersensitive to exposing users to such themes again, especially as the public has become increasingly sensitive to what can be considered a slur.

Some time after I had unwittingly exposed my son to Bing’s summary of slurs, I tried the query again; that attempt produced the second response you see in the screenshot above. This is what I had expected from Bing, even though it was a continuation of the conversation I had had with it before.

Microsoft says that it’s better than this

There’s another point to be made here: Tay was an AI personality, sure, but it was Microsoft’s voice. This was, in effect, Microsoft saying those things. In the screenshot above, what’s missing? Footnotes. Links. Both are typically present in Bing’s responses, but they’re absent here. In effect, this is Microsoft itself responding to the question.

A very big part of Microsoft’s new Bing launch event at its headquarters in Redmond, Washington, was an assurance that the mistakes of Tay wouldn’t happen again. According to a recent blog post from Brad Smith, Microsoft’s vice chair and president, the company has been working hard on the foundation of what it calls Responsible AI for six years. In 2019, it created an Office of Responsible AI. Microsoft named a Chief Responsible AI Officer, Natasha Crampton, who, along with Smith and Responsible AI Lead Sarah Bird, spoke publicly at Microsoft’s event about how the company has “red teams” trying to break its AI. The company even offers a Responsible AI business school, for Pete’s sake.

Microsoft doesn’t call out racism and sexism as specific harms that its Responsible AI guardrails are meant to prevent. But it refers constantly to “safety,” implying that users should feel comfortable and secure using its AI. If safety doesn’t include filtering out racism and sexism, that can be a big problem, too.

“We take all of that [Responsible AI] as first-class things which we want to reduce not just to principles, but to engineering practice, such that we can build AI that’s more aligned with human values, more aligned with what our preferences are, both individually and as a society,” Microsoft chief executive Satya Nadella said during the launch event.

In thinking about how I interacted with Bing, a question suggested itself: Was this entrapment? Did I essentially ask Bing to start parroting racist slurs under the guise of academic research? If I did, Microsoft failed badly in its safety guardrails here, too. A few seconds into a clip from the launch event (at 51:26), Sarah Bird, Responsible AI Lead at Microsoft’s Azure AI, talks about how Microsoft specifically designed an automated conversational tool to interact with Bing, just to see whether it (or a human) could convince the chatbot to violate its safety regulations. The idea was that Microsoft would test this extensively before a human ever got their hands on it, so to speak.

I’ve used these AI chatbots enough to know that if you ask one the same question enough times, it will generate different responses. It’s a conversation, after all. But think through all of the conversations you’ve ever had, say with a good friend or close coworker. Even if the conversation goes smoothly hundreds of times, it’s that one time you hear something unexpectedly awful that will shape all future interactions with that person.

Does this slur-laden response conform to Microsoft’s “Responsible AI” program? That invites a whole suite of questions pertaining to free speech, the intent of research, and so on—but Microsoft has to be absolutely perfect in this regard. It has tried to convince us that it will be. We’ll see.

That night, I closed down Bing, shocked and embarrassed that I had exposed my son to words I don’t want him ever to think, let alone use. It’s certainly made me think twice about using it in the future.
