MacMusic  |  PcMusic  |  440 Software  |  440 Forums  |  440TV  |  Zicos
microsoft
Search

Microsoft Details How It's Developing AI Responsibly

Sunday May 5, 2024. 09:34 AM , from Slashdot
Thursday the Verge reported that a new report from Microsoft 'outlines the steps the company took to release responsible AI platforms last year.'

Microsoft says in the report that it created 30 responsible AI tools in the past year, grew its responsible AI team, and required teams making generative AI applications to measure and map risks throughout the development cycle. The company notes that it added Content Credentials to its image generation platforms, which puts a watermark on a photo, tagging it as made by an AI model.

The company says it's given Azure AI customers access to tools that detect problematic content like hate speech, sexual content, and self-harm, as well as tools to evaluate security risks. This includes new jailbreak detection methods, which were expanded in March this year to include indirect prompt injections where the malicious instructions are part of data ingested by the AI model.

It's also expanding its red-teaming efforts, including both in-house red teams that deliberately try to bypass safety features in its AI models as well as red-teaming applications to allow third-party testing before releasing new models.

Microsoft's chief Responsible AI officer told the Washington Post this week that 'We work with our engineering teams from the earliest stages of conceiving of new features that they are building.'

'The first step in our processes is to do an impact assessment, where we're asking the team to think deeply about the benefits and the potential harms of the system. And that sets them on a course to appropriately measure and manage those risks downstream. And the process by which we review the systems has checkpoints along the way as the teams are moving through different stages of their release cycles...

'When we do have situations where people work around our guardrails, we've already built the systems in a way that we can understand that that is happening and respond to that very quickly. So taking those learnings from a system like Bing Image Creator and building them into our overall approach is core to the governance systems that we're focused on in this report.'

They also said ' it would be very constructive to make sure that there were clear rules about the disclosure of when content is synthetically generated,' and 'there's an urgent need for privacy legislation as a foundational element of AI regulatory infrastructure.'

Read more of this story at Slashdot.
https://slashdot.org/story/24/05/05/0521206/microsoft-details-how-its-developing-ai-responsibly?utm_...
News copyright owned by their original publishers | Copyright © 2004 - 2024 Zicos / 440Network
Current Date
Nov, Tue 5 - 09:25 CET