
Microsoft enlists AI to supercharge PC security and hinder hackers

Tuesday, March 28, 2023, 5:30 PM, from PC World
There’s a long list of things AI has been applied to: AI art, AI chatbots, even AI assistants to control your home. Now Microsoft is adding AI security to the list, too, with its new Security Copilot feature.

If you’ve ever managed your own PC security (or, more likely, haven’t), it probably amounts to little more than making sure your antivirus is up to date. If your PC is hacked, though, it’s an entirely different story: suddenly you’re thrust into an unfamiliar world that demands several stressful, immediate decisions, all of which you have to get right.

In such a situation, it would be extremely helpful to have someone or something to walk you through what happened, what to do, and how to prevent it from happening again to you or someone else. Security Copilot is the tool that Microsoft is making available for the job, using OpenAI’s new GPT-4 chat interface to help enterprise IT workers navigate through the maze of potential responses.

Yes, enterprise IT workers. For now, Microsoft is only making Security Copilot available to its enterprise customers, but we can certainly hope it will release something similar for consumers, too. Here’s how it works.

First, this isn’t the same large language model (LLM) that drives consumer applications like Bing Chat. Microsoft developed Security Copilot with a security-specific model, tuned with the knowledge and terminology that security professionals use. Microsoft also has already connected it to its own security products — and eventually, Microsoft says, Security Copilot will be able to integrate with third-party security solutions as well.

Finally, it’s up to date — while most consumer implementations of GPT-4 (Bing Chat and ChatGPT, among a few others) only “know” facts up to 2021, Microsoft says Security Copilot is constantly being fed with the 65 trillion threat signals that Microsoft sees every day. Microsoft said in a blog post that it has a “growing list of unique skills and prompts” that security teams can take advantage of.
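To make the idea concrete, here is a rough, hypothetical sketch of the general pattern the article describes (a GPT-4-class model sitting behind a security-specific prompt), written against the standard OpenAI Python client. Microsoft hasn’t published a public Security Copilot API, so the model name, system prompt, and alert text below are illustrative assumptions rather than Microsoft’s actual interface.

# Hypothetical sketch only: this is not Microsoft's Security Copilot API.
# It illustrates the underlying pattern -- a GPT-4-class model given a
# security-specific role and incident data -- using the OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The security-tuned system prompt stands in for the domain-specific model
# the article describes; the alert summary is invented example data.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a security incident assistant. Given alert data, "
                "summarize what likely happened, suggest immediate containment "
                "steps, and list follow-up hardening actions."
            ),
        },
        {
            "role": "user",
            "content": (
                "Alert: 14 failed sign-ins followed by a successful sign-in "
                "from an unfamiliar IP, then a mailbox forwarding rule was "
                "created on the same account within five minutes."
            ),
        },
    ],
)

print(response.choices[0].message.content)

In Microsoft’s product, the domain knowledge lives in the tuned model and its connections to Microsoft’s own security telemetry rather than in a system prompt, but the basic request-and-response shape an analyst works with would be similar.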

Microsoft Security Copilot shows how the tool could be used to help react to a potential attack. (Image: Microsoft)

Microsoft says that the advantage of using Security Copilot is that its language model can be used to detect otherwise imperceptible signals that an attack is taking place, and is constantly learning to improve those skills. The company showed off how Security Copilot could be used as an assistant to identify problems and mitigate them.

Anyone who’s familiar with ChatGPT, Bing Chat, Google’s Bard, or other AI chatbots understands, however, that AI sometimes “hallucinates” facts that may not be true. That’s not a critical issue when a user correctly understands that Abraham Lincoln was not voted out of office in 2000, say. But it’s much more of a concern when a security professional can’t be sure whether a department or a user really has suspicious activity attached to their email or shared files.

Microsoft’s response isn’t entirely convincing. “Security Copilot doesn’t always get everything right,” Microsoft said. “AI-generated content can contain mistakes. But Security Copilot is a closed-loop learning system, which means it’s continually learning from users and giving them the opportunity to give explicit feedback with the feedback feature that is built directly into the tool. As we continue to learn from these interactions, we are adjusting its responses to create more coherent, relevant and useful answers.”
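The “closed-loop learning system” Microsoft describes comes down to capturing explicit analyst feedback on each answer and feeding it back into the model’s tuning. A minimal sketch of that capture step, assuming a simple local log, might look like the following; the file name, record fields, and example feedback are illustrative assumptions, not anything Microsoft has documented.

# Minimal sketch of the closed-loop idea: record the analyst's explicit
# feedback alongside each response so it can inform later review or tuning.
# The file name and record shape are assumptions for illustration.
import json
from datetime import datetime, timezone
from pathlib import Path

FEEDBACK_LOG = Path("copilot_feedback.jsonl")

def record_feedback(prompt: str, response: str, helpful: bool, notes: str = "") -> None:
    """Append one feedback record as a JSON line for later review or retraining."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "helpful": helpful,
        "notes": notes,
    }
    with FEEDBACK_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: an analyst flags an answer that misattributed a suspicious sign-in.
record_feedback(
    prompt="Which account triggered the forwarding-rule alert?",
    response="The alert originated from the finance shared mailbox.",
    helpful=False,
    notes="Wrong account; the alert was on an individual user mailbox.",
)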

Of course, Microsoft doesn’t address what could happen in a world of AI versus AI, where AI-assisted attackers aim AI-designed phishing emails and other attacks at consumers and businesses alike. That “smart war” would hopefully take place behind the scenes.

What this implies, however, is that enterprises may test Security Copilot on a small segment of their users. That’s not necessarily bad news, for them or for you. If AI really becomes the tool Microsoft wants it to be, consumers and businesses alike are going to have to trust that it knows what it’s talking about and can do the job of a human. If the world’s top businesses eventually trust Security Copilot, that may assure you that AI can help secure your PC, too.

https://www.pcworld.com/article/1674223/microsoft-enlists-security-copilot-ai-to-fight-hackers.html