Popular LLMs produce insecure code by default
Thursday, April 24, 2025, 03:44 PM, from BetaNews
A new study from Backslash Security examines seven current versions of OpenAI's GPT, Anthropic's Claude, and Google's Gemini to test how varying prompting techniques influence their ability to produce secure code. Three tiers of prompting techniques, ranging from 'naive' to 'comprehensive,' were used to generate code for everyday use cases. Code output was measured by its resilience against 10 Common Weakness Enumeration (CWE) use cases. The results show that although secure code output improves with prompt sophistication, all LLMs generally produced insecure code by default. In response to simple, 'naive' prompts, all LLMs tested generated insecure…
https://betanews.com/2025/04/24/popular-llms-produce-insecure-code-by-default/
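The article does not name the 10 CWE categories the study tested, but SQL injection (CWE-89) is a typical member of such lists. A minimal sketch, assuming CWE-89 is among them, of the contrast between what a naively prompted model often emits and the parameterized version a security-focused prompt should elicit:

```python
import sqlite3

def find_user_insecure(conn, username):
    # Insecure (CWE-89): untrusted input interpolated directly into SQL,
    # the kind of code naive prompting frequently produces.
    cur = conn.execute(f"SELECT id FROM users WHERE name = '{username}'")
    return cur.fetchall()

def find_user_secure(conn, username):
    # Secure: a parameterized query keeps data separate from SQL syntax.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A classic injection payload: the insecure version leaks every row,
# while the parameterized version matches nothing.
payload = "' OR '1'='1"
print(find_user_insecure(conn, payload))  # [(1,)]
print(find_user_secure(conn, payload))    # []
```

The function names and schema here are hypothetical illustrations, not code from the Backslash Security study.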