8 vendors bringing AI to devsecops and application security
Thursday, August 28, 2025, 11:00 AM, from InfoWorld
At Black Hat USA 2025 and DEF CON 33, the mood among application security vendors was equal parts optimism and urgency. Across the show floors and presentations, one theme stood out: AI is no longer just a buzzword or a bolt-on feature; it’s becoming the foundation of modern software security.
From autonomous vulnerability remediation to AI governance, these startups and established players are embedding intelligence into every layer of the devops and devsecops pipeline. Vendors waxed enthusiastic about AI’s promise to accelerate software delivery and security, and warned of the equally real risks of getting it wrong. As Snyk’s head of developer and security relations, Randall Degges, said, “Wouldn’t it be cool if security could just be an immediate part of coding, something developers never even think about?”

For some, that means using large language models to uncover “shadow-patched” vulnerabilities that never receive CVEs. “Even if you do absolutely nothing wrong, your app can still be vulnerable because of the open-source supply chain,” said Mackenzie Jackson, developer advocate at Aikido Security, which has found hundreds of such hidden flaws. Others focus on cleaning the foundation itself. Chainguard’s “farm-to-table” approach to hardened base images ensures that certain CVEs “never show up on scans,” according to Dustin Kirkland, Chainguard VP of engineering.

Several vendors are exploring how AI can make security both faster and more trustworthy. Checkmarx, for example, is embedding AI agents directly into AI-native IDEs like Windsurf and Cursor to give developers real-time secure coding guidance. At the same time, app security vendors warn of the dangers of AI-generated code and push for governance and visibility into the models themselves. “Shadow AI is the new shadow IT,” said Mitchell Johnson, chief product development officer at Sonatype.

Across the board, the sentiment is clear. Security isn’t just about finding problems anymore; it’s about fixing them in the fastest, least disruptive way possible. Together, these eight companies point to where application security and software supply chain security are headed.

Aikido Security

Aikido Security addresses the risks of what the company calls “shadow-patched” vulnerabilities in open-source supply chains. Whereas most AppSec programs rely on NVD and CVE disclosures, Aikido notes that a sizable fraction of vulnerabilities never make it to those databases, leaving enterprises exposed to risks they can neither patch nor track.

To close that gap, Aikido uses large language models to mine commit histories and code diffs across millions of open-source projects. The models flag suspicious commits that resemble security fixes, even when no CVE exists. Human analysts then validate these findings before ingesting them into Aikido Intel, the company’s open-source threat feed. Since its 2024 launch, the company says it has uncovered 511 previously unknown vulnerabilities, including critical flaws in projects like Craft CMS, etcd, and LangChain. More than half of those critical bugs never received CVEs, meaning that organizations that rely on “official” feeds miss them entirely.

By monitoring more than 30,000 new package versions daily across npm, PyPI, and other ecosystems, Aikido’s AI-driven threat detection system looks for hidden payloads, credential stealers, and obfuscated malware. One recent discovery was a malicious fork of a cryptocurrency SDK used by exchanges; another exposed heavily obfuscated malware strains buried in popular libraries.

To push protection directly into developer workflows, Aikido introduced Safe Chain, an open-source wrapper around the npm CLI, npx, yarn, pnpm, and pnpx that automatically cross-checks packages against Aikido’s malware database before installation. In the words of Aikido, Safe Chain provides “frictionless guardrails” in an environment where shadow patches, undisclosed vulnerabilities, and supply chain malware increasingly erode trust in open source.
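Aikido has not published Safe Chain’s internals, but the general pattern is easy to picture: intercept the install command, check each requested package against a malware feed, and only then hand off to the real package manager. The TypeScript sketch below illustrates that flow; the feed URL, verdict format, and fail-closed policy are assumptions made for the example, not Aikido’s actual API or behavior.

```typescript
// safe-install.ts -- illustrative sketch of a Safe Chain-style install guard.
// The feed URL, verdict format, and fail-closed policy are hypothetical, not
// Aikido's actual API; the point is the intercept-check-install pattern.
import { spawnSync } from "node:child_process";

const MALWARE_FEED_URL = "https://malware-feed.example.com/packages"; // placeholder

interface FeedVerdict {
  package: string;
  malicious: boolean;
  reason?: string;
}

// Strip a trailing version specifier while keeping scoped names (@scope/pkg) intact.
function packageName(spec: string): string {
  const at = spec.lastIndexOf("@");
  return at > 0 ? spec.slice(0, at) : spec;
}

// Hypothetical lookup: GET <feed>/<name> returns a JSON verdict for the package.
async function checkPackage(name: string): Promise<FeedVerdict> {
  const res = await fetch(`${MALWARE_FEED_URL}/${encodeURIComponent(name)}`);
  if (!res.ok) {
    // Failing closed on feed errors is a policy choice made for this sketch.
    return { package: name, malicious: true, reason: `feed error ${res.status}` };
  }
  return (await res.json()) as FeedVerdict;
}

async function main(): Promise<void> {
  const specs = process.argv.slice(2); // e.g. ["lodash@4.17.21", "@scope/pkg"]
  const verdicts = await Promise.all(specs.map((s) => checkPackage(packageName(s))));
  const blocked = verdicts.filter((v) => v.malicious);

  if (blocked.length > 0) {
    for (const v of blocked) {
      console.error(`Blocked ${v.package}: ${v.reason ?? "flagged as malware"}`);
    }
    process.exit(1);
  }

  // Nothing flagged: hand the request off to the real package manager.
  const result = spawnSync("npm", ["install", ...specs], { stdio: "inherit" });
  process.exit(result.status ?? 1);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

The same guard could front yarn or pnpm simply by changing the command passed to spawnSync.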
Chainguard

Chainguard, founded by former Google engineers with deep experience in Linux distributions and supply chain security, is a provider of hardened, continuously updated, “zero-CVE” open-source software packages, from base operating system images to minimal container images, language libraries, and virtual machine appliances. The company focuses on devsecops teams, with solutions designed to give both developers and security architects a more trustworthy foundation for building and running software.

The flagship offering is a rolling Linux distribution backed by security SLAs: seven days for critical vulnerabilities and 14 for others, though the average fix time is under 48 hours, according to the company. Chainguard says it maintains a growing catalog of more than 1,600 container images, expanding by about 100 per month, each built directly from upstream source rather than derived from another distribution. This “farm-to-table” approach ensures the entire tool chain, including compilers, runtimes, and dependencies, is rebuilt, retested, and re-released within hours of an upstream update.

Chainguard Libraries are secure builds of widely used Java and Python packages, with Node.js libraries next on the roadmap. Chainguard says that building libraries from source addresses a common gap, where developers fetch third-party code directly from the internet without the protections of a packaged distribution. A third product line, Chainguard Virtual Machines, applies the same minimal, hardened philosophy to purpose-built VM appliances, often used as Kubernetes worker nodes or in scale-out cloud deployments. In many cases, container images from the Chainguard catalog can be rendered as bootable VM appliances for workloads that require full OS-level access to hardware resources.

Chainguard continuously monitors upstream projects for new versions or vulnerabilities, triggering rebuilds, integration tests, and publishing to customer registries. For security teams, Chainguard says starting with a clean, verified base image means certain CVEs “never show up on scans” because they’re eliminated entirely before deployment. When issues do emerge, remediation is measured in hours, not weeks, the company says.

Checkmarx

At Black Hat 2025, Checkmarx, which provides a suite of application security tools in its Checkmarx One platform, announced Checkmarx One Developer Assist, the first in a portfolio of AI-driven security agents designed for AI-native IDEs such as Windsurf, Cursor, and GitHub Copilot. Developer Assist brings secure coding guidance directly into the developer workflow, helping software developers address vulnerabilities as they write code instead of after the fact.

In addition, Checkmarx previewed two upcoming Assist agents, which are expected to arrive later this year. The Policy Assist agent finds and fixes vulnerabilities as code moves through the CI/CD pipeline, while the Insights Assist agent provides real-time visibility into risk posture. The company continues to offer its AI Secure Coding Assistant (ASCA) for traditional integrated development environments (IDEs) including Visual Studio Code, Visual Studio, and JetBrains IDEs, alongside the newer extensions for Windsurf and Cursor.
A key differentiator of the Checkmarx platform is the breadth of testing approaches. With static application security testing (SAST), software composition analysis (SCA), and security scanning and testing for APIs, container images, and infrastructure-as-code (IaC), as well as application security posture management (ASPM) capabilities, the platform provides software development organizations with a consolidated view of software risk. By collecting vulnerability findings, risk insights, and remediation guidance into a single view, Checkmarx One helps teams identify issues sooner and address them faster.

GitHub

GitHub, the home of most of the world’s open-source projects, has evolved from a source code management system into a full collaboration platform: first for developers, then for developers and security teams, and now for developers working alongside AI agents. The company’s security philosophy centers on going beyond vulnerability detection to enabling efficient, large-scale remediation, particularly in an era when AI-generated code is rapidly increasing development output.

First, GitHub makes it easier for development teams to deal with issues by catching them early and baking the fixes into the normal development workflow. Second, the platform can flag a vulnerable library once and push fixes or advisories across every repo and team using it, potentially millions of them.

GitHub recently unveiled enhancements to its security campaigns feature. Security campaigns allow security teams to filter, prioritize, and assign vulnerabilities directly within the GitHub workflow, eliminating the need for developers to leave their environment to perform security-related tasks. GitHub has worked to incorporate context from production environments into prioritization workflows, through integration with Microsoft Defender for Cloud, for example, though specific runtime-based prioritization within campaigns has not been formally detailed in public documentation.

The goal is to provide developers with prioritized, context-rich issues, augmented with “autofixes,” which GitHub says can be validated through additional checks before being proposed. With GitHub Copilot and Copilot agents integrated into this process, coding agents can iterate on fixes automatically while keeping the developer in the loop for final approval, mitigating risks from AI-generated changes. The aim, according to GitHub, is to help make security an integrated part of development workflows, from code creation to pull request review, rather than a separate afterthought, while maintaining an open ecosystem and human oversight.
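The campaign workflow itself lives in GitHub’s UI, but the alert data behind it is reachable through GitHub’s public REST API, so the basic filter-and-prioritize step can be approximated in a short script. The sketch below uses the documented Dependabot alerts endpoint; the organization, repository names, and severity-only scoring are illustrative assumptions, not how GitHub’s security campaigns actually prioritize work.

```typescript
// triage.ts -- rough sketch of pulling and ranking open Dependabot alerts
// across a few repositories via GitHub's documented REST endpoint
// GET /repos/{owner}/{repo}/dependabot/alerts. The organization, repository
// names, and severity-only scoring are illustrative, not GitHub's campaign logic.
import { Octokit } from "octokit";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// Hypothetical set of repositories a campaign might target.
const repos = [
  { owner: "example-org", repo: "payments-api" },
  { owner: "example-org", repo: "web-frontend" },
];

const severityRank: Record<string, number> = { critical: 4, high: 3, medium: 2, low: 1 };

async function collectAlerts() {
  const findings: { repo: string; pkg: string; severity: string; summary: string }[] = [];

  for (const { owner, repo } of repos) {
    // Only open alerts; pagination is omitted to keep the sketch short.
    const { data: alerts } = await octokit.request(
      "GET /repos/{owner}/{repo}/dependabot/alerts",
      { owner, repo, state: "open", per_page: 100 }
    );

    for (const alert of alerts) {
      findings.push({
        repo: `${owner}/${repo}`,
        pkg: alert.dependency.package?.name ?? "unknown",
        severity: alert.security_advisory.severity,
        summary: alert.security_advisory.summary,
      });
    }
  }

  // Highest severity first: a crude stand-in for the richer context
  // (exposure, runtime reachability) a real campaign would factor in.
  findings.sort((a, b) => (severityRank[b.severity] ?? 0) - (severityRank[a.severity] ?? 0));
  return findings;
}

collectAlerts().then((findings) => console.table(findings));
```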
JFrog

JFrog provides a devsecops platform aimed at uniting software supply chain security with continuous delivery. Its focus is on giving developers and security teams a consolidated view across source code, binaries, containers, and runtime environments, which the company refers to as a “single source of truth.”

A key emphasis is context. Rather than flagging every CVE, JFrog correlates vulnerabilities with the code that’s actually running in production, helping teams prioritize real risks over theoretical ones. This approach also extends to zero-day vulnerabilities, where organizations can determine not just whether an affected package exists somewhere in the pipeline, but whether it’s actively deployed and exploitable. Security scanning for open-source dependencies, secrets detection, and container analysis are integrated with JFrog’s artifact and release management, tying issues directly back to the builds and deployments that introduced them. The company says this helps cut down on noise from low-priority issues while accelerating remediation for those that matter.

Secrets management remains a growth area. JFrog highlights expanded coverage of credential patterns and support for custom detection rules. The company characterizes this as offering “360-degree security on secrets,” with visibility spanning source code, build artifacts, and other points in the software factory. JFrog is also applying AI models to identify risk patterns, improve correlation across security tools, and strengthen automation in remediation workflows.

Legit Security

Legit Security describes its offering as an AI-powered application security posture management (ASPM) platform with deep roots in software supply chain security. Originally founded to secure the “software factory” itself, including CI/CD pipelines, source control systems, and developer collaboration tools, Legit has expanded into vulnerability management, with a strong emphasis on business context and root-cause remediation.

The platform ingests data from its own scanners, covering SAST, SCA, secrets detection, and pipeline security, and from third-party tools. This data is overlaid with application context such as business criticality, data sensitivity, internet exposure, and material code changes. Legit’s goal is to filter the overwhelming number of findings down to the small set of vulnerabilities that are both exploitable and impactful. Legit uses AI to help classify and prioritize results, cutting false positives by an order of magnitude, while ensuring developers stay in control of the final decisions.

Legit’s root-cause correlation engine is a notable differentiator. Instead of leaving developers with dozens of separate tickets for the same underlying issue, spread across SCA scans, container scans, and runtime findings, Legit consolidates them into a single, fix-once task. A developer updating one version of a vulnerable dependency, for example, might automatically resolve 70 separate vulnerability alerts and their corresponding Jira tickets, according to Legit.

Recent innovations include AI-powered remediation suggestions, tailored to the developer’s code base and environment, with human-in-the-loop review before merging. In June the company released a Model Context Protocol (MCP) server, enabling real-time feedback on security issues as developers generate code in AI-enabled IDEs, with future plans for proactive misconfiguration detection in context. Lastly, AI discovery and governance capabilities allow devsecops teams to inventory all AI models in use, whether adopted officially or introduced by developers, to support secure adoption and policy enforcement.

Snyk

Snyk provides tools that help programmers write and maintain secure code, whether written by hand or generated by AI. The platform offers both static and dynamic scanning, detecting vulnerabilities in containers, infrastructure-as-code files, open-source dependencies, and source code. Snyk integrates AI capabilities to both identify and remediate issues in real time, often without requiring explicit developer action.

At Black Hat 2025, Snyk unveiled three MCP-related advancements. First, it introduced a Model Context Protocol server that enables its scanning tools to plug into modern AI-powered coding environments. Second, it introduced a free MCP scanning tool that detects “toxic flows,” where the combination of otherwise safe MCP server functions could create exploitable conditions. Third, the company extended its AI Bill of Materials (AI BoM) feature to include visibility into MCP components. AI BoM uses the CycloneDX standard to catalog every AI tool and model within an application for compliance and governance purposes.
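CycloneDX is an open OWASP standard, and since specification version 1.5 its component model has included a machine-learning-model type, which is the kind of entry an AI BoM carries. The hand-rolled TypeScript sketch below shows roughly what such an inventory looks like; the specific model, MCP server, and versions are invented examples, and real tooling (Snyk’s or anyone else’s) would generate the document automatically rather than by hand.

```typescript
// ai-bom.ts -- hand-rolled sketch of a minimal CycloneDX-style AI BoM document.
// CycloneDX 1.5 added the "machine-learning-model" component type; the model,
// MCP server, and versions listed here are invented examples for illustration.
interface CdxComponent {
  type: string;                 // e.g. "machine-learning-model", "application", "library"
  name: string;
  version?: string;
  supplier?: { name: string };
}

interface CdxBom {
  bomFormat: "CycloneDX";
  specVersion: string;
  version: number;              // revision number of this BOM document
  components: CdxComponent[];
}

const aiBom: CdxBom = {
  bomFormat: "CycloneDX",
  specVersion: "1.5",
  version: 1,
  components: [
    {
      // Illustrative: a hosted model the application calls at runtime.
      type: "machine-learning-model",
      name: "gpt-4o",
      supplier: { name: "OpenAI" },
    },
    {
      // Illustrative: an MCP server wired into developers' AI-enabled IDEs.
      type: "application",
      name: "internal-docs-mcp-server",
      version: "0.3.1",
    },
  ],
};

// Serialize for storage alongside other compliance artifacts.
console.log(JSON.stringify(aiBom, null, 2));
```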
Snyk differentiates itself through a hybrid approach to AI. For vulnerability detection, the company relies on symbolic AI and custom rule sets built from its proprietary vulnerability intelligence, ensuring high accuracy without relying on the often unpredictable outputs of large language models. For autonomous remediation, Snyk fine-tunes coding models with its curated security data set, issuing automated fixes only when internal testing shows a 95% or higher success rate. Deep integrations with developer tools, from Visual Studio Code to modern AI coding assistants, further the company’s goal of embedding security seamlessly into everyday development workflows.

Sonatype

Sonatype focuses on helping enterprise software development teams make safe and productive use of open source and AI. The Sonatype platform delivers deep intelligence about open-source components, helping organizations identify available components, assess their risks and quality, and integrate that insight directly into developer workflows. This enables teams to make informed, automated decisions about the open-source libraries and AI models they incorporate into their software supply chains.

Sonatype says its core differentiator is the breadth and accuracy of its data. The company maintains large databases of open-source components and open-source malware. It strives to detect intentionally malicious packages, not just vulnerable ones, and employs a “human in the loop” approach, pairing AI/ML-powered analysis pipelines with a “world-class” open-source security research team. As steward of Maven Central and inventor of Nexus Repository, Sonatype holds a unique position in monitoring Java ecosystem activity and managing binary artifacts at scale.

At Black Hat 2025, Sonatype highlighted new capabilities for detecting and governing AI usage in the software supply chain. Built into the flagship Sonatype Lifecycle product, the company’s AI software composition analysis can identify AI model integrations, including “shadow AI” such as derivative models retrained by developers, providing visibility and policy controls for safe AI adoption. Another recent innovation is “golden versions,” in which Sonatype analyzes both direct and transitive dependencies and recommends upgrades that are backward-compatible, allowing developers to upgrade without fear of breaking builds or introducing risk. With increasing volumes of code being generated by AI, Sonatype’s accurate data sets, automation, and extensive integrations aim to help enterprises streamline development while maintaining security and compliance.

Clearly, application and supply chain security platforms are evolving quickly under the influence of AI. From autonomous remediation and context-driven prioritization to governance over AI models themselves, the common thread is that security is no longer being bolted on at the end but is increasingly being built in from the start. As devops and devsecops practices mature, the role of AI in software development is expanding. AI isn’t just an accelerator; it’s becoming the foundation for how modern software is secured.
https://www.infoworld.com/article/4047160/8-vendors-bringing-ai-to-devsecops-and-application-securit...