By Steven Zimmerman, Sr. Solutions Manager, Black Duck
Across industries and localities, each year brings innovations in software development and application security. Many advancements are embraced; others are embargoed. Often, for those tasked with application security and DevSecOps, the determination comes down to how naturally new technologies and processes fit into existing workflows.
Black Duck’s ‘2024 Global State of DevSecOps’ report examines the perspectives of security teams, developers, DevOps professionals, and executives in the areas of DevSecOps efficiency, security testing quality, and the rapid emergence of AI-assisted development. This year, despite continued momentum toward the consolidation of AppSec tools, a notable 48% of organizations still use between 11 and 20 security testing tools. And while automated mechanisms for adding applications to the test queue are increasingly popular (38% of respondents say all projects are added automatically), an alarmingly large constituency (29%) says all projects are added manually.
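To make that concrete, automated enrollment can be as simple as reconciling the source control inventory with the scan queue on a schedule. The sketch below is a minimal illustration in Python: the GitHub org-listing endpoint is real, but `SCAN_QUEUE_URL` and its enqueue payload are hypothetical stand-ins for whatever internal service or AppSec platform actually schedules the scans.

```python
"""Sketch: auto-enroll every repository in an org into a security test queue.

The GitHub org-listing API is real; SCAN_QUEUE_URL and its payload are
hypothetical placeholders for an internal scan-scheduling service.
"""
import os
import requests

GITHUB_ORG = os.environ.get("GITHUB_ORG", "example-org")  # placeholder org
SCAN_QUEUE_URL = os.environ.get(
    "SCAN_QUEUE_URL", "https://scans.internal/api/enqueue"  # hypothetical endpoint
)

def list_org_repos(org: str) -> list[dict]:
    """Page through the GitHub REST API for all repositories in an organization."""
    repos, page = [], 1
    while True:
        resp = requests.get(
            f"https://api.github.com/orgs/{org}/repos",
            params={"per_page": 100, "page": page},
            headers={"Accept": "application/vnd.github+json"},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            return repos
        repos.extend(batch)
        page += 1

def enqueue_all(repos: list[dict]) -> None:
    """POST each repo to the (hypothetical) scan queue so nothing is added by hand."""
    for repo in repos:
        requests.post(
            SCAN_QUEUE_URL,
            json={"repo": repo["full_name"], "clone_url": repo["clone_url"]},
            timeout=30,
        ).raise_for_status()

if __name__ == "__main__":
    enqueue_all(list_org_repos(GITHUB_ORG))
```

Run on a nightly schedule, a job like this keeps the test queue synchronized with the repository inventory instead of relying on developers to register projects manually.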
In prior years, this fact would have had implications solely for workflow efficiency amid the shift to DevSecOps. But in 2024, AI-generated code and AI-assisted development widened the gap between security and development teams. Now, software development pipelines are flowing faster than security testing pipelines, and something’s about to burst.
Over 90% of respondents use AI tools in some capacity for software development. Because AI tools may generate insecure code or reference vulnerable third-party libraries, this figure may indicate an expanding attack surface, one obscured by developers’ lack of insight into the structure and rationale of the code these tools produce.
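One concrete countermeasure is to vet any third-party library an assistant suggests against a public vulnerability database before it is merged. The sketch below uses the real OSV.dev query API; the `known_vulns` helper and the sample package list are illustrative, not drawn from the report.

```python
"""Sketch: screen dependencies suggested by an AI assistant against OSV.dev.

The OSV query API (https://api.osv.dev/v1/query) is real; the package
versions below are illustrative examples, not report data.
"""
import requests

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return OSV vulnerability IDs affecting this exact package version."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"version": version, "package": {"name": name, "ecosystem": ecosystem}},
        timeout=30,
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

# Example: vet a dependency list an assistant produced before accepting it.
suggested = [("jinja2", "2.4.1"), ("requests", "2.32.3")]
for name, version in suggested:
    ids = known_vulns(name, version)
    status = f"REJECT ({', '.join(ids)})" if ids else "ok"
    print(f"{name}=={version}: {status}")
```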
An optimistic interpretation of Black Duck’s research is that the 43% of respondents who permit only certain developers or teams to use AI tools to write code represent a methodical, phased adoption, with AppSec teams preparing their security gates accordingly. Yet this year’s data also evokes the adage that it is better to ask forgiveness than permission, with 21% of respondents acknowledging that developers use AI tools despite an organizational embargo on them.
Examining AI-assisted development through a security lens, 85% of respondents attest that they have some measures in place to address the challenges posed by AI-generated code. That is encouraging, but when asked about their confidence in their preparedness, less than a quarter (24%) were very confident in their policies and processes for testing such code. Furthermore, 20% were only slightly confident, and 6% not at all confident, in their preparedness to manage and secure AI-generated code.
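In practice, a policy for testing AI-generated code often reduces to a gate in the pipeline. The following is a minimal sketch of one such gate: it assumes a hypothetical `findings.json` emitted by whichever scanner the pipeline runs, with a blocking-severity threshold that each organization would set as policy.

```python
"""Sketch: a CI merge gate over security scan output.

Assumes a hypothetical findings.json produced by the pipeline's scanner,
with entries like {"id": ..., "severity": ..., "file": ...}. A nonzero
exit code blocks the merge.
"""
import json
import sys

BLOCKING_SEVERITIES = {"critical", "high"}  # example policy, set per organization

def gate(findings_path: str) -> int:
    """Return 1 (block) if any finding meets the blocking threshold, else 0."""
    with open(findings_path) as f:
        findings = json.load(f)
    blockers = [x for x in findings
                if x.get("severity", "").lower() in BLOCKING_SEVERITIES]
    for x in blockers:
        print(f"BLOCKING: {x.get('id', '?')} ({x['severity']}) in {x.get('file', '?')}")
    print(f"{len(blockers)} blocking finding(s) out of {len(findings)} total")
    return 1 if blockers else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "findings.json"))
```

Because the gate runs on every push, it applies the same scrutiny to AI-generated code as to anything written by hand, which is precisely the point.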
The most concerning insight comes from those who permit all developers to use AI tools while claiming slight confidence (18%) or no confidence (4%) in their ability to secure AI-generated code. This group, totaling 6% of all respondents, seemingly prioritizes development speed over application security. A deeper breakdown of this data, comparing respondents’ permission and use of AI-assisted development tools against their confidence in their security measures, is available in the complete report.
As organizations continue to sculpt their DevSecOps programs with AI-assisted development in mind, it’s important to emphasize both testing coverage and the actionability of results. After all, faster development schedules and more frequent code pushes mean the task of fixing detected issues must also be compressed, without sacrificing efficacy.
This year’s respondents illustrate that the oft-used metric of testing efficiency may be a superficial one, artificially bolstered by reducing the test workload. A dispiriting 5% of respondents include no more than 20% of their projects and repositories in their testing queue. Additionally, 65% of respondents include up to 60% of these artifacts in their testing queue, while only 9% claim coverage approaching 100%. As AI accelerates development, it’s clear that AppSec teams have room to improve by extending their increased focus on test automation across every pipeline, repo, and project.
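Measuring that coverage gap is straightforward once both inventories exist. The sketch below is purely illustrative: the repository names are invented, and in practice the two sets would come from the SCM API and the scan platform rather than hard-coded literals.

```python
"""Sketch: measure security-testing coverage as scanned repos / total repos.

Both inventories are placeholders; in practice they would come from the
SCM API and the scan platform's API respectively.
"""

def coverage(all_repos: set[str], scanned_repos: set[str]) -> float:
    """Fraction of known repositories that appear in the test queue."""
    return len(all_repos & scanned_repos) / len(all_repos) if all_repos else 0.0

all_repos = {"payments", "web-frontend", "auth", "etl", "mobile-api"}  # from the SCM API
scanned = {"payments", "auth"}  # from the scan platform

print(f"coverage: {coverage(all_repos, scanned) * 100:.0f}%")
print("untested:", sorted(all_repos - scanned))
```

Tracked over time, this single ratio exposes whether automation efforts are actually widening coverage or merely speeding up scans of the same few repositories.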
Despite this heavy skew toward lower security testing coverage, 20% of respondents find the results of such tests extremely easy to understand and act upon, bolstering confidence in those organizations’ ability to secure AI-generated code at a speed commensurate with development. In fact, 72% find security test results at least somewhat easy to understand and act upon.
Action, however easy it may be, cannot be effective if it is based on spurious data. This is the “noise” factor: the static in the system, the distracting false positives, duplicates, and conflicting findings among the potentially thousands of security risks flagged by AppSec tests.
This year, 78% of respondents reported that over 20% of their results are noise, a characteristic that ultimately undermines both the efficiency and efficacy of triage and remediation. And while a potentially pessimistic 3% assert that up to 100% of their findings may be noise, it’s clear that the 75% who find up to 60% of results to be noise could benefit from greater correlation of results, refined AppSec policies, and automated issue verification.
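Correlation, at its simplest, means recognizing that two tools flagged the same underlying issue. The sketch below groups findings by a naive (rule, file, line) fingerprint; the finding structure and sample data are hypothetical, and real correlators use richer fingerprints such as code hashes or data-flow traces, but the deduplication principle is the same.

```python
"""Sketch: correlate findings from multiple tools to cut duplicate noise.

The finding structure and sample data are hypothetical. Two findings are
treated as the same issue when rule, file, and line all match.
"""
from collections import defaultdict

findings = [  # illustrative output from two overlapping tools
    {"tool": "sast-a", "rule": "sql-injection", "file": "db.py", "line": 42},
    {"tool": "sast-b", "rule": "sql-injection", "file": "db.py", "line": 42},
    {"tool": "sast-a", "rule": "hardcoded-secret", "file": "cfg.py", "line": 7},
]

def correlate(items: list[dict]) -> dict[tuple, list[dict]]:
    """Group findings by a (rule, file, line) fingerprint."""
    groups = defaultdict(list)
    for f in items:
        groups[(f["rule"], f["file"], f["line"])].append(f)
    return groups

groups = correlate(findings)
dupes = len(findings) - len(groups)
print(f"{len(findings)} raw findings -> {len(groups)} unique issues ({dupes} duplicates removed)")
for key, members in groups.items():
    print(key, "seen by", sorted({m["tool"] for m in members}))
```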
All things considered, Black Duck’s 2024 Global State of DevSecOps report illustrates that a plurality of organizations have laid the foundation of DevSecOps and must now refine their initiatives to expand coverage and reduce distraction for everyone involved. As AI-generated code floods pipelines, automation and closed feedback loops between security and development teams are key to building secure, high-quality software faster and to accelerating security to match the speed of AI.