As artificial intelligence (AI) takes on a larger role in software development with each new generative advance, the cybersecurity battlefield is changing rapidly. Projections estimate that by 2025, AI will be responsible for creating more than 50 percent of all new code. While this shift brings clear benefits, it also raises troubling questions about security vulnerabilities and the risk of exploitation. Between recent legislative action and the breakneck pace of technological advancement, there is new urgency to build strong security for the future. Given these realities, organizations need to take a different approach to software security.
AI-generated code, while efficient, poses unique risks. When AI generates code without the context of the finished product or broader security expertise, it is prone to introducing security holes into an application. Rubi Arbel, chief executive of Scribe Security, emphasizes this concern: “AI multiplies risk when it generates code without product context or security expertise. When you feed AI with signed, contextual evidence and you automate it into the SDLC, it can multiply the speed of remediation.”
The Evolving Role of AI in Software Development
Beyond efficiency, integrating AI into the software development life cycle (SDLC) opens significant new opportunities for improved security. With attackers capable of exploiting new vulnerabilities within hours rather than weeks, organizations must adopt proactive measures. For too long, software security has been largely reactive, waiting for a breach before acting. By using AI to automate risk and security workflows, organizations can address potential risks before they turn into a crisis.
Today’s development pipelines are incredibly complex, frequently spanning multiple repositories, cloud environments, and third-party components. That structural complexity demands evidence that is collected continuously and automatically at every phase of development. Without it, no organization can credibly claim to know what is actually running in its production systems.
The global cybersecurity market reflects this intense demand for accountability and security: industry analysts predict it will surpass $350 billion by 2030. Regulatory frameworks are likewise catching up. In the United States, federal contractors are now required by law to provide Software Bills of Materials (SBOMs), documents that list the specific components used in their software. Japan’s recent legislative efforts are applying similar pressure for accountability across software supply chains.
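To make the idea concrete, the sketch below shows roughly what a minimal SBOM might look like when produced as part of a build. It follows the general shape of the CycloneDX JSON format; the component names, versions, and package URLs are purely illustrative, not taken from any real project.

```python
import json

# Minimal sketch of an SBOM in the CycloneDX JSON style.
# The components listed here are placeholders, not a real inventory.
components = [
    {"type": "library", "name": "requests", "version": "2.31.0",
     "purl": "pkg:pypi/requests@2.31.0"},
    {"type": "library", "name": "cryptography", "version": "42.0.5",
     "purl": "pkg:pypi/cryptography@42.0.5"},
]

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": components,
}

# Emit the SBOM so it can be attached to a release or handed to an auditor.
print(json.dumps(sbom, indent=2))
```

In practice, tooling generates this inventory automatically from lockfiles and build metadata rather than by hand, which is what makes the compliance requirement workable at scale.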
Legislative Measures Driving Accountability
At the same time, newer legislative efforts around the world are calling for stronger accountability and transparency in how digital infrastructure is built. The United States’ Executive Order 14028 and the European Union’s Cyber Resilience Act mandate verifiable software lineage, requiring organizations to maintain detailed records of their software components and their origins. Japan’s Active Cyberdefense Law continues this trend, pushing organizations toward a proactive approach to cybersecurity.
These measures signal a growing recognition of the need for transparency in software development. As organizations worldwide face greater scrutiny of their cybersecurity practices, compliance with these regulations is becoming a necessity. Ignoring these obligations can result in hefty fines, costly litigation, reputational damage, and eroded trust.
Arbel underscores the shifting landscape of trust in cybersecurity: “Trust is no longer a matter of branding or reputation. It is a matter of cryptographic proof collected as development happens.” This view highlights how important it is for organizations to embed verification practices at every stage of their development cycle.
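As a rough illustration (not Scribe Security’s actual tooling), the following Python sketch shows one way a pipeline step might record signed evidence as development happens: it hashes a build artifact, records the pipeline stage and a timestamp, and signs the record. A real pipeline would typically use asymmetric signing, such as keys held in a KMS or a keyless signing flow, rather than the shared-secret HMAC used here only to keep the sketch dependency-free.

```python
import hashlib
import hmac
import json
import time

def collect_evidence(artifact_path: str, stage: str, signing_key: bytes) -> dict:
    """Hash a build artifact and sign the record so it can be verified later.

    Sketch only: real systems generally use asymmetric signatures; HMAC is
    used here to keep the example runnable with the standard library alone.
    """
    with open(artifact_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    record = {
        "stage": stage,                 # e.g., "build", "test", "deploy"
        "artifact": artifact_path,
        "sha256": digest,
        "timestamp": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return record
```

A downstream verifier holding the same key (or the corresponding public key in an asymmetric scheme) can recompute the signature and confirm that neither the artifact nor its provenance record has been altered, which is the kind of cryptographic proof Arbel describes.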
Addressing the Skills Gap in Cybersecurity
As organizations strive to navigate the complexities of AI-generated code and evolving cybersecurity threats, they must contend with a significant workforce challenge. According to ISACA, there are about 3.4 million cybersecurity positions still open across the globe. This skills gap creates a daunting barrier to the adoption of truly effective cybersecurity practices.
By tapping into AI-driven workflows, organizations can augment their application security (AppSec) programs and better secure an ever-growing application landscape. Automating routine tasks, such as dependency checks, policy gates, and triage of scanner findings, creates more efficient workflows; a simple example of such a gate is sketched below. By embedding these capabilities, development teams can keep pace with the demands of accelerated application development.
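For instance, one routine task that is easy to automate is blocking builds that pin dependency versions with known vulnerabilities. The sketch below assumes a plain requirements.txt with exact pins and a hand-maintained advisory list; in a real pipeline that data would come from a vulnerability database or a software composition analysis tool.

```python
# Hypothetical CI gate: fail the build if any pinned dependency matches a
# known-vulnerable version. The advisory entries below are illustrative only.
import sys

KNOWN_VULNERABLE = {
    ("pyyaml", "5.3"),
    ("urllib3", "1.26.4"),
}

def parse_requirements(path: str) -> list:
    """Return (name, version) pairs for exact pins in a requirements file."""
    pins = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "==" in line:
                name, version = line.split("==", 1)
                pins.append((name.lower(), version))
    return pins

def main() -> int:
    flagged = [p for p in parse_requirements("requirements.txt") if p in KNOWN_VULNERABLE]
    for name, version in flagged:
        print(f"Vulnerable dependency pinned: {name}=={version}")
    return 1 if flagged else 0  # nonzero exit fails the pipeline step

if __name__ == "__main__":
    sys.exit(main())
```

Gates like this do not close the skills gap on their own, but they take repetitive checks off practitioners’ plates so that scarce security expertise can be spent on higher-value work.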
