A recent survey shows that untested software releases, rampant pushing of unvetted and uncontrolled AI-derived code, and poor developer security hygiene are combining to seriously expand security risk across software development.
Add in the explosion of low-code/no-code development and economic headwinds that are pressuring developers to deliver features with less support, and the AppSec world is in for a perfect storm in 2024.
Forward-thinking organizations recognize they need to mature their AppSec approach to keep pace with modern development and release practices.
Here's why a more holistic AppSec approach is key. It starts with the security hygiene developers practice daily.
About three-quarters of developers admit to circumventing security measures, for example by disabling multi-factor authentication or bypassing VPN requirements to speed up their work.
This report points to a huge need for security support in creating developer guardrails embedded in the CI/CD pipeline, so that developers can still move quickly but do so safely.
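One common form of pipeline guardrail is an automated check that blocks a commit or build when the diff introduces something dangerous, such as a hardcoded credential. The sketch below is purely illustrative and assumes nothing about any specific product; the patterns are a tiny sample, and real guardrail tools ship far more extensive rule sets.

```python
import re

# Illustrative secret patterns only; a production guardrail would
# carry a much larger, maintained rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_added_lines(diff_text: str) -> list[str]:
    """Return the names of secret patterns found in lines a diff adds."""
    findings = []
    for line in diff_text.splitlines():
        # '+' marks an added line; skip the '+++' file header
        if line.startswith("+") and not line.startswith("+++"):
            findings.extend(name for name, pat in SECRET_PATTERNS.items()
                            if pat.search(line))
    return findings

sample_diff = (
    "+++ b/config.py\n"
    "+AWS_KEY = 'AKIAABCDEFGHIJKLMNOP'\n"
    "-old_line"
)
print(scan_added_lines(sample_diff))  # → ['aws_access_key']
```

Wired into a pre-commit hook or a CI step that fails on any finding, a check like this stops the credential before it ever lands in source control, without requiring the developer to slow down or remember a manual step.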
Nir Valtman, founder of the software security firm Arnica, said the key is minimizing the attack surface by reducing permissions to source code, the place where the problem starts.
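In practice, reducing permissions to source code means periodically auditing who holds which repository role and flagging grants that exceed what the job requires. The sketch below is a hypothetical least-privilege check; the role names mirror common repository permission levels, but the users and data are made up for illustration.

```python
# Hypothetical role ladder, lowest to highest privilege; the names
# mirror common repository permission levels.
ROLE_RANK = {"read": 0, "triage": 1, "write": 2, "maintain": 3, "admin": 4}

def excessive_grants(granted_roles: dict, needed_roles: dict,
                     default: str = "read") -> list[tuple]:
    """Return (user, granted, needed) for users whose repo role
    exceeds what their work actually requires."""
    flagged = []
    for user, granted in granted_roles.items():
        needed = needed_roles.get(user, default)
        if ROLE_RANK[granted] > ROLE_RANK[needed]:
            flagged.append((user, granted, needed))
    return flagged

# Made-up example: alice and a CI bot hold more access than needed.
granted = {"alice": "admin", "bob": "write", "ci-bot": "write"}
needed = {"alice": "maintain", "bob": "write", "ci-bot": "read"}
print(excessive_grants(granted, needed))
# → [('alice', 'admin', 'maintain'), ('ci-bot', 'write', 'read')]
```

Running a report like this on a schedule, and trimming the flagged grants, shrinks the blast radius of any one compromised developer account.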
A big part of this holistic approach to curbing bad operational security is visibility.
Ideally, security teams should get developer buy-in for their approach.
Tools like GitHub Copilot and ChatGPT stand to greatly accelerate developer productivity, but using code produced through generative AI adds more to the risk equation.
In a recent Security Table Podcast, longtime AppSec veteran Jim Manico, founder of Manicode Security, explained the scenario succinctly.
"Using AI as a developer is necessary, because if you don't, your productivity is going to be one-third to one-fourth of your peers'. If you're using AI without security review, you're screwed in a bad way."
The Developers Behaving Badly report found that most developers are failing to do that review.
Holistic AppSec programs will need the policies, developer education, tooling, and security guardrails necessary to meet these AI risks head-on, because with tools like GitHub Copilot it is inevitable that generative AI becomes embedded in developer workflows.
Speaking of inevitability, another huge risk looming for organizations is low-code/no-code development environments, for professional developers and citizen developers alike.
The issue didn't make it into the Developers Behaving Badly survey, but combined with generative AI it is poised to make the number of applications needing security scrutiny mushroom.
Just as with the rest of their development environments, modern AppSec teams will need to start building automated guardrails and testing into low-code/no-code development in order to attain holistic AppSec.
This is a Security Bloggers Network syndicated blog from ReversingLabs Blog authored by Ericka Chickowski.
This article was published on securityboulevard.com on Thu, 07 Dec 2023 13:43:04 +0000.