In 2022, a surge of AI-based coding assistants revolutionized the software development landscape.
Even though organizations everywhere are adopting AI-based coding tools, a tug-of-war persists inside those organizations between the benefits of AI-based software development and fears about its security.
At the same time, 59% of developer respondents said they are concerned that AI tools will introduce security vulnerabilities into their code.
The two sentiments appear contradictory, but they accurately reflect the current tension introduced by this new generation of coding assistants.
In other words, despite the tangible benefits of AI in coding, software developers, engineering teams and security teams still don't fully trust AI tools.
Although developers fear the potential vulnerabilities of AI-based coding tools, previous research has not conclusively proven or disproven that AI coding assistants introduce security flaws.
In an August 2022 study, New York University researchers found that using LLMs for coding assistance did not introduce significantly more errors among student programmers than traditional coding methods.
Researchers looked at the hourly coding output of over 8,000 professional GitHub users in Italy and other European countries and found that the output of Italian developers dropped by around 50% in the first two business days after Italy's 2023 ChatGPT ban, then rebounded to previous levels shortly after.
AI-generated and human-generated code are processed in the same way by a computer, so all security tooling and approaches work equally well on both.
Automate: AI coding assistants such as CodeWhisperer and Copilot can speed up development.
To keep up with the increased pace, these assistants should be paired with comprehensive security automation tools that keep code flowing through the development process secure.
Early studies have shown that developers using these tools may increase their code output by as much as 50%, a volume that will push AppSec teams beyond their capacity and make automation the only viable option.
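As a minimal sketch of what that automation can look like, the script below gates a CI job on a static security scan. It assumes Bandit, an open source Python SAST tool, is installed (`pip install bandit`); the scanned path and exit handling are illustrative, and any scanner that fails the build on findings would serve the same role.

```python
"""CI gate: fail the build if a static security scan reports findings."""
import subprocess
import sys


def run_security_scan(path: str = "src") -> int:
    # Bandit walks the tree recursively (-r) and exits nonzero when it
    # finds issues, so returning its exit code fails the CI job on findings.
    result = subprocess.run(["bandit", "-r", path])
    return result.returncode


if __name__ == "__main__":
    sys.exit(run_security_scan())
```

Because the scanner only sees source text, it makes no distinction between AI-generated and human-written code; the same gate covers both.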
Putting human eyes on the code is critical for security review and spotting problems that LLMs may introduce.
If a team uses AI to increase efficiency and ship more code more frequently, that should free up additional resources for more frequent and intense code audits.
Granted, code audits are not a favorite task of many developers, but the increased review process should have the secondary benefit of shifting security left naturally.
Educate: Have a plan to teach developers about secure coding with the new AI tools.
As the Stanford study showed, better prompts focused on secure coding practices can yield more secure code.
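As a hypothetical illustration of what that education targets, compare the query a naive prompt might produce with the parameterized version that a prompt explicitly asking for secure coding practices should yield:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")


def find_user_insecure(name: str):
    # What an assistant may emit from a naive prompt: string interpolation
    # lets a crafted `name` value alter the query (SQL injection).
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()


def find_user_secure(name: str):
    # What a security-focused prompt should yield: the driver binds
    # `name` as data, never as SQL, closing the injection path.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```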
AI introduces layers of abstraction that can mask code issues.
Making AI-enabled coding more secure is a manageable problem.
The way to more secure code is through more automation, more human-in-the-loop code review and a continued shift of code security left.