At best, such inadvertent errors will require extra time for developer teams to fix.
If organizations want to take advantage of AI to optimize the software development lifecycle, they must first give their teams suitable training to manage the risk of something going wrong.
While generative AI and large language models could be a productivity boon for stretched developer teams, the technology has also been seized on by those with nefarious intent.
Their developers claim these and other tools can help write malware, create hacking tools, find vulnerabilities, and craft grammatically convincing phishing emails.
Even after the AI was explicitly requested to correct the code, it did so as directed in only seven cases.
The researchers posited that, if a hacker did the same probing, they could publish a real open-source package under the same name as a hallucinated response, directing unwitting users to malicious code.
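A simple defensive habit follows from this attack: treat any dependency name an AI assistant suggests as unverified until it appears on a list your team has already reviewed. The allowlist and package names below are hypothetical, a minimal sketch of that check rather than a complete supply-chain control:

```python
# Hedged sketch: screen AI-suggested dependency names against a reviewed
# allowlist before installing anything. In practice the allowlist would be
# your pinned, audited requirements file, not a hard-coded set.

VETTED_PACKAGES = {"requests", "flask", "cryptography"}  # hypothetical allowlist

def is_vetted(package_name: str) -> bool:
    """Return True only if the package is on the reviewed allowlist."""
    return package_name.lower() in VETTED_PACKAGES

# A hallucinated or typosquatted name such as "reqeusts-oauthlib2"
# is rejected; only the known package passes.
suggestions = ["requests", "reqeusts-oauthlib2"]
approved = [p for p in suggestions if is_vetted(p)]
```

The point of the sketch is the workflow, not the code: unknown names go to a human for review instead of straight into `pip install`.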
In other words, the code and data used to train the model were of poor quality in the first place.
It's proof, if any were needed, that many developers produce vulnerable code.
Better training is required so that teams relying on generative AI are more capable of spotting these kinds of mistakes.
If done well, it would also arm them with the knowledge needed to be able to use AI models more effectively.
Training should reach everyone with a role in the SDLC, including QA, UX, and project management teams, as well as developers. Programs should focus on rewarding excellence, so that security champions emerge who can organically influence others.
One of the most effective ways to use AI to produce secure results is to minimize the scope of each task you give it. When we ask AI to help us write code, it should be for very small tasks that are easy to understand and quick to evaluate for security.
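As an illustration of what "small and easy to evaluate" means in practice, here is a hedged sketch: a hypothetical AI-suggested query built with string interpolation, caught in review and replaced with a parameterized version. The schema and names are invented for the example:

```python
import sqlite3

# A hypothetical AI suggestion, small enough to review at a glance:
#   cur.execute(f"SELECT id, name FROM users WHERE name = '{name}'")
# String interpolation like this is vulnerable to SQL injection.
# The reviewed version binds the value as a parameter instead:

def find_user(conn: sqlite3.Connection, name: str):
    """Look up users by exact name using a parameterized query."""
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# The classic injection payload is treated as a literal string and matches nothing.
rows = find_user(conn, "alice' OR '1'='1")
```

Because the task is tiny, a reviewer can spot the interpolation flaw in seconds; the same flaw buried in a 300-line AI-generated module is far easier to miss.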
Someday AI might be able to write code for us, but today it works much better as a reference to help us when we are stuck rather than a tool that can produce secure code for us.
Yes, it can be a useful resource, but only if treated as the fallible coding partner it often is.
Faster coding isn't better if it comes with bugs.
Michael is an Ex-Army Green Beret turned application security engineer.
In his civilian career, he is the Director of Application Security and content team lead for Security Journey, a SaaS-based application security training platform.
He leverages his security knowledge and experience as a developer to educate and challenge other developers to be a part of the security team.
Published on www.cyberdefensemagazine.com, Wed, 03 Jan 2024 07:13:06 +0000.