Google is making moves to help developers ensure that their code is secure.
The IT giant this week said it is donating $1 million to the Rust Foundation to improve interoperability between the Rust programming language and legacy C++ codebases, in hopes of getting more developers to make the shift to Rust.
The donation supports the foundation's new Interop Initiative to expand interoperability between the two languages and make it easier for programmers to adopt Rust, one of a number of newer languages - like Go, Python, and C# - that manage memory safely to reduce the number of vulnerabilities in software.
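To illustrate the kind of bridge the Interop Initiative aims to make easier, the sketch below exposes a Rust function over the C ABI so that existing C or C++ code can call it. The function name and logic here are illustrative assumptions, not part of the initiative itself.

```rust
// Sketch: exposing a Rust function over the C ABI so legacy C++ code
// can call it. `checked_sum` is a hypothetical example function.

/// Bounds-checked byte sum, callable from C or C++ as:
///   extern "C" uint64_t checked_sum(const uint8_t *data, size_t len);
#[no_mangle]
pub extern "C" fn checked_sum(data: *const u8, len: usize) -> u64 {
    if data.is_null() {
        return 0;
    }
    // The unsafe block is confined to the FFI boundary; everything
    // after it runs as ordinary safe Rust.
    let slice = unsafe { std::slice::from_raw_parts(data, len) };
    slice.iter().map(|&b| b as u64).sum()
}

fn main() {
    // Exercising the function from Rust the same way C++ would.
    let bytes = [1u8, 2, 3, 4];
    println!("{}", checked_sum(bytes.as_ptr(), bytes.len())); // prints 10
}
```

In practice, projects generate the matching C++ header with tools such as cbindgen rather than writing it by hand, which is part of what makes interop tooling worth funding.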
The donation to the Rust Foundation comes a week after Google said it was releasing its AI-based fuzzing framework as an open source resource.
The tool uses large language models to help developers more quickly find vulnerabilities in their C and C++ projects.
In the fuzzing framework announcement, members of Google's security teams wrote that they also would show developers and researchers how they are using AI to accelerate the process of patching those vulnerabilities.
CISA in December urged software makers to adopt newer memory-safe languages like Rust and create roadmaps for moving away from C and C++. In a report, the agency said such a shift would not only eliminate many of the most common classes of vulnerabilities but also move the responsibility for software security from users to developers, a change CISA is promoting.
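The vulnerability class CISA is referring to can be shown in a few lines. In C or C++, an out-of-range read is undefined behavior that can leak or corrupt memory; in a memory-safe language the same access is either a caught error or an explicit "not there" value. A minimal Rust sketch, with hypothetical names:

```rust
// Sketch of what "memory safe" buys: an out-of-range read that would be
// undefined behavior in C/C++ becomes an explicit `None` in Rust.

fn read_field(packet: &[u8], offset: usize) -> Option<u8> {
    // `.get()` bounds-checks the access; there is no silent over-read.
    packet.get(offset).copied()
}

fn main() {
    let packet = [0x01u8, 0x02, 0x03];
    println!("{:?}", read_field(&packet, 1)); // Some(2)
    println!("{:?}", read_field(&packet, 9)); // None, not a buffer over-read
}
```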
Google joined the foundation in 2021, by which time the language was already being used in Android and other Google products, Bergstrom wrote in a blog post stressing the need for memory safety.
Fuzzing is an automated process for testing software for vulnerabilities, and Google has been using its OSS-Fuzz tool since 2016.
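The core loop of fuzzing can be sketched in a few dozen lines. This is a toy illustration, not OSS-Fuzz: it throws pseudo-random inputs at a deliberately buggy parser and records the first input that crashes it. All names are hypothetical, and in Rust the bug surfaces as a caught panic, where the same off-by-one in C or C++ could be silent memory corruption.

```rust
// Toy fuzzer: random inputs against a buggy target, crash inputs saved.
use std::panic;

/// Buggy target: uses the first byte as an index without a bounds check.
fn buggy_parse(data: &[u8]) -> u8 {
    if data.is_empty() {
        return 0;
    }
    data[data[0] as usize] // panics when data[0] >= data.len()
}

/// Minimal xorshift PRNG so the sketch needs no external crates.
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

/// Returns the first crashing input found, if any.
fn fuzz(iterations: u32) -> Option<Vec<u8>> {
    panic::set_hook(Box::new(|_| {})); // silence panic messages while probing
    let mut state = 0x1234_5678_u64;
    for _ in 0..iterations {
        let len = (xorshift(&mut state) % 8 + 1) as usize;
        let input: Vec<u8> = (0..len).map(|_| xorshift(&mut state) as u8).collect();
        let probe = input.clone();
        if panic::catch_unwind(move || buggy_parse(&probe)).is_err() {
            return Some(input); // crash reproducer for triage
        }
    }
    None
}

fn main() {
    match fuzz(1000) {
        Some(input) => println!("crash reproducer: {:?}", input),
        None => println!("no crash found"),
    }
}
```

Real fuzzers like OSS-Fuzz add the pieces that matter at scale: coverage feedback to guide mutation, corpus management, and sanitizers that turn silent memory errors into detectable crashes.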
Google used LLMs to write project-specific code to boost coverage and find more vulnerabilities, the security team members wrote.
Google has applied LLMs to more than 300 OSS-Fuzz C and C++ projects, expanding coverage across their codebases; improvements to prompt generation and build pipelines further increased line coverage by up to 29% in 160 projects.
Now Google is turning AI to bug fixing, recently announcing an experiment that included building an automated pipeline that takes in vulnerabilities - including those found by fuzzing - and prompts LLMs to generate and test fixes before choosing the best one for human review.
AI-powered patching fixed 15% of the bugs, which translated into significant time savings for engineers, according to Google, which added that the approach should benefit most steps of the software development process.
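The shape of such a repair pipeline is simple to sketch. In the hedged example below, candidate fixes are hard-coded stand-ins for LLM output, each is run against a test suite, and only a passing candidate is forwarded for human review; every name and candidate here is an illustrative assumption, not Google's actual pipeline.

```rust
// Sketch of an automated repair pipeline: test each candidate fix,
// forward the first one that passes for human review.

/// A candidate fix is modeled as a label plus a function the pipeline can test.
type Candidate = (&'static str, fn(&[u8]) -> usize);

/// The "test suite": a correct fix must report a slice's length.
fn passes_tests(fix: fn(&[u8]) -> usize) -> bool {
    fix(&[]) == 0 && fix(&[1, 2, 3]) == 3
}

/// Returns the label of the first candidate that passes, if any.
fn select_fix(candidates: &[Candidate]) -> Option<&'static str> {
    candidates
        .iter()
        .find(|(_, fix)| passes_tests(*fix))
        .map(|(name, _)| *name) // this one goes to a human reviewer
}

fn main() {
    let candidates: &[Candidate] = &[
        ("patch-1", |d: &[u8]| d.len() + 1), // still buggy: off by one
        ("patch-2", |d: &[u8]| d.len()),     // correct
    ];
    println!("{:?}", select_fix(candidates)); // Some("patch-2")
}
```

The human-review step at the end matters: automated tests catch regressions, but a person still judges whether the patch addresses the root cause rather than masking it.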
The open sourcing of the fuzzing framework means that any researcher or developer can use their own prompts to test how well LLM-generated fuzz targets - including those from Google's VertexAI - fare.
Those interested in the use of LLMs to patch bugs can read Google's paper about it.
This Cyber News was published on securityboulevard.com. Publication date: Wed, 07 Feb 2024 19:13:04 +0000