The AI era is set to bring significant change to technology and information security.
To guide the development and deployment of AI tools in a way that embraces their benefits while safeguarding against potential risks, the US government has outlined a set of voluntary commitments they are asking companies to make.
These include developing tools to identify whether content is AI-generated and prioritizing research on ways AI could cause harm at a societal level so those harms can be mitigated.
Responsible AI development and deployment will require close collaboration between industry leaders and the government.
To advance that goal, Google, along with several other organizations, partnered to host a forum in October to discuss AI and security.
There, we discussed a new Google report focused on AI in the US public sector, "Building a Secure Foundation for American Leadership in AI." The whitepaper highlights how Google has already worked with government organizations to improve outcomes, accessibility, and efficiency.
The report advocates for a holistic approach to security and explains the opportunities a secure AI foundation will provide to the public sector.
The Potential of Secure AI

Security can often feel like a race, as technology providers need to consider the risks and vulnerabilities of new developments before attacks occur.
Since publicly available AI tools are still in their early days, organizations can establish safeguards and defenses before AI-enhanced threats become widespread. That window of opportunity won't last forever, however.
The threat of AI-powered social engineering attacks and maliciously manipulated images and video will only grow more pressing as the technology advances. That is why AI developers must prioritize the trust tools outlined in the White House's voluntary commitments.
AI is already transforming how people learn and build new skills, and the responsible use of AI tools in both the public and private sectors can significantly improve worker efficiency and outcomes for end users.
Google has been working with US government agencies and related organizations to securely deploy AI in ways that advance key national priorities.
Three Key Building Blocks for Secure AI

At the October forum, Google presented three organizational building blocks for maximizing the benefits of AI tools in the US. First, it's essential to understand how threat actors currently use AI capabilities and how those uses are likely to evolve.
Second, organizations should deploy secure AI systems.
This can be achieved by following guidelines such as the White House's recommendations and Google's Secure AI Framework (SAIF). SAIF comprises six core elements, including deploying automated security measures and creating faster feedback loops for AI development.
Finally, security leaders should take advantage of the many ways AI can enhance security. AI technologies can simplify security tools and controls while making them faster and more effective, all of which will help defend against the potential increase in adversarial attacks that AI systems may enable.
These three building blocks can form the basis for the secure, effective implementation of AI technologies across American society.
As AI development leaders and government officials continue working together, we will all benefit from the enhancements that safe and trustworthy AI systems bring to the public and private sectors.