Given how dangerous the gold rush was and how long it took to incorporate safety measures, the time is now for organizations using GenAI to adopt secure-by-design principles and follow CISA's example.
Beyond writing faux movie scripts and passing school exams, GenAI is projected to add as much as $4.4 trillion annually to the global economy.
The hype surrounding its potential is real, but it has also created a problematic environment in which go-to-market timelines and cost efficiency seemingly take precedence over safety and security.
A Fastly research report found more than two-thirds of IT decision-makers believe GenAI will open new attack avenues, while nearly half are concerned about their inability to defend against AI-enabled threats.
Nation-state adversaries could use GenAI to target U.S. critical infrastructure sites, such as electric grids, water treatment plants and healthcare facilities, putting lives at risk.
We're in a race against the clock to put stronger parameters in place that facilitate secure AI systems and foster a safer future.
CISA's roadmap for AI lays out goals to help get there, including the following:

- Facilitate the adoption of secure-by-design principles to drive safe AI software development and implementation across the public and private sectors.
- Coordinate with international partners to advance global AI security best practices, and ideate effective policy approaches for the U.S. government's national AI strategy.
There isn't a straightforward way to execute these goals at scale, but it starts with ensuring AI system developers weigh security objectives and business objectives equally.
The security challenges associated with AI parallel cybersecurity challenges associated with previous generations of software that manufacturers did not build to be secure by design, putting the burden of security on the customer.
Although AI software systems might differ from traditional forms of software, fundamental security practices still apply.
As the use of AI grows and becomes increasingly incorporated into critical systems, security must be a core requirement and integral to AI system development from the outset and throughout its lifecycle.
Implemented during the early stages of product development, secure-by-design principles help reduce an application's attack surface before it is made available for broad use - making the security of the customer a core business requirement rather than a technical feature.
The larger challenge is that, in addition to assuring AI systems, we also must protect everything AI is capable of touching - critical infrastructure and private networks alike.
Secure by design must be implemented through the lens of AI alignment, ensuring systems are built to uphold fundamental human values and ethical boundaries.
Beyond danger to human life, failing to prioritize safe and secure AI systems could have legal consequences for AI system developers.
We're seeing a similar trend across cybersecurity amid new federal regulations, with the Securities and Exchange Commission recently issuing fraud charges against SolarWinds and its CISO for allegedly concealing cyber-risk from investors and customers.
The rise of GenAI in 2023 showed how much can change in a year.
While we can't predict where the AI era is headed, a steadfast commitment to facilitating safe and secure systems is paramount to navigating it safely.
By following CISA's roadmap and blending secure by design with AI alignment throughout the development lifecycle, we can take proactive steps to ensure AI remains a force for good.
This Cyber News was published on www.techtarget.com. Publication date: Fri, 02 Feb 2024 20:43:05 +0000