CAMBRIDGE, MASS. - As AI tools and systems have proliferated across enterprises, organizations are increasingly weighing the value of these tools against the security risks they might pose.
At the 2024 MIT Sloan CIO Symposium this week, industry leaders discussed the challenge of balancing AI's benefits with its security risks.
Increasingly, that challenge centers on AI integration, which can deliver business value but also introduce unforeseen risks.
On the security front, AI tools can quickly analyze and detect potential risks, Wheatman said.
Despite these reservations about generative AI in particular, Siddiqui noted, many cybersecurity tools currently in use already incorporate some type of machine learning.
Due to the technology's various benefits, businesses are increasingly using AI - including generative AI - in their workflows, often in the form of third-party or open source tools.
The alternative - building custom LLMs and other generative AI tools in house - is currently less widely adopted among enterprises.
Regardless of whether an organization chooses a custom or third-party option, AI tools introduce new risk profiles and potential attack vectors, such as data poisoning, prompt injection and insider threats.
The widespread availability of AI tools also means that external bad actors can use AI in unanticipated and harmful ways.
Threats from bad actors are even more concerning when cybersecurity teams aren't well versed in AI - one of the many AI-related risks that organizations are starting to address.
As AI becomes integral to business operations, the key is not to avoid the technology, but to deploy it in a way that balances its benefits with acceptable risk levels.
Developing a plan for AI cyber resilience in the enterprise requires comprehensive risk evaluation, cross-team collaboration, internal policy frameworks and responsible AI training.
Organizations should evaluate the value that a new AI tool or system could offer the business, then compare that value with the potential risks.
In particular, prioritizing tangible risks over more theoretical threats can help companies efficiently assess their situation and move forward.
Brown raised a similar point, explaining that teams across a wide range of functions - from cybersecurity to risk management to finance and HR - need to participate in risk evaluation.
Bringing these different perspectives together can shore up organizational defenses and ensure that everyone knows which AI tools and systems are being brought into the organization.
Companies need strict plans in place to educate their employees and other users on how to use AI responsibly: with a healthy dose of skepticism and a strong understanding of the ethical issues raised by AI tools.
In practice, models can propagate biases present in their training data, resulting in harmful outcomes for marginalized communities and adding a new dimension to an AI tool's risk profile.
Organizations need to train their employees and technology users to approach any AI use with skepticism and always verify a tool's output, rather than relying solely on an AI system.
When businesses put in the necessary time, effort and budget to protect against AI cybersecurity risk, they'll be better positioned to reap the technology's rewards.