Today's chief information security officers (CISOs) face new cybersecurity challenges because of the increasing use of artificial intelligence, particularly generative AI (GenAI). The trend is clear in the workplace: last year, two-thirds of organizations reported that they were already beginning to use GenAI, and only 3% of enterprises had no plans to adopt it.
AI has become a double-edged sword for cybersecurity.
The recent Executive Order on AI signed by President Biden noted AI's potential to enable nation-state offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of targets.
With GenAI, a would-be malicious cyber actor no longer needs programming skills: large language models can be prompted to write malware.
GenAI can dramatically increase the sophistication of spear-phishing attacks, elevating them above the boilerplate content and spelling errors or awkward grammar that organizations often teach users to look for.
AI-driven data analytics have given malicious cyber actors new tools for exploitation that make new classes of data attractive targets.
A decade ago, only nation-states had the data centers and computing power to make it possible to exploit large data sets.
The AI-driven revolution in data mining and the growth of pay-as-you-go computing power and storage mean that massive data sets have become exploitable and attractive targets for criminal actors and nation-states.
Sensors linked in a common architecture allow network operators and defenders to generate data in real time, and increasingly powerful AI and ML can make sense of it in real time.
Malicious cyber actors seldom succeed the first time they attack a target, even using AI. Instead, they rely on their failed attempts being missed in the deluge of alerts flooding into an enterprise security operations center each shift.
AI helps defenders spot anomalous activity, determine which anomalies are attacks, generate a real-time response to block the attack, and inoculate the rest of the organization's digital assets against further attacks.
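To make the first of those steps concrete, here is a toy sketch of the kind of statistical baselining that ML-driven detection performs at far larger scale. The login counts, function name, and threshold are illustrative only, not any vendor's implementation:

```python
from statistics import mean, stdev

def find_anomalies(samples, threshold=2.5):
    """Flag samples more than `threshold` standard deviations from the mean.

    A toy stand-in for the statistical baselining that ML-driven
    detection tools perform across millions of events.
    """
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:
        return []
    return [x for x in samples if abs(x - mu) / sigma > threshold]

# Hourly failed-login counts for one account; the spike stands out.
logins = [3, 2, 4, 3, 2, 3, 4, 2, 3, 250]
print(find_anomalies(logins))  # → [250]
```

Real systems baseline many signals at once (logins, DNS queries, data volumes) and correlate anomalies across them before raising an alert, but the underlying idea is the same: model normal, then flag what deviates.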
Remember, AI and ML are fueled by data: the more data they have to train on and work with, the more effective they are.
Generally, those who operate and defend an enterprise environment are better positioned to have such data than those seeking to break into the network.
As empowering as AI is for CISOs, enterprises face other challenges relating to using AI in the workplace.
A key concern is that data submitted in GenAI queries can become part of the training data used by these models.
Use retrieval-augmented generation (RAG), which draws on validated external data to improve the accuracy of foundation models without feeding them additional training data.
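A rough sketch of the RAG pattern follows. The document store, keyword-overlap retriever, and call_llm stub are all hypothetical placeholders; a production system would use a vector store and a real model API, but the shape of the flow is the same:

```python
# Validated, organization-controlled reference material (placeholder data).
VALIDATED_DOCS = {
    "password-policy": "Passwords must be rotated every 90 days.",
    "mfa-policy": "MFA is required for all remote logins.",
}

def retrieve(query, docs, top_k=1):
    """Rank validated documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def call_llm(prompt):
    # Stub so the sketch runs; a real system calls the model API here.
    return prompt.splitlines()[1]

def answer(query):
    """Ground the model in retrieved context instead of retraining it."""
    context = "\n".join(retrieve(query, VALIDATED_DOCS))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

print(answer("How often must passwords be rotated?"))
```

The key point for data security is that the validated documents stay in the prompt context, under your control, rather than being absorbed into the model's training set.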
Run data loss prevention (DLP) as a filter on input into public LLMs. Talk to your GenAI provider and tailor your use cases with data security in mind.
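A minimal illustration of DLP-style prompt filtering, assuming simplified regex detectors; real DLP engines use far richer detection logic, but the redact-before-submit idea is the same:

```python
import re

# Simplified detectors for sensitive identifiers (illustrative only).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def redact(prompt):
    """Replace detected sensitive values with a placeholder tag
    before the prompt is ever sent to a public LLM."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

safe = redact("Customer SSN is 123-45-6789, card 4111 1111 1111 1111.")
print(safe)
```

Depending on policy, a filter like this can redact, block the request outright, or route it to a human reviewer; the essential design choice is that it sits between users and the public model.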
Use privacy-enhancing technologies such as data obfuscation, encrypted data processing, federated/distributed analytics on centrally housed data, and data accountability tools.
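As one small example of data obfuscation, here is a keyed-hash pseudonymization sketch. The key and identifiers are placeholders; a real deployment would keep the key in a secrets manager and rotate it under policy:

```python
import hashlib
import hmac

# Placeholder key; store and rotate real keys in a secrets manager.
SECRET_KEY = b"example-key-rotate-me"

def pseudonymize(value):
    """Map an identifier to a stable, non-reversible token.

    Records can still be joined for analytics on the token,
    but the raw identifier is never exposed to the analysis."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# The same input always yields the same token, so joins still work...
assert pseudonymize("alice@example.com") == pseudonymize("alice@example.com")
# ...while different inputs map to different tokens.
assert pseudonymize("alice@example.com") != pseudonymize("bob@example.com")
print(pseudonymize("alice@example.com"))
```

Using a keyed HMAC rather than a plain hash matters: without the key, an attacker who obtains the tokens could simply hash candidate identifiers and match them.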
The more data you provide, the greater the likelihood of leakage.
Fortinet can help your organization figure out how to leverage this power to enhance visibility, connectivity, and cyber-response capabilities.
This Cyber News was published on feeds.fortinet.com. Publication date: Thu, 01 Feb 2024 17:13:04 +0000