Security professionals should regard AI in the same way as any other significant technology development.
Generative AI tools such as ChatGPT are already being used for rudimentary purposes, such as helping scammers craft convincing phishing emails, but it's the lesser-known uses that should concern CISOs.
AI is no different from any other easily accessible technology: whether it's a developer looking for an algorithm to help solve a coding problem, or a marketer who needs assistance creating content, a simple Google search will surface multiple AI-enabled tools that can provide a solution in moments.
If we impose a blanket ban on employees using these tools, they will just find a way to access them covertly, and that introduces greater risk.
The issue for CISOs is how they can endorse the use of AI without making the company, its employees, customers, and other stakeholders vulnerable.
If we start by assuming AI will be used, we can then construct guardrails to mitigate risk.
Among the most common and accessible AI tools are large language models (LLMs) such as ChatGPT from OpenAI, LLaMA from Meta, and Google's PaLM 2.
In the wrong hands, LLMs can deliver bad advice to users, encourage them to expose sensitive information, generate vulnerable code, or leak passwords.
While a seasoned CISO might recognize that the output from ChatGPT in response to a simple security question is malicious, it's less likely that another member of staff will have the same antenna for risk.
Without regulations in place, any employee could be inadvertently stealing another company's or person's intellectual property, or they could be delivering their own company's IP into an adversary's hands.
If an LLM has been used to help create code, for example, the original developer has little control over where that code ends up and will find it very difficult to prove ownership of it.
These are just some of the security risks that enterprises face from AI, but they can be mitigated with the right approach, allowing the advantages of AI to be fully realized.
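To make the idea of guardrails concrete, one common mitigation is to screen prompts for obvious secrets before they are sent to an external LLM. The sketch below is a minimal, hypothetical example of such a pre-submission filter in Python; the regex patterns and the redact_prompt helper are illustrative assumptions, not a complete data loss prevention control.

```python
import re

# Minimal, illustrative patterns for material that should never leave the
# organization in a prompt. A real deployment would rely on a vetted DLP
# tool and patterns agreed with the security team.
SECRET_PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),            # AWS access key IDs
    (re.compile(r"(?i)(password|passwd|pwd)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]+?-----END [A-Z ]*PRIVATE KEY-----"),
     "[REDACTED_PRIVATE_KEY]"),
]

def redact_prompt(prompt: str) -> tuple[str, bool]:
    """Return the prompt with likely secrets removed, plus a flag indicating
    whether anything was redacted (useful for logging or blocking)."""
    redacted = False
    for pattern, replacement in SECRET_PATTERNS:
        prompt, count = pattern.subn(replacement, prompt)
        redacted = redacted or count > 0
    return prompt, redacted

if __name__ == "__main__":
    raw = "Debug this config: password=hunter2 and key AKIAABCDEFGHIJKLMNOP"
    safe, flagged = redact_prompt(raw)
    print(safe)  # secrets replaced with placeholders before the prompt leaves the network
    if flagged:
        print("Prompt contained sensitive material; review before sending.")
```

In practice, a filter like this would sit in a proxy or approved internal portal rather than on individual laptops, and redaction events could feed the awareness training discussed below.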
While the security team can provide guidance about certain risks - the dangers, for example, of downloading consumer-focused LLMs onto their personal laptops to carry out company business - feedback from employees on how they can benefit from AI tools will help all parties to agree on ground rules.
Security teams have a much greater depth of knowledge about the threats these tools pose, and they can pass this insight on through training programs or workshops to raise awareness.
Providing real-life examples, such as how a failure to validate outputs from AI-generated content led to legal action, will resonate.
Where employees utilize these learnings to good effect, their successes should be championed and highlighted internally.
A positive security approach that focuses on assisting employees rather than blocking them should be standard practice by now. When it comes to AI, employees should be able to submit requests to use specific tools on a case-by-case basis, with the security policy updated accordingly each time.
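As a sketch of how such case-by-case approvals might be enforced, the hypothetical snippet below checks a request against an allow-list maintained as part of the security policy. The tool names, conditions, and the is_request_allowed helper are assumptions for illustration only.

```python
# Hypothetical allow-list built from approved employee requests. The tool
# names and usage conditions are illustrative, not recommendations.
APPROVED_AI_TOOLS = {
    "chatgpt": "no customer or employee data in prompts",
    "github-copilot": "internal repositories only",
}

def is_request_allowed(tool_name: str) -> bool:
    """Return True if the tool has already been approved under the security policy."""
    return tool_name.lower() in APPROVED_AI_TOOLS

def usage_conditions(tool_name: str) -> str | None:
    """Return any conditions attached to an approved tool, or None if unapproved."""
    return APPROVED_AI_TOOLS.get(tool_name.lower())

if __name__ == "__main__":
    for tool in ("ChatGPT", "some-new-llm"):
        if is_request_allowed(tool):
            print(f"{tool}: permitted ({usage_conditions(tool)})")
        else:
            print(f"{tool}: not yet approved; submit a request to the security team")
```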
The guardrails that CISOs set in agreement with the broader organization will undoubtedly change as AI begins to play a bigger role in enterprise life.
We are currently working in relatively unknown territory, but regulations are being considered by governments around the world in consultation with security professionals.
With each innovation comes both opportunity and risk, but we are also better positioned than ever to assess the risks and take advantage of the opportunities that AI affords.