ChatGPT has emerged as a shining light in this regard.
Already we're seeing the platform being integrated into corporate systems, supporting functions such as customer success and technical support.
The bad: the risks surrounding ChatGPT. Of course, there are always two sides to the same coin, and reasons for hesitancy around ChatGPT remain.
From security to data loss, the platform presents several challenges.
Further, the accuracy of ChatGPT's outputs, and of the decisions based on them, cannot always be verified or relied upon.
ChatGPT is a learning platform - if it's fed bad data, it will produce bad data.
It's also important to recognise that ChatGPT itself suffered a breach in 2023, caused by a bug in an open-source library.
If an attacker succeeds in infiltrating ChatGPT - potentially via hidden vulnerabilities - they could serve malicious code through it, possibly affecting millions of users.
According to a BlackBerry survey of IT professionals, more than seven in 10 believe that foreign states are likely already using ChatGPT for malicious purposes against other nations.
There are a variety of ways in which adversaries can tap into intelligent AI platforms.
In the same way that customer service professionals may leverage the platform, threat actors can use it to make their phishing lures look more official and coherent.
In this sense, ChatGPT could democratise cybercrime in the same way that ransomware-as-a-service did - a reality that would lead to a massive spike in the volume of attacks we witness globally.
Of course, that's been hard to do - many people didn't even know what ChatGPT was at the start of the year.
ChatGPT will be key in unlocking user productivity and creativity.
OpenAI itself has recognised the importance of addressing security concerns in order to fulfil the platform's potential.
The company recently announced the rollout of ChatGPT Enterprise, which offers capabilities such as data encryption and a promise that customer prompts and company data will not be used to train OpenAI models.
To combat these risks effectively, organisations should look to embrace a diverse suite of security tools to maximise protection.
Isolation can record session data, allowing organisations to track end-user policy violations on platforms like ChatGPT - such as the submission of sensitive data - in their web logs.
Isolation is a core component in ensuring that ChatGPT is used securely, making it easier to enforce key policies and processes.
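To make the idea of flagging sensitive-data submissions concrete, here is a minimal Python sketch of the kind of pattern check a policy layer might run against outbound prompts before they reach a platform like ChatGPT. The rule names and regular expressions are illustrative assumptions, not any vendor's actual implementation; production DLP engines use far more sophisticated detection.

```python
import re

# Hypothetical policy rules mapping a label to a detection pattern.
POLICY_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any policy rules the prompt text violates."""
    return [name for name, pattern in POLICY_PATTERNS.items()
            if pattern.search(prompt)]

# A violation would be written to the web log rather than silently dropped.
violations = check_prompt("Summarise the charge on card 4111 1111 1111 1111")
```

In a real deployment this check would sit in the isolation layer, so the violation is logged centrally and the organisation retains an audit trail of what end users attempted to submit.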
Having worked for over 15 years at various tier 1 vendors specialising in the detection of inbound threats across web and email, as well as data loss prevention, Brett joined Menlo Security in 2016, where he discovered how isolation provides a new approach to solving the problems that detection-based systems continue to struggle with.
This Cyber News was published on www.cyberdefensemagazine.com. Publication date: Wed, 20 Dec 2023 06:13:04 +0000