As is often the case with any new, emerging technology, using AI comes with security risks; it is essential to understand those risks and impose proper guardrails to protect company, customer, and employee data.
There are real, tangible risks businesses must address today, as AI/AGI is a relatively immature technology actively making its way into the corporate environment.
Specific to ChatGPT, there are many unknowns regarding its ongoing evolution and how it impacts data and information security.
Even if an organization secures its connectivity to OpenAI, it is challenging to ensure data protection, particularly given the tremendous data troves gathered by ChatGPT. In late March 2023, OpenAI disclosed a data breach that, over a nine-hour window, exposed portions of user chat history as well as personal user information including names, email and payment addresses, and portions of credit card data.
Samsung employees also leaked sensitive data, including proprietary source code, into ChatGPT; as a result, Samsung lost control of some of its intellectual property.
These issues highlight the vulnerability of the product and raise serious concerns about the security of sensitive information that businesses, knowingly or unknowingly, entrust to ChatGPT. As with all third parties, these platforms must be vetted and their vendors contractually bound to protect the data to your organization's standards before being permitted access to it.
The security issues also underscore the legal obligations of organizations to secure their own and their clients' data.
Law firms bound by attorney-client privilege, and organizations subject to regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and the EU's General Data Protection Regulation (GDPR), are particularly affected.
Organizations must ensure the security and privacy of their information.
The lack of clarity and transparency around how data is being handled creates a real risk for businesses using ChatGPT. Yet unless IT or security teams act directly to impose controls, users can easily copy and paste corporate data of any sensitivity level into the platform, without their organization's knowledge or consent.
Fortinet, Palo Alto Networks, Cisco, and other security vendors have not yet published comprehensive filtering lists that cover every available OpenAI and ChatGPT endpoint, leaving organizations to fill the gaps themselves for now.
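Until vendor-maintained categories mature, that gap-filling often means a manually curated egress deny list. Below is a minimal sketch of what such a check might look like, assuming an in-house proxy or firewall hook that exposes each outbound hostname; the domain patterns and the `is_blocked` helper are illustrative assumptions, not a vendor feature, and any real list would need continual upkeep as endpoints change.

```python
# Minimal sketch of a manually maintained egress deny list, assuming an
# in-house proxy or firewall hook that sees each outbound hostname.
# The domain patterns below are illustrative and almost certainly
# incomplete; they are not a vendor-supplied filtering category.
from fnmatch import fnmatch

AI_DENY_PATTERNS = [
    "chat.openai.com",   # ChatGPT web interface
    "api.openai.com",    # OpenAI API
    "*.openai.com",      # catch-all for related subdomains
    "bard.google.com",   # Google Bard
]

def is_blocked(hostname: str) -> bool:
    """Return True if an outbound hostname matches a denied AI endpoint."""
    return any(fnmatch(hostname, pattern) for pattern in AI_DENY_PATTERNS)

if __name__ == "__main__":
    for host in ("api.openai.com", "intranet.example.com"):
        print(f"{host}: {'BLOCK' if is_blocked(host) else 'ALLOW'}")
```

Even a simple check like this only covers known hostnames; it cannot keep pace as endpoints change or as users move to unlisted AI services, which is exactly why comprehensive vendor-maintained lists matter.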
To mitigate the risks of AI tools, organizations need to take a proactive approach.
They should conduct thorough risk assessments to understand their exposure and ensure that appropriate security measures are in place, such as encryption, access controls, data leakage protection, and active monitoring.
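As one concrete illustration of the data leakage protection measure mentioned above, the sketch below scans outbound text for a few common sensitive patterns before it can reach an external AI service. The patterns and the block-on-match policy are simplified assumptions for demonstration; production DLP tooling is far more capable, using context, data fingerprinting, and machine learning.

```python
# Toy illustration of a data-leakage-protection check: scan a prompt for
# common sensitive patterns before it leaves the organization. The patterns
# and the block-on-match policy here are simplified assumptions.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_sensitive_data(text: str) -> dict:
    """Return a mapping of pattern label -> matches found in the text."""
    return {
        label: pattern.findall(text)
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    }

if __name__ == "__main__":
    prompt = "Summarize: card 4111 1111 1111 1111, contact jane.doe@corp.example"
    hits = find_sensitive_data(prompt)
    if hits:
        print("Blocking outbound prompt; sensitive data found:", hits)
    else:
        print("Prompt allowed.")
```

A pattern scan of this kind is only one layer; it should sit alongside the encryption, access controls, and active monitoring the risk assessment identifies, since regexes alone miss sensitive data that does not follow a predictable format.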
Though these tools are powerful and seemingly useful, organizations must not allow ChatGPT and similar tools access to their systems and data until they clearly understand the inherent risks and can control for, or accept, those risks.
As AI technologies like ChatGPT and Bard evolve at a lightning pace, continuously securing each new iteration will present fresh challenges for both organizational IT teams and security researchers.
There continues to be much debate about the risk vs. reward of AI/AGI in enterprise settings.
Clearly, a tool that produces instant data, content, and analysis provides value; whether the risks can be contained, controlled, and managed to a sufficient degree to justify these rewards will be tested over time.
While the idea of AI evolving into the Terminator or Skynet is certainly fun to hypothesize about, the immediate risk is to today's data and customers' networks.
It is essential to prioritize data security to protect our organizations and the clients we serve.
He has over 25 years of expertise as an information technology consultant, with a focus on aligning IT strategies to current and future organizational goals, developing cloud migration and security strategies, and helping services businesses get laser-focused on the security and efficiency needs of their clients.