OpenAI's ChatGPT is one of the most powerful tools to come along in a lifetime, set to revolutionize the way many of us work.
Workers aren't content to wait for organizations to work this question out, however: Many are already using ChatGPT and inadvertently leaking sensitive data - without their employers' knowledge.
Companies need a gatekeeper, and Metomic aims to be one: The data security software company today released its new browser plugin Metomic for ChatGPT, which tracks user activity in OpenAI's powerful large language model.
Research has shown that 15% of employees regularly paste company data into ChatGPT - the leading types being source code, internal business information and personally identifiable information.
The top departments importing data into the model include R&D, finance, and sales and marketing.
One of the most significant data exposures comes from customer chat transcripts, said Metomic CEO Rich Vibert.
Customer support teams are increasingly turning to ChatGPT to summarize these transcripts, which are rife with sensitive data - not only names and email addresses but credit card numbers and other financial information.
Beyond inadvertent leaks by unsuspecting users, departing employees can use gen AI tools in an attempt to take data with them.
While some enterprises have moved to outright block the use of ChatGPT and other rival platforms among their workers, Vibert says this simply isn't a viable option.
Metomic's ChatGPT integration sits within a browser, identifying when an employee logs into the platform and performing real-time scanning of the data being uploaded.
If sensitive data such as PII, security credentials or IP is detected, users are notified in the browser or another platform - such as Slack - where they can redact or strip out the sensitive data, or respond to prompts such as 'remind me tomorrow' or 'that's not sensitive.'
Security teams can also receive alerts when employees upload sensitive data.
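Metomic has not published how its scanning works, but the flow the article describes - scan outbound text for sensitive patterns, then flag or redact before it reaches ChatGPT - can be sketched with simple regex detectors. The pattern names and rules below are illustrative assumptions, not Metomic's actual classifiers:

```python
import re

# Hypothetical detectors standing in for a DLP plugin's classifiers;
# Metomic's real rules are not public.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan(text):
    """Return the names of sensitive-data types found in the text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

def redact(text):
    """Replace each detected value with a [REDACTED:<type>] placeholder."""
    for name, pat in PATTERNS.items():
        text = pat.sub(f"[REDACTED:{name}]", text)
    return text

prompt = "Summarize this ticket: jane@example.com paid with 4111 1111 1111 1111"
print(scan(prompt))    # which detectors fired
print(redact(prompt))  # the prompt with sensitive values stripped
```

In a real deployment the scan result would drive the user notification ('redact', 'remind me tomorrow', 'that's not sensitive') rather than silently rewriting the prompt.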
Vibert emphasized that the platform does not block activities or tools, instead providing enterprises visibility and control over how they are being used to minimize their risk exposure.
Today's enterprises are using a multitude of SaaS tools: A staggering 991 by one estimate - yet just a quarter of those are connected.
Metomic's platform connects to other SaaS tools across the business environment and is pre-built with 150 data classifiers to recognize common critical data risks based on context such as industry or geography-specific regulation.
Enterprises can also create data classifiers to identify their most vulnerable information.
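Metomic's classifier format is not public, but the idea of context-aware classifiers - rules that apply only under a given geography's regulations - might look something like the following sketch. The `Classifier` type, the region tags and the example patterns are all assumptions for illustration:

```python
import re
from dataclasses import dataclass

# Illustrative only: Metomic's actual classifier definitions are not public.
@dataclass
class Classifier:
    name: str
    pattern: re.Pattern
    regions: set  # geographies whose regulations make this rule relevant

CLASSIFIERS = [
    # Rough shape of a UK National Insurance number (approximate rule).
    Classifier("uk_nino", re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"), {"UK"}),
    # US Social Security number in dashed form.
    Classifier("us_ssn", re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), {"US"}),
]

def classify(text, region):
    """Apply only the classifiers relevant to the given geography."""
    return [c.name for c in CLASSIFIERS
            if region in c.regions and c.pattern.search(text)]

print(classify("Employee SSN: 123-45-6789", "US"))  # US rule fires
print(classify("Employee SSN: 123-45-6789", "UK"))  # same text, no UK rule
```

Scoping rules by region keeps a US sales team from being flooded with alerts about UK-specific identifiers, and vice versa.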
They can determine how a marketing team is using ChatGPT, for example, and compare that to its use in other apps such as Slack or Notion.
The platform can determine if data is in the wrong place or accessible to non-relevant people.
Vibert pointed out that there's not only a browser version of ChatGPT - many apps simply have the model built in.
Data imported into Slack, for instance, may end up in ChatGPT one way or another along the way.
This article was published on venturebeat.com on Tue, 06 Feb 2024.