Are you sure you want to share that with ChatGPT? How Metomic helps stop data leaks

OpenAI's ChatGPT is one of the most powerful tools to come along in a lifetime, set to revolutionize the way many of us work.
Workers aren't content to wait until organizations figure out how to govern its use, however: many are already using ChatGPT and inadvertently leaking sensitive data, without their employers knowing it is happening.
Companies need a gatekeeper, and Metomic aims to be one: the data security software company today released its new browser plugin, Metomic for ChatGPT, which tracks user activity in OpenAI's powerful large language model.
Research has shown that 15% of employees regularly paste company data into ChatGPT - the leading types being source code, internal business information and personally identifiable information.
The top departments importing data into the model include R&D, finance, and sales and marketing.
One of the most significant sources of data exposure is customer chat transcripts, said Metomic CEO Rich Vibert.
Customer support teams are increasingly turning to ChatGPT to summarize these transcripts, which are rife with sensitive data: not only names and email addresses but also credit card numbers and other financial information.
Beyond inadvertent leaks from unsuspecting users, employees who are leaving a company can use gen AI tools in an attempt to take data with them.
While some enterprises have moved to outright block the use of ChatGPT and rival platforms among their workers, Vibert says this simply isn't a viable option.
Metomic's ChatGPT integration sits within a browser, identifying when an employee logs into the platform and performing real-time scanning of the data being uploaded.
If sensitive data such as PII, security credentials or IP is detected, users are notified in the browser or another platform - such as Slack - and can redact or strip out the sensitive data, or respond to prompts such as 'remind me tomorrow' or 'that's not sensitive.'
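To make the scanning step concrete, here is a minimal sketch of how a client-side check might flag and redact sensitive strings before a prompt is sent. It is a hypothetical illustration only, not Metomic's actual detection engine: it matches a couple of common PII patterns (email addresses and credit card numbers validated with a Luhn checksum) and produces a redacted copy the user could paste instead.

```typescript
// Hypothetical sketch of client-side prompt scanning; not Metomic's actual engine.

interface Finding {
  type: string;   // e.g. "email", "credit_card"
  match: string;  // the matched text
}

// A Luhn checksum helps separate real card numbers from random digit runs.
function passesLuhn(digits: string): boolean {
  let sum = 0;
  let double = false;
  for (let i = digits.length - 1; i >= 0; i--) {
    let d = Number(digits[i]);
    if (double) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
    double = !double;
  }
  return sum % 10 === 0;
}

function scanPrompt(text: string): Finding[] {
  const findings: Finding[] = [];

  // Email addresses.
  for (const m of text.match(/[\w.+-]+@[\w-]+\.[\w.]+/g) ?? []) {
    findings.push({ type: "email", match: m });
  }

  // Candidate card numbers: 13-16 digits, optionally separated, passing Luhn.
  for (const m of text.match(/\b(?:\d[ -]?){13,16}\b/g) ?? []) {
    const digits = m.replace(/[ -]/g, "");
    if (passesLuhn(digits)) findings.push({ type: "credit_card", match: m });
  }

  return findings;
}

// Produce a redacted copy the user could paste instead of the original.
function redact(text: string, findings: Finding[]): string {
  return findings.reduce(
    (out, f) => out.split(f.match).join(`[REDACTED ${f.type.toUpperCase()}]`),
    text
  );
}

// Example: a support transcript about to be pasted into ChatGPT.
const prompt =
  "Summarize: customer jane@example.com paid with card 4111 1111 1111 1111 and wants a refund.";
const found = scanPrompt(prompt);
if (found.length > 0) {
  console.log("Sensitive data detected:", found);
  console.log("Suggested redaction:", redact(prompt, found));
}
```

A production classifier would rely on far richer context than regular expressions, but the flow it illustrates (scan the prompt, notify the user, offer a redaction) is the one described above.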
Security teams can also receive alerts when employees upload sensitive data.
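The alerting path itself can be very thin. The sketch below is a generic illustration, assuming a standard Slack incoming webhook with a placeholder URL and a made-up message format; it is not Metomic's actual integration.

```typescript
// Hypothetical alerting sketch; not Metomic's actual Slack integration.
// Assumes a Slack incoming webhook URL (placeholder below) and a runtime
// with a global fetch (Node 18+ or a browser).

const SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/PLACEHOLDER";

async function alertSecurityTeam(user: string, findingTypes: string[]): Promise<void> {
  // Slack incoming webhooks accept a simple JSON body with a "text" field.
  await fetch(SLACK_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      text: `Sensitive data (${findingTypes.join(", ")}) detected in a ChatGPT prompt from ${user}.`,
    }),
  });
}

// Example: escalate the findings from the scan sketched earlier.
alertSecurityTeam("employee@example.com", ["email", "credit_card"]).catch(console.error);
```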
Vibert emphasized that the platform does not block activities or tools, instead providing enterprises visibility and control over how they are being used to minimize their risk exposure.
Today's enterprises are using a multitude of SaaS tools - a staggering 991 by one estimate - yet just a quarter of those are connected.
Metomic's platform connects to other SaaS tools across the business environment and is pre-built with 150 data classifiers to recognize common critical data risks based on context such as industry or geography-specific regulation.
Enterprises can also create data classifiers to identify their most vulnerable information.
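A custom classifier can be pictured as little more than a named pattern plus some context about where it applies. The following sketch shows one hypothetical shape for such definitions (a classifier for fictional internal project codenames and a rough, geography-specific one for UK National Insurance numbers); it is not Metomic's configuration format.

```typescript
// Hypothetical shape of custom data classifiers; not Metomic's configuration format.

interface Classifier {
  name: string;
  description: string;
  pattern: RegExp;       // what to look for
  appliesTo?: string[];  // optional context, e.g. regions or departments
}

const classifiers: Classifier[] = [
  {
    name: "internal_project_codename",
    description: "Internal project codenames (fictional PROJECT-#### scheme)",
    pattern: /\bPROJECT-\d{4}\b/g,
  },
  {
    name: "uk_national_insurance_number",
    description: "Rough pattern for UK National Insurance numbers, a geography-specific identifier",
    pattern: /\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b/gi,
    appliesTo: ["UK"],
  },
];

// Apply every classifier to a piece of text and collect the matches by name.
function classify(text: string): Record<string, string[]> {
  const hits: Record<string, string[]> = {};
  for (const c of classifiers) {
    const matches = text.match(c.pattern) ?? [];
    if (matches.length > 0) hits[c.name] = matches;
  }
  return hits;
}

console.log(
  classify("Roadmap for PROJECT-1234: contractor NI number AB123456C on file.")
);
```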
Enterprises can determine how a marketing team is using ChatGPT, for example, and compare that with the team's use of other apps such as Slack or Notion.
The platform can determine if data is in the wrong place or accessible to non-relevant people.
Vibert pointed out that ChatGPT isn't only accessed through the browser - many apps simply have the model built in.
Data imported into Slack, for instance, may end up in ChatGPT one way or another along the way.


This Cyber News was published on venturebeat.com. Publication date: Tue, 06 Feb 2024 00:43:04 +0000

