While providers use encryption to protect user interactions, new research raises questions about just how secure AI assistants really are.
A newly published study describes an attack that can infer AI assistant responses with startling accuracy.
The method exploits a side channel present in every major AI assistant except Google Gemini, and uses large language models (LLMs) to refine the raw results.
According to the Offensive AI Research Lab, a passive adversary who intercepts the data packets exchanged between a user and an AI assistant can identify the precise subject of more than half of all captured responses.
The attack centers on a side channel embedded in the tokens that AI assistants use.
Tokens are encoded representations of words (or word fragments), and they allow responses to be streamed to the user in real time.
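To make the mechanics concrete, here is a minimal Python sketch of the side channel. The XOR keystream stands in for TLS (both preserve payload length), and the token strings are invented for illustration; the point is that an eavesdropper who cannot decrypt a packet can still measure its size.

```python
import os

def encrypt(payload: bytes) -> bytes:
    """Toy length-preserving cipher: XOR with a throwaway keystream."""
    key = os.urandom(len(payload))
    return bytes(p ^ k for p, k in zip(payload, key))

# Hypothetical token stream for the reply "You should see a doctor".
tokens = ["You", " should", " see", " a", " doctor"]

for tok in tokens:
    packet = encrypt(tok.encode("utf-8"))   # one encrypted record per token
    # The bytes are unreadable, but each record's size equals the
    # byte length of the token it carries.
    print(f"ciphertext={packet.hex():<16} observed_size={len(packet)}")
```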
In what the researchers call a token inference attack, LLMs are used to translate the intercepted token-length sequences back into readable language.
By training LLMs on publicly accessible conversation data, the researchers can reconstruct responses with remarkably high accuracy.
The technique leverages the predictability of AI assistant replies to contextually reconstruct encrypted content, much like a known-plaintext attack.
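The toy sketch below captures the spirit of that inference step. Where the study fine-tunes LLMs on real chat data, this stand-in simply scores a handful of invented candidate replies against observed packet sizes; every phrase and length here is hypothetical.

```python
def token_lengths(text: str) -> list[int]:
    """Crude whitespace tokenizer; real assistants use subword tokens."""
    return [len(w) + (1 if i else 0)        # a leading space joins each token
            for i, w in enumerate(text.split())]

observed = [3, 7, 4, 2, 7]                  # packet sizes sniffed off the wire

candidates = [
    "You should see a doctor",
    "Try restarting the router",
    "Here is a summary",
]

def score(cand: str) -> int:
    lens = token_lengths(cand)
    if len(lens) != len(observed):
        return -1                           # wrong number of tokens
    return sum(a == b for a, b in zip(lens, observed))

# The candidate whose token lengths best match the observed sizes wins.
print(max(candidates, key=score))           # -> "You should see a doctor"
```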
AI chatbots use tokens as the basic building blocks of text processing, guiding how conversation is generated and interpreted.
During training, LLMs analyze large datasets of tokenized text to learn patterns and probabilities.
As Ars Technica notes, tokens also enable real-time communication between users and AI assistants, letting responses stream out as they are generated and adapt to conversational cues.
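For readers who want to see tokenization directly, the snippet below assumes the open-source tiktoken library (pip install tiktoken) is available; any subword tokenizer would illustrate the same point, namely that text is processed as a sequence of small, variable-length units.

```python
import tiktoken

# Tokenizer used by GPT-4-era OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

ids = enc.encode("AI assistants stream replies token by token.")

for tid in ids:
    piece = enc.decode_single_token_bytes(tid)   # raw bytes of one token
    print(f"id={tid:<6} bytes={piece!r} length={len(piece)}")
```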
A key vulnerability lies in this real-time token transmission: because each token travels in its own packet, attackers can deduce response content from packet lengths.
Sequential, token-by-token delivery exposes the length of every token, while batched transmission hides individual token lengths inside a single payload.
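A small sketch illustrates the difference between the two delivery modes. The token strings are invented and framing overhead is ignored; what matters is which sizes a passive observer can measure in each mode.

```python
tokens = [" The", " capital", " of", " France", " is", " Paris", "."]

# Sequential streaming: one record per token, so the observer
# recovers every individual token length.
sequential_view = [len(t.encode("utf-8")) for t in tokens]
print("sequential packets:", sequential_view)   # [4, 8, 3, 7, 3, 6, 1]

# Batched delivery: the whole reply travels in one record, so only
# the total length leaks, not the token boundaries inside it.
batched_view = len("".join(tokens).encode("utf-8"))
print("batched packet:    ", batched_view)      # 32
```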
Reevaluating token transmission mechanisms is necessary to mitigate this risk and reduce susceptibility to passive adversaries.
Protecting user privacy remains critical as AI assistants evolve.
Reducing these security threats requires strong encryption and improved token delivery mechanisms.
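One such mechanism, sketched below purely as an illustration rather than any provider's actual countermeasure, is to pad every streamed record to a fixed size so that all packets look identical on the wire.

```python
BLOCK = 16  # bytes per record; chosen arbitrarily for this sketch

def pad(token: str) -> bytes:
    raw = token.encode("utf-8")
    assert len(raw) <= BLOCK, "oversized tokens would be split across records"
    # A real scheme would carry the true length inside the encrypted
    # payload; the zero padding here just fixes the on-wire size.
    return raw + b"\x00" * (BLOCK - len(raw))

for tok in [" The", " capital", " of", " France"]:
    print("observed_size =", len(pad(tok)))     # always 16, whatever the token
```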
As researchers uncover vulnerabilities, providers must make data security and privacy a top priority; by fixing flaws and strengthening their data protection protocols, they can maintain users' trust in AI technologies.
Hackers are out there, and the next thing we know, they could be giving other businesses access to our private chats.