DeBackdoor - Framework to Detect Backdoor Attacks on Deep Models

In an era where deep learning models increasingly power critical systems, from self-driving cars to medical devices, security researchers have unveiled DeBackdoor, a framework designed to detect stealthy backdoor attacks before deployment.

Researchers Dorde Popovic, Amin Sadeghi, Ting Yu, Sanjay Chawla, and Issa Khalil of the Qatar Computing Research Institute and the Mohamed bin Zayed University of Artificial Intelligence note that most existing backdoor detection techniques make assumptions that are incompatible with practical scenarios. Backdoor attacks are among the most effective and covert threats to deep learning: an attacker injects a hidden trigger that causes a model to behave maliciously whenever a specific pattern appears in the input, while the model functions normally otherwise.

DeBackdoor targets exactly these practical constraints. It operates in pre-deployment scenarios with limited data access, works on a single model instance, and requires only black-box (query) access, making it applicable when developers obtain models from potentially untrusted third parties. Unlike gradient-based techniques that require internal access to the model, DeBackdoor employs simulated annealing, a robust optimization algorithm well suited to non-convex search spaces, to search for candidate triggers. Extensive evaluations across diverse attacks, models, and datasets show that DeBackdoor consistently outperforms baseline methods, enabling developers to verify a model's integrity before deploying it in safety-critical applications.
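To make the black-box search concrete, here is a minimal sketch of how a simulated-annealing trigger search can work with query-only access. This is an illustration of the general technique, not the authors' implementation: the stand-in `model_predict` classifier, the pixel-set trigger representation, and the attack-success scoring are all simplified placeholders (the toy model secretly reacts to a planted pattern so the search has something to find).

```python
import math
import random

# Toy "suspect" model with a planted backdoor. In a real audit this would be
# an opaque, query-only classifier; here it fires the target class whenever a
# candidate trigger covers at least two cells of the secret planted pattern.
PLANTED = {(0, 0), (1, 1), (2, 2)}
TARGET_CLASS = 7

def model_predict(image, trigger):
    """Black-box stand-in: label depends only on trigger/pattern overlap."""
    overlap = len(set(trigger) & PLANTED)
    return TARGET_CLASS if overlap >= 2 else 0

def attack_success_rate(trigger, images):
    """Fraction of inputs pushed to TARGET_CLASS when the trigger is applied."""
    hits = sum(model_predict(img, trigger) == TARGET_CLASS for img in images)
    return hits / len(images)

def neighbor(trigger, grid=5):
    """Propose a nearby candidate: move one trigger cell to a random spot."""
    t = list(trigger)
    t[random.randrange(len(t))] = (random.randrange(grid), random.randrange(grid))
    return t

def simulated_annealing(images, steps=2000, t0=1.0, cooling=0.995):
    random.seed(0)                              # fixed seed for reproducibility
    current = [(4, 4), (3, 0), (0, 3)]          # arbitrary starting trigger
    score = attack_success_rate(current, images)
    best, best_score = current, score
    temp = t0
    for _ in range(steps):
        cand = neighbor(current)
        cand_score = attack_success_rate(cand, images)
        delta = cand_score - score
        # Always accept improvements; accept regressions with probability
        # exp(delta / temp), so early on the search can escape local optima.
        if delta >= 0 or random.random() < math.exp(delta / temp):
            current, score = cand, cand_score
        if score > best_score:
            best, best_score = current, score
        temp *= cooling                         # cool down over time
    return best, best_score

images = [None] * 10    # placeholder inputs; the toy model ignores them
trigger, score = simulated_annealing(images)
```

A high final `score` flags the model as likely backdoored, since a small input pattern should not be able to force a clean model into one target class. Note the search never touches gradients or weights, only the prediction API, which is what makes the approach viable for models obtained from untrusted third parties.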

This Cyber News was published on cybersecuritynews.com. Publication date: Sat, 29 Mar 2025 07:55:06 +0000

