STRIDE GPT - AI-powered Tool Uses LLMs To Generate Threat Models

STRIDE GPT, an AI-powered threat modeling tool, leverages large language models (LLMs) to generate comprehensive threat models and attack trees for applications, supporting a proactive approach to security. Developed by Matthew Adams, Head of Security Enablement at Citi, the tool combines the STRIDE methodology with the power of LLMs to automate the identification of potential threats and vulnerabilities in software applications. Based on the details a user provides about an application, STRIDE GPT generates a detailed threat model and attack trees, and suggests possible mitigations for the identified threats, making it a valuable asset for cybersecurity professionals and development teams alike.

Beyond threat identification, STRIDE GPT generates Gherkin test cases from identified threats, bridging the gap between threat modeling and testing and ensuring that security considerations are carried into the testing process. One of its standout features is a multimodal capability that allows users to incorporate architecture diagrams, flowcharts, and other visual representations into the threat modeling process. Development has been marked by continuous improvement: recent updates add support for GitHub repository analysis, enabling more comprehensive threat modeling by examining the README and key files of a repository. For users with data privacy concerns, STRIDE GPT also supports locally hosted models via Ollama and LM Studio Server, and it does not store application details, helping maintain confidentiality.

Matthew Adams presented STRIDE GPT at the Open Security Summit in January 2024, where he discussed the project's inception, its core functionalities, and future plans. With its focus on the STRIDE methodology and the integration of AI, STRIDE GPT represents a significant advancement in threat modeling, offering detailed, actionable insights into potential security threats and a glimpse into a future where AI-driven security solutions are the norm.
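The workflow described above (feeding an application description to an LLM and asking for STRIDE-categorized threats, mitigations, and Gherkin test cases) can be illustrated with a short sketch. The snippet below is a hypothetical, minimal example of that general pattern using the OpenAI Python client; it is not STRIDE GPT's actual implementation, and the model name, prompts, sample application description, and local Ollama endpoint are assumptions for illustration only.

    # Minimal sketch of an LLM-driven STRIDE workflow.
    # NOT STRIDE GPT's actual code: prompts, model name, and endpoint are illustrative.
    from openai import OpenAI

    # For a hosted model, use the default OpenAI endpoint. For local models,
    # Ollama exposes an OpenAI-compatible API (assumed here at its default port):
    # client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
    client = OpenAI()

    app_description = (
        "A multi-tenant web app with a React front end, a REST API, "
        "OAuth 2.0 login, and a PostgreSQL database."
    )

    system_prompt = (
        "You are a threat modeling assistant. Using the STRIDE methodology "
        "(Spoofing, Tampering, Repudiation, Information Disclosure, Denial of "
        "Service, Elevation of Privilege), list plausible threats for the "
        "application described by the user. For each threat, suggest a "
        "mitigation and a Gherkin (Given/When/Then) test case."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model could be substituted
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": app_description},
        ],
    )

    print(response.choices[0].message.content)

A generated test case might resemble a standard Gherkin scenario, for example: "Given an expired OAuth token, When the client calls the REST API, Then the request is rejected with a 401 response." It is this plain Given/When/Then structure that allows the output of threat modeling to feed directly into automated security testing.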

This Cyber News was published on cybersecuritynews.com. Publication date: Sun, 13 Apr 2025 04:25:05 +0000


Cyber News related to STRIDE GPT - AI-powered Tool Uses LLMs To Generate Threat Models

The age of weaponized LLMs is here - It's exactly what one researcher, Julian Hazell, was able to simulate, adding to a collection of studies that, altogether, signify a seismic shift in cyber threats: the era of weaponized LLMs is here. The research all adds up to one thing: LLMs are ...
1 year ago Venturebeat.com
GPT in Slack With React Integration - Understanding GPT. Before delving into the intricacies of GPT Slack React integration, let's grasp the fundamentals of GPT. Developed by OpenAI, GPT is a state-of-the-art language model that utilizes deep learning to generate human-like text based on ...
1 year ago Feeds.dzone.com
Exploring the Security Risks of LLM - According to a recent survey, 74% of IT decision-makers have expressed concerns about the cybersecurity risks associated with LLMs, such as the potential for spreading misinformation. Security Concerns of LLMs While the potential applications of ...
1 year ago Feeds.dzone.com
The impact of prompt injection in LLM agents - This risk is particularly alarming when LLMs are turned into agents that interact directly with the external world, utilizing tools to fetch data or execute actions. Malicious actors can leverage prompt injection techniques to generate unintended and ...
1 year ago Helpnetsecurity.com
Integrating LLMs into security operations using Wazuh - Once YARA identifies a malicious file, ChatGPT enriches the alert with details about the detected threat, helping security teams better understand and respond to the incident. Log analysis and data enrichment: Trained LLMs like ChatGPT can interpret ...
1 month ago Bleepingcomputer.com
AI models can be weaponized to hack websites on their own The Register - AI models, the subject of ongoing safety concerns about harmful and biased output, pose a risk beyond content emission. When wedded with tools that enable automated interaction with other systems, they can act on their own as malicious agents. ...
1 year ago Go.theregister.com
Staying ahead of threat actors in the age of AI - At the same time, it is also important for us to understand how AI can be potentially misused in the hands of threat actors. In collaboration with OpenAI, today we are publishing research on emerging threats in the age of AI, focusing on identified ...
1 year ago Microsoft.com Kimsuky
Leak confirms OpenAI's GPT 4.1 is coming before GPT 5.0 - As spotted by AI researcher Tibor Blaho, OpenAI is already testing model art for o3, o4-mini, and GPT-4.1 (including nano and mini variants) on the OpenAI API platform. Also, GPT-5 isn't happening anytime soon, as OpenAI plans to focus on o3, ...
1 day ago Bleepingcomputer.com
Why training LLMs with endpoint data will strengthen cybersecurity - Capturing weak signals across endpoints and predicting potential intrusion attempt patterns is a perfect challenge for Large Language Models to take on. The goal is to mine attack data to find new threat patterns and correlations while fine-tuning ...
1 year ago Venturebeat.com
ChatGPT 4 can exploit 87% of one-day vulnerabilities - Since the widespread and growing use of ChatGPT and other large language models in recent years, cybersecurity has been a top concern. ChatGPT 4 quickly exploited one-day vulnerabilities. During the study, the team used 15 one-day vulnerabilities ...
9 months ago Securityintelligence.com
How machine learning helps us hunt threats | Securelist - In this post, we will share our experience hunting for new threats by processing Kaspersky Security Network (KSN) global threat data with ML tools to identify subtle new Indicators of Compromise (IoCs). The model can process and learn from millions ...
6 months ago Securelist.com
Meta's Purple Llama wants to test safety risks in AI models - Generative Artificial Intelligence models have been around for years, and their main function, compared to older AI models, is that they can process more types of input. Take for example the older models that were used to determine whether a file was ...
1 year ago Malwarebytes.com
Malicious ChatGPT Agents May Steal Chat Messages and Data - In November 2023, OpenAI released GPTs publicly for everyone to create their customized version of GPT models. Several new customized GPTs were created for different purposes. On the other hand, threat actors can also utilize this public GPT model to ...
1 year ago Cybersecuritynews.com
Securing AI: Navigating the Complex Landscape of Models, Fine-Tuning, and RAG - It underscores the urgent need for robust security measures and proper monitoring in developing, fine-tuning, and deploying AI models. The emergence of advanced models, like Generative Pre-trained Transformer 4, marks a new era in the AI landscape. ...
1 year ago Feedpress.me
OpenAI's GPT 4.5 spotted in Android beta, launch imminent - As a result, OpenAI CEO Sam Altman recently announced that ChatGPT will simplify its model names and release versions like GPT-4.5, GPT-5, and so on. Beyond the references to GPT-4.5, AI researcher Tibor Blaho has spotted a few additional experiments ...
1 month ago Bleepingcomputer.com
OWASP Top 10 for LLM Applications: A Quick Guide - Even still, the expertise and insights provided, including prevention and mitigation techniques, are highly valuable to anyone building or interfacing with LLM applications. Prompt injections are maliciously crafted inputs that lead to an LLM ...
1 year ago Securityboulevard.com
ChatGPT Spills Secrets in Novel PoC Attack - A team of researchers from Google DeepMind, OpenAI, ETH Zurich, McGill University, and the University of Washington has developed a new attack for extracting key architectural information from proprietary large language models such as ChatGPT and ...
1 year ago Darkreading.com
In the rush to build AI apps, don't leave security behind The Register - There are countless models, libraries, algorithms, pre-built tools, and packages to play with, and progress is relentless. You'll typically glue together libraries, packages, training data, models, and custom source code to perform inference tasks. ...
1 year ago Go.theregister.com Hunters
What Is Threat Modeling? - Threat modeling emerges as a pivotal process in this landscape, offering a structured approach to identify, assess, and address potential security threats. Threat Modeling Adoption and Implementation The successful adoption of threat modeling within ...
1 year ago Feeds.dzone.com
Akto Launches Proactive GenAI Security Testing Solution - With the increasing reliance on GenAI models and large language models (LLMs) like ChatGPT, the need for robust security measures has become paramount. Akto, a leading API Security company, is proud to announce the launch of its revolutionary GenAI ...
1 year ago Darkreading.com
OpenAI study reveals surprising role of AI in future biological threat creation - OpenAI, the research organization behind the powerful language model GPT-4, has released a new study that examines the possibility of using AI to assist in creating biological threats. One such frontier risk is the ability for AI systems, such as ...
1 year ago Venturebeat.com
Researchers Show How to Use One LLM to Jailbreak Another - The exploding use of large language models in industry and across organizations has sparked a flurry of research activity focused on testing the susceptibility of LLMs to generate harmful and biased content when prompted in specific ways. The latest ...
1 year ago Darkreading.com
Researchers automated jailbreaking of LLMs with other LLMs - AI security researchers from Robust Intelligence and Yale University have designed a machine learning technique that can speedily jailbreak large language models in an automated fashion. Their findings suggest that this vulnerability is universal ...
1 year ago Helpnetsecurity.com Hunters
Detecting Vulnerability Scanning Traffic From Underground Tools Using Machine Learning - Our structured query language (SQL) injection detection model detected triggers containing unusual patterns that did not correlate to any known open-source or commercial automated vulnerability scanning tool. We have tested all malicious payloads ...
6 months ago Unit42.paloaltonetworks.com
