Threat Modeling Insider – February 2025

Threat Modeling Insider Newsletter

41st Edition – February 2025

Welcome!

Clear your schedule today because the latest TMI edition is here! This month’s issue features a guest article based on a conversation between Dinis Cruz and Sebastien Deleersnyder, discussing the use of GenAI in threat modeling and application security. Our Toreon blog also has a TMI special in which we revisit tips and tricks from past editions, highlighting a top 10 that remains essential to this day!

In this edition

New Training Alert!
AI Whiteboard Hacking aka Hands-on Threat Modeling Training

Tips & tricks
Drawing DFDs

Training update
An update on our training sessions.

Guest article

Using GenAI in Threat Modeling and Application Security

Introduction

Generative AI (GenAI) is transforming how organizations approach threat modeling and application security (AppSec). Traditionally,
threat modeling and security analysis are manual, expertise-driven processes. Today’s GenAI systems can understand natural language,
analyze patterns, and even generate content, which makes them well-suited to assist in identifying threats and improving security
documentation. By leveraging GenAI, security teams can enhance the speed and scale of threat modeling, making it a continuous and
more integrated part of the software development lifecycle. This introduction sets the stage for understanding GenAI’s significant impact
on Threat Modeling and AppSec, from scaling security analyses to bridging communication gaps and automating risk assessments.

Scaling Threat Modeling with GenAI

Threat modeling is essential for uncovering design flaws and security risks early, but traditional methods often struggle to keep up with
modern development practices. Manually mapping out complex application architectures and documenting potential threats for every
new feature or microservice can be time-consuming and doesn’t easily scale across many projects. GenAI offers a way to amplify and
accelerate this process:

  • Automated Architecture Mapping: GenAI can quickly learn an application’s architecture by ingesting design documents, code, or
    system descriptions. It can generate diagrams or outlines of components, data flows, and trust boundaries without human drawing
    effort. This ensures the architecture model stays up-to-date as the application evolves.
  • Threat Modeling at Scale: With a knowledge base of common attack patterns and vulnerabilities, an AI system can examine each
    component and interaction in the architecture to suggest relevant threats. For example, it might flag that a microservice handling user
    input could be susceptible to injection attacks or that an API without authentication is a potential security risk. GenAI can do this
    across dozens of applications or services simultaneously, something impractical to do manually.
  • Automating Documentation: Instead of writing lengthy threat model documents by hand, security teams can use AI to generate
    them. GenAI can produce structured documentation listing identified threats, affected components, and recommended mitigations.
    This not only saves time but also creates consistent outputs. Every project gets a thorough threat model write-up, even when security
    experts are stretched thin.
  • Enhanced Consistency and Coverage: Traditional threat modeling relies heavily on individual expertise, so some threats might be
    missed or assessed inconsistently. AI, on the other hand, can apply the same logic and extensive training data to every analysis. This
    consistency means fewer gaps in threat coverage. GenAI can incorporate established models like STRIDE (which covers Spoofing,
    Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege) to ensure that each category of threat
    is considered for each part of the system.
  • Speed and Iteration: Because AI-driven threat analysis is fast, it can be repeated whenever the application changes. This aligns
    well with agile and DevOps cycles. Developers can get near real-time feedback on security implications as they design or modify
    features. The result is a more iterative approach to threat modeling—catching issues early and often, rather than a one-time review.

Granular Analysis and Deterministic Outcomes: While GenAI’s capabilities are powerful, the key to achieving reliable and explainable threat modeling at scale lies in applying these capabilities systematically at multiple levels of abstraction – from individual methods and classes to modules and full applications, including their dependencies. This granular approach, though seemingly intensive, becomes manageable when integrated into the normal development workflow where only new code changes require analysis. By breaking down the threat modeling process into discrete, deterministic steps at each level, organizations can maintain clear provenance of their security decisions and ensure consistent, verifiable outcomes. This methodical approach allows teams to leverage GenAI’s capabilities while avoiding the pitfalls of treating it as a monolithic black box.
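
To make the “discrete, deterministic steps” idea concrete, here is a minimal sketch of a per-component STRIDE pass. The component fields, the prompt wording, and the call_llm placeholder are illustrative assumptions rather than a prescribed interface; in practice you would point call_llm at your own GenAI provider and post-process its answers into your documentation format.

    # Minimal sketch: run one STRIDE pass per component of an architecture model.
    # call_llm is a stand-in for whichever GenAI client you use.
    STRIDE = ["Spoofing", "Tampering", "Repudiation", "Information Disclosure",
              "Denial of Service", "Elevation of Privilege"]

    def call_llm(prompt: str) -> str:
        """Placeholder: swap in a call to your GenAI provider of choice."""
        return "(model output would appear here)"

    def threats_for_component(component: dict) -> dict:
        """Ask for threats against one component, one STRIDE category at a time."""
        findings = {}
        for category in STRIDE:
            prompt = (
                f"Component: {component['name']} ({component['description']}).\n"
                f"Data flows: {', '.join(component['flows'])}.\n"
                f"List plausible {category} threats and a mitigation for each."
            )
            findings[category] = call_llm(prompt)
        return findings

    architecture = [
        {"name": "payment-api", "description": "internet-facing REST service",
         "flows": ["browser -> payment-api", "payment-api -> orders-db"]},
    ]

    # One deterministic, repeatable pass per component keeps provenance clear.
    threat_model = {c["name"]: threats_for_component(c) for c in architecture}
    print(threat_model["payment-api"]["Spoofing"])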

By leveraging GenAI in these ways, organizations address the key challenge of traditional threat modeling: scalability. AI doesn’t replace
the need for human oversight, but it augments security architects and developers by handling the heavy lifting. The outcome is a more
scalable threat modeling process that keeps pace with rapid development, ensuring that even as applications grow and change, security
risks are continuously identified and documented.

Bridging Security and Business with AI

One of the long-standing challenges in security is communication: translating technical security findings into language that developers
and business stakeholders understand. GenAI can act as a bridge between these groups by tailoring the message to the audience:

  • For Developers: AI can provide security feedback in developer-friendly terms. For instance, if a threat model identifies a potential
    SQL injection risk, an AI assistant could explain the issue in the context of the developer’s code and even suggest a code fix, such
    as using parameterized queries (a brief sketch follows this list). This turns abstract security requirements into concrete guidance.
    Developers receive actionable insights rather than just a list of vulnerabilities.
  • For Security Teams: Security professionals can use AI to summarize and clarify complex technical details. When dealing with
    intricate systems or new technologies, an AI model trained on vast security knowledge can help generate clearer explanations or
    diagrams. This ensures the security team itself has a consistent understanding of the threats and can double-check the AI’s findings
    efficiently. AI can also help by aggregating data from multiple tools (like code scanners, threat intel feeds, and compliance checkers)
    and presenting a unified view of risk that the team can work with.
  • For Business Stakeholders: GenAI excels at natural language generation, which means it can translate technical risk into business
    impact statements. An AI-driven report might take a vulnerability (e.g. an insecure data storage practice) and explain: “Impact to
    Business: Customer financial data could be exposed if this is exploited, potentially leading to regulatory fines and reputation
    damage.” By highlighting the potential financial, compliance, or operational impact, AI helps business leaders grasp why a technical issue matters. This bridges the gap between security considerations and business priorities, enabling better-informed decision
    making by executives and product owners. 
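
As an illustration of the kind of concrete fix an assistant might propose for the SQL injection case mentioned above, here is a minimal sketch using Python’s standard sqlite3 module; the table and the sample input are made up for the example.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE users (name TEXT)")
    cur.execute("INSERT INTO users VALUES ('alice')")

    user_input = "alice' OR '1'='1"

    # Vulnerable pattern: concatenating input lets an attacker rewrite the query.
    # cur.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

    # Parameterized query: the driver treats user_input strictly as data.
    cur.execute("SELECT * FROM users WHERE name = ?", (user_input,))
    print(cur.fetchall())  # [] - the injection attempt matches nothing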

By improving communication, AI-driven insights ensure that security is not just a technical discussion isolated within the IT department.
Instead, it becomes a shared conversation. Developers better understand why certain security measures are necessary in terms of the
bigger picture, and business leaders see how security risks translate to business risks. This alignment, facilitated by AI, means security
recommendations are more likely to be acted upon because everyone sees their value in a context they care about. In essence, GenAI
helps security teams tell a compelling story about risk that resonates across the organization, from the server room to the board room.

Automating Risk Assessment

Assessing risk in application security involves prioritizing which vulnerabilities or threats deserve the most attention, analyzing the nature
of those vulnerabilities, and understanding the context around them. GenAI can streamline and enhance this risk assessment process by
automating many of its components:

  • Risk Prioritization: Security teams are often overwhelmed by lengthy lists of vulnerabilities (from scans, pen tests, bug bounty
    reports, etc.). GenAI can act as an intelligent filter, quickly sorting and ranking these issues. By considering factors such as severity
    of the vulnerability, ease of exploitation, and the importance of the affected system, an AI model can assign priority levels to each
    finding. For example, an AI might determine that an SQL injection flaw in a customer login service is a top priority (high impact on a
    critical system), whereas a misconfiguration on an internal test server is lower priority. This automated triage helps focus human
    effort on the most pressing risks first.
  • Vulnerability Analysis: Beyond just reading off a CVE description, GenAI can deeply analyze vulnerabilities. It can ingest details
    about a specific flaw and cross-reference it with the application’s code or configuration to determine if the application is truly affected.
    For instance, if there’s a known vulnerability in a library, the AI can check if the application uses that part of the library and in what
    context. It can also summarize what the vulnerability allows an attacker to do. This contextual analysis helps security teams
    understand each finding – not just that it exists, but what it means for their particular application. Moreover, AI can suggest
    remediation steps by drawing on databases of fixes or past knowledge (e.g. recommending a version upgrade or a code patch).
  • Contextual Risk Assessment: GenAI can take into account the environment and usage context to refine risk evaluations. This
    means looking at where the vulnerability exists and who could exploit it. If a web application vulnerability is exposed to the internet
    and tied to sensitive data, the AI will rate its risk higher due to the greater exposure and impact. On the other hand, an identical
    vulnerability on a strictly internal tool might be rated lower. AI can also incorporate real-time threat intelligence – for example, if a
    particular exploit is actively being used by attackers in the wild, the AI can raise the risk level for any related findings in the organization’s systems. By automatically weaving together context (asset criticality, data sensitivity, threat actor interest, etc.), GenAI
    provides a more nuanced risk assessment than severity scores alone. 

Examples: To illustrate, imagine a security team receives a scan report with 500 findings. Instead of manually combing through it, they
feed it to a GenAI-powered risk assessment tool. The AI immediately highlights 10 issues as “critical,” complete with plain-language
explanations: “These issues involve the customer database and can be exploited over the internet. They could lead to large-scale data
leakage.” It might also note that among those, two vulnerabilities are in components that have known exploits available, suggesting they
pose immediate danger. For each of these top issues, the AI offers a brief mitigation plan (such as “Apply patch X to the payment
service” or “Implement input validation on the user profile form”). In this way, the security team quickly understands which threats to
tackle first and how, dramatically reducing the time from discovery to mitigation.
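
As a rough illustration of how the prioritization factors above (severity, exposure, asset criticality, active exploitation) can be combined, the sketch below scores findings with hand-picked weights. The field names and multipliers are assumptions made for the example; in practice the GenAI layer would supply or refine this context rather than apply a fixed formula.

    # Illustrative triage scoring: contextual multipliers applied on top of raw severity.
    def risk_score(finding: dict) -> float:
        score = finding["severity"]              # e.g. a CVSS base score, 0-10
        if finding.get("internet_facing"):
            score *= 1.5                         # wider exposure
        if finding.get("asset_criticality") == "high":
            score *= 1.5                         # customer data, payments, ...
        if finding.get("exploit_in_the_wild"):
            score *= 2.0                         # known active exploitation
        return score

    findings = [
        {"id": "F-101", "severity": 8.8, "internet_facing": True,
         "asset_criticality": "high", "exploit_in_the_wild": True},
        {"id": "F-317", "severity": 8.8, "internet_facing": False,
         "asset_criticality": "low", "exploit_in_the_wild": False},
    ]

    # Same raw severity, very different priority once context is applied.
    for f in sorted(findings, key=risk_score, reverse=True):
        print(f["id"], round(risk_score(f), 1))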

Structured Storage and Semantic Knowledge: A critical aspect of automated risk assessment is the systematic storage and organization of threat modeling artifacts. By leveraging Git repositories – either alongside the application code or in dedicated sidecar repositories – teams can maintain a complete historical record of their threat analysis. These artifacts should be structured according to well-defined JSON schemas, making them both machine-readable and human-interpretable. Furthermore, the use of Semantic Knowledge Graphs (or Semantic Threat Modeling graphs) provides a powerful framework for understanding the relationships between various security concerns, attack vectors, and system components. This semantic approach enables teams to trace the lineage of security decisions, understand the impact of changes, and maintain a clear chain of reasoning in their threat assessments.
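
To make the idea of schema-structured, Git-stored artifacts more tangible, here is a small sketch of what one such file could look like and how it would be written alongside the code; the field names and repository layout are assumptions for the example, not a standard schema.

    import json
    from pathlib import Path

    # Illustrative threat-model artifact tied to a specific code revision.
    artifact = {
        "schema_version": "1.0",
        "component": "payment-api",
        "commit": "abc1234",                 # links the analysis to a revision
        "threats": [
            {
                "id": "T-001",
                "category": "Information Disclosure",
                "description": "Card data readable if TLS is misconfigured",
                "mitigation": "Enforce TLS 1.2+ and HSTS at the edge",
                "status": "open",
            }
        ],
    }

    # Committed like any other file (in the app repo or a sidecar repo), so review,
    # history and provenance come from normal Git workflows.
    out = Path("threat-model/payment-api.json")
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(artifact, indent=2))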

By automating pieces of risk assessment, AI not only saves time but also helps ensure that critical details don’t slip through the cracks. It
provides a second pair of eyes — tireless and informed by vast data — to catch what humans might miss or to connect dots that aren’t
obvious at first glance. The result is a more proactive and informed AppSec program where potential security threats are understood and
addressed in the appropriate context before they can be exploited.

Future Vision

Looking ahead, the integration of AI into threat modeling and AppSec is poised to deepen. We can anticipate several advancements,
accompanied by new challenges and opportunities:

  • Advancements: Future AI-driven threat modeling tools will become even more sophisticated. We might see AI assistants that plug
    directly into development environments, giving real-time security feedback as code is written or architectures are sketched.
    Generative AI could be used to simulate attacker behavior against a proposed design, effectively performing an instant “what-if”
    analysis for threats. Additionally, as these AI models learn from more data (such as past breaches, emerging vulnerabilities, and
    security test results), their recommendations will become more precise. We can expect AI to handle more of the grunt work of threat
    modeling—perhaps automatically generating full threat models for every application build—and to keep them continuously updated
    as systems change. This could lead to the concept of continuous threat modeling, where security analysis is not a one-off project
    but an ongoing, AI-driven service in the development pipeline (a small pipeline sketch follows this list).
  • Challenges: Along with progress, there are hurdles to overcome. One major challenge is trust and validation. Security
    professionals will need ways to verify AI-generated threat models and recommendations, to ensure the AI isn’t overlooking something
    or producing false positives/negatives. There is also the issue of explainability: AI might flag a design as high-risk, but it must also
    explain its reasoning in a way humans understand, otherwise stakeholders may not trust the recommendation. Privacy and data security concerns will arise since using GenAI often means feeding sensitive system information into AI models – organizations will
    demand solutions that keep this data secure (for example, on-premises models or robust encryption). Another challenge is that
    attackers could exploit AI tools (through prompt injection or feeding false data) to manipulate outcomes, so the AI itself must be
    designed securely. Finally, the security community will need to address the skill shift: as mundane tasks become automated,
    security professionals will need to focus on higher-level analysis and oversight of AI, which may require new training and mindsets. 
  • Opportunities: The fusion of AI with threat modeling opens up exciting possibilities. It can democratize security practices, enabling
    even small companies or teams with limited security staff to perform advanced threat analysis using AI helpers. This empowers
    developers to take on more security responsibility with confidence, because an AI co-pilot is guiding them. There’s an opportunity for
    AI to help unify various aspects of AppSec—requirements, threat modeling, testing, incident response—into a cohesive loop where
    lessons learned in one area (say, a penetration test) feed back into others (like updating the threat model via AI suggestions). Also,
    as AI-driven tools become commonplace, we might see standardized frameworks and industry benchmarks for AI-assisted threat
    modeling (much like OWASP standards), which will help ensure quality and consistency across solutions. In the broader sense, AI
    can help organizations move towards a predictive security posture, where potential attacks are anticipated and mitigated before
    they occur. It shifts AppSec from a reactive stance to a proactive, intelligence-driven discipline.
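
One way to approximate the “continuous threat modeling” idea from the Advancements bullet is to re-analyze only what changed in each commit. The sketch below is an assumption-laden illustration: it shells out to plain git for the changed paths and reuses a hypothetical analyze_component helper (for instance, the STRIDE sketch earlier) rather than any specific product.

    import subprocess

    def changed_files(base: str = "HEAD~1", head: str = "HEAD") -> list[str]:
        """Return paths changed between two commits (assumes a Git repo with history)."""
        out = subprocess.run(
            ["git", "diff", "--name-only", base, head],
            capture_output=True, text=True, check=True,
        )
        return [line for line in out.stdout.splitlines() if line]

    def analyze_component(path: str) -> str:
        """Placeholder for a GenAI-backed analysis of one changed file or component."""
        return f"threat notes for {path} (model output would go here)"

    # Run on every push in CI, so the threat model is refreshed incrementally
    # instead of being rebuilt as a one-off project.
    for path in changed_files():
        if path.endswith(".py"):             # illustrative filter for analysable code
            print(analyze_component(path))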

In summary, the future of AI-driven threat modeling is promising. GenAI will continue to evolve as a force multiplier for security teams —
accelerating analyses, bridging knowledge gaps, and keeping pace with the ever-changing threat landscape. While there are important
challenges to address in terms of trust and security of the AI itself, the opportunities to strengthen application security are immense.
Organizations that harness these AI advancements strategically will be better equipped to design resilient systems and protect their
assets in the face of emerging threats. The collaboration between human expertise and artificial intelligence in threat modeling may well
define the next era of proactive and scalable application security.

New Training Alert!
AI Whiteboard Hacking aka Hands-on Threat Modeling Training

Calling all AI Engineers, Software Developers, Solution Architects, Security Professionals, and Security Architects! Get ready to elevate your skills and master the art of designing secure AI systems in our latest, cutting-edge training.

This hands-on course dives deep into the DICE methodology (Diagramming, Identification of threats, Countermeasures, and Evaluation), giving you the tools you need to tackle AI-specific threats—like prompt injections and data poisoning—head-on. You’ll develop real-world countermeasures, learn to integrate security testing into your AI workflows, and gain insights into staying ahead of the curve in AI security.

But it doesn’t stop there! The grand finale will put your skills to the test in a high-energy wargame, where red and blue teams face off to defend and attack a rogue AI research assistant. It’s a thrilling way to turn theory into action as you perform threat modeling under pressure.

“After years evaluating security trainings at Black Hat, including Toreon’s Whiteboard Hacking Sessions, I can say this AI Threat Modeling course stands out. The hands-on approach and flow are exceptional – it’s a must-attend.”

Daniel Cuthbert, Global Head of Cyber Security Research, Black Hat Review Board Member

CURATED CONTENT

Handpicked for you

Toreon Blog: Making Threat Modeling Accessible: Top 10 Tools and Resources for Practitioners

At Toreon, we’re excited to bring you something special from our Threat Modeling Insider (TMI) newsletter! Our CTO and editor, Sebastien Deleersnyder, has curated the Top 10 Most Impactful Threat Modeling Tips & Tricks based on what’s resonated most with our readers over the years.

Threat modeling can feel overwhelming, especially when you’re unsure where to start. That’s why TMI was created: to help security professionals like you transform threat modeling from a daunting task into an integrated part of your development process.

This article brings together our best tips and tricks—the ones that can make a real difference in your threat modeling efforts.

New OWASP Agentic AI - Threats and Mitigations Guide

The OWASP Top 10 for Large Language Model Applications project has launched a comprehensive guide addressing security threats and mitigation strategies for Agentic AI systems, providing crucial insights for developers, security professionals, and engineers.

The first deliverable from the Agentic Security Initiative provides a threat-model-based reference for understanding emerging AI security risks, offering detailed threat modeling approaches, real-world threat models, and a structured Agentic Threat Taxonomy, all developed with input from distinguished experts from leading tech and research institutions.

AI Threat Map

The AI Threat Mind Map is a comprehensive resource designed to help users understand and navigate the complex landscape of AI security threats. It offers a structured framework for identifying, analyzing, and mitigating potential risks associated with artificial intelligence systems. By utilizing this mind map, individuals and organizations can gain valuable insights into the various facets of AI security, enabling them to develop more robust and secure AI applications.

TIPS & TRICKS

Drawing DFDs

During our threat modeling training, we hear a lot of confusion around the representation of trust boundaries, more specifically the curvature and where the convex side should be pointing. While the consensus is that the convex side points to the less trusted entities and the concave side to the more trusted entities, this doesn’t always make sense. In a “zero-trust” architecture, nothing is trusted. Also, when data travels across a network, we always advise adding a trust boundary, even if the two communicating entities are equally trusted.

The short and sweet answer is: “Draw your trust boundaries as straight, dotted lines”, put yourself on one side imagining the threat is on the other side, and then switch.

Our trainings & events for 2025

Book a seat in our upcoming trainings & events

Threat Modeling Practitioner training, hybrid online, hosted by DPI

Cohort starting on 17 March 2025

Advanced Whiteboard Hacking a.k.a. Hands-on Threat Modeling, in-person, hosted by NorthSec, Montreal

10-11 May 2025

Hands-on Threat Modeling AI (NEW TRAINING), in-person, hosted by OWASP Global AppSec, Barcelona

27-28 May 2025

Advanced Whiteboard Hacking a.k.a. Hands-on Threat Modeling, in-person, hosted by Black Hat USA, Las Vegas 

2-5 August 2025

Threat Modeling Practitioner training, hybrid online, hosted by DPI

Cohort starting on 18 August 2025

Agile Whiteboard Hacking a.k.a. Hands-on Threat Modeling, in-person, OWASP Global AppSec, Washington DC

4-5 November 2025

Threat Modeling Insider Newsletter

Delivering the latest Threat Modeling articles and tips straight to your mailbox.
