
Threat Modeling Insider Newsletter

31st Edition – January 2024

Welcome!

We’re back once again with another packed edition of our Threat Modeling Insider! This month’s edition features a guest article by Jonathan Marcil, Application Security Specialist, on learning threats in a post-ChatGPT world. Our curated content features a recording of the Open Security Summit on AI-Driven Threat Modeling with STRIDE GPT, and our tips & tricks section covers NIST’s report on Trustworthy and Responsible AI.

But that’s not all of course, let’s take a look at what else we have in store for this month’s edition:


In this edition

Curated content
AI-Driven Threat Modelling with STRIDE GPT

Tips & tricks
NIST Trustworthy and Responsible AI

Training update
An update on our upcoming training sessions.

GUEST ARTICLE

Learning threats in a post-ChatGPT world

Jonathan Marcil, Application Security Specialist

A brief history of knowledge accessibility

It’s safe to say that ease of access to knowledge has progressed throughout history. Ancient times were limited to face-to-face conversation and drawings in a cave. Details were volatile and prone to reinterpretation and change as they were passed along. If this sounds similar to your organization in 2024, with the cave replaced by a whiteboard, maybe it’s time for some progress!

When books came along, they improved the retention and integrity of information. Their portable nature (try to put a cave drawing in your pocket, I dare you) enabled people to exchange them.

Ultimately, books were organized into libraries, quickly followed by indexing. Being able to look up information by category or even keyword pushed ease of access forward. Again, be careful if your organization in 2024 still doesn’t have a way to search or look up information.

Then came the Internet, which embodies the universal library of humanity. Accessibility is very high because information is no longer bound to the physical world, rendering it easy to find, copy and distribute.

Search engines replaced the library index, and contributed greatly to ease of access to global knowledge.

The latest and greatest iteration is arguably LLMs, such as ChatGPT, which distribute information using natural language. This strangely feels like we’re back to ancient times, where a single sage would answer your questions. If the industry stays as opaque as it is right now, we’ll lose a capability of the Middle Ages, where the author(s) of information could be identified most of the time, although some could argue this is just a matter of maturity.

Threat Modeling requires curated knowledge

Throughout history, we first learned the tribal knowledge of the threats surrounding us verbally, and later we read about it in books. Now we’re at a point where a system can simply list threats for you when you query it.

This ease of access means we can end up knowing a larger number of threats. It also means that the majority of known threats are irrelevant to your situation.

Some civilisations might never have heard of the threat of being stomped by an elephant. That didn’t put them at risk, since elephants simply didn’t exist in their environment.

Despite high ease of access, the sheer amount of information we now have to deal with makes getting to the relevant information a challenge. This is especially true when you’re not already familiar with the content and are still learning.

Knowledge base collections related to a certain goal (in our case, security and privacy) become invaluable indexes when doing threat modeling.

Learning a collection of threats

While many documents of varying relevance to threat elicitation exist, from the OWASP Top 10 to CAPEC, they lack what mnemonic tools such as STRIDE and LINDDUN offer: simplicity without compromising completeness and, more importantly, a focus on threat modeling.

STRIDE is around 25 years old and is simply a mnemonic that covers a broad range of security impacts by grouping threats into six categories: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service and Elevation of Privilege.

LINDDUN is relatively new and focuses on privacy, with seven threat types: Linking, Identifying, Non-repudiation, Detecting, Data Disclosure, Unawareness and Non-compliance. It is accompanied by methodologies to guide threat modeling, and it drills down on each category using threat trees.

These are among the foremost references when listing threats related to your system or use case.

But is threat elicitation really all there is to threat modeling? Absolutely not! You need to be able to contextualize threats and transpose them onto a system model to disseminate relevant information.

A proposal on how to learn threat elicitation

Let’s focus on the part of threat modeling that uses your brain’s ability to list and contextualize threats into a system. This will allow us to tackle something that can be difficult to learn.

Theory

When learning something new, your brain is prone to the primacy and recency effect biases, which essentially means you will recall the first and last things you see. Rest assured that AI can be no better, as you are both neural networks.

The primacy effect might apply at the meta level of learning how to threat model: if your first threat modeling exercise uses a mnemonic (e.g. STRIDE), you can be tempted to start every session with STRIDE forever, because that’s what you’ll recall.

And within STRIDE itself, the primacy and recency effects might apply to the items of the mnemonic: you will remember better, or refer more to, ST and DE than RI.

Even with perfect recollection, some people might be tempted to go through the STRIDE items in order. In a situation where time is constrained, this could mean they start with S and have less time left for E.

You could mitigate this by shuffling the order randomly or choosing a better-suited order based on probabilities. I’d go with ITSEDR if you ask me, as I see around me more impactful attacks that lead to Information Disclosure than failures to Repudiate an action. The most suitable order for you will depend on your organization, systems and threat actors.
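To make this concrete, here is a minimal Python sketch of both mitigations: shuffling the category order for each session, or adopting a fixed custom order such as the ITSEDR suggestion above. The orderings are illustrative; pick one that matches your own environment.

```python
import random

# The six STRIDE categories, in mnemonic order.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information Disclosure",
    "Denial of Service",
    "Elevation of Privilege",
]

# Option 1: counter primacy/recency bias by shuffling the order
# for each brainstorming session.
session_order = random.sample(STRIDE, k=len(STRIDE))
print("Today's order:", " -> ".join(session_order))

# Option 2: a fixed order weighted by what you observe most in your
# environment (here, the ITSEDR order suggested above).
ITSEDR = ["Information Disclosure", "Tampering", "Spoofing",
          "Elevation of Privilege", "Denial of Service", "Repudiation"]
```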

Observations

I had the chance to help out Robert Hurlbut during his workshop at ThreatModCon 2023, where I walked between teams to arrange chairs, whiteboards and markers.

People were given handouts that explained STRIDE in a single table for the threat brainstorming phase of the session.

I took pictures of the whiteboards used for brainstorming by 4 teams and made these observations:

  • Nobody listed STRIDE in strict order. 
  • One team got stuck at Spoofing and Tampering. 
  • The teams sticking closely to STRIDE seemed less fluid with their ideas. 
  • One team did not do any STRIDE mapping on the board. 
  • Due to time constraints, people had to end their brainstorm even if more ideas would have come up later.

This is by no means a proper sample for scientific results, but it shows anecdotally how people can end up thinking while brainstorming. I wish I had access to facilities to run A/B test groups.

This reminded me of the times I taught threat modeling to people, and how little I focused on STRIDE or on any specific threat collection at all.

I often see people who are learning threat modeling also learning what STRIDE means for the first time. For them, the act of threat modeling and threat elicitation go together. While there’s nothing wrong with that, I prefer more freedom, where threats come from personal experience, at the risk of their being less abundant or accurate than when taken from a list.

Proposal

While learning STRIDE is great for mnemonic purposes, finding other sources of risk patterns can be beneficial if they are relevant to your systems and organization. LINDDUN is definitely the go-to for privacy, but many other detailed threat and attack collections exist.

I would keep all those references as ways of learning about threats, but not something you would be basing your threat modeling contribution solely on.

Trying to follow lists too strictly might hinder your ability to wander around while brainstorming.

On the individual level, threat modeling itself requires a certain flow state that is best achieved by tapping into your inner self and your knowledge of your organization’s systems and situation. On a group level, this adds some randomness to the results, as all people are different. This is a great argument for diversity of function in a threat modeling exercise.

Your memory collection, your own model, is what differentiates you from other participants.

Simply tap into your strengths and interests; tap into your own recent experience:

  • What posts have you read lately that captured your attention? 
  • What insider knowledge do you have of things that went wrong in the past? 
  • What do you feel you understand the best and would teach others?

You could also tap into your weaknesses and blind spots; use your external resources:

  • What do you understand less well and would like clarified? 
  • What did you miss at the last security related meeting? 
  • Is there a security expert you have access to?

Remember that ignorance can lead to great findings, especially when you seek to fill it.

Why would I learn if AI could just replace me?

I believe that a lack of diversity in knowledge or experience is what LLMs can replace first. Your individuality matters, and even luck can differentiate you.

Looking back at this history, we see that discernment has always been needed. You have to accept information selectively, either because it comes from a panoply of sources or because you don’t trust a given source.

LLMs now do that work for you. They give their own selection of what they “think” is the information you need. Right now the technology is in its infancy, so I would advise using mnemonics or threat lists to cross-verify results.

I think the real dream here is an LLM or AI so good that you can trust its results fully. That is what would really differentiate it from prior methods of finding information, and it would ultimately give us a sizable increase in speed and ease of access to information.

When LLM systems reach that level of maturity, you won’t be able to compete by applying knowledge to a context. Your job will be to write the proper prompt.

While you can get better at writing LLM prompts, the same way you can get better at online searches or at looking up keywords in an index, I’m hoping these systems will actually get better at generating the best output from input of any quality. You can already see this happening with LLMs that simply ignore your typos and “know” what you mean.

Will that mean the end of human experts?

In a sense, if we get better results than humans can achieve, yes. That is the true obsolescence of humans in my vision of a world where AI “replaces” them. It is a great equalizer of knowledge, with the ultimate ease of access: everyone could have the same level of knowledge regardless of their experience, intelligence or attention.

Is there a way to differentiate yourself?

If we look at the current principles behind LLM systems, there is still code to keep some sense of uniqueness. This is achieved by including a random seed with every prompt. That’s why writing the same prompt twice doesn’t produce the same results; randomness is induced every time.

In an AI utopia (or dystopia) where prompts of any quality give outstanding results, the best experts will be the ones who get lucky with their seeding.
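As a hedged illustration of how this seeding surfaces in practice, the OpenAI chat API exposes an optional seed parameter; the sketch below sends the same prompt with two different seeds, so the completions can diverge. The model name is only an example, and the parameter is a best-effort reproducibility feature, not a guarantee.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

messages = [{"role": "user",
             "content": "Name one spoofing threat for a login form."}]

# Different seeds mean different sampling, so the answers can diverge.
for seed in (42, 43):
    response = client.chat.completions.create(
        model="gpt-4",        # illustrative model name
        messages=messages,
        temperature=1.0,      # keep sampling randomness on
        seed=seed,
    )
    print(f"seed={seed}: {response.choices[0].message.content}")
```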

Threat Modeling, AI and you

Discernment and critical thinking are really important right now, as we might be tempted to trust LLM results without verifying them. This is easy to miss when results appear confident and well written.

Mnemonic models and collections of threats or attacks are useful as part of the arsenal for verifying the completeness of any brainstorming process, including LLM-driven ones.

Be careful about your biases, especially when you are learning. Differentiate yourself through your own individuality, as opposed to a globally trained LLM that averages knowledge.

When prompting for threat modeling, make sure the model learns from more types of input than just your own query. Some LLMs can already be customized, or can parse external documents to augment their knowledge and context for your request. This will add to the differentiation of your contribution.
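One simple way to do this today, sketched below under the assumption that you use the OpenAI chat API, is to paste your own system description into the context of the request. The file name and prompt wording are hypothetical.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical document describing your organization's system.
org_context = open("architecture_notes.md").read()

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You are assisting a threat modeling session. "
                    "Use this system description as context:\n" + org_context},
        {"role": "user",
         "content": "Which LINDDUN privacy threats apply to this design?"},
    ],
)
print(response.choices[0].message.content)
```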

Cast your luck often when you consult an LLM: send the same prompt many times and compare the results. During threat modeling, one of those alternatives could end up being the most valuable.
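A minimal sketch of that advice, assuming the same OpenAI client as above: send an identical prompt several times, collect the candidate threats, and review the union. Your discernment still decides which alternatives are valuable.

```python
from openai import OpenAI

client = OpenAI()
prompt = ("List three tampering threats for a CI/CD pipeline, "
          "one per line, no commentary.")

candidates = set()
for _ in range(5):
    response = client.chat.completions.create(
        model="gpt-4",    # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # non-zero temperature keeps runs diverse
    )
    for line in response.choices[0].message.content.splitlines():
        if line.strip():
            candidates.add(line.strip())

# The union across runs is raw material for the threat model.
for threat in sorted(candidates):
    print("-", threat)
```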

CURATED CONTENT

Handpicked for you

Toreon Blog: Threat Modeling Playbook - Part 1: Get stakeholder buy-in

In a series of blogs, we unravel the complexities of executing a successful threat modeling strategy through our Threat Modeling Playbook. Part one features an overview of the resources needed in order to implement the playbook.

AI-Driven Threat Modelling with STRIDE GPT

During the Open Security Summit, Matthew Adams introduced STRIDE GPT, an AI-powered tool that leverages the latest OpenAI GPT models to generate comprehensive threat models and attack trees. You can watch the recording of this summit to gain insight into this topic.

OTM supported by threat-dragon starting from 2.1.3

Open Threat Modeling (OTM) has gained enhanced support, now backed by threat-dragon starting from version 2.1.3. This evolution promises improved capabilities and heightened efficiency in threat modeling processes. To delve deeper into the details, click the button below and stay updated on the latest advancements shaping the cybersecurity landscape.


TIPS & TRICKS

NIST Trustworthy and Responsible AI


NIST’s report on Trustworthy and Responsible AI simplifies adversarial machine learning (AML) concepts. It categorizes AML ideas, including types of machine learning and attack stages. The report also suggests ways to handle AML issues and highlights the remaining challenges in AI security. It aims to create a common language for AML and to assist non-experts, helping improve the protection of AI systems.

Save-the-date: ThreatModCon 2024

Upcoming trainings & events

Book a seat in our upcoming trainings & events

Agile Whiteboard Hacking a.k.a. Hands-on Threat Modeling, in-person, hosted by Black Hat Asia, Singapore

Next training dates:
16-17 April 2024

Advanced Whiteboard Hacking a.k.a. Hands-on Threat Modeling, in-person, hosted by BruCON Spring training, Belgium

Next training dates:
17-18 April 2024

Threat Modeling Practitioner training, hybrid online, hosted by DPI

Cohort starting on:
13 May 2024


Threat Modeling Insider Newsletter

Delivering the latest Threat Modeling articles and tips straight to your mailbox.
