As an acronym, DICE can be used to help break the threat modeling process into four steps:
- D is for Description of the context (What are we building?)
- I is for Identification of threats (What can go wrong?)
- C is for Countermeasure definition (How do we defend against the threats we identified?)
- E is for Evaluation (Did we do a good enough job?)
Let’s start rolling the DICE and explore each step in detail:
Step 1: Description of the context
In this initial step, the goal is to describe how our system works, how its components interact, and what we’re trying to avoid. By clearly documenting this, you lay the groundwork for identifying relevant threats.
Doomsday scenarios
When running threat modeling sessions, one of the first things we do for our clients is to establish the so-called “Doomsday scenarios.” Talking to the Product Owner, the person responsible for resolving issues with the product, we paint the picture of them getting a phone call at 9 in the evening.
What was said in that phone call that still had them lying awake until 3 in the morning?
The answers that spring to mind are your doomsday scenarios. Once you have those established, it becomes much easier to do effective threat analysis. When done this way, threat modeling helps defend against what you value the most.
For example, as a health insurance provider, you can deal with a few hours of downtime, but leaking medical records with patient data would be nothing short of a GDPR (and PR) nightmare.
When reviewing threat models produced by competitors or development teams, we are often surprised to find that doomsday scenarios were never established.
The cause of this is a common misconception around the widely accepted “Four Question Framework of Threat Modeling” by Adam Shostack.
While we cannot deny it is the founding inspiration of Toreon’s DICE framework, people tend to shortcut the “What are we building?” question to merely drawing diagrams. Diagrams are indeed part of most threat models, but they are not the only way to answer that question.
Diagrams
Data flow diagrams are an intuitive format for this, but any kind of drawing (e.g., an attack tree, etc.) can work. A common practice in this phase is to indicate so-called “trust boundaries” where important data travels between components in differing trust realms.
Note: We’re seeing more and more teams use Large Language Models (LLMs) to help brainstorm what an attacker might do; for those tools, a textual description of the system can be enough, without needing a diagram.
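To make the idea of a textual system description concrete, here is a minimal sketch of one possible format: each data flow names its source, destination, the data carried, and whether it crosses a trust boundary. The system and component names are purely illustrative, not taken from any real threat model.

```python
# A minimal textual alternative to a data flow diagram. Every name below
# (Browser, Web app, Payment provider, ...) is a hypothetical example.
flows = [
    {"from": "Browser", "to": "Web app", "data": "order details",
     "crosses_trust_boundary": True},   # internet -> internal network
    {"from": "Web app", "to": "Database", "data": "order records",
     "crosses_trust_boundary": False},  # both inside the internal network
    {"from": "Web app", "to": "Payment provider", "data": "card token",
     "crosses_trust_boundary": True},   # internal network -> third party
]

# Flows that cross a trust boundary deserve the most scrutiny during analysis.
to_review = [f for f in flows if f["crosses_trust_boundary"]]
for f in to_review:
    print(f'{f["from"]} -> {f["to"]}: {f["data"]}')
```

A listing like this captures the same trust-boundary information a diagram would, in a form that is easy to diff, review, or feed to a brainstorming tool.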
Step 2: Identification of threats to the system
With a clear description of what you’re building in place, you can now identify potential threats relevant to your application’s scenario and context. While many threat modeling approaches exist, the “STRIDE” methodology works well as a threat analysis instrument.
Microsoft developed it to educate their developers on how to think about information security threats. Of course, other threat modeling methods exist. Still, over the years of teaching our threat modeling trainings, we have learned that STRIDE works particularly well for people who are not security professionals (or not even necessarily IT professionals…).
STRIDE helps identify intentional threats to a system. The paragraphs below outline each of its six categories, but keep in mind that these are not mutually exclusive.
The most important outcome of this step is a list of potential threats. The exact category that brought a threat to your attention is not that important (or, as we often say in our Threat Modeling Practitioner training: “If you spend more than 30 seconds doubting between two categories, just flip a coin”).
Spoofing: Gaining access using a false identity
This threat materializes when an attacker can pretend to be someone they are not. The simplest example is an attacker guessing someone’s password, or reading it from the database because it is stored in readable form.
However, we need to think beyond just passwords. If access requires the user’s IP address to be that of a partner company, would a “visitor” on that partner’s guest Wi-Fi appear with the same IP address from our viewpoint?
Note: This category is easy to confuse with “Elevation of Privilege,” and there is indeed some overlap between the two. The key thing to remember is that spoofing is a violation of authentication, whereas elevation of privilege is a violation of authorization.
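The classic countermeasure to the “password stored in a readable way” variant of spoofing is to store only a salted, slow hash. Here is a minimal sketch using Python’s standard library; the iteration count and function names are illustrative choices, not a prescription.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) so the password is never stored in readable form."""
    salt = os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
```

With this in place, an attacker who reads the database gets salts and digests, not passwords, which blunts the spoofing threat described above.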
Tampering: Modifying data as it flows through the system
When an attacker can modify data to their own benefit (or to the detriment of a victim), a tampering issue presents itself. Broadly speaking, a large part of this category can be covered by thinking about injection attacks.
For example, think of reference data that should not be altered by the client, like the price of a pizza in an online ordering tool. If it is shown to the user in a read-only field of the order form, the browser will indeed display the price as greyed out.
However, if you know a thing or two about HTML, it’s quite trivial to change its value.
Fancy a one-cent pizza, anyone?
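The defense against this kind of tampering is to treat the client-submitted price as untrusted and look up the authoritative value server-side. A minimal sketch, with a hypothetical menu and handler name:

```python
# Hypothetical server-side order handler. The authoritative prices live in
# our own catalogue; whatever value the browser submits is simply ignored.
MENU_PRICES = {"margherita": 9.50, "quattro formaggi": 12.00}

def place_order(item: str, client_submitted_price: float) -> float:
    """Return the amount actually charged, ignoring the client's price."""
    try:
        price = MENU_PRICES[item]  # never trust reference data echoed back
    except KeyError:
        raise ValueError(f"unknown item: {item}")
    return price

# Even if an attacker edits the "read-only" field to 0.01,
# they are still charged the real price.
```

The design choice here is that the read-only field is purely cosmetic; the server is the only source of truth for reference data.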
Repudiation: Being able to do something without anyone being able to prove it
The word “repudiation” is not well known among non-native speakers, which is why some of our students tend to call it “The Bart Simpson” or “The Shaggy” (wasn’t me…). This threat mostly stems from a lack of logging and audit trails, which makes forensic analysis extremely difficult.
Note: When a spoofing threat materializes, we could see it as a repudiation threat as well: the attacker can successfully shift the blame for their own evil actions to someone else. However, more detailed audit trails won’t solve the repudiation issue if authentication is still broken.
This is a good example of how STRIDE categories can overlap.
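The basic countermeasure to repudiation is a structured, timestamped audit trail of security-relevant actions. A minimal sketch, where the actor, action, and target fields are an illustrative schema rather than a standard:

```python
import datetime
import io
import json

def audit(log_file, actor: str, action: str, target: str) -> None:
    """Append one structured, timestamped entry per security-relevant action."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "target": target,
    }
    log_file.write(json.dumps(entry) + "\n")  # one JSON object per line

# In production this would be an append-only file or log service;
# a StringIO stands in for it here.
log = io.StringIO()
audit(log, "alice", "export", "patient-records")
```

In a real system you would also protect the log itself against tampering (ship it to a separate, append-only store), otherwise the audit trail inherits the same trust problems it is meant to solve.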
Information disclosure: Seeing something you aren’t supposed to see
When confidential or potentially damaging data is exposed to unauthorized actors, you have an “Information disclosure” threat on your hands. The nature of those unauthorized actors can differ, so aim to think about this broadly.
It really ties back into the doomsday scenarios we discussed earlier. Whether your customer records are leaked to a competitor or the recipe of your “Special Sauce” is suddenly on the dark web, there are things that you don’t want your systems to spill to an attacker.
Denial of service: Crashing or reducing the availability of the system
This threat needs little introduction. If either endpoint or the communication channel itself is blocked from transmitting or receiving data, that’s a problem you should take a closer look at.
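One common mitigation against denial of service is rate limiting. Here is a minimal token-bucket sketch; the rate and capacity values are illustrative, and a production limiter would of course live in front of the service, not inside it:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allow a burst of `capacity`
    requests, then refill at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to the time elapsed, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=3)
results = [bucket.allow() for _ in range(5)]  # a burst of 5 immediate requests
```

With these settings, the first three requests of a burst succeed and the rest are refused until tokens refill, which keeps a flood of requests from exhausting the system.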
Elevation of privilege: Doing something you aren’t supposed to do
As you probably picked up from the headings, “Elevation of privilege” is closely related to “Information disclosure”; it’s typically a cascade where one leads to the other. As said before, the exact category that led you to record a threat is unimportant.
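To close the loop on the spoofing note earlier: authentication established *who* the caller is, while the guard against elevation of privilege decides *what* they may do. A minimal sketch of such an authorization check, with purely illustrative roles and actions:

```python
# Hypothetical role-to-permission mapping; names are illustrative only.
PERMISSIONS = {
    "viewer": {"read"},
    "admin": {"read", "write", "delete"},
}

def authorize(role: str, action: str) -> None:
    """Raise if the (already authenticated) caller's role lacks the action."""
    if action not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not {action!r}")

authorize("admin", "delete")     # permitted
# authorize("viewer", "delete")  # would raise PermissionError
```

Enforcing this check on the server for every privileged action, rather than merely hiding buttons in the UI, is what prevents a viewer from quietly acting as an admin.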