An interview on threat modeling with ChatGPT

I was thinking about topics on which to write an interesting blog post. Over the course of the last few months, I noticed that our customers and students of the threat modeling training are always on the lookout for good sources of information. In an ideal world, these people would ask their questions online and get a good result in return, whether through an Internet search or by asking those questions on the OWASP Slack channel for threat modeling.

Lately, however, there has been a lot of talk about ChatGPT as a personal assistant.

So, I decided to give ChatGPT a whirl and share my experiences and insights with you. As always, my remarks are deeply personal and might not align with the mainstream views in the threat modeling field. That’s what makes it interesting, right? This is a lengthy read, so grab your favorite beverage and dive in!

Question: Tell me about threat modeling

ChatGPT: Threat modeling is a process for identifying and assessing potential threats to the security of a system or product. It is typically used in the field of cybersecurity but can also be applied to other areas where security is a concern. Threat modeling aims to identify a system’s potential vulnerabilities and risks and devise strategies for mitigating them. This can involve identifying the assets that need to be protected, the potential threats to those assets, and the vulnerabilities that could be exploited by those threats. Once these have been identified, the threat model can be used to develop and implement appropriate countermeasures to protect the system or product.

Question: What is the ultimate goal of threat modeling?

ChatGPT: The ultimate goal of threat modeling is to help ensure the security and integrity of a system or product. By identifying potential vulnerabilities and risks, threat modeling enables organizations to mitigate or eliminate those risks and develop strategies for responding to potential security threats. This can help to prevent security breaches and other forms of attacks and to protect the valuable assets that a system or product may contain. Ultimately, threat modeling aims to help organizations operate more securely and confidently, knowing that they have taken steps to protect themselves and their assets from potential threats.

Steven: Alright, that’s not a bad start at all. It would have been better if the notion of business risk had also been introduced, but since I did not specify threat modeling in a commercial organization, that could be why it is not mentioned.

Question: Can you create a threat model for me?

ChatGPT: Sure, I can provide an example of how to create a threat model. To create a threat model, you would first need to identify the assets that need to be protected. This could include data, systems, networks, and other resources that are critical to the organization. Next, you would need to identify the potential threats to those assets. This could include external threats such as hackers or malware or internal threats such as malicious insiders or accidental data breaches. Once you have identified the threats, you would need to assess the vulnerabilities that could be exploited by those threats. This could include weaknesses in security protocols, inadequate access controls, or other vulnerabilities that could be exploited.

Once you have identified the assets, threats, and vulnerabilities, you can use this information to develop strategies for mitigating or eliminating the risks. This could involve implementing security controls such as firewalls and intrusion detection systems, implementing security policies and procedures, and training employees on security best practices. By taking these steps, you can help to protect your organization and its assets from potential threats.

Steven: This is somewhat disappointing. I do not consider this to be an example but rather a high-level overview of the threat modeling process. Of course, I did not give any context, so ChatGPT had no idea what exactly I wanted to threat model. The described process also does not incorporate any reflection: did we do a good enough job, or do we need another iteration? Let’s give ChatGPT some more context.

Question: What would a threat model look like for a website where users log in to see their medical records?

ChatGPT: A threat model for a website where users log in to see their medical records might include the following elements:

  • Assets: In this scenario, the assets that need to be protected could include the website itself, the database containing the medical records, and the confidential information contained in those records.
  • Threats: Potential threats to these assets could include external attackers attempting to gain unauthorized access to the website or database or internal threats such as employees accessing the records without authorization.
  • Vulnerabilities: The website and database could be vulnerable to attacks such as SQL injection, where an attacker inserts malicious code into the database to access the records. Inadequate access controls or weak passwords could also make it easier for attackers to gain unauthorized access.

To mitigate these risks, the threat model might include strategies such as implementing strong authentication protocols, regularly updating and patching the website and database, and implementing security controls such as firewalls and intrusion detection systems. Regular employee training on best security practices could also help prevent internal threats. By implementing these measures, the organization could help protect the confidentiality and integrity of medical records.

Steven: This is, of course, not a comprehensive threat model, but ChatGPT did distinguish several assets and mentioned both outside and insider threats. The insider threat seems to be missing from the vulnerabilities, although the mitigations do mention elements that would help counter it. If I put myself in the shoes of a threat modeler without experience, this is not a bad starting point.
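
If you were to write this first iteration down, it could look something like the minimal sketch below. This is my own illustration, not something ChatGPT produced, and every asset, vulnerability, and mitigation name is purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    description: str            # what could go wrong
    affected_asset: str         # which asset it targets
    vulnerabilities: list[str]  # weaknesses that make the threat possible
    mitigations: list[str]      # countermeasures we plan or already have

# Illustrative first-iteration threat model for the medical records website.
threat_model = [
    Threat(
        description="External attacker gains unauthorized access to records",
        affected_asset="Medical records database",
        vulnerabilities=["SQL injection in record lookup", "Weak passwords"],
        mitigations=["Parameterized queries", "Strong authentication (e.g. MFA)"],
    ),
    Threat(
        description="Employee reads records without authorization",
        affected_asset="Confidential patient information",
        vulnerabilities=["Coarse-grained access controls", "No audit logging"],
        mitigations=["Role-based access control", "Access logging and review"],
    ),
]

for threat in threat_model:
    print(f"{threat.affected_asset}: {threat.description}")
```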

Since the answer is entirely threat model methodology agnostic, I feel that this could provide several useful tips for the first iteration of a threat model. I think this is more useful than looking at an existing public threat model since the process is also explained. I’m impressed that an AI system could cook this up. Let’s find out if we can go deeper into threat modeling with ChatGPT. Would ChatGPT know who is needed to create or update a threat model?

Question: Who are the key players in threat modeling?

ChatGPT: The key players in threat modeling typically include security experts or analysts who are responsible for identifying and assessing potential threats and developing strategies for mitigating those threats. In some cases, other stakeholders such as business leaders, product managers, or developers may also be involved in the threat modeling process, as they may have valuable insights into the assets that need to be protected, the potential threats to those assets, and the vulnerabilities that could be exploited. Ultimately, the goal of threat modeling is to involve a diverse group of stakeholders in the process, as this can help to identify a broader range of threats and vulnerabilities and develop more comprehensive and effective strategies for addressing those threats.

Steven: Again, this is not a bad answer. I especially like the last sentence describing the goal of threat modeling, since it highlights the importance of the different stakeholders. Would ChatGPT also know anything about implementing the threat modeling process?

Question: How do you integrate threat modeling in an agile development process?

ChatGPT: Threat modeling can be integrated into an agile development process in a number of ways. One approach is to include threat modeling as a regular part of the development process, incorporating it into each iteration or sprint. This could involve identifying potential threats and vulnerabilities at the beginning of the sprint and implementing strategies for addressing those threats throughout the development process. By integrating threat modeling into the agile process, organizations can help to ensure that security is considered at every stage of development and that appropriate countermeasures are implemented to protect the system or product.

Another approach is using threat modeling techniques such as attack trees or STRIDE to identify potential threats and vulnerabilities early in development. This can help organizations prioritize the most significant threats and vulnerabilities and focus their efforts on addressing those risks. By using agile methods to iteratively refine and improve the threat model, organizations can ensure that their security strategies are aligned with the evolving needs of the product or system.

Overall, the key to integrating threat modeling into an agile development process is to make it a regular and ongoing part of the process and to involve all relevant stakeholders in the threat modeling process. By doing so, organizations can help ensure that security is considered and addressed throughout the development process and that the final product or system is as secure as possible.

Steven: There are a number of things going on here. The first paragraph gives a high-level description of how to implement threat modeling in Scrum, but it does not mention other agile methodologies such as Kanban. The second paragraph describes an iterative and incremental way of working, of which I’m a fan as well. The third paragraph brings up the stakeholders again.
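
To make that first approach a little more tangible, here is a minimal sketch of what threat modeling activities folded into a single sprint could look like. The activities are my own illustration and not prescriptive:

```python
# Illustrative only: lightweight threat modeling activities folded into a
# sprint, along the lines of ChatGPT's first suggestion.
sprint_threat_modeling = {
    "sprint planning": [
        "Check whether new stories change the attack surface (new endpoints, data, roles)",
    ],
    "during the sprint": [
        "Update the diagram for the components that changed",
        "Run STRIDE on the changed parts only and log new threats in the backlog",
    ],
    "sprint review": [
        "Verify that the mitigations planned last sprint were actually implemented",
        "Record accepted risks and carry open threats over to the next iteration",
    ],
}

for phase, activities in sprint_threat_modeling.items():
    print(phase)
    for activity in activities:
        print(f"  - {activity}")
```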

This is not an exact description of a threat modeling process in an agile environment but a high-level explanation of the how and why. Since ChatGPT brings up STRIDE and attack trees, let’s drill down into that area.

Question: What is the difference between STRIDE and attack trees?

ChatGPT: STRIDE and attack trees are both techniques that are used in threat modeling to identify potential threats and vulnerabilities. STRIDE is an acronym that stands for six different types of threats: Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. By identifying which of these threats are relevant to a particular system or product, organizations can develop strategies for mitigating or eliminating those threats.

Attack trees, on the other hand, are graphical representations of the various ways in which a system or product could be attacked. They are typically used to identify potential entry points or vulnerabilities that an attacker could exploit and assess the likelihood and impact of those attacks. By analyzing the attack tree, organizations can identify the most significant threats and vulnerabilities and develop strategies for addressing those risks.

Overall, the key difference between STRIDE and attack trees is that STRIDE is a framework for identifying different types of threats, while attack trees are visual representations of the ways in which a system could be attacked. Both techniques can be useful in the threat modeling process and can be used together to provide a comprehensive view of the potential threats and vulnerabilities facing a system or product.

Steven: I don’t fully agree with the definition of STRIDE. It does not capture the fact that this technique is used to help you ask security questions (not limited to the categories of STRIDE), but I did find the last sentence intriguing and decided to dig a little deeper.
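
Before doing so, here is a minimal sketch of what an attack tree can look like when you actually write one down as a nested structure, since ChatGPT’s description stays fairly abstract. The goal and sub-goals are purely illustrative:

```python
# A tiny attack tree written down as a nested structure. The root is the
# attacker's goal; children are alternative (OR) or jointly required (AND)
# sub-goals. All goals below are purely illustrative.
attack_tree = {
    "goal": "Read another user's medical record",
    "type": "OR",  # reaching the goal via any child path is enough
    "children": [
        {"goal": "Steal a valid session",
         "type": "OR",
         "children": [
             {"goal": "Phish the user's credentials"},
             {"goal": "Hijack an unexpired session token"},
         ]},
        {"goal": "Exploit the application directly",
         "type": "AND",  # both steps are needed on this path
         "children": [
             {"goal": "Find an injectable query parameter"},
             {"goal": "Bypass input validation on the lookup endpoint"},
         ]},
    ],
}

def print_tree(node: dict, depth: int = 0) -> None:
    """Print the tree with indentation so every path to the goal is visible."""
    label = f" [{node['type']}]" if "children" in node else ""
    print("  " * depth + node["goal"] + label)
    for child in node.get("children", []):
        print_tree(child, depth + 1)

print_tree(attack_tree)
```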

Question: In what situations would you use what technique?

ChatGPT: The choice of which threat modeling technique to use in a particular situation will depend on a number of factors, including the type of system or product being assessed, the goals of the threat modeling process, and the expertise of the individuals involved. In general, STRIDE is a useful technique for identifying the different types of threats that a system or product may face and for developing strategies for addressing those threats. Attack trees, on the other hand, are particularly useful for visualizing the potential ways in which a system could be attacked and for identifying the vulnerabilities that could be exploited by those attacks.

In many cases, organizations may choose to use both STRIDE and attack trees in the threat modeling process, as these techniques can complement each other and provide a more comprehensive view of the potential risks facing a system or product. For example, an organization might use STRIDE to identify the different types of threats to a website and then use attack trees to visualize the potential ways in which those threats could be carried out and to assess the likelihood and impact of those attacks. By using both techniques together, organizations can develop more effective and comprehensive strategies for addressing the risks facing their systems and products.

Steven: Here, ChatGPT is clearer about combining both techniques. I like the idea, but I’ve never actually done this in a real threat model. One trick with ChatGPT is to ask for more details, just like you would in a ‘normal’ conversation. Let’s see what extra information that provides.

Question: Can you give a more concrete example?

ChatGPT: Sure, here is a more concrete example of how an organization might use STRIDE and attack trees in the threat modeling process:

Imagine that an organization is developing a new online shopping website. As part of the threat modeling process, they decided to use STRIDE to identify potential threats to the website. After reviewing the system, they identify the following threats:

  • Spoofing: An attacker could create a fake version of the website in order to trick users into entering their login credentials.
  • Tampering: An attacker could alter the website’s code or database in order to access sensitive information or disrupt the website’s functionality.
  • Repudiation: An attacker could use the website to make fraudulent purchases or conduct other illegal activities and then deny responsibility for those actions.
  • Information disclosure: An attacker could access the website’s database and extract sensitive information such as customer credit card numbers or addresses.
  • Denial of service: An attacker could flood the website with traffic in order to make it unavailable to legitimate users.
  • Elevation of privilege: An attacker could exploit vulnerabilities in the website’s code to gain unauthorized access to the website’s administrative functions.

Next, the organization decides to use attack trees to visualize the potential ways in which these threats could be carried out and to assess the likelihood and impact of those attacks. After analyzing the attack tree, they identify the following vulnerabilities:

  • Weak password policies: The website currently allows users to set weak passwords, which could make it easier for attackers to gain unauthorized access.
  • Inadequate access controls: The website’s code does not currently enforce strict access controls, which could allow unauthorized users to access sensitive information.
  • Unpatched software: The website is running on outdated software that has known vulnerabilities, which could be exploited by attackers.

Based on this analysis, the organization can develop strategies for addressing these vulnerabilities and mitigating the identified threats. For example, they could implement strong password policies, improve access controls, and regularly update and patch the website’s software. By taking these steps, they can help to protect the website and its users from potential attacks.

Steven: That was a long answer. In fact, I had to refresh because the first time, the answer was cut off in the middle of a sentence. The issues found and the accompanying mitigations are still relatively high-level, but that is to be expected without a lot of context. I think the answer would double in size if actual examples were used. Let’s figure out whether ChatGPT can incorporate more techniques into threat modeling if we fine-tune the questions a bit.
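
Before moving on, here is a minimal sketch, mine and not ChatGPT’s, of how the STRIDE findings and the attack tree vulnerabilities from this example could be tied together. The exact mapping and mitigations are my own illustration:

```python
# Illustrative only: linking the STRIDE findings from ChatGPT's example to the
# vulnerabilities it found via the attack tree, and to candidate mitigations.
stride_findings = {
    "Spoofing": ["Weak password policies"],
    "Tampering": ["Unpatched software"],
    "Information disclosure": ["Inadequate access controls", "Unpatched software"],
    "Elevation of privilege": ["Inadequate access controls"],
}

mitigations = {
    "Weak password policies": "Enforce strong passwords and add MFA",
    "Inadequate access controls": "Apply least-privilege, role-based access control",
    "Unpatched software": "Establish a regular patching cadence",
}

# For every STRIDE category, list the mitigations that address its root causes.
for category, vulns in stride_findings.items():
    fixes = sorted({mitigations[v] for v in vulns})
    print(f"{category}: {'; '.join(fixes)}")
```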

Question: Is ATT&CK an attack tree?

ChatGPT: No, ATT&CK (which stands for Adversarial Tactics, Techniques, and Common Knowledge) is not an attack tree. ATT&CK is a framework developed by the MITRE Corporation that provides a comprehensive, organized view of the tactics, techniques, and procedures (TTPs) that are commonly used by attackers. It is not a specific threat modeling technique but rather a collection of information and resources that can be used to support threat modeling and other security activities.

In contrast, an attack tree is a specific threat modeling technique that involves creating a graphical representation of the ways in which a system or product could be attacked. Attack trees are typically used to identify potential entry points or vulnerabilities that could be exploited by an attacker and to assess the likelihood and impact of those attacks. Although both ATT&CK and attack trees can be useful in the threat modeling process, they serve different purposes and provide different types of information.

Steven: A good answer to get us started. This was the set-up question; now for the real question…

Question: How can ATT&CK be used in a threat model?

ChatGPT: ATT&CK can be used in a threat model in several ways. First, it can provide a wealth of information and insights into the tactics, techniques, and procedures (TTPs) commonly used by attackers. By understanding these TTPs, organizations can develop more effective strategies for detecting and responding to potential attacks.

Second, ATT&CK can help organizations to prioritize their threat modeling efforts. Because ATT&CK is organized into a matrix of tactics, techniques, and platforms, it can provide a high-level view of the types of attacks that are most commonly used against a particular platform or system. This can help organizations to focus their threat modeling efforts on the most significant threats, and to develop strategies for addressing those threats.

Third, ATT&CK can be used as a reference during the threat modeling process. For example, an organization might use ATT&CK to identify specific TTPs that are relevant to their system or product, and then use those TTPs to develop more detailed threat models. By using ATT&CK in this way, organizations can take advantage of the wealth of information and insights contained in the framework, and can develop more effective and comprehensive threat models.

Steven: Again, not a bad answer. The last paragraph in particular describes the beginnings of ‘threat-informed defense’, a topic that has gotten a lot of attention over the past two years. Now, for the finale, can this AI do my work for me?
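
Before that, as a small illustration of the third use ChatGPT mentions, a threat model entry can simply reference the ATT&CK technique IDs that describe how a threat is typically carried out. The technique IDs in the sketch below are real ATT&CK entries, but mapping them onto the earlier shopping-site example is my own illustration:

```python
# Illustrative only: threat model entries annotated with ATT&CK technique IDs.
# The IDs are real ATT&CK techniques; the mapping to this example is my own.
threats_with_attack_refs = [
    {
        "threat": "Attacker tricks users into revealing their credentials",
        "stride": "Spoofing",
        "attack_techniques": ["T1566"],  # Phishing
    },
    {
        "threat": "Attacker floods the website to make it unavailable",
        "stride": "Denial of service",
        "attack_techniques": ["T1498"],  # Network Denial of Service
    },
    {
        "threat": "Attacker abuses a code flaw to gain admin rights",
        "stride": "Elevation of privilege",
        "attack_techniques": ["T1068"],  # Exploitation for Privilege Escalation
    },
]

for entry in threats_with_attack_refs:
    ids = ", ".join(entry["attack_techniques"])
    print(f"{entry['stride']}: {entry['threat']} (ATT&CK: {ids})")
```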

Question: Could you create a complete threat model?

ChatGPT: As a large language model trained by OpenAI, I am capable of providing general information and examples about threat modeling. However, I do not have the ability to create a complete threat model for a specific system or product.

Creating a complete threat model requires detailed knowledge of the assets, threats, and vulnerabilities associated with a specific system or product, as well as the expertise and experience to develop appropriate strategies for addressing those risks. Because I am a machine learning model, I do not have access to this type of information or expertise, and I am not capable of creating a complete threat model for a specific system or product.

My purpose is to assist with a wide range of tasks by providing accurate and helpful information based on the data that has been provided to me.

Steven: I guess I’m not out of a job yet! Maybe ChatGPT can do part of the work for me. To find out how that could be done, I need to ask some more deliberate questions.

Question: What input do you require to create a threat model for me?

ChatGPT: As a machine learning model, I do not have the ability to create a threat model for a specific system or product. Creating a complete threat model requires detailed knowledge of the assets, threats, and vulnerabilities associated with a specific system or product and the expertise and experience to develop appropriate strategies for addressing those risks.

Because I am a machine learning model, I do not have access to this type of information or expertise, and I am incapable of creating a complete threat model for a specific system or product. My purpose is to assist with a wide range of tasks by providing accurate and helpful information based on the data that has been provided to me.

Steven: This answer is a bit disappointing. I guess we’d better start training a machine learning model with threat model information as well. 😉

Conclusion

ChatGPT is a great way to get generic information because you can ask follow-up questions, which is a clear advantage over a common web search. But I feel a web search will often surface more, and potentially more detailed, information. As usual, a combination of searching the Internet and asking questions to a tool such as ChatGPT will give you the best results if you want to learn more about certain topics.

For beginners, I would certainly advise asking questions in this conversational style. ChatGPT remembers your previous questions and will pick up on where you want to go. I feel this could help a novice threat modeler, since they might not know exactly which questions to ask, and an Internet search is therefore often less effective for them.

Did this article leave you with any questions? Ask away!
