
Monday, August 29, 2011

TaoSecurity Security Effectiveness Model

After my last few Tweets as @taosecurity on threat-centric vs vulnerability-centric security, I sketched this diagram to help explain my thinking.

Security consists of three areas of interest: 1) What defenders think should be defended, whether or not it matters to the adversary or whether it is in reality defended, what I label "Defensive Plan"; 2) What the adversary thinks matters and really should be defended, but might not be, what I label as "Threat Actions"; and 3) What is in reality defended in the enterprise, whether or not defenders or the adversary cares, what I label "Live Defenses".

I call the Defensive Plan "Correct" where it overlaps with the Threat Actions, because the defenders correctly assessed the threat's interests. I call areas "Incorrect" where that overlap is missing -- for example, where Live Defenses are applied to resources outside the interest of the security team or outside the interest of the adversary.

I call the area covered by the Live Defenses "Defended," but I don't assume the defenses are actually sufficient. Some threats will escalate to whatever level is necessary to achieve their mission. In other words, the only way to not be compromised is to not be targeted! So, I call areas that aren't defended at all "Compromised" if the adversary targets them. Areas not targeted by the adversary are "Compromise Avoided." Areas targeted by the adversary but also covered by Live Defenses are "Compromise Possible."
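If it helps to see those rules spelled out mechanically, here is a rough Python sketch. It is purely illustrative -- the function and argument names are my own shorthand for membership in each of the three circles, not part of the model itself.

    # Illustrative only: derive a region's label from membership in the three circles.
    # planned  -> the asset is inside the Defensive Plan
    # targeted -> the asset is inside the Threat Actions
    # defended -> the asset is inside the Live Defenses

    def label(planned: bool, targeted: bool, defended: bool) -> str:
        correctness = "Correct" if planned and targeted else "Incorrect"
        defense = "defended" if defended else "undefended"
        if not targeted:
            outcome = "compromise avoided"
        elif defended:
            outcome = "compromise possible"
        else:
            outcome = "compromised"
        return f"{correctness}, {defense}, {outcome}"

    # Example: you planned for it and defended it, and the adversary attacks anyway.
    print(label(planned=True, targeted=True, defended=True))
    # -> Correct, defended, compromise possible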

The various intersections produce some interesting effects. For example:

  1. If you're in the lower center area titled "Incorrect, defended, compromise possible," and your defenses hold, you're just plain lucky. You didn't anticipate the adversary attacking you, but somehow you had a live defense covering it.

  2. If you're near the left middle area titled "Correct, undefended, compromised," this means you knew what to expect but you couldn't execute. You didn't have any live defenses in place.

  3. If you're in the area just below the previous space, titled "Incorrect, undefended, compromised," you totally missed the boat. You didn't expect the adversary to target that resource, and you didn't happen to have any live defenses protecting it.

  4. If you're in the very center, called "Correct, defended, compromise possible," congratulations -- this is where you expected your security program to operate, you deployed defenses that were live, but the result depends on how much effort the adversary applies to compromising you. This is supposed to be "security Nirvana" but your success depends more on the threat than on your defenses.

  5. The top-most part titled "Incorrect, undefended, compromise avoided" shows a waste of planning effort, but not wasted live defenses. That's a mental worry region only.

  6. The right-most part titled "Incorrect, defended, compromise avoided" shows a waste of defensive effort, which you didn't even plan. You could probably retire all the security programs and tools in that area.

  7. The area near the top titled "Incorrect, defended, compromise avoided" shows you were able to execute on your vision but the adversary didn't bother attacking those resources. That's also waste, but less so since you at least planned for it.


What do you think of this model? Obviously you want to make all three circles overlap as much as possible, such that you plan and defend what the threat intends to attack. That's the idea of threat-centric security in a nutshell -- or maybe a Venn diagram.

Thursday, July 28, 2011

Risk Modeling, not "Threat Modeling"


Thanks to the great new book Metasploit (review pending), I learned of the Penetration Testing Execution Standard. According to the site, "It is a new standard designed to provide both businesses and security service providers with a common language and scope for performing penetration testing (i.e. security evaluations)." I think this project has a lot of promise given the people involved.

I wanted to provide one comment through my blog, since this topic is one I've covered previously. One of the goals of the standard is to name and explain the steps performed in a penetration test. One of them is currently called "threat modeling," and is partly explained using this diagram:



When I saw elements called "business assets," "threat agents," "business process," and so on, I realized this is more of a risk model, not just a "threat model."

I just tagged a few older posts as discussing threat model vs risk model linguistics, so they might help explain my thinking. This issue isn't life or death, but I think it would be more accurate to call this part of the PTES "Risk Modeling."

Monday, October 01, 2007

Someone Please Explain Threats to Microsoft

It's 2007 and some people still do not know the difference between a threat and a vulnerability. I know these are just the sorts of posts that make me all sorts of new friends, but nothing I say will change their minds anyway. To wit, Threat Modeling Again, Threat Modeling Rules of Thumb:

As you go about filling in the threat model threat list, it’s important to consider the consequences of entering threats and mitigations. While it can be easy to find threats, it is important to realize that all threats have real-world consequences for the development team.

At the end of the day, this process is about ensuring that our customer’s machines aren’t compromised. When we’re deciding which threats need mitigation, we concentrate our efforts on those where the attacker can cause real damage.

When we’re threat modeling, we should ensure that we’ve identified as many of the potential threats as possible (even if you think they’re trivial). At a minimum, the threats we list that we chose to ignore will remain in the document to provide guidance for the future.


Replace every single instance of "threat" in that section with "vulnerability" and the wording will make sense.

Not using the term "threat" properly is a hallmark of Microsoft publications, as mentioned in Preview: The Security Development Lifecycle. I said this in my review of Writing Secure Code, 2nd Ed:

The major problem with WSC2E, often shared by Microsoft titles, is the misuse of terms like "threat" and "risk." Unfortunately, the implied meanings of these terms vary depending on Microsoft's context, which is evidence the authors are using the words improperly. It also makes it difficult for me to provide simple substitution rules. Sometimes Microsoft uses "threat" when they really mean "vulnerability." For example, p 94 says "I always assume that a threat will be taken advantage of." Attackers don't take advantage of threats; they ARE threats. Attackers take advantage of vulnerabilities.

Sometimes Microsoft uses terms properly, like the discussion of denial of service as an "attack" in ch 17. Unfortunately, Microsoft's mislabeled STRIDE model supposedly outlines "threats" like "Denial of service." Argh -- STRIDE is just an inverted CIA AAA model, where STRIDE elements are attacks, not "threats." Microsoft also sometimes says "threat" when they mean "risk." The two are not synonyms. Consider this from p 87: "the only viable software solution is to reduce the overall threat probability or risk to an acceptable level, and that is the ultimate goal of 'threat analysis.'" Here we see confusing threat and risk, and calling what is really risk analysis a "threat analysis." Finally, whenever you read "threat trees," think "attack trees" -- and remember Bruce Schneier worked hard on these but is apparently ignored by Microsoft.


These sentiments reappeared in my review of Security Development Lifecycle: Microsoft continues its pattern of misusing terms like "threat" that started with "Threat Modeling" and WSC2E. SDL demonstrates some movement on the part of the book's authors towards more acceptable usage, however. Material previously discussed in a "Threat Modeling" chapter in WSC2E now appears in a chapter called "Risk Analysis" (ch 9) -- but within the chapter, the terms are mostly still corrupted. Many times Microsoft misuses the term risk too. For example, p 94 says "The Security Risk Assessment is used to determine the system's level of vulnerability to attack." If you're making that determination, it's a vulnerability assessment; when you incorporate threat and asset value calculations with vulnerabilities, that's true risk assessment.

The authors try to deflect what I expect was criticism of their term misuse in previous books. On p 102 they say "The meaning of the word threat is much debated. In this book, a threat is defined as an attacker's objective." The problem with this definition is that it exposes the problems with their terminology. The authors make me cringe when I read phrases like "threats to the system ranked by risk" (p 103) or "spoofing threats risk ranking." On p 104, they are really talking about vulnerabilities when they write "All threats are uncovered through the analysis process." The one time they do use threat properly, it shows their definition is nonsensical: "consider the insider-threat scenario -- should your product protect against attackers who work for your company?" If you recognize that a threat is a party with the capabilities and intentions to exploit a vulnerability in an asset, then Microsoft is describing insiders appropriately -- but not as "an attacker's objective."

Don't get me wrong -- there's a lot to like about SDL. I gave the book four stars, and I think it would be good to read it. I fear, though, that this is another book distributed to Microsoft developers and managers riddled with sometimes confusing or outright wrong ways to think about security. This produces lasting problems that degrade the community's ability to discuss and solve software security problems.


No one is going to take us seriously until we use the right terms. Argh.

Sunday, September 02, 2007

Final Question on FAIR

I'd hoped to not have to say anything else on FAIR, but I've decided to ask one final question. If I can't get a straight answer to this question I'm giving up on the discussion. So, here it is.

In Thoughts on FAIR I walked through the section Analyzing a Simple Scenario. Steps 3, 4, 5, 8, and 9 [Estimate the probable Threat Event Frequency (TEF), Estimate the Threat Capability (TCap), Estimate Control strength (CS), Estimate worst-case loss, and Estimate probable loss, respectively] each require the user to "Estimate."

Do these Estimates matter to the output of the model?

If the answer is yes, then those answers are important and should be grounded in reality -- not opinions. If FAIR proponents agree with this, then we have been debating for days for no real reason. That would make me happy.

If the answer is no, then what is so magical about FAIR that garbage in does not produce garbage out?

Saturday, September 01, 2007

Wall Street Clowns and Their Models

Recently I cited an Economist article in Economist on the Peril of Models. While I was walking through the airport, this Businessweek cover story, Not So Smart, caught my eye. I found the following excerpts to be interesting.

The titans of home loans announced they had perfected software that could spit out interest rates and fee structures for even the least reliable of borrowers. The algorithms, they claimed, couldn't fail...

It was the assumptions and guidelines that lenders used in deploying the technology that frequently led to trouble, notes industry veteran Jones. "It's garbage in, garbage out," he says. Mortgage companies argued their algorithms provided near-perfect precision. "We have a wealth of information we didn't have before," Joe Anderson, then a senior Countrywide executive, said in a 2005 interview with BusinessWeek. "We understand the data and can price that risk."

But in fact, says Jones, "there wasn't enough historical performance" related to exotic adjustable-rate loans to allow for reasonable predictions. Lenders "are seeing the results of not having that info now..."


At this point it probably sounds like I am seriously anti-model. That isn't really the case. The points I cited from Businessweek involve inserting arbitrary values into models. Non-arbitrary data is based on some reality, such as "historical performance" for an appropriate past period, looking forward into an appropriate future period.

Incidentally, one of the articles I read cited the Intangible Asset Finance Society, which is "dedicated to capturing maximum value from intellectual properties and other intangible assets such as quality, safety, security; and brand equity." That sounds like something to review.

Friday, August 31, 2007

Economist on Models

I intended to stay quiet on risk models for a while, but I read the following Economist articles and wanted to note them here for future reference.

From "Statistics and climatology: Gambling on tomorrow":

Climate models have lots of parameters that are represented by numbers... The particular range of values chosen for a parameter is an example of a Bayesian prior assumption, since it is derived from actual experience of how the climate behaves — and may thus be modified in the light of experience. But the way you pick the individual values to plug into the model can cause trouble...

Climate models have hundreds of parameters that might somehow be related in this sort of way. To be sure you are seeing valid results rather than artefacts of the models, you need to take account of all the ways that can happen.

That logistical nightmare is only now being addressed, and its practical consequences have yet to be worked out. But because of their philosophical training in the rigours of Pascal's method, the Bayesian bolt-on does not come easily to scientists. As the old saw has it, garbage in, garbage out. The difficulty comes when you do not know what garbage looks like.
(emphasis added)

This is only a subset of the argument in the article. Note that this piece includes the line "derived from actual experience of how the climate behaves." I do not see the "Bayesian prior assumption[s]" so often cited elsewhere as even being derived from experience, if you take experience to mean something grounded in recent historical evidence instead of opinion. This willingness to rely on arbitrary inputs only aggravates the problems cited by the Economist.

The accompanying piece "Modelling the climate: Tomorrow and tomorrow" further explains some of the problems I've been describing previously.

While some argue about the finer philosophical points of how to improve models of the climate (see article), others just get on with it. Doug Smith and his colleagues at the Hadley Centre, in Exeter, England, are in the second camp and they seem to have stumbled on to what seems, in retrospect, a surprisingly obvious way of doing so. This is to start the model from observed reality.

Until now, when climate modellers began to run one of their models on a computer, they would “seed” it by feeding in a plausible, but invented, set of values for its parameters. Which sets of invented parameter-values to use is a matter of debate. But Dr Smith thought it might not be a bad idea to start, for a change, with sets that had really happened...

[T]he use of such real starting data made a huge improvement to the accuracy of the results. It reproduced what had happened over the course of the decades in question as much as 50% more accurately than the results of runs based on arbitrary starting conditions.

Hindcasting, as this technique is known, is a recognised way of testing models. The proof of the pudding, though, is in the forecasting, so Dr Smith plugged in the data from two ten-day periods in 2005 (one in March and one in June), pressed the start button and crossed his fingers.
(emphasis added)

This is exactly what I've been saying. Use "observed reality" and not "plausible, but invented" opinions or "arbitrary starting conditions" and I'll be happy to see what the model produces.

Sunday, August 26, 2007

Thoughts on FAIR

You knew I had risk on my mind given my recent post Economist on the Peril of Models. The fact is I just flew to Chicago to teach my last Network Security Operations class, so I took some time to read the Risk Management Insight white paper An Introduction to Factor Analysis of Information Risk (FAIR). I needed to respond to Risk Assessment Is Not Guesswork, so I figured reading the whole FAIR document was a good start. I said in Brothers in Risk that I liked RMI's attempts to bring standardized terms to the profession, so I hope they approach this post with an open mind.

I have some macro issues with FAIR as well as some micro issues. Let me start with the macro issue by asking you a question:

Does breaking down a large problem into small problems, the solutions to which rely upon making guesses, result in solving the large problem more accurately?

If you answer yes, you will like FAIR. If you answer no, you will not like FAIR.

FAIR defines risk as

Risk - the probable frequency and probable magnitude of future loss

That reminded me of

Annual Loss Expectancy (ALE) = Annualized Rate of Occurrence (ARO) X Single Loss Expectancy (SLE)

If you don't agree, remove the "annual" terms from the second definition or add them to the FAIR definition.

I have always preferred this equation

Risk = Vulnerability X Threat X Impact (or Cost)

because it is useful for showing the effects on risk if you change one of the factors, ceteris paribus. (Ok, I threw the Latin in there as homage to one of my economics instructors.)

If you consider frequency when estimating threat activity and include countermeasures as a component of vulnerability, you'll notice that Threat X Vulnerability starts looking like ARO. Impact (or Cost) is practically the same as SLE, so the two equations are similar.
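To make the comparison concrete, here is a quick sketch with invented numbers. Nothing below is real data; it only shows how the two expressions line up under those assumptions.

    # Hypothetical numbers, chosen only to show the parallel structure.

    # Classic ALE formulation
    aro = 0.5                 # Annualized Rate of Occurrence: one loss event every two years
    sle = 100_000             # Single Loss Expectancy in dollars
    ale = aro * sle           # 50,000 dollars per year

    # Risk = Threat X Vulnerability X Impact, with threat expressed as an annual
    # attempt frequency and vulnerability as the chance an attempt succeeds
    threat_frequency = 2.0    # expected attempts per year
    vulnerability = 0.25      # probability an attempt succeeds, given current countermeasures
    impact = 100_000          # loss per successful attempt, same as SLE

    risk = threat_frequency * vulnerability * impact   # also 50,000 dollars per year

    # threat_frequency * vulnerability plays the role of ARO, and impact plays
    # the role of SLE, which is why the two equations look so similar.
    print(ale, risk)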

FAIR turns its definition into the following.



If you care to click on that diagram, you'll see many small elements that need to be estimated. Specifically, you can follow the Basic Risk Assessment Guide to see that these are the steps:

  • Stage 1. Identify scenario components


    • 1. Identify the asset at risk

    • 2. Identify the threat community under consideration


  • Stage 2. Evaluate Loss Event Frequency (LEF)


    • 3. Estimate the probable Threat Event Frequency (TEF)

    • 4. Estimate the Threat Capability (TCap)

    • 5. Estimate Control strength (CS)

    • 6. Derive Vulnerability (Vuln)

    • 7. Derive Loss Event Frequency (LEF)


  • Stage 3. Evaluate Probable Loss Magnitude (PLM)


    • 8. Estimate worst-case loss

    • 9. Estimate probable loss


  • Stage 4. Derive and articulate Risk


    • 10. Derive and articulate Risk



The problem with FAIR is that in every place you see the word "Estimate" you can substitute "Make a guess that's not backed by any objective measurement and which could be challenged by anyone with a different agenda." Because all the derived values are based on those estimates, your assessment of FAIR depends on the answer to the question I asked at the start of this post.

Let's see how this process stands up to some simple scrutiny by reviewing FAIR's Analyzing a Simple Scenario.

A Human Resources (HR) executive within a large bank has his username and password written on a sticky-note stuck to his computer monitor. These authentication credentials allow him to log onto the network and access the HR applications he’s entitled to use...

1. Identify the Asset at Risk: In this case, however, we’ll focus on the credentials, recognizing that their value is inherited from the assets they’re intended to protect.


We start with a physical security risk case. This simplifies the process considerably and actually gives FAIR the best chance it has to reflect reality. Why is that? The answer is that the physical world changes more slowly than the digital world. We don't have to worry about solid walls being penetrated by a mutant from the X-Men movies, or about the state of the credentials suddenly being altered by a patch or configuration change.

Identify the Threat Community: If we examine the nature of the organization (e.g., the industry it’s in, etc.), and the conditions surrounding the asset (e.g., an HR executive’s office), we can begin to parse the overall threat population into communities that might reasonably apply... For this example, let’s focus on the cleaning crew.

That's convenient. The document lists six potential threat communities but decides to analyze only one. Simplification sure makes it easier to proceed with this analysis. It also means the result is so narrowly targeted as to be almost worthless, unless we decide to repeat this process for the rest of the threat communities. And this is still only looking at a sticky note.

3. Estimate the probable Threat Event Frequency (TEF): Many people demand reams of hard data before they’re comfortable estimating attack frequency. Unfortunately, because we don’t have much (if any) really useful or credible data for many scenarios, TEF is often ignored altogether. So, in the absence of hard data, what’s left? One answer is to use a qualitative scale, such as Low, Medium, or High.

And, while there’s nothing inherently wrong with a qualitative approach in many circumstances, a quantitative approach provides better clarity and is more useful to most decision-makers – even if it’s imprecise.

For example, I may not have years of empirical data documenting how frequently cleaning crew employees abuse usernames and passwords on sticky-notes, but I can make a reasonable estimate within a set of ranges.

Recognizing that cleaning crews are generally comprised of honest people, that an HR executive’s credentials typically would not be viewed or recognized as especially valuable to them, and that the perceived risk associated with illicit use might be high, then it seems reasonable to estimate a Low TEF using the table below...

Is it possible for a cleaning crew to have an employee with motive, sufficient computing experience to recognize the potential value of these credentials, and with a high enough risk tolerance to try their hand at illicit use? Absolutely! Does it happen? Undoubtedly. Might such a person be on the crew that cleans this office? Sure – it’s possible. Nonetheless, the probable frequency is relatively low.
(emphasis added)

Says who? Has the person making this assessment done any research to determine if infiltrating cleaning crews is a technique used by economic adversaries? If yes, how often does that happen? What is the nature of the crew cleaning this office? Do they perform background checks? Have they been infiltrated before? Are they owned by a competitor? Figuring all of that out is too hard. Let's just supply guess #1: "low."

4. Estimate the Threat Capability (Tcap): Tcap refers to the threat agent’s skill (knowledge & experience) and resources (time & materials) that can be brought to bear against the asset... In this case, all we’re talking about is estimating the skill (in this case, reading ability) and resources (time) the average member of this threat community can use against a password written on a sticky note. It’s reasonable to rate the cleaning crew Tcap as Medium, as compared to the overall threat population.

Why is that? Why not "low" again? These are janitors we're discussing. Guess #2.

5. Estimate the Control Strength (CS): Control strength has to do with an asset’s ability to resist compromise. In our scenario, because the credentials are in plain sight and in plain text, the CS is Very Low. If they were written down, but encrypted, the CS would be different – probably much higher.

It is easy to accept guess #3 because we are dealing with a physical security scenario. It's simple for any person to understand that a sticky note in plain sight has zero controls applied against it, so the (nonexistent) "controls" are worthless. But what about that new Web application firewall? Or your anti-virus software? Or any other technical control? Good luck assessing their effectiveness in the face of attacks that evolve on a weekly basis.

6. Derive Vulnerability (Vuln)

This value is derived using a chart that balances Tcap vs Control Strength. Since it is based on two guesses, one could debate whether it is any more accurate than estimating the vulnerability directly.

7. Derive Loss Event Frequency (LEF)

This value is derived using a chart that balances TEF vs Vulnerability. We derived vulnerability in the previous step and estimated TEF in step 3.

8. Estimate worst-case loss: Within this scenario, three potential threat actions stand out as having significant loss potential – misuse, disclosure, and destruction... For this exercise, we’ll select disclosure as our worst-case threat action.

This step considers Productivity, Response, Replacement, Fines/Judgments, Competitive Advantage, and Reputation, with Threat Actions including Access, Modification, Disclosure, and Denial of Access. Enter guess #4.

9. Estimate probable loss magnitude (PLM): The first step in estimating PLM is to determine which threat action is most likely. Remember; actions are driven by motive, and the most common motive for illicit action is financial gain. Given this threat community, the type of asset (personal information), and the available threat actions, it’s reasonable to select Misuse as the most likely action – e.g., for identity theft. Our next step is to estimate the most likely loss magnitude resulting from Misuse for each loss form.

Again, says who? Was identity theft chosen because it's popular in the news? My choice for guess #5 could be something completely different.

10. Derive and Articulate Risk: [R]isk is simply derived from LEF and PLM. The question is whether to articulate risk qualitatively using a matrix like the one below, or articulate risk as LEF, PLM, and worst-case.

The final risk rating is another derived value, based on previous estimates.
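To show what I mean about estimates propagating through the framework, here is a rough sketch of the mechanics. The lookup matrices below are placeholders I invented for illustration -- they are not FAIR's actual charts -- but they demonstrate that every derived value is just a function of the guesses fed in at the start.

    # Illustrative only: how qualitative guesses propagate through FAIR-style lookup charts.
    # The matrices are invented placeholders, not the real FAIR tables.

    LEVELS = ["VL", "L", "M", "H", "VH"]   # Very Low .. Very High

    def lookup(row_level, col_level, matrix):
        return matrix[LEVELS.index(row_level)][LEVELS.index(col_level)]

    # Guess #2 (TCap, rows) vs guess #3 (Control Strength, columns) -> Vulnerability
    VULN_CHART = [
        ["M",  "L",  "VL", "VL", "VL"],
        ["H",  "M",  "L",  "VL", "VL"],
        ["VH", "H",  "M",  "L",  "VL"],
        ["VH", "VH", "H",  "M",  "L"],
        ["VH", "VH", "VH", "H",  "M"],
    ]

    # Guess #1 (TEF, rows) vs derived Vulnerability (columns) -> Loss Event Frequency
    LEF_CHART = [
        ["VL", "VL", "VL", "VL", "VL"],
        ["VL", "L",  "L",  "L",  "L"],
        ["VL", "L",  "M",  "M",  "M"],
        ["VL", "L",  "M",  "H",  "H"],
        ["VL", "L",  "M",  "H",  "VH"],
    ]

    tef, tcap, cs = "L", "M", "VL"          # guesses #1-#3 from the sticky-note scenario
    vuln = lookup(tcap, cs, VULN_CHART)     # derived, but only as good as the guesses behind it
    lef = lookup(tef, vuln, LEF_CHART)      # derived from one guess and one derived value
    print(vuln, lef)                        # VH L -- change any guess and the ratings move with it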

The FAIR author tries to head off critiques like this blog with the following section:

It’s natural, though, for people to accept change at different speeds. Some of us hold our beliefs very firmly, and it can be difficult and uncomfortable to adopt a new approach. Ultimately, not everyone is going to agree with the principles or methods that underlie FAIR. A few have called it nonsense. Others appear to feel threatened by it.

Apparently I'm resistant to "change" and "threatened" because I firmly hold on to "beliefs." I'm afraid that is what I will have to do when frameworks like this are founded upon someone's opinion at each stage of the decision-making process.

The FAIR document continues:

Their concerns tend to revolve around one or more of the following issues:

The absence of hard data. There’s no question that an abundance of good data would be useful. Unfortunately, that’s not our current reality. Consequently, we need to find another way to approach the problem, and FAIR is one solution.


I think I just read that the author admits FAIR is not based on "good data," and since we don't have data, we should just "find another way," like FAIR.

The lack of precision. Here again, precision is nice when it’s achievable, but it’s not realistic within this problem space. Reality is just too complex... FAIR represents an attempt to gain far better accuracy, while recognizing that the fundamental nature of the problem doesn’t allow for a high degree of precision.

The author admits that FAIR is not precise. How can it even be accurate when the derived values are all based on subjective estimates anyway?

Some people just don’t like change – particularly change as profound as this represents.

I fail to see why FAIR is considered profound. Is the answer because the process has been broken into five estimates, from which several other values are derived? Why is this any better than articles like How to Conduct a Risk Analysis or Risk Analysis Tools: A Primer or Risk Assessment and Threat Identification?

I'm sure this isn't the last word on this issue, but I need to rest before teaching tomorrow. Thank you for staying with me if you read the whole post. Obviously if I'm not a fan of FAIR I should propose an alternative. In Risk-Based Security is the Emperor's New Clothes I cited Donn Parker, who is probably the devil to FAIR advocates. If the question is how to make security decisions by assessing digital risk, I will put together thoughts on that for a post (hopefully this week).

Incidentally, the fact that I am not a fan of FAIR doesn't mean I think the authors have wasted their time. I appreciate their attempt to bring rigor to this process. I also think the questions they ask and the elements they consider are important. However, I think the ability to insert whatever value one likes into the five estimations fatally wounds the process.

This is the bottom line for me: FAIR advocates claim their output is superior due to their framework. How can a framework that relies on arbitrary inputs produce non-arbitrary output? And what makes FAIR so valuable anyway -- has the result been tested against any other methods?

Economist on the Peril of Models

Anyone who has been watching financial television stations in the US has seen commentary on the state of our markets with respect to subprime mortgages. I'd like to cite the 21 July 2007 issue of the Economist to make a point that resonates with digital security.

Both [Bear Stearns] funds had invested heavily in securities backed by subprime mortgages... On July 17th it admitted that there “is effectively no value left” in one of the funds, and “very little value left” in the other.

Such brutal clarity is, however, a rarity in the world of complex derivatives. Investors may now know what the two Bear Stearns funds are worth. But accountants are still unsure how to put a value on the instruments that got them into trouble.


This reminds me of a data breach -- instant clarity.

Traditionally, a company's accounts would record the value of an asset at its historic cost (ie, the price the company paid for it). Under so-called “fair value” accounting, however, book-keepers can now record the value of an asset at its market price (ie, the price the company could get for it).

But many complex derivatives, such as mortgage-backed securities, do not trade smoothly and frequently in arm's length markets. This makes it impossible for book-keepers to “mark” them to market. Instead they resort to “mark-to-model” accounting, entering values based on the output of a computer.


Note the reference to "complex"[ity] and book-keepers basing their decisions on models created by other people who make assumptions. Is this starting to sound like risk analysis to you, too?

Unfortunately, the market does not always resemble the model... Models are supposed to show the price an asset would fetch in a sale. But in an illiquid market, a big sale can itself drive down prices. This can sometimes create a sizeable difference between “mark-to-model” valuations and true market prices.

That is not the only problem with fair-value accounting. According to Richard Herring, a finance professor at the Wharton School, “models are easy to manipulate”...

Unfortunately, the alternatives to fair-value accounting can be worse. Historic cost may be harder to manipulate than the results of a model. But as Bob Herz, chairman of America's Financial Accounting Standards Board, points out, it too is “replete with all sorts of guesses”, such as depreciation rates...


"Models are easy to manipulate" and alternatives are also "replete with all sorts of guesses." This sounds exactly like risk analysis now.

Fair value is perhaps most worrying for auditors, who are often blamed for faulty accounts. Faced with murky models, the best they can do is examine assumptions and ensure disclosure.

This means that the role of the auditor becomes that of an outside expert who makes a new set of subjective decisions, perhaps challenging the assumptions of those who made subjective decisions when creating their model. The auditor's advantage, however, is that he/she has insight into the workings of many similar companies, and could compare "best practices" against the specific company being audited.

Incidentally, I would love to know how the "CISO of a major Wall Street bank" who criticized Dan Geer as mentioned in Are the Questions Sound? feels now about his precious financial models. Somehow I doubt his bonus will be as big as it was last year, if his company is even solvent by year's end.

Saturday, July 14, 2007

Bank Robber Demonstrates Threat Models

This evening I watched part of a show called American Greed that discussed the Wheaton Bandit, an armed bank robber who last struck in December 2006 and was never apprehended.

Several aspects of the story struck me. First, this criminal struck 16 times in less than five years, only once being repelled when he was detected en route to a bank and locked out by vigilant tellers. Does a criminal who continues to strike without being identified and apprehended bear resemblance to cyber criminals? Second, the banks did not respond by posting guards on site. Guards tend to aggravate the problem and people get hurt, according to the experts cited on the show. Instead, the banks posted greeters right at the front door to say hello to everyone entering the bank. I've noticed this at my own local branch within the last year, but thought it was an attempt to duplicate Wal-Mart; apparently not. Because the robber also disguised himself with a balaclava (pictured at right), the banks banned customers from wearing hoods, sunglasses, and other clothing that obscures the face inside the bank.

Third, improved monitoring is helping police profile the criminal. Old bank cameras used tape that was continuously overwritten, resulting in very grainy imagery. Newer monitoring systems are digital and pick up many details of the crime. For example, looking at recent footage the cops noticed the robber "indexing" the gun by keeping his index finger away from the trigger, like we learned in the military or in law enforcement. They also perceived indications he wears light body armor while robbing banks. Finally, one of the more interesting aspects of the show was the reference to a DoJ Bank Robbery (.pdf) document. It contains a chart titled Distinguishing Professional and Amateur Bank Robbers, reproduced as a linked thumbnail at left.

I understand the purpose of the document; it's a way to determine if the robber is an amateur or a professional. This made me consider some recent posts like Threat Model vs Attack Model. A threat model describes the capabilities and intentions of either a professional bank robber or an amateur bank robber. An attack model describes how a robber specifically steals money from a particular bank. Threat models are more generic than attack models, because attack models depend on the nature of the victim.

Watching this show reminded me that security is not a new problem. Who has been doing security the longest? The answer is: physical security operators. If we digital security newbies don't want to keep reinventing the wheel, it might make sense to learn more from the physical side of the house. I think convergence of some kind is coming, at least at some level of the management hierarchy.

If you argue that the two disciplines are too different to be jointly managed, consider the US military. The key warfighting elements are the Unified Combatant Commands, which can be headed by just about any service member. Some commands were usually led by a general from a certain service, like the Air Force for TRANSCOM, but those arrangements are being unravelled. Despite the huge Army occupation in the Middle East, for example, the next CENTCOM leader is a Naval officer, and so is the next Chairman of the Joint Chiefs. Even the new head of SOCOM is Navy. This amazes me. When I first learned about Joint warfare, the joke was "How do you spell Joint? A-R-M-Y." Now it's N-A-V-Y.

For more on this phenomenon, please read Army Brass Losing Influence, which I just found after writing this post.

Perhaps we should look to a joint security structure to combine the physical and digital worlds? That would require joint conferences and similar training opportunities. Some history books with lessons for each side would be helpful too.

Tuesday, June 12, 2007

Threat Model vs Attack Model

This is just a brief post on terminology. Recently I've heard people discussing "threat models" and "attack models." When I reviewed Gary McGraw's excellent Software Security I said the following:

Gary is not afraid to point out the problems with other interpretations of the software security problem. I almost fell out of my chair when I read his critique on pp 140-7 and p 213 of Microsoft's improper use of terms like "threat" in their so-called "threat model." Gary is absolutely right to say Microsoft is performing "risk analysis," not "threat analysis." (I laughed when I read him describe Microsoft's "Threat Modeling" as "[t]he unfortunately titled book" on p 310.) I examine this issue deeper in my reviews of Microsoft's books.

In other words, what Microsoft calls "threat modeling" is actually a form of risk analysis. So what is a threat model?

Four years ago I wrote Threat Matrix Chart Clarifies Definition of "Threat", which showed the sorts of components one should analyze when doing threat modeling. I wrote:

It shows the five components used to judge a threat: existence, capability, history, intentions, and targeting.

That is how one models threats. It has nothing to do with the specifics of the attack. That is attack modeling.

Attack modeling concentrates on the nature of an attack, not the threats conducting them. I mentioned this in my review of Microsoft's Writing Secure Code, 2nd Ed:

[W]henever you read "threat trees," [in this misguided Microsoft book] think "attack trees" -- and remember Bruce Schneier worked hard on these but is apparently ignored by Microsoft.

That is still true -- Bruce Schneier's work on attack trees and attack modeling is correct in its terminology and its applications. Attack trees are a way to perform attack modeling. Attack modeling can be done separate from threat modeling, meaning one can develop an attack tree that any sufficient threat could execute.
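As a toy illustration of the difference (my own example, and not Schneier's notation verbatim), an attack tree decomposes how a goal could be reached into AND/OR sub-steps, and says nothing about who the attacker is. That "who" is the province of threat modeling.

    # Toy attack tree for "obtain HR credentials" -- purely illustrative.
    # OR nodes succeed if any child is feasible; AND nodes need every child.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        name: str
        gate: str = "LEAF"                        # "AND", "OR", or "LEAF"
        children: List["Node"] = field(default_factory=list)
        feasible: bool = False                    # only meaningful for leaves

    def is_feasible(node: Node) -> bool:
        if node.gate == "LEAF":
            return node.feasible
        results = [is_feasible(child) for child in node.children]
        return all(results) if node.gate == "AND" else any(results)

    tree = Node("Obtain HR credentials", "OR", [
        Node("Read sticky note on monitor", feasible=True),
        Node("Guess the password", feasible=False),
        Node("Phish the executive", "AND", [
            Node("Craft a convincing email", feasible=True),
            Node("Bypass mail filtering", feasible=False),
        ]),
    ])

    print(is_feasible(tree))    # True, via the sticky-note branch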

This understanding also means most organizations will have more useful results performing attack modeling and not threat modeling, because most organizations (outside law enforcement and the intel community) lack any real threat knowledge. With the help of a pen testing team an organization can develop realistic attack models and therefore effective countermeasures. This is Ira Winkler's point when he says most organizations aren't equipped to deal with threats and instead they should mitigate vulnerabilities that any threat might attack.

This does not mean I am embracing vulnerability-centric security. I still believe threats are the primary security problem, but only those chartered and equipped to deter, apprehend, prosecute, and incarcerate threats should do so. The rest of us should focus our resources on what we can, but take every step to get law enforcement and the military to do the real work of threat removal.

Sunday, November 02, 2003

Threat Matrix Chart Clarifies Definition of "Threat"

I ran across this chart at the Kentucky government security page, of all places. They must have reproduced it from a Department of Homeland Security briefing. It shows the five components used to judge a threat: existence, capability, history, intentions, and targeting. My earlier definition focuses on capability and intentions, as I believe existence is taken for granted once you begin a threat assessment. You can easily wrap history into intentions. Targeting is a "special form" of intentions, meaning current intelligence suggesting plans for imminent attack against specific targets. As an enemy meets more of the criteria, the threat rating increases from "low" to "severe."
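As a toy sketch of the underlying idea (the scoring and the intermediate labels are my own placeholders, not the chart's actual wording), the rating simply climbs as an adversary demonstrates more of the five components.

    # Illustrative only: more demonstrated components -> higher threat rating.
    # The intermediate labels are placeholders, not the chart's wording.
    COMPONENTS = ["existence", "capability", "history", "intentions", "targeting"]
    RATINGS = ["low", "moderate", "elevated", "high", "severe"]

    def threat_rating(observed):
        count = sum(1 for component in COMPONENTS if component in observed)
        return RATINGS[max(count - 1, 0)]

    # An adversary known to exist, with capability and stated intentions,
    # but with no known history and no specific targeting:
    print(threat_rating({"existence", "capability", "intentions"}))   # elevated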

Update: A blog visitor asked if publication of this chart was a sarcastic move. While I don't think this matrix represents the ultimate in threat assessment, I reproduced it here to show some of the elements used to assess threats. They include the five components mentioned earlier. The choice of words like "severe," "high," etc. doesn't fit with any threat model I've used in the military. We had THREATCONs, which used words like "normal," "alpha," "bravo," etc. THREATCONs became Force Protection Conditions (FPCON) in 2001.

I'm looking forward to the first Cyber Threat Matrix by Echo CCT.