Sunday, June 28, 2009

Simpler IP Range Matching with Tshark Display Filters

In today's SANS ISC journal, the story IP Address Range Search with libpcap wonders how to accomplish the following: "to find SYN packets directed to natted addresses where an attempt was made to connect or scan a service natted to an internal resource. I used this filter for addresses located in the range 192.168.25.7 to 192.168.25.34."

The proposed answer is this:

tcpdump -nr file '((ip[16:2] = 0xc0a8 and ip[18] = 0x19 and ip[19] > 0x06)\
and (ip[16:2] = 0xc0a8 and ip[18] = 0x19 and ip[19] < 0x23) and tcp[13] = 0x02)'

I am sure it's clear to everyone what that means!
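For readers who do not speak fluent BPF, the byte-offset tests decode roughly as follows. This is a sketch; it assumes an IPv4 header with no options, so the destination address starts at offset 16:

```shell
# Decoding the byte-offset tests in the tcpdump filter above
printf '%d.%d\n' 0xc0 0xa8   # ip[16:2] = 0xc0a8 -> destination address begins 192.168
printf '%d\n' 0x19           # ip[18]   = 0x19   -> third octet is 25
printf '%d %d\n' 0x07 0x22   # ip[19] > 0x06 and ip[19] < 0x23 -> last octet 7 through 34
# tcp[13] = 0x02             -> TCP flags byte has only the SYN bit set
```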

Given my low success rate in getting comments posted to the SANS ISC blog, I figured I would reply here.

Last fall I wrote Using Wireshark and Tshark display filters for troubleshooting. Wireshark display filters make writing such complex Berkeley Packet Filter syntax a thing of the past.

Using Wireshark display filters, a mere mortal could write the following:

tshark -nr file 'tcp.flags.syn and (ip.dst > 192.168.25.7 and ip.dst < 192.168.25.34)'

Note that if you want to be inclusive, change the > to >= and the < to <=.

To show that my filter works, I ran it against a capture file with traffic from my own network, altering the last two octets of the range to match my own traffic.

$ tshark -nr test.pcap 'tcp.flags.syn and (ip.dst > and ip.dst <'

137 2009-06-28 16:21:44.195504 -> HTTP Continuation or non-HTTP traffic

You have plenty of other options, such as ip.src and ip.addr.

Which one do you think is faster to write and easier to understand?

Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Effective Digital Security Preserves Long-Term Competitiveness

Yesterday I mentioned a speech by my CEO, Jeff Immelt. Charlie Rose also interviewed Mr Immelt last week. In both scenarios Mr Immelt talked about preserving long-term competitiveness. Two of his themes were funding research and development and ensuring the native capability to perform technical tasks.

It occurred to me that digital security is reflected in both themes. In Crisis 0: Game Over I asked: "I'm sure some savvy reader knows of some corporate espionage case that ended badly for the victim, i.e., bankruptcy or the like?" I got a few interesting cases, but I believe the net result is that it is difficult to find examples where an intrusion or breach was so devastating that it ended up destroying the victim organization.

This makes sense once you reflect on it. Why would a mature, thoughtful intruder seek to destroy his victim, if the purpose of his mission is to conduct espionage on behalf of a competitor or intelligence service? Destroying the victim renders it useless as a source for stealing intellectual property gained by the victim's research and development. In the foreign intelligence case, almost all operators prefer to keep a source active, even in wartime when you might think that destruction is the ultimate goal.

Taking this line of reasoning to its natural conclusion, we can see that digital security can be considered a means to preserve long-term competitiveness, particularly for organizations that seek to drive internal growth via investing in research and development. Such an organization is a natural target for competitors who find it immensely cheaper to steal intellectual property, rather than fund their own.

The problem is showing those who make budgetary and management decisions that digital security has a real role in loss prevention. I've written a lot about intellectual property and digital security, but it is exceptionally difficult to tie individual intrusions to real impact. How does pervasive theft of intellectual property (IP) manifest itself? In commercial cases, perhaps it would appear as a loss of sales to rivals who make similar or duplicate products based on stolen IP. Would the victim organization even know these lost or declining sales were the result of IP theft?

Even if the victim identified the stolen IP, could it be traced back to one or more intrusions, or would it be considered the consequences of product reverse engineering by competitors? The bottom line could be that the victim is still in business, but the double-digit growth and expanding market share it craves are reduced to single-digit growth and eroding market share.

It's a waste of time to use terms like "ROI" or "ROSI" when talking to managers or business people. It is usually impossible to fully explain, from loss to impact, IP theft cases like the one I described in Intellectual Property: Develop or Steal, i.e., spend $10 million over 10 years on a product, then watch the Chinese duplicate it for $1.4 million in 6 months after stealing the IP. More often than not, the victim of IP theft simply withers, wondering why its competitive advantage is not what it expected it to be. It's time to get managers and business people to think in terms of long-term competitiveness.
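A back-of-the-envelope comparison using the figures from that example makes the asymmetry concrete:

```shell
# Develop-vs-steal economics from the Intellectual Property: Develop or Steal example
develop_cost=10000000    # $10 million over 10 years (120 months)
develop_months=120
steal_cost=1400000       # $1.4 million in 6 months
steal_months=6
echo "cost ratio: $((steal_cost * 100 / develop_cost))%"      # thief pays 14% of the price
echo "time ratio: $((steal_months * 100 / develop_months))%"  # in 5% of the time
```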

Clearly Mr Immelt has determined that it is not in his company's best interest, nor in the interests of the country, for the US to be underfunding R&D or outsourcing everything overseas. We security professionals need to adopt this line of reasoning to emphasize how effective digital security preserves long-term competitiveness.

By the way, you might be wondering if I can prove there is an impact to IP theft. I look at the question this way. If there were no impact to IP theft, why would economic and national competitors fund teams to steal IP? You might argue that IP thieves can duplicate and sell products at prices lower than the IP owner could afford, thereby serving a new market. If that were true, why would IP owners file patents? Clearly there is value in IP, so stealing it lessens the value available to the IP owner.

I use a variant of this argument when I encounter asset owners who claim there is no impact associated with an intrusion. My reply is usually this: If there is no impact, then why operate the asset? Retire it.

Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Saturday, June 27, 2009

Posts to Read Elsewhere

I'm not a big fan of just publishing links to other people's stories, but there are a few that I really like this week. Please consider checking these out:

Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Black Hat Budgeting

Earlier this month I wondered How much to spend on digital security. I'd like to put that question in a different light by imagining what a black hat could do with a $1 million budget.

The ideas in this post are rough approximations. They certainly aren't a black hat business plan. I don't recommend anyone follow through on this, although I am sure there are shops out there who do this work already.

Let's start by defining the mission of this organization, called Project Intrusion (PI). PI is in "business" to steal intellectual property from organizations and sell it to the highest bidders. In the course of accomplishing that mission, PI may develop tools and techniques that it could sell down the food chain, once PI determines their utility to PI has sufficiently decreased.

With $1 million in funding, let's allocate some resources.

  • Staff. Without people, this business goes nowhere. We allocate $750,000 of our budget to salaries and benefits to hire the following people.

    • The team leader should have experience as a vulnerability researcher, exploit developer, penetration tester, enterprise defender, and preferably an intelligence operative. The leader can be very skilled in at least one speciality (say Web apps or Windows services) but should be familiar with all of the team's roles. The team leader needs a vision for the team while delivering value to clients. $120,000.

    • The team needs at least one attack tool and technique developer for each target platform or technology that PI intends to exploit. PI hires three. One focuses on Windows OS and client apps, one on Web apps, and one on Unix and network infrastructure. $330,000.

    • The team hires two penetration operators who execute the team leader's mission directives by using the attack tools and techniques supplied by the developers. The operators penetrate the target and establish the persistence required to acquire the desired intellectual property. $180,000.

The team hires one intelligence operative to direct the penetration operators' attention toward information of value, and then assess the value of exfiltrated data. The intel operative interfaces with clients to make deals. $120,000.

  • Technology. The team will need the following, for a total of $200,000.

    • Lab computers running the software likely to be attacked during operations.

    • Operations computers from which the penetration operators run attacks.

    • Network connectivity and hosting for the lab computers and operations computers, dispersed around the world.

    • Software required by the team, since many good attack tools are commercial. MSDN licenses are needed too. There's no need to steal these; we have the budget!

  • Miscellaneous. The last $50,000 could be spent on incidentals, bribes, team awards, travel, or whatever else the group might require in start-up mode.
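As a sanity check, the allocations above do sum to the $1 million budget:

```shell
# Budget arithmetic for Project Intrusion (figures from the list above)
salaries=$((120000 + 330000 + 180000 + 120000))  # leader + 3 developers + 2 operators + intel
tech=200000                                       # lab, ops computers, hosting, software
misc=50000                                        # incidentals, bribes, awards, travel
echo "salaries: $salaries"                        # 750000
echo "total: $((salaries + tech + misc))"         # 1000000
```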

If the attack developers manage to make enough extra money by selling original exploits, I would direct the funds to additional penetration operators. It would take about six of them to support a sustainable 24x7 operation. With only two they would need to be careful and operate within certain time windows.

So what is the point of this exercise? I submit that for $1 million per year an adversary could fund a Western-salaried black hat team that could penetrate and persist in roughly any target it chose to attack. This team has the structure and expertise to develop its own attack methods, execute them, and sell the results of its efforts to the highest bidders.

This should be a fairly scary concept to my readers. Why? Think about what $1 million buys in your security organization. If your company is small, $1 million could go a long way. However, when you factor in all of the defensive technology you buy, and the salaries of your staff, and the scope of your responsibilities, and so on, quickly you realize you are probably out-gunned by Project Intrusion. PI has the in-house expertise to develop its own exploits, keep intruders on station, and assess and sell the information it steals.

Worse, PI can reap economies of scale by attacking multiple targets for that same $1 million. Why? Everyone runs Windows. Everyone uses the same client software. Everyone's enterprise tends to have the same misconfigurations, missing patches, overworked staff, and other problems. The tools and techniques that penetrate company A are likely to work against company B.

This is why I've always considered it folly to praise the Air Force for standardizing its Windows deployment with supposedly secure configurations. If PI looks at its targets and sees Windows, Windows, some other OS that might be Linux or BSD or who knows what, Windows, Windows, who do you think PI will avoid?

It's all about cost, on the part of the attacker or defender. Unfortunately for defenders, it's only intruders who can achieve "return on investment" when it comes to exploiting digital security.

Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Being a Critic Is Easy, So What Would I Do?

After my last post, some of you are probably thinking that it's easy to be a critic, but what would I suggest instead? The answer is simple to name but difficult to implement.

  1. Operate a defensible network architecture. Hardly anyone does. I don't need to explain all of the reasons why here; they could occupy a series of posts, or maybe even a book.

  2. Once the DNA is operating, detect and respond to failures. The nice aspect of operating a DNA is that the number of failures should be lower but of higher complexity. Unfortunately at the moment almost all of the world's detection and response teams have to deal with the entire spectrum of security incidents. These range from the most mundane to the most complex. Too often the mundane hide the complex, or at the very least divert resources and attention.

  3. Use the knowledge learned from failures (either caused by adversaries or adversary simulation) to guide the next version of the DNA. Since most enterprises are not operating a DNA, they never get to work on the next version anyway.

I know other people think this way too. Harlan Carvey is one. He is also an incident responder, and he finds that many clients are not doing even the basics right.

Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Ugly Security

I read Anton Chuvakin's post MUST READ: Best Chapter From “Beautiful Security” Downloadable! with some interest. He linked to a post by Mark Curphey pointing out that Mark's chapter from O'Reilly's new book Beautiful Security was available free for download in .pdf format. O'Reilly had been kind enough to send me a copy of the book, so I decided to read Mark's chapter today.

I found the following excerpts interesting.

Builders Versus Breakers

Security people fall into two main categories:

  • Builders usually represent the glass as half full. While recognizing the seriousness of vulnerabilities and dangers in current practice, they are generally optimistic people who believe that by advancing the state they can change the world for the better.

  • Breakers usually represent the glass as half empty, and are often so pessimistic that you wonder, when listening to some of them, why the Internet hasn’t totally collapsed already and why any of us have money left unpilfered in our bank accounts. Their pessimism leads them to apply the current state of the art to exposing weaknesses and failures in current approaches.

I remembered I had seen something like this before and wrote On Breakership in response. However, back then the debate seemed to center around calling people who helped create and defend systems "builders," while labeling people who exploited or at least tested systems as "breakers." Mark seems to have dismissed people who "break" systems in order to improve security, while praising builders as people who stay "optimistic." I don't think this is fair. My post Response to Is Vulnerability Research Ethical? explains my position, which is essentially that Offense and Defense Inform Each Other.

Next, in a section titled Clouds and Web Services to the Rescue, Mark describes how centralized data storage for his 6 home PCs at Amazon S3 is great for security. Unfortunately, all he is really showing is that there is value in offsite storage. Storing data at Amazon S3 doesn't help much when those 6 systems are part of Calin's botnet in Romania. This is an example of focusing on one aspect of security (availability) while ignoring the other parts (confidentiality and integrity). Don't get me wrong -- I think cloud storage is great and I use a variety of services myself. However, it only helps with one aspect of the security landscape, and if not properly utilized introduces other vulnerabilities and exposures not found in other models.

Next Mark talks about using cloud services for data analysis.

Event logs can provide an incredible amount of forensic information, allowing us to reconstruct an event. The question may be as simple as which user reset a specific account password or as complex as which system process read a user’s token. Today there are, of course, log analysis tools and even a whole category of security tools called Security Event Managers (SEMs), but these don’t even begin to approach the capabilities of supercrunching. Current tools run on standard servers with pretty much standard hardware performing relatively crude analysis...

[T]he power and storage that is now available to us all if we embrace the new connected computing model will let us store vast amounts of security monitoring data for analysis and use the vast amounts of processing power to perform complex analysis. We will then be able to look for patterns and derive meaning from large data sets to predict security events rather than react to them. You read that correctly: we will be able to predict from a certain event the probability of a tertiary event taking place. This will allow us to provide context-sensitive security or make informed decisions about measures to head off trouble.

Does Mark mean that the real problem we've had with detecting and responding to security events is a lack of processing power? Good grief. I hear thoughts like this quite often from people who don't actually detect and respond to security incidents. Even academic security researchers in their ivory towers are probably laughing at Mark's angle. "Oh, you're right -- we've just been waiting for a supercomputer to run our algorithms!"

Mark then talks about using Business Process Management (BPM) software to improve security:

When security BPM software (and a global network to support it) emerges, companies will be able to outsource this step not just to a single company, in the hope that it has the necessary skills to provide the appropriate analysis, but to a global network of analysts. The BPM software will be able to route a task to an analyst who has a track record in a specific obscure technology (the best guy in the world at hacking system X or understanding language Y) or a company that can return an analysis within a specific time period. The analysts may be in a shack on a beach in the Maldives or in an office in London; it’s largely irrelevant, unless working hours and time zones are decision criteria...

This same fundamental change to the business process of security research will likely be extended to the intelligence feeds powering security technology, such as anti-virus engines, intrusion detection systems, and code review scanners. BPM software will be able to facilitate new business models, microchunking business processes to deliver the end solution faster, better, or more cheaply. This is potentially a major paradigm shift in many of the security technologies we have come to accept, decoupling the content from the delivery mechanism. In the future, thanks to BPM software security, analysts will be able to select the best anti-virus engine and the best analysis feed to fuel it — but they will probably not come from the same vendor.

Again, this is so detached from reality, I am curious how anyone could think this is possible. Mark works for Microsoft. Would you ever imagine Microsoft pivoting on a dime to "select the best anti-virus engine and the best analysis feed" -- or would they stick to their own product, because it's their own product? What about your company -- have you witnessed the organizational inertia associated with any IT product or system?

How about trust factors? What if "the best guy in the world at hacking system X or understanding language Y" works in a country with a reputation for industrial espionage? What if that guy was just hired by a competitor, or is working for a competitor now? How long does it take outside help to become familiar with the aspects of your business that eventually determine success? There's a reason why companies are not collections of free agents working independently.

Mark's last section talks about social networking for the security industry, talking about how people should share what they know. There are indeed certain collaborative forums where this works, but you are seldom if ever going to find any serious company telling other companies how their security defenses work, how they fail, and what is lost as a result of that failure. Individual collaboration occurs, but there could be severe consequences for a security staff member who unloads specific technical security information to a social network. The most productive associations that currently exist are found in certain private mailing lists, associations of peer companies that sign mutual nondisclosure agreements, and individual exchanges among peers.

Mark is a smart guy, but I think his prognosis for the security industry in his Beautiful Security chapter is largely incomplete and unrealistic.

Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Thursday, June 25, 2009

SANS Forensics and Incident Response 2009

The agenda for the second SANS WhatWorks Summit in Forensics and Incident Response has been posted. I am really happy to see I am speaking on Tuesday, because I will not be available Wednesday. Day 1 appears mainly technical, and day 2 is mainly legal. Please consider registering for the two-day conference. It's the best incident response event in the US this year!

Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Wednesday, June 24, 2009


Today is an historic day for our profession, and for my American readers, our country. As reported in The Washington Post and by several of you, today Secretary Gates ordered the creation of U.S. Cyber Command, a subordinate unified command under U.S. Strategic Command. The NSA Director will be dual-hatted as DIRNSA and CYBERCOM Commander, with Title 10 authority, and will be promoted to a four-star position. Initial Operational Capability for CYBERCOM is October 2009 with Full Operational Capability planned for October 2010. Prior to CYBERCOM achieving FOC, the Joint Task Force - Global Network Operations (JTF-GNO) and the Joint Task Force - Network Warfare (JTF-NW) will be "disestablished."

As one of my friends said: "Step one to your Cyber Service -- what will the uniforms look like?"

Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Tuesday, June 23, 2009

Free .pdf Issue of BSD Magazine Available

Karolina at BSD Magazine wanted me to let you know that she has posted a free .pdf issue online. I mentioned this issue last year and its focus is OpenBSD. Check it out, along with Hakin9!

Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

The Problem with Automated Defenses

Automation is often cited as a way to "do more with less." The theory is that if you can automate aspects of security, then you can free resources. This is true up to a point. The problem with automation is this:

Automated defenses are the easiest for an intruder to penetrate, because the intruder can repeatedly and reliably test attacks until he determines they will be successful and potentially undetectable.

I hope no one is shocked by this. In a previous life I worked in a lab that tested intrusion detection products. Our tests were successful when an attack passed by the detection system with as little fuss as possible.

That's not just an indictment of "IDS"; that approach works against any defensive technology you can buy or deploy off-the-shelf, from anti-malware to host IPS to anything that impedes an intruder's progress. Customization and localization help make automation more effective, but they tend to cost resources. So, automation by itself isn't bad, but mass-produced automation can provide a false sense of security.

In tight economic conditions there is a strong managerial preference for the so-called self-defending network, which ends up being a self-defeating network for the reason stated above.

A truly mature incident detection and response operation exists because the enterprise is operating a defensible network architecture, and someone has to detect and respond to the failures that happen because prevention eventually fails. CIRTs are ultimately exception handlers that deal with everything that falls through the cracks. The problem happens when the cracks are the size of the Grand Canyon, so the CIRT deals with intrusions that should have been stopped by good IT and security practices.

Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

You Know You're Important When...

You know you're important when someone announces a "Month of Bugs" project for you. July will be the Month of Twitter Bugs, brought to my attention in this story by Robert Westervelt. The current project is led by a participant in the Month of Browser Bugs from three years ago named Aviv Raff.

I don't see projects like that as being irresponsible. What would be more irresponsible is selling the vulnerabilities to the underground. Would the critics prefer that? In many cases, "Month of" projects are the result of running into resistance from developers or managers who are not taking vulnerabilities seriously. In many cases the vulnerabilities are already being exploited. Sure, packaging all of the vulnerabilities into a "Month of" project gains attention, but isn't that the point?

Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Sunday, June 21, 2009

The Centrality of Red Teaming

In my last post I described how a Red Team can improve defense. I wanted to expand on the idea briefly.

First, I believe the modern enterprise is too complex for any individual or group to thoroughly understand how it can be compromised. There are so many links in the chain that even knowing they exist, let alone how they connect, can be impossible.

To flip that on its head, in a complementary way, the modern enterprise is too complex for any individual or group to thoroughly understand how its defenses can fail. The fact that vendors exist to reduce firewall rule sets down to something intelligible by mere mortals is a testament to the apocalyptic failure exhibited by digital defenses.

Furthermore, it is highly likely that hardly anyone cares about attack models until they have been demonstrated. We've seen this repeatedly with respect to software vulnerabilities. It can be difficult for someone to take a flaw seriously until a proof of concept is shown to exploit a victim. L0pht's motto, "Making the theoretical practical since 1992," is a perfect summarization of this phenomenon.

So why mention Red Teams? They are central to digital defense because Red Teams transform theoretical intrusion scenarios into reality in a controlled and responsible manner. It is much more realistic to use your incident detection and response teams to learn what adversaries are actually doing. However, if you want to be more proactive, you should deploy your Red Team to find and connect those links in the chain that result in a digital disaster.

Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Offense and Defense Inform Each Other

If you've listened to anyone talking about the Top 20 list called the Consensus Audit Guidelines recently, you've probably heard the phrase "offense informing defense." In other words, talk to your Red Team / penetration testers to learn how they can compromise your enterprise in order to better defend yourself from real adversaries.

I think this is a great idea, but there isn't anything revolutionary about it. It's really just one step above the previous pervasive mindset for digital security, namely identifying vulnerabilities. In fact, this neatly maps into my Digital Situational Awareness ranking. However, if you spend most of your time writing policy and legal documents, and not really having to deal with intrusions, this idea probably looks like a bolt of lightning!

And speaking of the Consensus Audit Guidelines: hey CAG! It's the year 2000 and the SANS Top 20 List wants to talk to you!

The SANS/FBI Top Twenty list is valuable because the majority of successful attacks on computer systems via the Internet can be traced to exploitation of security flaws on this list...

In the past, system administrators reported that they had not corrected many of these flaws because they simply did not know which vulnerabilities were most dangerous, and they were too busy to correct them all...

The Top Twenty list is designed to help alleviate that problem by combining the knowledge of dozens of leading security experts from the most security-conscious federal agencies, the leading security software vendors and consulting firms, the top university-based security programs, and CERT/CC and the SANS Institute.

Expect at some point to hear Beltway Bandits talking about how we need to move beyond talking to the Red Team and how we need to see who is actively exploiting us. Guess what -- that's where the detection and response team lives. Perhaps at some point these "thought leaders" will figure out the best way to defend the enterprise is through counterintelligence operations, like the police use against organized crime?

For now, I wanted to depict that while it is indeed important for offense to inform defense, the opposite is just as critical. After all, how is the Red Team supposed to simulate the adversary if it doesn't know how the adversary operates? A good Red Team can exploit a target using methods known to the Red Team. A great Red Team can exploit a target using methods known to the adversary. Therefore, I created an image describing how offense and defense inform each other. This assumes a sufficiently mature, resourced, and capable set of security teams.

This post may sound sarcastic but I'm not really bitter about the situation. If we keep making progress like this, in 3-5 years the mindset of the information security community will have evolved to where it needed to be ten years ago. I'll keep my eye on the Beltway Bandits to let you know how things proceed.

Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Response to the Möbius Defense

One of you asked me to comment on Pete Herzog's "Möbius Defense". I like Lego blocks, but I don't find the presentation to be especially compelling.

  1. Pete seems to believe that NSA developed "defense in depth" (DiD) as a strategy to defend DoD networks after some sort of catastrophic compromise in the 1970s. DiD as a strategy has existed for thousands of years. DiD was applied to military information well before computers existed, and to the computers of the time before the 1970s as well.

  2. Pete says DiD is

    "all about delaying rather than preventing the advance of an attacker... buying time and causing additional casualties by yielding space... DiD relies on an attacker to lose momentum over time or spread out and thin its massive numbers as it needs to traverse a large area... All the while, various units are positioned to harm the attacker and either cause enough losses in resources to force a retreat or capture individual soldiers as a means of thinning their numbers."

    That's certainly one way to look at DiD, but it certainly isn't the only way. Unfortunately, Pete stands up this straw man only to knock it down later.

  3. Pete next says

    "Multiple lines of defense are situated to prevent various threats from penetrating by defeating one line of defense. 'Successive layers of defense will cause an adversary who penetrates or breaks down one barrier to promptly encounter another Defense-In-Depth barrier, and then another, until the attack ends.'"

    It would be nice to know who he is quoting, but I determined it is some NSA document because I found other people quoting it. I don't necessarily agree with this statement, because plenty of attacks succeed. This means I agree with Pete's criticism here.

  4. So what's the deal with Möbius? Pete says:

    "The modern network looks like a Moebius strip. Interactions with the outside happen at the desktop, the server, the laptop, the disks, the applications, and somewhere out there in the CLOUD. So where is the depth? There is none. A modern network throws all its fight out at once."

    I believe the first section is partly correct. The modern enterprise does have many interactions that occur outside of the attack model (if any) imagined by the defenders. The second section is wrong. Although there may be little to no depth in some places (say, my Blackberry), there is plenty of depth elsewhere (at the desktop, if properly defended). The third section is partly correct in the sense that any defense generally occurs at Internet speed, at least as far as exploitation goes. Later phases (detection and response) do not happen all at once. That means time is a huge component of enterprise defense; comprehensive defense doesn't happen all at once.

  5. Pete then cites "Guerrilla Warfare and Special Forces Operations" as a new defensive alternative to DiD, but then really doesn't say anything you haven't heard before. He mentions counterintelligence but that isn't new either.

I've talked about DiD in posts like Mesh vs Chain, Lessons from the Military, and Data Leakage Protection Thoughts.

I think it is good for people to consider different approaches to digital security, but I don't find this approach to be all that clever.

Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Sunday, June 14, 2009

How Much to Spend on Digital Security

A blog reader recently asked the following question:

I recently accepted a position and was shocked to learn (I know this shouldn't have happened) that information security/warfare is largely an afterthought, even though this organization has had numerous break-ins. Many of my peers have held their positions for one or even two decades and are great people, yet they are not proactively preparing for modern threat/attack vectors. I believe the main difference is that they are satisfied with the status quo and I am not.

I have written a five-year strategic plan for IT security, which I am now following with a tactical plan on how to get there. With respect to the tactical plan, I was wondering what percentage of the IT budget you think an organization should allocate for their InfoSec programs?

It would seem, judging from Google, that many people advocate somewhere between ten and twenty percent of the IT budget. I have no knowledge of our overall IT budget, but I do know we aren't anywhere near ten percent.

Additionally, how important is the creation and empowerment of a CISO in an organization? Many places still place security under the CIO, of which I have seen both good and bad examples. Thank you for your time, it's much appreciated.

Regarding the cost question: I don't think anyone should use a rule of thumb to decide how much an organization should spend on digital security. Some would disagree. If you read Managing Cybersecurity Resources, the authors make some fairly specific recommendations, even saying "it is generally uneconomical to invest in cybersecurity activities costing more than 37 percent of the expected loss." (p 80) Of course, one could massage "expected loss" into whatever figure one likes, so the 37% part tends to become irrelevant.
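To make that sensitivity concrete, here is a back-of-the-envelope sketch of the 37% bound. The function name and the loss figures are hypothetical illustrations, not anything from the book:

```python
def max_security_spend(expected_loss: float, cap: float = 0.37) -> float:
    """Upper bound on security spending per the ~37%-of-expected-loss
    rule of thumb cited above (Managing Cybersecurity Resources)."""
    return cap * expected_loss

# Hypothetical estimates: move the "expected loss" estimate
# and the spending cap moves right along with it.
for loss in (100_000, 1_000_000, 10_000_000):
    print(f"expected loss ${loss:>10,} -> spend at most ${max_security_spend(loss):>12,.0f}")
```

The point is not the arithmetic but the input: whoever controls the "expected loss" estimate controls the bound, which is why the 37% figure tends to become irrelevant.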

When you try to define digital security spending as a percentage of an IT budget, you face an interesting issue. First you must accept that the value of the organization's information is the upper bound for any security spending. (In other words, don't spend more money than the assets are worth.) If you base security spending on IT spending, then the entire IT budget becomes the theoretical upper bound for the supposed value of the organization's information. If you arbitrarily shrink the IT budget, following this logic, you are also shrinking the value of the organization's information. This holds even if you don't spend more than "37%" of the value of the organization's information securing it. Clearly this doesn't make any sense.

I have not met anyone with a really solid approach for justifying security spending. "Calculating risk" or "measuring ROI/ROSI" are all subjective jokes. All I can really offer are some guidelines that I try to follow.

  1. First, focus on outputs, not inputs. It doesn't matter how much you spend on security (inputs) if the organization is horribly compromised (outputs). Determining how compromised the enterprise is becomes the real priority.

  2. Second, like I said in cheap IT is ultimately expensive, "security is an IT problem, not a 'security' problem. The faster asset owners realize this and are held responsible for the security of their systems, the less intrusion debt will mount and the greater the chance that enterprise assets will survive digital earthquakes." Security teams don't own any assets, other than the infrastructure supporting their teams. Asset owners are ultimately responsible for security because they usually make the key decisions about the value of, and the vulnerabilities in, their assets.

    The best you can do in this situation is to ask asset owners to imagine a scenario where assets A, B, and C are under complete adversary control, and could be rendered useless in an instant by that adversary, and then let them tell you the impact. If they say there is no impact, you should report that the asset is worthless and should be retired immediately. That will probably get the asset owners' attention and start a real conversation.

  3. Third, continue to tell anyone who will listen what you need to do your job, and what is lost as a result of not being able to do your job. Asset owners have a perverse incentive here, because the less they let the security team observe the score of the game (i.e., the security state of their assets), the less able the security team is to determine the security posture of the enterprise. You've got to find allies who are more interested in speaking truth to power than living in Potemkin villages.

Regarding the CISO question: I believe the jury is out on where the CISO should sit. When reporting to the CTO and/or CIO, the CISO is one of many voices competing for attention. That arrangement also probably reinforces the notion that the CTO and/or CIO somehow own the organization's information, and hence require security expertise from the CISO to secure it.

However, I am developing a sense that the profit and loss (P/L) entities in the organization should be formally recognized as the asset owners. In that respect, the CISO should operate as a peer to the CTO and/or CIO. In their roles, the CTO and/or CIO would provide services to the asset owners, while the CISO advises the asset owners on the cost-benefit of security measures.

Note that when I say "asset" I'm referring to the real information asset in most organizations: data. Platforms tend to be worth far less than the data they process. So, the CTO and/or CIO might own the platform, but the P/L owns the data. The CISO ensures the data processed by the CTO and/or CIO is kept as secure as possible, serving the asset owner's interests first.

I would be interested in hearing other opinions on both of these questions. Thank you.


Monday, June 08, 2009

Counterintelligence Options for Digital Security

As a follow-up to my post Digital Situational Awareness Methods, I wanted to expand on the idea of conducting counterintelligence operations, strictly within the digital security realm. I focus almost exclusively on counter-criminal operations, as opposed to actions against nation-states or individuals.

Those of you who provide security intelligence services (SIS), or subscribe to those services, may recognize some or all of these. By SIS I am not talking about vulnerability notices repackaged from other sources.

Note that some of these approaches can really only be accomplished by law enforcement, or by collaboration with law enforcement. Even taking a step into the underground can be considered suspicious. Therefore, I warn blog readers to not try implementing these approaches unless you are an experienced professional with the proper associations. The idea behind this post is to explain what could be done to determine what one sort of adversary (primarily the criminal underground) knows about your organization. It obviously could be extended elsewhere but that is not the focus of this post.

  1. See who is selling or offering to sell your information or access to your information. This approach is similar to identifying places where credit cards or personally identifiable information are sold. Stepping into the underground and seeing where your company is mentioned is one way to estimate how prevalent your data might be outside your control. This is a passive approach.

  2. Solicit the underground for your organization's data or for access to your organization. By taking this step you ask if anyone would be able to provide stolen data or access to the organization. This is a dangerous step because it may motivate the underground to go looking for data. On the other hand, if your data is freely available you're simply unearthing it. This is the first of the active approaches.

  3. Penetrate adversary infrastructure. By this step I mean gaining entry or control of command-and-control channels or other mechanisms the adversary uses to exploit victim organizations. Security intelligence services do this all the time, but gaining access to a server owned by another organization is fairly aggressive.

  4. Infiltrate the adversary group. An underground organization usually functions as a team. It might be possible to infiltrate that group to learn what it knows about your organization. Acting with law enforcement would be the only real way to more or less "safely" accomplish this task.

  5. Pose as an individual underground member. In this capacity, other criminals with access to your organization's data might come to you. This is exceptionally dangerous too and would only be done in collaboration with law enforcement.

None of these steps are new; you can review success stories posted by the FBI and other organizations to know they work. However, I post them here to reinforce the asset-centric mindset, not just the vulnerability-centric mindset, in digital security.


Sunday, June 07, 2009

Crisis 0: Game Over

A veteran security pro just sent me an email on my post Extending the Information Security Incident Classification with Crisis Levels. He suggested a Crisis beyond Crisis 1 -- "organization collapses." That is a real Game Over -- Crisis 0. In other words, the cost of dealing with the crisis bankrupts the victim organization, or the organization is ordered to shut down, or any other consequence that removes the organization as a "going concern," to use some accountant-speak.

I guess the hunt is on now to discover example organizations which have ceased to exist as a result of information security breaches. The rough part of that exercise is connecting all the dots. Who can say that, as a result of stealing intellectual property, a competitor gained persistent economic advantage over the victim and drove it to bankruptcy? These are the sorts of consequences whose timeline is likely to evade just about everyone.

Putting on my historian's hat, I remember the many spies who stole the manufacturing methods developed by the pioneers of the Industrial Revolution in Great Britain, resulting in technology transfers to developing countries. Great Britain's influence faded in the following century.

I'm sure some savvy reader knows of a corporate espionage case that ended badly for the victim, i.e., in bankruptcy or the like?

Incidentally, I should remind everyone (and myself) that my classification system was intended to be applied to a single system. It is possible to imagine a scenario where one system is so key to the enterprise that a breach of its data does result in Crisis 3, 2, 1, or 0, but that's probably a stretch for the worst Crisis levels. Getting to such a severe state probably requires a more comprehensive breach. So, let's not get too carried away by extending the classification too far.


Extending the Information Security Incident Classification with Crisis Levels

Last week I tweaked my Information Security Incident Classification chart. Given recent events I might consider extending it to include Crisis 3, 2, and 1 levels.

Perhaps they would look like this. I previously alluded to "11" in my original post.

  • Crisis 3. 11 / Intruder has publicized data loss via online or mainstream media.

  • Crisis 2. 12 / Data loss prompts government or regulatory investigation with fines or other legal consequences.

  • Crisis 1. 13 / Data loss results in physical harm or loss of life.
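Taken together with the earlier Cat and Breach levels, the extended scale could be sketched as an ordered enumeration. This is a sketch only: the Crisis values 11 through 13 appear above, but every other numeric value and one-line gloss is an assumption pieced together from the related posts:

```python
from enum import IntEnum

class IncidentClass(IntEnum):
    """Ordered from least to most severe state of a single system."""
    CAT_6 = 1      # reconnaissance
    CAT_2 = 2      # user-level compromise (gloss assumed)
    CAT_1 = 3      # root-level compromise (gloss assumed)
    BREACH_3 = 4   # remote command-and-control established
    BREACH_2 = 5   # data exfiltrated
    BREACH_1 = 6   # sensitive data exfiltrated (gloss assumed)
    CRISIS_3 = 11  # intruder publicizes data loss
    CRISIS_2 = 12  # government or regulatory investigation
    CRISIS_1 = 13  # physical harm or loss of life
    CRISIS_0 = 14  # organization collapses; "Game Over" (value assumed)

# Severity comparisons fall out of the ordering:
assert IncidentClass.CRISIS_0 > IncidentClass.BREACH_3 > IncidentClass.CAT_1
```

An ordered type like this also supports the service-expectation timelines mentioned in the classification post, since "has the incident reached Breach 3?" becomes a simple comparison.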

I thought about these situations because of the latest Crisis 3, now affecting T-Mobile, as posted to Full-disclosure yesterday:

Date: Sat, 6 Jun 2009 15:18:06 -0400

Hello world,

The U.S. T-Mobile network predominately uses the GSM/GPRS/EDGE 1900 MHz frequency-band, making it the largest 1900 MHz network in the United States. Service is available in 98 of the 100 largest markets and 268 million potential customers.

Like Checkpoint[,] Tmobile [sic] has been owned for some time. We have everything, their databases, confidental documents, scripts and programs from their servers, financial documents up to 2009.

We already contacted with their competitors and they didn't show interest in buying their data -probably because the mails got to the wrong people- so now we are offering them for the highest bidder.

Please only serious offers, don't waste our time.


Name Type Team Application Name ApplicationID Application Operating System IP Address Facility Blank Blank Blank Tier 1 Apps Tier 2 Apps ? Prod
protun03 Prod IHAP Caller Tunes 64 CallerTunes HP-UX 11.11 BOTHELL_7 #N/A 64 1
protun04 Prod IHAP Caller Tunes 64 CallerTunes HP-UX 11.11 BOTHELL_7 #N/A 64 1
protun05 Prod IHAP Caller Tunes 64 CallerTunes HP-UX 11.11 BOTHELL_7 #N/A 64 1
protun06 Prod IHAP Caller Tunes 64 CallerTunes HP-UX 11.11 BOTHELL_7 #N/A 64 1
...edited out 505 more server entries...
proxfr03 Prod Infra Connect Direct 106 Connect Direct HP-UX 11.11 NEXUS #N/A #N/A 1
proxfr04 Prod Infra Connect Direct 106 Connect Direct HP-UX 11.23 NEXUS #N/A #N/A 1

Talk about monetizing an intrusion. Can you imagine your company's data posted to a public forum like this?

This sort of incident is becoming more common. Remember the 8 million Virginian patient records from April?


I have your shit! In *my* possession, right now, are 8,257,378 patient records and a total of 35,548,087 prescriptions. Also, I made an encrypted backup and deleted the original. Unfortunately for Virginia, their backups seem to have gone missing, too. Uhoh :(

For $10 million, I will gladly send along the password. You have 7 days to decide. If by the end of 7 days, you decide not to pony up, I'll go ahead and put this baby out on the market and accept the highest bid. Now I don't know what all this shit is worth or who would pay for it, but I'm bettin' someone will. Hell, if I can't move the prescription data at the very least I can find a buyer for the personal data (name,age,address,social security #, driver's license #).

Something similar happened to Express Scripts last year.

If this isn't enough to convince management that every active remote command-and-control channel presents a clear and present danger to the enterprise, I don't know what is. All of these incidents started with an intruder gaining access to at least one system. If the organization doesn't take these incidents seriously, the next step could be public humiliation. You might say "the Feds will grab these guys." True, but what is the cost to the reputation of the victim organization?


Department of Defense Digital Security Job Opportunities

A friend of mine from DoD is trying to hire clueful digital security practitioners. He is looking for people to accept positions with DoD-wide and/or service-specific responsibilities. Skillsets needed include reverse engineering, incident response and analysis, penetration testing, and security engineering. The most important characteristic of the candidate is a desire to see DoD achieve its missions successfully. The next requirement is intense interest in the sorts of subjects discussed in this blog. A SECRET clearance is a minimum requirement but TS is preferred. Please email cyberjobs2009 [at] hotmail [dot] com if interested. I have no other information -- email the point of contact with all questions. Thank you.


Saturday, June 06, 2009

Digital Situational Awareness Methods

I've written about digital situational awareness before, but I wanted to expand on the topic as I continue my series of posts on various aspects of incident detection and response.

Here I would like to describe ways that an enterprise can achieve digital situational awareness, or a better understanding of its security posture. What is interesting about these methods is that they do not exclude each other. In fact, a mature enterprise should pursue all of them, to the extent allowed by technical and legal factors.

  1. External notification is the most primitive means of learning the state of the enterprise's security posture. If all you do is wait until law enforcement or the military knock at your door, you're basically neglecting your responsibilities to your organization and customers.

  2. Vulnerability assessment identifies vulnerabilities and exposures in assets. This is necessary but not sufficient, because VA (done by a blue team) typically cannot unearth the complicated linkages and relationships among assets and their protection mechanisms. You have to do it, however, and knowing your vulnerabilities and exposures is better than waiting for a knock on the door.

  3. Adversary simulation or penetration testing identifies at least one way that an adversary could exploit vulnerabilities and exposures to compromise a target or satisfy a related objective. AS (done by a red team) shows what can be done, moving beyond the theoretical aspects of VA. Many times this is the only way to really understand the enterprise and prove to management that there is a problem.

  4. Incident detection and response shows that real intruders have compromised the enterprise. If you think it's bad to see your red team exfiltrate data, it's worse when a real bad guy does it. Knowing that intruders are actively exploiting you is almost the best way to achieve digital situational awareness, and it's usually the highest form an enterprise can practice since it's closest to the ground truth of the state of the enterprise.

  5. Counterintelligence operations are the ultimate way to achieve digital situational awareness. As I wrote in The Best Cyber Defense, this means finding out what the enemy knows about you. I covered this extensively in the referenced post, but now you can see where counterintelligence fits in the overall digital situational awareness hierarchy.
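The five methods above form a maturity hierarchy, which can be captured in a few lines. The labels follow the post; treating the position in the list as a numeric maturity score is my own framing:

```python
# The five situational-awareness methods, ordered least to most mature.
HIERARCHY = (
    "external notification",
    "vulnerability assessment",
    "adversary simulation / penetration testing",
    "incident detection and response",
    "counterintelligence operations",
)

def maturity(method: str) -> int:
    """Return 1 (least mature) through 5 (most mature)."""
    return HIERARCHY.index(method) + 1

assert maturity("external notification") == 1
assert maturity("counterintelligence operations") == 5
```

Since the methods do not exclude each other, a mature enterprise would score activity at every level, not just the highest.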


Incident Detection Paradigms

This is the second in a series of "mindset" posts where I'd like to outline how I've been thinking of various aspects of incident detection and response. My primary focus for these discussions will be intrusions.

I'd like to discuss incident detection paradigms. These are ways that security people tend to think when they are trying to identify intrusions. I'm going to list the three attitudes I've encountered.

  1. Detection is futile. This school of thought says that some intruders are so crafty that it is not possible to detect them. I consider this paradigm short-sighted and defeatist. If you read the intruder's dilemma you'll know that it is generally not possible for intruders to hide themselves perfectly, continuously, perpetually. True, as the intruder's persistence time decreases, and as the amount of data exfiltrated decreases, it becomes more difficult to detect the intruder. However, both conditions are good for the defense. The question for the intruder is how persistent and successful he can be without alerting the defender to his presence.

  2. Sufficient knowledge. This school of thought says that it is possible for a defender to know so much about an intruder's actions that one can apply that understanding to automated systems to detect the intruder. This is essentially the opposite of the futility school. Unfortunately, this paradigm is unrealistic too. As I mentioned in Security Event Correlation: Looking Back, Part 3, the natural question to ask if one believes the sufficient knowledge paradigm is this: if you can detect it, why can't you prevent it?

    As I explained in Why is the Snort IDS still alive and thriving?, that question supposedly made "IDS dead" at the expense of IPS. Users and vendors who believe the sufficient knowledge school expect security people to be satisfied when they receive an alert that something bad happened, but the analyst is not given sufficient evidence to validate that claim.

  3. Indicators plus retrospective security analysis. In good debating style I save the best approach for last. I wish I had a better name but this phrase captures the essence of this paradigm. Here the analyst recognizes that any alert or other input one collects and analyzes is simply an indicator. Indicators may have various levels of confidence associated with them, but the importance of an indicator is that it should signal the start of the analysis process. Validating the indicator to produce a warning that can be escalated to perform incident response is accomplished by analyzing sufficient evidence. This evidence can be network traffic or data about network traffic, system logs, host information, and so on.

    As I discussed in Black Hat Briefings Justify Retrospective Security Analysis, once an analyst has learned of new indicators to detect advanced intruders, he can apply them to stored evidence. Retrospective security analysis finds the crafty intruders missed by traditional approaches, but it requires sufficient digital situational awareness to know how to proceed.
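At its core, the retrospective approach means re-running newly learned indicators over evidence collected before those indicators were known. A minimal sketch, in which the event format, field names, and indicator values are all hypothetical:

```python
def retrospective_match(stored_events, indicators):
    """Apply newly learned indicators to previously collected
    evidence, returning the events that now warrant analysis."""
    return [event for event in stored_events
            if any(ind in event["detail"] for ind in indicators)]

# Hypothetical stored session records and a newly learned C2 domain.
events = [
    {"ts": "2009-05-01", "detail": "outbound TLS to c2.example.net:443"},
    {"ts": "2009-05-02", "detail": "outbound HTTP to www.example.org:80"},
]
hits = retrospective_match(events, ["c2.example.net"])
print(len(hits))  # -> 1
```

Note that a match is only an indicator in the sense described above: it starts the analysis process rather than ending it.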

I'll discuss different digital situational awareness paradigms in a later post.


Incident Phases of Compromise

This is the first in a series of "mindset" posts where I'd like to outline how I've been thinking of various aspects of incident detection and response. My primary focus for these discussions will be intrusions.

First I'd like to discuss phases of compromise, again primarily designed for intrusions. They can be extended to other scenarios, but as with other recent posts I'm focusing on advanced persistent threats who operate beyond the norms of regular intruders. I've listed the phases elsewhere but they are relevant here; I've also expanded the last phase. I list the information security incident classification for each where appropriate.

  1. Reconnaissance. Identify target assets and vulnerabilities, indirectly or directly. Cat 6.

  2. Exploitation. Abuse, subvert, or break a system by attacking vulnerabilities or exposures. If the intruder does not seek to maintain persistence, then this could be the end of the compromise. Cat 2 or 1.

  3. Reinforcement. The intruder deploys his persistence and stealth techniques to the target. Still Cat 2 or 1, leading to Breach 3.

  4. Consolidation. The intruder ensures continued access to the target by establishing remote command-and-control. Breach 3.

  5. Pillage. The intruder executes his mission. Here we assume data theft and persistence are the goals.

    • Propagation. Intruders usually expand their influence before stealing data, but this is not strictly necessary. At this point the incident classifications should be applied to the new victims.

    • Exfiltration. The intruder steals data. Depending on the type of data, Breach 2 or 1.

    • Maintenance. The intruder ensures continued access to the victim until deciding to execute another mission.
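The phases above can be captured as a small ordered enumeration for later reference. This is a sketch: the one-line glosses paraphrase the list, and Pillage's sub-phases are kept as a separate tuple:

```python
from enum import Enum

class Phase(Enum):
    """Phases of compromise, in the order an intrusion proceeds."""
    RECONNAISSANCE = "identify target assets and vulnerabilities"
    EXPLOITATION = "abuse, subvert, or break a system"
    REINFORCEMENT = "deploy persistence and stealth techniques"
    CONSOLIDATION = "establish remote command-and-control"
    PILLAGE = "execute the mission"

# Sub-phases of Pillage, per the list above.
PILLAGE_SUBPHASES = ("propagation", "exfiltration", "maintenance")

# Enum preserves declaration order, so the phases print in sequence:
for number, phase in enumerate(Phase, start=1):
    print(f"{number}. {phase.name.title()}: {phase.value}")
```

Having the phases as a type makes it easy to tag each incident record with how far an intrusion progressed.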

With these phases of compromise outlined I'll have them ready for later reference.


Friday, June 05, 2009

Information Security Incident Classification

Thank you to those who commented on my previous post on this subject. I've had a few people ask to use this chart, but I wanted to clarify a few items now that there has been some good public and private discussion about it.

My intention with this chart is to help classify an incident involving compromise of an individual system. There are plenty of other sorts of information security incidents, but at the moment this is the biggest problem I deal with on a daily basis. I need a way to talk about the state of an individual compromised asset. I found the traditional DoD Category system wasn't sufficient, especially in the post-Cat 1 world. I still like those Categories but I needed to go further (post-exploitation) and for one of my constituents, backwards (to when a system is just vulnerable, but no one is yet interested in it -- as far as we can tell).

I decided to call this updated chart a "classification" rather than a "rating," and to remove the label "impact." The words rating and impact imply "risk" and asset value to some degree, and I'm not talking about either here. This is a little bit like assigning a CVE number; it says nothing about the seriousness of the vulnerability, but at least we can all reference the same vulnerability. With my chart I can now build service expectation timelines around the incident type. I can also quickly understand where we are with any incident when one of our team says "we have a Cat 1, but our perimeter defenses appear to have contained the incident so it has not reached Breach 3 status."

I think it is important to keep in mind that having anything remotely approaching a valid understanding of "risk" requires a great deal of understanding about the assets in question. Not only must you understand the nature of the compromised asset (its function, normal usage patterns, its inputs, its processes, its outputs), but you must understand the means by which the asset interacts with the network, any trust relationships, and many other factors. In most cases the only way to gain a real appreciation of these real-world conditions is to either 1) observe the intruder in action, seeing what he can do or get, or 2) red-team the system yourself to see what you can do or get. Modern systems and enterprises are far too complex for anyone to sit back like Mycroft Holmes and truly understand the "risk" of a compromise.

I should also say that I would never expect to tell a manager that we have discovered a Breach 2 and then walk away. The natural next question involves the issues of the previous paragraph, and answering them takes far longer than the process of detecting and validating the incident. If you doubt me, try talking to the office in the DoD that does nothing but computer incident damage assessment all day long.

Incidentally, please feel free to use this diagram, providing you cite the source. I am encouraged when others seek to adopt this sort of language for their own programs, because it moves us closer to having common ways to discuss operational problems. Thank you.


Wednesday, June 03, 2009

Cyber Security Coordinator

The article Obama's likely pick for cybersecurity head remains murky by Doug Beizer and Alice Lipowicz in FCW caught me off guard:

There is surprisingly little buzz circulating about who President Barack Obama might choose to lead cybersecurity policy. Although a number of analysts have ideas about the qualities the person filling the position might need, no one is naming names of likely contenders yet — partly because it remains unclear what the eventual appointee will actually do...

Rohyt Belani, co-founder and managing partner of computer security firm Intrepidus Group, said the ideal candidate would combine qualities from three people: security consultant Bruce Schneier; Richard Bejtlich, director of incident response at General Electric; and Chris Eagle, a senior lecturer and associate chairman of the Computer Science Department at the Naval Postgraduate School.

Belani said Schneier has “an ability to focus on what matters and call out silly bureaucratic processes that do little to help security.” Bejtlich’s strength is a passion for digital security, and Eagle knows where the needs are for offensive capabilities, Belani added.

I have not received any phone calls, but if need be I am sure the right people can find me. :)
