Wednesday, July 28, 2010

Time Issues in Libpcap Traces

Time is an important aspect of Network Security Monitoring. If you don't pay close attention to the timestamps shown in your evidence, and recognize what they mean, you could misinterpret the values you see.

My students and I encountered this issue in TCP/IP Weapons School at Black Hat this week. Let's look at the first ICMP packet in one of our labs.

I'm going to show the output using the Hd tool and then identify and decode the field that depicts time.

In the following output, 2d 0c 65 49 occupies the part of the packet where Libpcap has added a timestamp.

Hd output:

$ hd icmp.sample.pcap
00000000 d4 c3 b2 a1 02 00 04 00 00 00 00 00 00 00 00 00 |................|
00000010 ea 05 00 00 01 00 00 00 2d 0c 65 49 5f bf 0c 00 |........-.eI_...|
00000020 4a 00 00 00 4a 00 00 00 00 0c 29 82 11 33 00 50 |J...J.....)..3.P|
00000030 56 c0 00 01 08 00 45 00 00 3c 02 77 00 00 80 01 |V.....E..<.w....|
00000040 ea f1 c0 a8 e6 01 c0 a8 e6 05 08 00 43 5c 07 00 |............C\..|
00000050 03 00 61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e |..abcdefghijklmn|
00000060 6f 70 71 72 73 74 75 76 77 61 62 63 64 65 66 67 |opqrstuvwabcdefg|
00000070 68 69 |hi|
00000072

To convert 2d 0c 65 49 to a time, we have to swap the bytes, because the timestamp is stored in little-endian byte order; it becomes 0x49650c2d, or 1231359021 in decimal. 1231359021 is a Unix timestamp that we can convert with the -r option found in the FreeBSD version of the date command.
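To check the arithmetic, here is a short Python sketch (not part of the original tools shown here) that performs the same byte swap with struct. It reuses the bytes copied from the hd output above; note that the four bytes following the seconds field, 5f bf 0c 00, hold the microseconds.

```python
import struct
from datetime import datetime, timezone

# The pcap global header begins d4 c3 b2 a1: the magic number
# a1 b2 c3 d4 written in little-endian byte order, which tells us
# every multi-byte header field in this file needs its bytes swapped.

# Per-packet record header fields copied from the hd output:
sec_bytes = bytes.fromhex("2d0c6549")   # seconds since the Unix epoch
usec_bytes = bytes.fromhex("5fbf0c00")  # microseconds

# "<I" reads four bytes as a little-endian unsigned 32-bit integer,
# performing the byte swap for us.
seconds, = struct.unpack("<I", sec_bytes)
usec, = struct.unpack("<I", usec_bytes)

print(hex(seconds))  # 0x49650c2d
print(seconds)       # 1231359021
print(usec)          # 835423 -- the .835423 fraction Tcpdump shows

# The epoch conversion itself, in UTC (the equivalent of date -u -r):
print(datetime.fromtimestamp(seconds, tz=timezone.utc))
# 2009-01-07 20:10:21+00:00, i.e. 15:10:21 EST
```

Note how the microseconds field explains the fractional seconds in the Tcpdump and Tshark output later in this post.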

First let me show the date on this system so you can see the timezone of the FreeBSD system, and then I'll convert the seconds into a human readable time.

$ date
Wed Jul 28 00:11:23 EDT 2010

$ date -r 1231359021
Wed Jan 7 15:10:21 EST 2009

So, this ICMP packet has a timestamp of Wed Jan 7 15:10:21 EST 2009. Note that the date command produces a time in EST and not EDT. 15:10:21 EST becomes 16:10:21 EDT. I would have preferred seeing the date output show EDT since that is the time zone on the system in question, but I can understand the output. That seems simple enough, right?

Let's see what Tcpdump says about this packet. First I run the date command to remind us where we are running Tcpdump.

FreeBSD Tcpdump:

$ date
Wed Jul 28 00:11:23 EDT 2010

$ tcpdump -h
tcpdump version 3.9.8
libpcap version 0.9.8

$ tcpdump -n -tttt -r icmp.sample.pcap
2009-01-07 16:10:21.835423 IP 192.168.230.1 > 192.168.230.5:
ICMP echo request, id 1792, seq 768, length 40

As expected, this packet has a timestamp of 16:10:21 (ignore the fractions of a second), which matches our conversion once the EDT offset is applied.

Let's see what a tool like Tshark says.

FreeBSD Tshark:

$ tshark -v
TShark 1.0.7
...edited...
Compiled with GLib 1.2.10, with libpcap 0.9.8, with libz 1.2.3, without POSIX
capabilities, without libpcre, without SMI, without ADNS, without Lua, without
GnuTLS, without Gcrypt, with Heimdal Kerberos.
NOTE: this build doesn't support the "matches" operator for Wireshark filter
syntax.

Running on FreeBSD 7.2-RELEASE-p7, with libpcap version 0.9.8.

Built using gcc 4.2.1 20070719 [FreeBSD].

$ tshark -n -t ad -r icmp.sample.pcap
1 2009-01-07 15:10:21.835423 192.168.230.1 -> 192.168.230.5
ICMP Echo (ping) request

What? Why does it show 15:10:21? That's the result for EST, not EDT, which is the time zone of the FreeBSD system.

Let's see if a Linux system in EDT behaves the same way.

$ date
Wed Jul 28 00:44:54 EDT 2010

$ tcpdump -V
tcpdump version 4.0.0
libpcap version 1.0.0

$ tcpdump -n -tttt -r icmp.sample.pcap
2009-01-07 16:10:21.835423 IP 192.168.230.1 > 192.168.230.5:
ICMP echo request, id 1792, seq 768, length 40

$ tshark -v
TShark 1.2.7
...edited...
Compiled with GLib 2.24.0, with libpcap 1.0.0, with libz 1.2.3.3, with POSIX
capabilities (Linux), with libpcre 7.8, with SMI 0.4.8, with c-ares 1.7.0, with
Lua 5.1, with GnuTLS 2.8.5, with Gcrypt 1.4.4, with MIT Kerberos, with GeoIP.

Running on Linux 2.6.32-23-generic, with libpcap version 1.0.0, GnuTLS 2.8.5,
Gcrypt 1.4.4.

Built using gcc 4.4.3.

$ tshark -n -t ad -r icmp.sample.pcap
1 2009-01-07 15:10:21.835423 192.168.230.1 192.168.230.5
ICMP Echo (ping) request

Again, we see Tcpdump correctly honor the local time zone (EDT) and display the timestamp as 16:10:21, whereas Tshark shows the timestamp as EST or 15:10:21.

I am really disappointed by this Tshark behavior. Incidentally, you get the same results from any tool in the Wireshark suite, such as Wireshark itself, capinfos, etc.

Does anyone know why Tshark and related tools don't honor the local time zone, and display Standard Time instead of recognizing Daylight Saving Time?
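One way to probe the underlying conversion is to ask the C library directly. Here is a small Python sketch (assuming a POSIX system where TZ and tzset() are available) showing what localtime() does with this timestamp:

```python
import os
import time

# Use the same zone as the FreeBSD and Linux systems above.
os.environ["TZ"] = "America/New_York"
time.tzset()

ts = 1231359021  # the packet's timestamp
lt = time.localtime(ts)

# localtime() applies the DST rule in effect on the packet's date
# (January 2009), not on the date the tool runs, so tm_isdst is 0
# and the zone abbreviation is EST even when this runs in July.
print(time.strftime("%Y-%m-%d %H:%M:%S %Z", lt))
# 2009-01-07 15:10:21 EST
```

On the systems I can test, localtime() chooses the DST rule based on the timestamp being converted, which would explain the EST output from tools that format the result with the zone rules of the packet's date.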

Tuesday, July 27, 2010

Review of Digital Forensics for Network, Internet, and Cloud Computing Posted

Amazon.com just published my two star review of Digital Forensics for Network, Internet, and Cloud Computing by Terrence V. Lillard and company. From the review:

Digital Forensics for Network, Internet, and Cloud Computing (DFFNIACC) is one of the worst books I've read in the last few years. You may wonder why I bothered reading a two star book. Blame a flight from the east coast to Las Vegas and not much else to read during those five hours! DFFNIACC is a jumbled collection of incoherent thoughts, loosely bound by the idea of "forensics" but clearly not subjected to any real planning or oversight. This book is very similar to the Syngress book "Botnets" which I gave 2 stars in 2008, and as you might expect features one of the same authors. Save your money and skip DFFNIACC; only the chapter on NetFlow and another offering a general overview of NetWitness are worth reading.

Review of Virtualization and Forensics Posted

Amazon.com just published my three star review of Virtualization and Forensics by Dianne Barrett and Gregory Kipper. From the review:

"Virtualization and Forensics" (VAF) offers "a digital forensic investigator's guide to virtual environments" as its subtitle. Eric Cole's introduction says "How do we analyze the [virtual] systems forensically since standard methods no longer work? Let me introduce a key piece of research and literature, VAF." I disagree with Eric's claim: I did not find VAF to be a compelling resource for forensic investigators of virtual environments. If an author writes a book on virtual forensics, I would expect more advice on how to accomplish the task, and less description of virtual environments. Unfortunately, VAF spends most of its time talking about virtual systems and not enough time helping investigators analyze them.

Review of Digital Triage Forensics Posted

Amazon.com just published my two star review of Digital Triage Forensics: Processing the Digital Crime Scene by Stephen Pearson and Richard Watson. From the review:

I have to preface this review by saying my criticism of this book should not be taken as criticism of the brave men and women who put their lives on the line fighting for our freedom in Southwest Asia (SWA). I'm reviewing the book "Digital Triage Forensics" (DTF), not the people who wrote it or the people who rely on the concepts therein.

DTF is a misleading, disappointing book. The subtitle is "processing the digital crime scene." The back cover says "the expert's model for investigating cyber crimes," and it claims "now corporations, law enforcement, and consultants can benefit from the unique perspectives of the experts who pioneered DTF." That sounds promising, right? It turns out that DTF is essentially a handbook for Weapon Intelligence Teams (WITs) who deploy to Iraq and Afghanistan to collect battlefield intelligence before and after Improvised Explosive Devices (IEDs) detonate! I cannot fathom why Syngress published this book, when the intended audience probably numbers in the dozens. Unless you need to learn the basics of how to collect cell phones and hard drive images to provide "actionable intelligence" to warfighters, you can avoid reading DTF.

Wednesday, July 21, 2010

Dell Needs a PSIRT

It's clear to me that Dell needs a Product Security Incident Response Team, or PSIRT. Their response to the malware shipping with R410 replacement motherboards is not what I would like to see from a company of their size and stature.

Take a look at this Dell Community thread to see what I mean. It's almost comical.

These are a few problems I see:

  1. They are informing the public of this malware problem using phone calls, not a posting on a Web site. A customer thinks he's being scammed and posts a question to a support forum. Someone named "DELL-Matt M" replies:

    "The service phone call you received was in fact legitimate... We have assembled a customer list and are directly contacting customers like you through a call campaign. On the call, you should be provided a phone number to call if you have additional questions. Hopefully you received this on your call. If not, let me know and we’ll get it to you as soon as possible so you have all of the follow-up information needed."

    Another customer rightfully asks: "So why is there no information in the recall links or other readily obvious place on the site?"

  2. The next information about the problem is another post to the same thread from "DELL-Matt M".

    "We will continue to update this forum as new information becomes available or questions arise."

    This story is making the news and Dell will update customers in a forum thread?!?

  3. One customer then questions whether "DELL-Matt M" works for Dell!

    "Will you please post your employee number? In a phone call to Dell this morning I was told that no Dell employee wrote this...."

  4. "DELL-Matt M" replies:

    "Yes Art, I am a Dell employee and the information I posted is accurate. If you need specific information, please contact US_EEC_escalations@dell.com.

    Thanks, Matt"

    Still no link to an official Dell story.

  5. Try searching for "Dell PSIRT" or "Dell security". You get nothing about the security of Dell products.


Dell needs to step up its game. It's shipping products to customers with malware, and it's "handling" the issue through a support forum.

I think my post Every Software Vendor Must Read and Heed referencing Matt Olney's recommendations is a good place to start, Dell!

Sunday, July 18, 2010

Review of The Watchman Posted

Amazon.com just posted my three star review of The Watchman by Jonathan Littman. From the review:

The Watchman by Jonathan Littman is a tough book to review. The author states that he started writing a book about Kevin Poulsen (The Watchman), then delayed that project to write a book about Kevin Mitnick (The Fugitive Game, or TFG). After finishing TFG, the author returned to the Poulsen book. Unfortunately, it seems that the approach the author took in TFG (recounting direct telephone conversations with Kevin Mitnick) didn't translate well for The Watchman. Whereas TFG covers the period when Mitnick was on the run and speaking with the author, The Watchman tries to tell the overall story of Kevin Poulsen's life. The end result is not likely to reflect reality as well as a story where the author was a first-hand participant. It seems several of the main characters in The Watchman, most notably Poulsen himself, disagree with their portrayal in the book. Nevertheless, The Watchman is worth reading since it is the only book on Kevin Poulsen; just beware its likely weaknesses.

Review of The Fugitive Game Posted

Amazon.com just posted my four star review of The Fugitive Game by Jonathan Littman. From the review:

"The Fugitive Game" (TFG) recounts author Jonathan Littman's discussions with Kevin Mitnick, largely while the latter evaded authorities in the mid-1990s. This book is unlike others about Kevin, because the author describes multiple lengthy telephone conversations. As much as one can trust the author to reproduce them faithfully, these exchanges provide insights into Kevin's thoughts and feelings regarding his position as the so-called "greatest computer criminal in the world," according to dubious New York Times reporting.

Review of At Large Posted

Amazon.com just posted my four star review of At Large by David H. Freedman and Charles C. Mann. From the review:

"At Large" is a "hacking" book published during the mid-1990s, but it doesn't address the characters usually considered to be the "stars" of that era. Rather, At Large tells the tale of a single-minded and possibly mentally-challenged intruder who infiltrated a large number of sensitive US networks. While I didn't find the characters or story particularly compelling, I did note a number of points that remain true even today. For this reason you are likely to learn more from a book like At Large than a similar title, such as "Masters of Deception" (which I reviewed recently and gave 3 stars).

Saturday, July 17, 2010

Review of The Cuckoo's Egg Posted

Amazon.com just posted my five star review of The Cuckoo's Egg by Cliff Stoll. From the review:

Cliff Stoll's "The Cuckoo's Egg" (TCE) is the best real-life digital incident detection and response book ever written. I know something about this topic; I've written books on the subject and have taught thousands of students since 2000. I've done detection and IR since 1998, starting in the military, then as a consultant and defense contractor, and now as director of IR for a Fortune 5 company. If you're not an incident detector/responder, you're probably going to read TCE as a general enthusiast or maybe an IT professional. You'll like the book. If you're a security professional, you'll love it.

Review of Code Version 2.0 Posted

Amazon.com just posted my four star review of Code Version 2.0 by Lawrence Lessig. From the review:

Code Version 2.0 (CV2) is a compelling and insightful book. Author Lawrence Lessig is a very deep thinker who presents arguments in a complete and methodical manner. I accept his thesis that "cyberspace" has abandoned its tradition as an ungovernable, anonymous playground and risks becoming the most regulated and "regulable" "place" in which one could spend any time. This position has been strengthened by recent news events, such as the White House's "National Strategy for Trusted Identities in Cyberspace (NSTIC) that outlines this vision to reduce cybersecurity vulnerabilities through the use of trusted digital identities." Lessig maintains that code is making such regulation possible, and anyone who cares about privacy and freedom needs to start paying attention.

Review of Crypto Posted

Amazon.com just posted my four star review of Crypto by Steven Levy. From the review:

Steven Levy's "Crypto" is a fascinating look at part of the story of modern cryptography, at least from the point of view of key non-government cryptographers. The author clearly conducted plenty of research into the lives of certain individuals, such as Whit Diffie and Marty Hellman, the RSA trio, and other entrepreneurs. Unlike some other reviewers, I thought the text was lively enough and the book kept my attention throughout. My only real concern is the obvious bias against the concerns of government cryptographers. If you doubt the bias, it starts on the cover: "How the Code Rebels Beat the Government - Saving Privacy in the Digital Age." Regardless, if you are a security professional or just have an interest in digital privacy, you will enjoy reading Crypto.

Review of The Illusion of Due Diligence Posted

Amazon.com just posted my two star review of The Illusion of Due Diligence by Jeffrey Bardin. From the review:

I have mixed feelings about Jeffrey Bardin's "The Illusion of Due Diligence" (TIODD). I did read the whole book. However, I am not sure I would advise others to read it. TIODD struck me as a collection of stories describing how bad choices can lead to difficult situations. Some of the bad choices are the author's, so I have trouble sympathizing with him. Still, I was continuously amazed that the author would choose to record his professional life story in print, especially given the reader's ability to reassemble the true names behind the pseudonyms. Overall, I consider TIODD to be a curiosity that would keep your attention mainly for the "train wreck" aspect of the author's security career.

Friday, July 16, 2010

Human Language as the New Programming Language

If you've read the blog for a while you know I promote threat-centric security in addition to vulnerability-centric security. I think both approaches are needed, but I find a lot of security shops ignore threat-centric approaches. But in this brief post I'd like to talk about one skill you're likely to need in a threat-centric team.

Clearly knowledge of programming languages is helpful for vulnerability-centric security. Those who can program in the right languages can help identify vulnerabilities, develop exploits, and do other code-centric work.

Different skills are needed for threat-centric security, however. If a programming language is helpful for vulnerability-centric operations, then a foreign language is helpful for threat-centric operations. Specifically, analysts will find it useful to read and potentially speak the language used by their adversaries. It is likely that while learning a foreign language, and more importantly maintaining or improving that skill, the analyst will learn about the adversary's culture. At the highest level of threat-centric security, analysts understand the adversary not through native eyes, but through the adversary's eyes.

None of this is news to anyone with an intelligence or counterintelligence background, but I think this approach represents additional maturity in an enterprise security program.

Wednesday, July 14, 2010

Brief Thoughts on WEIS 2010

Last month I attended my first Workshop on the Economics of Information Security (WEIS 2010) at Harvard. It was cool to visit and it reminded me that I probably spent too much time playing ice hockey and learning martial arts during graduate school, and not enough time taking advantage of the "Hah-vahd experience." Oh well, as Mr Shaw said, "Youth is wasted on the young."

So what about WEIS? I attended because of the "big brains" in the audience. Seriously, how often do you get Dan Geer, Ross Anderson, Whit Diffie, Bruce Schneier, Hal Varian, etc., in the same room? I should have taken a picture. Dumb security groupie.

I'll share a few thoughts.

  • Tracey Vispoli from Chubb Insurance spoke about cyber insurance. Wow, what an interesting perspective. She said the industry has "no expected loss data" and "no financial impact data." Put that in your pipe and smoke it, Annualized Loss Expectancy (ALE) fans! So how does Chubb price risk without any data, in order to sell policies? Easy -- price them high and see what happens. This is what the industry did when legislators started creating laws on employment discrimination. Companies wanted insurance, so the industry made them pay through the nose. Later, to compete, insurers dropped rates -- but too low. When they started losing money they jacked up the rates again. Eventually insurers have some data, but only after years of offering a service in the marketplace. That floored me but it makes sense now.

  • Again on insurance, Tracey said the industry insures for incidents whose impact can be concretely and quickly measured. What does that mean? Insurance against economic espionage, national security incidents, and related events is unlikely because you can't really measure the impact, at least in the short term!

  • After spending two days with academics, I'd like to add to Allan Schiffman's famous phrase "Amateurs study cryptography; professionals study economics":

    Amateurs study cryptography; professionals study economics. Operators work in the real world.

    Seriously, I think economics will help mitigate many security problems, but some researchers need to visit living, breathing enterprise environments before publishing papers. I won't name names, but if you're writing a paper that relies on raw IDS alerts to measure "attacks on open source software," you need to spend some time in a SOC or CIRT to see what analysts think of that kind of "evidence."

  • It seems researchers have a suite of academic tools (math, statistics, functions, models, game theory, simulations, previous research, etc.) and they look for data to which they can apply those tools. They formulate a hypothesis, and at that point the applicability of the approach is probably out the window. Very quickly in several talks I noticed that the topic at hand was implementation of an analytical technique, with the underlying problem somewhere several slides back. This seemed a little weird, but it makes sense in the context of researchers doing what they know how to do -- identify an issue, develop a hypothesis, collect data, etc.


Overall I found the experience very interesting, but I'm not sure if I will try to return next year.

Brief Thoughts on SANS WhatWorks Summit in Forensics and Incident Response 2010

Last week I spoke at the third SANS WhatWorks Summit in Forensics and Incident Response in DC, organized and led by Rob Lee. As usual, Rob did a wonderful job bringing together interesting speakers and timely topics. I thought my presentation on "CIRT-level Response to Advanced Persistent Threat" went well and I enjoyed participating on the "APT Panel Discussion."

I wanted to share a few thoughts from the event.

  • This is just the sort of event I like to attend. It's almost more about the participants than the presentation content. I found plenty of peers interested in sharing leading practices. I hope to continue a relationship with several other CIRT leaders I met (or saw again) at SANS.

  • Props to Kris Harms and Nick Harbour for starting their talk with a printed handout as reference for an in-class IR exercise, during a 1 hour talk! I kid you not. What a great way to make a point about the need for OpenIOC. Kevin Mandia called existing IR report writing "the state of caveman art" and I agree. Expect to hear more from me about OpenIOC in the future.

  • I heard Harlan Carvey say something like "we need to provide fewer Lego pieces and more buildings." Correct me if I misheard Harlan. I think his point was this: there is a tendency for speakers, especially technical thought and practice leaders like Harlan, to present material and expect the audience to take the next few logical steps to apply the lessons in practice. It's like "I found this in the registry! Q.E.D." I think as more people become involved in forensics and IR, we forever depart the realm of experts and enter more of a mass-market environment where more hand-holding is required.

  • Developing people was a constant theme. I liked what Mike Cloppert said: "Be ready to hire someone who isn't perfect for your open role, but could grow into the role. Alternatively, when you don't have an open role, but someone perfect becomes available, you must hire that person."

  • I sent a lot of thoughts via Twitter at the summit, so you can check out what I wrote through @taosecurity.


Finally, I'd like to remind everyone that I will begin planning my second SANS WhatWorks in Incident Detection and Log Management Summit, which will be held again in DC, on 8-9 December 2010. If you liked last year's Summit, you will love this new one. I'll have more to say as we get closer to registration.

Network Forensics Vendors: Get in the Cloud!

I know some of us worry that the advent of the "cloud" will spell the end of Network Security Monitoring and related network-centric visibility and instrumentation measures. I have a proposal for any network forensics vendors reading this blog: get in the cloud!

For example, imagine you are a proxy-in-the-cloud (PITC) provider, like ScanSafe, now owned by Cisco. You provide a Web portal to your customers so they can see what bad sites employees were not allowed to visit. But what about all the subtle traffic that evaded your filters, block lists, heuristics, and other defensive mechanisms? What about the insider stealing intellectual property, indistinguishable from a "normal employee?" How does your abuse-centric Web portal address the sorts of threats that really matter?

To me, one answer is to deploy a network forensics solution like NetWitness or Solera in front of your PITC infrastructure. The PITC vendor must have a way to identify legitimate clients, or else you've created the world's greatest open Web proxy. Use the identity information to tag the traffic collected by the network forensics product.

When a customer needs to analyze an intrusion, or conduct an investigation, he can connect to the hosted network forensics platform.

I also like this approach because it helps address the consumerization of IT. You can create a policy (weak I know, but it's an option) that Company users must point any device that processes Company data to the PITC infrastructure for Web access. By doing so you can collect the network forensic data you need.

Of course, encryption is always an issue, but if really necessary I'm sure you can work with the PITC vendor on a MITM approach.

I'm sure I'll get a few comments from critics saying "NSM is dead," "network traffic is worthless," etc. It's just a sign you don't know how to use that sort of data effectively, and probably never will. After evangelizing for 10 years, I've given up trying to convince critics like that.

I also don't intend for this post to be a signal that I hate logs or host-based evidence. It's just another piece in the puzzle.

So, network forensics vendors, who will be the first to publish a press release saying you've partnered with a PITC provider?

Gartner on CSIRTs

I know some of you pay attention to what Gartner says, or more probably, your management does. I found this new report How to Build a Computer Security Incident Response Team by Jeffrey Wheatman, Rob McMillan, and Andrew Walls helpful if you need external validation from a source your management is likely to recognize. You need a Gartner account to breach the paywall.

I wanted to provide a few reasons why you might want to buy it and share it:

It is becoming increasingly common for auditors, regulators and other stakeholders to require organizations to formalize their responses to security events...

Even smaller organizations with limited legal and regulatory requirements can gain significant benefits in risk mitigation from the implementation of a basic security incident response team. Following the phased approach outlined in this research will guide clients on how to best assess their needs and implement a response team that will satisfy all stakeholders...

A competent and adequately resourced CSIRT is an important part of an organization's information security program. Many organizations either have nothing in place or follow inconsistent procedures.

In many organizations, the goal is to recover from an incident and get back up and running with minimal attention being paid to evidence collection, analysis or postmortem reporting.

Over the long term, this approach results in more security events, not fewer, as the organization is unable to discern the root causes of incidents and incorporate these lessons learned into improvements in infrastructure and process management.

Further, in those instances where an organization's individual experience is part of a broader incident affecting multiple organizations, this approach may result in added legal complexity and liability.


That should help justify a CIRT. I was glad to see the following:

CSIRT staff will require access to key systems where required, such as capabilities that are normally available via network operations centers (NOCs) or security operations centers (SOCs).

The team will also require dedicated infrastructure, possibly protected from the rest of the organization, including secure physical facilities, material storage and dedicated computers, as well as specialized software and hardware.

Redundancy in physical resources and technical systems is required to ensure CSIRT operations when normal facilities and technology are corrupted or unavailable. For example, CSIRT members should be able to access mobile telephones, fixed-line telephones, faxes and, in extreme circumstances, radio communications.


The need for separate infrastructure -- a "technology gap," as my team calls it -- is crucial. How can you defend vulnerable infrastructure using the same vulnerable infrastructure?

More on tools:

The key issue is that the CSIRT is likely to require tools in order to perform its function. Since these tools will be used in an uncertain operational environment (that is, one that is suspected or confirmed as having been compromised), it is important that the organization be able to confidently assert that these tools are reliable and preserve evidence in an untainted fashion...

In other words, the technology gap can also help a CIRT defend its evidence.

I found this interesting:

A variety of public and commercial organizations provide a range of support services for CSIRTs, including...

FIRST (http://first.org): This membership-based organization provides a support service for CERTs and CSIRTs on a global basis. FIRST members tend to be governmental organizations (for example, the U.S. Army CERT — ACERT) and major commercial organizations (for example, GE-CIRT, General Electric's CIRT).


Wow, I guess we made the big time!

In conclusion, check out the Gartner document. It might help you. If anyone wants to post links to the myriad of other resources out there (FIRST, CERT/CC, etc.), link away. I don't feel like hunting down the results of a Google search for building an IRT. Thank you.

Tuesday, July 13, 2010

My Article on Advanced Persistent Threat Posted

My article Understanding the Advanced Persistent Threat provides an overview of APT. It's the cover story in the July 2010 Information Security Magazine. From the article:

The term advanced persistent threat, or APT, joined the common vocabulary of the information security profession in mid-January, when Google announced its intellectual property had been the victim of a targeted attack originating from China. Google wasn't alone; more than 30 other technology firms, defense contractors and large enterprises had been penetrated by hackers using an array of social engineering, targeted malware and monitoring technologies to quietly access reams of sensitive corporate data.

Google's public admission put a high-profile face on targeted attacks and the lengths attackers would go to gain access to proprietary corporate and military information. It also kicked off a spate of vendor marketing that promised counter-APT products and services that have only served to cloud the issue for security managers and operations people.

In this article, we'll define APT, dispel some myths and explain what you can do about this adversary.

Wednesday, July 07, 2010

A Little More on Cyberwar, from Joint Pub 1

Everyone's been talking about cyberwar this week, thanks in part to the Economist coverage. Many of the comments on my posts and elsewhere discuss the need for definitions.

I thought it might be useful to refer to an authoritative source on war for the United States: DoD Joint Publication 1: Doctrine for the Armed Forces of the United States (.pdf), known as JP 1.

Incidentally, back in 1997 as an Air Force 1Lt straight from intelligence school, I worked on doctrine publications like this for Air Intelligence Agency, specifically the early doctrine on information warfare, like the August 1998 publication of Air Force Doctrine Document 2-5: Information Operations (.pdf).

What does JP 1 say about war?

War is socially sanctioned violence to achieve a political purpose. In its essence, war is a violent clash of wills. War is a complex, human undertaking that does not respond to deterministic rules. Clausewitz described it as “the continuation of politics by other means” [Book one, Chapter 1, Section 24 heading]. It is characterized by the shifting interplay of a trinity of forces (rational, nonrational, and irrational) connected by principal actors that comprise a social trinity of the people, military forces, and the government...


The use of the term "violence" would seem to preclude cyberwar as being "war." Read on however:

Traditional war is characterized as a confrontation between nation-states or coalitions/alliances of nation-states. This confrontation typically involves small-scale to large-scale, force-on-force military operations in which adversaries employ a variety of conventional military capabilities against each other in the air, land, maritime, and space physical domains and the information environment (which includes cyberspace).

The objective is to defeat an adversary’s armed forces, destroy an adversary’s war-making capacity, or seize or retain territory in order to force a change in an adversary’s government or policies. Military operations in traditional war normally focus on an adversary’s armed forces to ultimately influence the adversary’s government...

The near-term results of traditional war are often evident, with the conflict ending in victory for one side and defeat for the other or in stalemate.


We see "traditional war" involving state-on-state, military v military conflict, with the listed objectives. Those elements do not preclude cyberwar.

[Irregular Warfare, or] IW has emerged as a major and pervasive form of warfare although it is not per se, a new or an independent type of warfare. Typically in IW, a less powerful adversary seeks to disrupt or negate the military capabilities and advantages of a more powerful, conventionally armed military force, which often represents the nation’s established regime. The weaker opponent will seek to avoid large-scale combat and will focus on small, stealthy, hit-and-run engagements and possibly suicide attacks.

That is very interesting and consistent with ongoing operations.

The weaker opponent also could avoid engaging the superior military forces entirely and instead attack nonmilitary targets in order to influence or control the local populace. An adversary using irregular warfare methods typically will endeavor to wage protracted conflicts in an attempt to break the will of their opponent and its population. IW typically manifests itself as one or a combination of several possible forms including insurgency, terrorism, information operations (disinformation, propaganda, etc.), organized criminal activity (such as drug trafficking), strikes, and raids. The specific form will vary according to the adversary’s capabilities and objectives.

Here we read about engaging nonmilitary targets, very relevant to today's nation-vs-private enterprise activity. However, the following text clarifies the main idea behind Irregular Warfare:

IW focuses on the control of populations, not on the control of an adversary’s forces or territory. The belligerents, whether states or other armed groups, seek to undermine their adversaries’ legitimacy and credibility and to isolate their adversaries from the relevant population, physically as well as psychologically... What makes IW “irregular” is the focus of its operations – a relevant population – and its strategic purpose – to gain or maintain control or influence over, and the support of that relevant population through political, psychological, and economic methods.

This text shows that Irregular Warfare is thought of in JP 1 as being more like insurgency operations as witnessed in southwest Asia.

One more thought before I publish this post: I don't consider any of the following to meet the definition of war:

  • War on Poverty: President Lyndon Johnson declared "war" against a tragic human condition, but it's not really a war if the target is a physical condition.

  • War on Drugs: President Richard Nixon declared "war" against narcotics, but it's not really a war either if the target is a substance.

  • War on Terror: President George Bush declared "war" on terror after 9/11. While there is no doubt war happened, the target should be defined groups, like Al Qaeda, as stated by President Barack Obama -- not effects, like "terror."


Please note I keep these ideas in mind when forming thoughts on cyberwar.

Thoughts on "Application SOC" and New MSSPs

I'd like to briefly comment on a few ideas that appeared on lists I read.

First, in this Daily Dave post from June, Dave Aitel writes:

So when I gave the FIRST talk, one of the questions was "What is the solution?" ...

Immunity sees lots of success (and has for many years) with organizations that have done high level instrumentations [sic] against their applications, and then used powerful data mining tools to look at that data...

So what you see is the start up of what I like to call the "Application SOC". It's like a network SOC, but way more expensive, and with the chance of being actually useful!


On a related note, after discussing iTunes fraud, Stephen Northcutt adds the following comments in this SANS Newsbites post from yesterday:

I think we are seeing more and more market demand for a new type of MSSP, a cross between (1) a software security and quality consultant, (2) a monitoring company that focuses primarily on web logs and probably has some of their own routines (think Suhosin [a PHP hardening system] on steroids) and (3) a high end code and configuration incident response capability.

Both Dave and Stephen mention an "application SOC" sort of idea, so let's talk about this first. I believe this already exists, and is indeed used effectively by a variety of organizations. It's certainly at the high end of maturity, but it's there.

Logs can be a supplementary data source, consulted for forensic reference during incident response triggered by a traditional security indicator. Alternatively, logs can provide the primary indicator. Unfortunately, logs alone may not contain the data needed to convince an analyst that a security incident has occurred.

There's also the problem of failing to build visibility into applications. Gunnar, feel free to reply with a link to your latest logs-for-developers class!

Turning strictly to Stephen's remaining points, I think companies like Cigital already have the "software security and quality consultant" space firmly in control.

Stephen's last point, however, seems really interesting. I may be misinterpreting what he said because I like my interpretation, but at the very least he may be advocating an outsourced PSIRT. I think this is a cool idea. Create an MSSP that provides customer-facing support to vulnerability researchers and others who find software flaws. Work with the software developer to transform vulnerability reports into improved code, handling the public relations, disclosures, coordination with CERT, and so on. I don't know of anyone who does that work, but I think every software provider needs a PSIRT. What do you think?

Tuesday, July 06, 2010

Ponemon Institute Misses the Mark

Today the Ponemon Institute announced results of a survey they conducted titled Growing Risk of Advanced Threats: Study of IT Practitioners in the United States. Unfortunately, this survey looks like it is mainly the blind asking the blind to describe a threat neither really understands. For example, the survey states:

While the definition of what constitutes an advanced threat still varies within the industry, for purposes of this research we have defined an advanced threat as a methodology employed to evade an organization’s present technical and process countermeasures which relies on a variety of attack techniques as opposed to one specific type.

The predominant majority of these threats are represented by unknown, zero-day attacks, but there are increasingly many instances where known attacks are being re-engineered and repackaged to extend their usefulness.


If this survey had stuck with this definition and hadn't mentioned Advanced Persistent Threat, then I could possibly live with it. Unfortunately, they veer off into the land of speculation and confusion with questions and answers like the following:

Q1d. What other terms are used to describe an advanced threat? Please select all that apply.

  • Advanced persistent threat (50%)

  • Emerging threat (41%)

  • Spear-phishing (38%)

  • SQL Injection (31%)

  • Cyber warfare (25%)

  • Continuous attack (21%)

  • Cyber terrorism (21%)

  • Denial of service attack (19%)



Please. No. Make it stop. It's bad enough to pollute the APT term with the "advanced threat" definition Ponemon manufactured, but now it includes SQL injection and DoS? And the statement "the predominant majority of these threats are represented by unknown, zero-day attacks" in no way describes how APT acts. They can escalate to researching, weaponizing, and using zero-days, but that does not define them.

The ultimate shame is seeing SearchSecurity.com fall for this with their article More firms targeted by advanced persistent threats, study finds:

Advanced persistent threats (APTs), which are carried out by organized cybercriminal groups, may be a growing trend as a new survey finds an increase in advanced threats over the last 12 months.

No. APT is not cybercrime.

While there might be some interesting survey data in the Ponemon results, please don't think for a second it has anything to do with APT whatsoever.

Monday, July 05, 2010

Joint Strike Fighter -- Face of Cyberwar?

Does anyone remember this story from April 2009?

Computer Spies Breach Fighter-Jet Project

Computer spies have broken into the Pentagon's $300 billion Joint Strike Fighter project -- the Defense Department's costliest weapons program ever -- according to current and former government officials familiar with the attacks...

In the case of the fighter-jet program, the intruders were able to copy and siphon off several terabytes of data related to design and electronics systems, officials say, potentially making it easier to defend against the craft...

"There's never been anything like it," this person said, adding that other military and civilian agencies as well as private companies are affected. "It's everything that keeps this country going..."

Former U.S. officials say the attacks appear to have originated in China...

Six current and former officials familiar with the matter confirmed that the fighter program had been repeatedly broken into...


A week ago, this story appeared:

DoD Adviser: Foes' Advances Might Lead to F-35 Fleet Shrinkage

The Obama administration may have to rethink whether the U.S. military will need 2,500 F-35 fighter jets...

With possible American enemies, like China, developing and fielding ever-more advanced systems - such as sophisticated radar suites and surface-to-air missiles - Pentagon and administration officials must examine if the Lockheed Martin-made Lightning II will bring as much "value" to combat by the time it comes online next decade as thought decades previous when it was designed, [Andrew Krepinevich] said...

[B]ecause it might not be as useful from so-called forward air bases or aircraft carriers because of foes' advanced air defenses, the Defense Department might have to swallow hard, buy fewer F-35s and use any savings to buy other aircraft and missiles...


And today we have China's response:

China Seizes on F-35 Remarks:

[O]ne U.S. publication’s report about the possible cancellation of an American fighter jet program is being seized on by Chinese media as evidence of the Asian giant’s growing military prowess...

On Monday, the website of the People’s Daily, a mouthpiece for the Communist Party, reported the Obama administration was reconsidering its purchase of the fighter jets as a result of “astonishing progress” by militaries of China and other potential U.S. adversaries...


In case you missed it, here's the formula:

  1. US designs and builds a 5th-generation Joint Strike Fighter, with plans not only for the American military but for allied nations.

  2. China steals crucial information about JSF.

  3. American military officials and analysts realize the JSF might not be as effective as hoped due to the incorporation of counter-JSF technology into adversary (China) defense systems.

  4. China rejoices as American military officials rethink their plans for the JSF. China downs the JSF without firing a shot.


Incidentally, I am aware of financial issues with JSF, performance concerns, etc., along with DoD's $100 billion budgetary challenge. I live near the Beltway and watch This Week in Defense News religiously! However, I find my scenario plausible, and at least a possible contributing factor to plans to scale back JSF.

Sunday, July 04, 2010

Cyberwar Is Real

A number of people, inside and outside the security world, think that any discussion of real threats is a manufactured justification for intrusive government action.

Their argument is simple.

  1. The government wants to control the people, or obtain a resource, or pursue some objective that could not be reasonably achieved if transparently presented to the citizenry.

  2. The government "propaganda machine," sometimes in coordination with "the media" and "big business," "manufactures" a "crisis" whose only solution is increased government power.

  3. The people acquiesce in order to preserve their safety, and the government achieves its objective.


As a result, those who see the world in this manner treat any discussion of real threats as step 2 in this process towards decreased liberty via increased government power. Those who seek to inform the citizenry of real threats are dismissed as sowing "FUD."

This is a tragedy, because it means that we continue to suffer at the hands of real threats who laugh while pillaging their target.

Yes, there are surely those in government who see any crisis as an opportunity to advance their agenda. Yes, governments have manufactured threats in the past to justify action. I am a history major so I am well schooled in these events, and as a libertarian I am suspicious of the government. However, I am not blinded to reality, unlike those who choose to dismiss threats as "simple espionage" and the like.

In the past I've been somewhat ambiguous about cyberwar. Starting now, I've decided to say it: cyberwar is real.

The reason some others aren't willing to say this is because they are keeping their minds narrowed to historical definitions of war, or they are not aware of the "facts on the ground," or they choose to ignore facts because they see them as elements of "step 2" and thereby inherently false.

I mentioned in a recent post that Attrition.org has decided to ridicule those who quote Sun Tzu, and I largely agree. At the micro level of civilian defense of corporate systems, where defenders cannot strike back, "war" does not seem to be the correct paradigm, so Sun Tzu fails as a way to interpret enterprise defense.

However, at the level of nation states, the entities which wage war, Sun Tzu is as applicable as ever. And this is the problem with those who dismiss cyberwar; they think that without bullets being fired, there is no war. Sun Tzu would laugh at that:

For to win one hundred victories in one hundred battles is not the acme of skill. To subdue the enemy without fighting is the acme of skill.

Bruce Lee, and before him Tsukahara Bokuden, understood that "fighting without fighting" is the highest form of war.

Cyberwar, therefore, may be seen as a means to subdue the enemy without traditional "fighting."

It's likely that if those who dismiss cyberwar as "simple espionage" gain the political and philosophical high ground, and threats continue to ravage their victims, no bullets would ever need to be fired. The victim would not need to be "conquered" by traditional means; physical "war" would be redundant.

Does all this mean I agree with government plans to "defend" the Internet? Of course not. However, it is foolish to dismiss the threat because one does not agree with a government-proposed "solution."

Saturday, July 03, 2010

Security Is Never Free -- Ask DNSSEC

Volume 13 Number 1 of the Cisco IP Journal features a fascinating DNS troubleshooting article titled "Rolling Over DNSSEC Keys" by George Michaelson, APNIC; Patrick Wallström, .SE; Roy Arends, Nominet; and Geoff Huston, APNIC. It's one of the best articles I've ever read in IPJ. You should subscribe (it's free) if you like this blog.

In the article, the authors investigate a surge of DNS traffic suffered by a secondary DNS server that is authoritative for a number of subdomains of the in-addr.arpa zone.

The article explains what happens next.

I can cut to the chase with the following quotes:

In other words, in this example scenario with stale Trust Anchor keys in a local client's resolver, a single attempt to validate a single DNS response will cause the client to send a further 844 queries, and each .com Name Server to receive 56 DNSKEY RR queries and 4 DS RR queries...

The problem with key rollover and local management of trust keys appears to be found in around 1 in every 1,500 resolvers in the in-addr.arpa zones. With a current client population of some 1.5 million distinct resolver client addresses each day for these in-addr.arpa zones, there are some 1,000 resolvers who have lapsed into this repeated query mode following the most recent key rollover of December 2009. Each subzone of in-addr.arpa has six Name Server records, and all servers see this pathological re-query behavior following key rollover.


The conclusion is excellent:

It is an inherent quality of the DNSSEC deployment that in seeking to prevent lies, an aspect of the stability of the DNS has been weakened.

When a client falls out of synchronization with the current key state of DNSSEC, it will mistake the current truth for an attempt to insert a lie.

The subsequent efforts of the client to perform a rapid search for what it believes to be a truthful response could reasonably be construed as a legitimate response, if indeed this instance was an attack on that particular client. Indeed, to do otherwise would be to permit the DNS to remain an untrustable source of information.

However, in this situation of slippage of synchronized key state between client and server, the effect is both local failure and the generation of excess load on external servers — and if this situation is allowed to become a common state, it has the potential to broaden the failure state to a more general DNS service failure through load saturation of critical DNS servers.

This aspect of a qualitative change of the DNS is unavoidable, and it places a strong imperative on DNS operations and the community of the 5 million current and uncountable future DNS resolvers to understand that "set and forget" is not the intended mode of operation of DNSSEC-equipped clients.


To me, an interesting aspect of this story is that deployment of a security protocol in the real world is ultimately degraded by operational issues. We could probably name countless examples of this; DNSSEC is only the latest.
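The scale of the problem follows directly from the figures the authors quote. This back-of-envelope sketch just redoes their arithmetic; the function and its name are illustrative, not the authors' model:

```python
# Estimate the extra query load from resolvers holding stale DNSSEC
# trust anchors, using the figures quoted from the IPJ article.

def requery_load(total_resolvers, broken_ratio, queries_per_validation):
    """Return (broken resolver count, extra queries per validation round)."""
    broken = total_resolvers * broken_ratio
    return broken, broken * queries_per_validation

broken, extra = requery_load(
    total_resolvers=1_500_000,   # daily distinct resolver clients (quoted)
    broken_ratio=1 / 1500,       # ~1 in 1,500 resolvers affected (quoted)
    queries_per_validation=844,  # extra queries per failed validation (quoted)
)
print(f"{broken:.0f} stale resolvers -> {extra:.0f} extra queries per validation attempt")
```

That is roughly 1,000 misbehaving resolvers each multiplying a single validation attempt into hundreds of queries, which is how a correctness mechanism becomes a load-saturation risk.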

Lessons from NETOPS vs CND

Volume 13 Issue 2 of IATAC's IA Newsletter features an article titled Apples and Oranges: Operating and Defending the Global Information Grid by Dr Robert F Mills, Maj Michael Birdwell, and Maj Kevin Beeker. The article nicely argues for refocusing DoD's "NETOPS" and "CND" missions, where the former is defined currently as

activities conducted to operate and defend the Global Information Grid

and the latter is defined currently as

actions taken to protect, monitor, analyze, detect, and respond to unauthorized activity within DoD information systems and computer networks.

After spending years to "converge" the two missions, the authors argue DoD needs to separate them (as I understand the Air Force has done, bringing back the AFCERT for example).

I'd like to present selected excerpts with my own emphasis.

Cyberspace is a contested, warfighting domain, but we’re not really treating it as such, partly because our language and doctrine have not matured to the point that allows us to do so.

One reflection of our immature language is our inability to clearly differentiate the concepts of network operations (NETOPS) and computer network defense (CND). This creates confusion about the roles and responsibilities for provisioning, sustaining, and defending the network — much less actually using it.

Only by separating these activities can we more effectively organize, train, and equip people to perform those tasks...

Effective CND uses a defense-in-depth strategy and employs intelligence, counterintelligence, law enforcement, and other military capabilities as required. However, the CND culture is largely one of information assurance (e.g., confidentiality, integrity, and availability), system interoperability, and operations and maintenance (O&M).

Many of the things that we routinely call ‘cyberspace defense’ in cyberspace are really just O&M activities — such as setting firewall rules, patching servers and workstations, monitoring audit logs, and troubleshooting circuit problems...

[W]e do not treat cyberspace operations like those conducted in other domains... [T]housands of systems administrators routinely count and scan computers to ensure that their software and operating system patches are current. The objective is 100% compliance, but even if we could achieve that, this is a maintenance activity.

(Indeed, do we even really know how many computers we have, let alone how many are compliant?)

This is no more a defensive activity than counting all the rifles in an infantry company and inspecting them to ensure that they are properly cleaned and in working order.

Our current NETOPS/CND mindset is intentionally focused inward... Contrast this with a traditional warfighting mentality in which we study an adversary’s potential courses of action, develop and refine operational plans to meet national and military objectives, parry thrusts, and launch counter attacks.

While we do worry about internal issues such as security, force protection, logistics, and sustainment, our focus remains outward on the adversary.


Does that sound familiar? An "outward focus on the adversary" reminds me of my concept of threat-centric security instead of "inward" or vulnerability-centric security.

Our intent is not to diminish the importance of NETOPS activities... But they are not defensive activities — at least not in the classical understanding of the concept. Turning to Carl von Clausewitz, we see a much different concept of defense than is currently applied to cyberspace:

"Pure defense, however, would be completely contrary to the idea of war, since it would mean that only one side was waging it....

But if we are really waging war, we must return the enemy’s blows; and these offensive acts in a defensive war come under the heading of ‘defense’ – in other words, our offensive takes place within our own positions or theater of operations.

Thus, a defensive campaign can be fought with offensive battles, and in a defensive battle, we can employ our divisions offensively... So the defensive form of war is not a simple shield, but a shield made up of well-directed blows."


I find it interesting to see these authors cite Clausewitz. Anyone notice Attrition.org blast Sun Tzu but speak better of Clausewitz recently?

These definitions of defense do not sound like our current approach to NETOPS and CND. Clausewitz might say we have a shield mentality about cyber defense...

An active defense — one that employs limited offensive action and counterattacks to deny the adversary — will be required to have a genuinely defensive capability in cyberspace.

Our recommendations to remedy this situation are as follows:

  1. Redefine NETOPS as “actions taken to provision and maintain the cyberspace domain.” This would capture the current concepts of operations and maintenance while removing the ambiguity caused by including defense within the NETOPS construct.

  2. Leverage concepts such as ‘mission assurance’ and ‘force protection’ to help change the culture and engage all personnel — users, maintainers, and cyber operators. Everyone has a role in security and force protection, but we are not all cyber defenders. Force protection and mission assurance are focused inward on our mission.

  3. Redefine our CND construct to be more consistent with our approach to the concept of ‘defense’ in the other domains of warfare, to include the concept of active defense. This would shift the concept from maintenance to operations, from inward to outward (to our adversaries). CND is about delivering warfighting effects (e.g., denying, degrading, disrupting, and destroying the cyber capabilities of our adversaries).



I like these three recommendations from a corporate point of view:

  1. IT provides "NETOPS".

  2. User and management training and awareness are "force protection" activities.

  3. CIRTs with Red capabilities, authorized to perform "active defense" against adversaries, perform "CND."


What do you think?

Friday, July 02, 2010

Secunia Survey of DEP and ASLR

At the FIRST conference last month, Dave Aitel said something to the effect that DEP and ASLR are the only two noteworthy technologies produced by Microsoft since starting their security initiative. Forgive me Dave if I messed that up, and feel free to respond!

I thought that was interesting after reading the post DEP / ASLR Neglected in Popular Programs by Secunia. The figure at left summarizes their findings over time.

The report concludes thus:

DEP and ASLR support, although usually trivial to implement, is overlooked by a large number of application developers. The requirement for an additional call to "SetProcessDEPPolicy()" proved confusing to almost all vendors, resulting in late implementation of DEP when running on Windows XP.

Some developers have over time made their applications compatible with DEP, but overall the implementation process has proven slow and uneven between OS versions. ASLR support is on the other hand improperly implemented by almost all vendors, allowing return-into-libc techniques to likely succeed in their applications or in browsers designed to be otherwise ASLR compliant.

While most Microsoft applications take full advantage of DEP and ASLR, third-party applications have yet to fully adapt to the requirements of the two mechanisms. If we also consider the increasing number of vulnerabilities discovered in third-party applications, an attacker's choice for targeting a popular third-party application rather than a Microsoft product becomes very understandable.

Hopefully, vendors will see the importance of properly deploying the two measures, resulting in an increased number of third-party applications having full DEP and ASLR support in the near future.


I found the report interesting -- what do you think?
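For readers who want to see what Secunia's survey actually measures: DEP and ASLR opt-in are recorded as two flag bits in a PE image's DllCharacteristics field (values from the PE/COFF specification). The helper below is a hypothetical sketch for interpreting those bits, not Secunia's tooling:

```python
# DllCharacteristics flag bits from the PE/COFF specification.
DYNAMIC_BASE = 0x0040  # IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE: ASLR opt-in
NX_COMPAT    = 0x0100  # IMAGE_DLLCHARACTERISTICS_NX_COMPAT: DEP opt-in

def mitigation_status(dll_characteristics: int) -> dict:
    """Report which of the two mitigations a PE image opts into."""
    return {
        "aslr": bool(dll_characteristics & DYNAMIC_BASE),
        "dep":  bool(dll_characteristics & NX_COMPAT),
    }

# An image linked with both /DYNAMICBASE and /NXCOMPAT:
print(mitigation_status(0x0140))  # {'aslr': True, 'dep': True}
```

An image missing the DYNAMIC_BASE bit is exactly the "improperly implemented" ASLR case Secunia describes: one non-randomized module is enough to anchor a return-into-libc attack.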