Monday, April 30, 2007

Help SANS with Security Career Stories

The latest issue of the SANS @Risk (link will work shortly) newsletter contains this request:

Project In Which You Might Contribute: Career models for information security. If you know of someone who has accomplished a lot in security by exploiting deep technical skills, and moved up in their organizations, please write us a little note about them to apaller [at] sans [dot] org. We have been asked by five different publications for articles or interviews on how to make a successful career in information security. A couple of the editors have heard that security folks with soft skills are no longer in demand and they want to hear about models of success for people with more technical backgrounds. No names or companies will be disclosed without written permission.

If you can share a story, please email Alan Paller as indicated above. This is another opportunity for the technical people of the security world to make our mark.

Open Source Training

I'd like to mention a few notes on training for open source software that appeared on my radar recently. The first is Wireshark University, the result of collaboration among Laura Chappell and her Protocol Analysis Institute, Gerald Combs (Wireshark author), and CACE Technologies, maintainers/developers of WinPcap and AirPcap. WiresharkU is offering a certification and four DVD-based courses, along with live training delivered through another vendor.

WiresharkU's content looks pretty simple, but I guess beginners need to start somewhere. If you want to understand more advanced security-related network traffic, I recommend one of my TCP/IP Weapons School classes, offered at Techno Security in Myrtle Beach, SC in June; USENIX 2007 in Santa Clara, CA in June; and Black Hat Training in Las Vegas, NV in July.

On a related Wireshark note, a client recently asked why Lua was required on a sensor he had built. He had heard about Lua in connection with Snort 3.0, but he was running Snort 2.6.x. The answer is that Wireshark uses Lua; here is one example.

If you're attending BSDCan, consider taking the BSD Certification Exam Beta. It's free, but passing it won't convey certification. Registration opened last week. I will again miss BSDCan due to conflicting engagements, namely AusCERT 2007. In addition to speaking at AusCERT, I'm teaching Network Security Monitoring and talking to the Sydney Snort Users Group on 25 May 2007.

Sunday, April 22, 2007

What Should the Feds Do

Recently I discussed Federal digital security in Initial Thoughts on Digital Security Hearing. Some might think it's easy for me to critique the Feds but difficult to propose solutions. I thought I would try offering a few ideas, should I be called to testify on proposed remedies.

For a long-term approach, I recommend the steps offered in Security Operations Fundamentals. Those are operational steps to be implemented on a site-by-site basis, and completing all of them across the Federal government would probably take a decade.

In the short term (over the next 12 months) I recommend the following. These ideas are based on the plan the Air Force implemented over fifteen years ago, partially documented in Network Security Monitoring History along with more recent initiatives.

  1. Identify all Federal networks and points of connectivity to the Internet. This step should already be underway, along with the next one, as part of the OMB IPv6 initiative. The Feds must recognize the scope and nature of the network they want to protect. This process must not be static. It must be dynamic and ongoing. Something like Lumeta should always be measuring the nature of the Federal network.

  2. Identify all Federal computing resources. If you weren't laughing with step 1, you're probably laughing now. However, how can anyone pretend to protect Federal information if the systems that process that data are unknown? This step should also be underway as part of the IPv6 work. Like network discovery, device discovery must be dynamic and automated. At the very least, passive discovery systems should be continuously taking inventory of Federal systems. To the extent active discovery can be permitted, those means should also be implemented. Please realize steps 1 and 2 are not the same as FISMA, which is static and only repeated every three years for known systems.

  3. Project friendly forces. You can tell these steps are becoming progressively more difficult and intrusive into agency operations. With this step, I recommend that third party government agents, perhaps operated by OMB for unclassified networks and a combination of DoD and ODNI for classified networks, "patrol" friendly networks. Perhaps they operate independent systems on various Federal networks, conducting random reconnaissance and audit activities to discover malicious parties. The idea is to get someone else besides intruders and their victims into the fight at these sites, so an independent, neutral third party can begin to assess the state of enterprise security. The Air Force calls this friendly force projection; it's a common military term, but the Air Force is actually performing it now on AF networks.

    This step is important because it will unearth intrusions that agencies can't find or don't want to reveal. It is imperative that end users, administrators, and managers become separated from the decision on reporting incidents. Right now incident reporting resembles status reports in the Soviet Union. "Everything is fine, production is exceeding quotas, nothing to see here." The illusion is only shattered by whistleblowers, lawsuits, or reporters. Independent, ground-truth reporting will come from this step and from centralized monitoring (below).

  4. Build a Federal Incident Response Team. FIRT is a lousy name for this group, but there should be a pool of supreme technical skill available to all Federal enterprises. Each agency should also have an IRT, but they should be able to call upon FIRT for advice, information sharing, and surge support.

  5. Implement centralized monitoring at all agencies. All agencies should have a single centralized monitoring unit. Agents from step three should work with these network security monitoring services to improve situational awareness. Smaller agencies should pool resources as necessary. All network connectivity points identified in step 1 should be monitored.

  6. Create the National Digital Security Board. As I wrote previously:

    The NDSB should investigate intrusions disclosed by companies as a result of existing legislation. Like the NTSB, the NDSB would probably need legislation to authorize these investigations.

    The NDSB should also investigate intrusions found by friendly force projection and centralized monitoring.

None of these steps are easy. However, there appears to be support for some of them. This is essentially the formula the Air Force adopted in 1992, with some of the steps (like friendly force projection) being adopted only recently. I appreciate any comments on these ideas. Please keep in mind these are 30 minutes worth of thoughts written while waiting for a plane.

Also -- if you visit this blog, you'll see a new theme. Blogger "upgraded" me last night, removing my old theme and customizations. I think most people use RSS anyway, so the change has no impact. I like the availability of archives on the right side now.

Update: I added step 6 above.

Saturday, April 21, 2007

Two Pre-reviews

I'm going to spend more time hanging in the sky over the coming weeks, so I plan to read and review many books. Publishers were kind enough to send two which I look forward to reading. The first is Designing BSD Rootkits by Joseph Kong. I mentioned this book last year. Publisher No Starch quotes me as saying

"If you understand C and want to learn how to manipulate the FreeBSD kernel, Designing BSD Rootkits is for you. Peer into the depths of a powerful operating system and bend it to your will!" The second book I plan to read is IT Auditing: Using Controls to Protect Information Assets by Chris Davis, Mike Schiller, and Kevin Wheeler. Contrary to what you might think, I am not instinctively at odds with auditors. In fact, I believe working with them is more productive than working against them. I hope this book, published by McGraw-Hill/Osborne, helps me understand their world.

Friday, April 20, 2007

Initial Thoughts on Digital Security Hearing

Several news outlets are reporting on the hearing I mentioned in my post When FISMA Bites. The following excerpts appear in Lawmakers decry continued vulnerability of federal computers:

The network intrusions at State and Commerce follow years of documented failure to comply with the Federal Information Security Management Act (FISMA), which requires agencies to maintain a complete inventory of network devices and systems. Government and industry officials at the hearing acknowledged a disconnect between FISMA's intent and effecting improved network security.

"The current system that provides letter grades seems to have no connection to actual security," said Rep. Zoe Lofgren, D-Calif.
(emphasis added)

WOW -- does Zoe Lofgren read my blog?

Some lawmakers are considering whether the Department of Homeland Security should be given primary responsibility for overseeing federal network security, but officials at DHS and elsewhere suggested that wouldn't be the best idea. Noting that DHS has not performed well on the annual FISMA report card and has not implemented all of the recommendations put forth for improved analysis and warning capabilities for attacks, Greg Wilshusen, director of information security issues at the Government Accountability Office, said it would be problematic from an organizational standpoint to put DHS in the position of compelling other agencies to comply.

I agree DHS is not in a position to defend the entire Federal government, but centralized network security is a good idea when skilled defenders are in short supply and high demand. It's clear that some agencies are not capable of defending themselves, while others do a better job. Perhaps a "center of excellence" model might work, where an agency with a very good monitoring team might watch the entire network. Another agency with a very good red team might assess the whole network, and so on.

Further news about the State intrusion appears in Response to May-July 2006 Cyber Intrusion on Department of State Computer Network and State Department got mail -- and hackers.

It's important to note that State was compromised by a zero-day, and a patch from Microsoft took eight weeks to appear.

The article Intruders infect 33 US government computers with Trojans talks about a compromise at the Department of Commerce:

[T]he cyberintrusion affecting the Commerce Department's Bureau of Industry and Security systems was first noticed last July, when a Bureau of Industry and Security deputy under secretary reported being locked out of his computer. An investigation showed that the system had been compromised and someone had installed malicious code on it that was causing it to make unauthorised attempts to access another computer on the bureau's network...

Investigations also showed that the infected system had attempted to access two external IP address[es] after business hours when the computer was no longer being used...

Over the next 10 days or so, investigators at Bureau of Industry and Security noticed about 10 other computers making similar attempts to connect with suspicious IP addresses. By 18 August, 32 bureau systems and one non-bureau system had tried to communicate with at least 11 suspicious IP addresses...

To date, an analysis of the forensic data has shown no evidence that information was actually stolen, despite the compromises, Jarrell said. At the same time, it remains unclear just when the first breach occurred or for how long intruders might have had access to the Bureau’s systems.

That's interesting. The initial incident is found through non-technical means ("Hey, my PC no workie!") but extrusion detection would have noticed the outbound connections. If BIS had been collecting session data they would have known when the first attempt to access those external IPs occurred, thereby helping to scope the incident.
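To make the session-data point concrete, here is a minimal sketch of the query BIS could have run. It assumes flow records are available as (timestamp, source IP, destination IP) tuples; the record format and the watch-list addresses are hypothetical.

```python
from datetime import datetime

# Hypothetical watch list of suspicious external IPs from the investigation.
SUSPICIOUS = {"", ""}

def earliest_contact(sessions, watch_list):
    """Return the earliest session whose destination is on the watch list,
    or None. Each session is a (timestamp, src_ip, dst_ip) tuple, so
    tuples sort by timestamp first."""
    hits = [s for s in sessions if s[2] in watch_list]
    return min(hits, default=None)

sessions = [
    (datetime(2006, 7, 13, 2, 10), "", ""),
    (datetime(2006, 7, 5, 23, 55), "", ""),
    (datetime(2006, 7, 20, 1, 30), "", ""),
]
first = earliest_contact(sessions, SUSPICIOUS)
# first[0] is the earliest after-hours contact, which scopes the
# start of the incident instead of leaving it "unclear."
```

With a complete session archive, "when did this start?" becomes a lookup rather than a guess.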

Thursday, April 19, 2007

Pirates in the Malacca Strait

Given my recent post Taking the Fight to the Enemy Revisited, does this AP report sound familiar?

Countries lining the Malacca Strait have vastly improved security in the strategic shipping route over the last five years, the top U.S. commander in the Pacific said on Monday...

Attacks in the Malacca Strait have been on the decline with only 11 cases last year compared to 18 in 2005 and 38 in 2004, according to the International Maritime Bureau, a maritime watchdog...

Indonesia, Malaysia and Singapore began stepping up their surveillance by coordinating sea patrols in 2004 and following with air patrols a year later.

Last August, the British insurance market Lloyd's lifted its "war-risk" rating for the waterway, saying the safety of the 550-mile-long strait had improved due to long-term security measures.
(emphasis added)

Despite this development, Malaysia is looking for alternatives to shipping when transporting oil, according to this article:

A proposed oil pipeline project to pump oil across northern Malaysia could lower transportation costs and avoid risks of pirate attacks on tankers.

The US$14.2-billion project would involve building a 320-kilometre pipeline across northern Malaysia, linking ports on the two coasts, officials in northern Kedah state announced...

Crude oil would be refined in Kedah, pumped through the pipe to Kelantan on the east coast and then loaded onto tankers bound for Japan, China and South Korea, completely bypassing Singapore and the Malacca Strait, which lies off peninsular Malaysia’s west coast.

The strait, which carries half the world’s oil and more than one-third of its commerce, is shared by Malaysia, Indonesia and Singapore. It is notorious for robberies and kidnappings by pirates, but attacks have fallen following increased security patrols in 2005.
(emphasis added)

I see two lessons here. First, shipping companies did not try to "patch" their way out of this problem. There is no way to address all of the vulnerabilities associated with transporting oil by tanker. A two-pronged approach was taken. First, to protect ships, governments increased security patrols to deter and repel pirates. Ships did not get equipped with Yamato-size deck guns and battleship armor. Second, an alternative means to transport oil is being considered. This is a form of backup or redundancy to ensure oil still flows if the Strait becomes too dangerous.

I think these stories have plenty of lessons for digital security. Of course the next step would be going after the pirates directly, before they ever reach friendly ships. Consider the history of the US Navy:

Operations Against West Indian Pirates 1822-1830s

By the second decade of the 19th Century, pirates increasingly infested the Caribbean and Gulf of Mexico, and by the early 1820's nearly 3,000 attacks had been made on merchant ships. Financial loss was great; murder and torture were common.

Under the leadership of Commodores James Biddle, David Porter and Lewis Warrington, the U.S. Navy's West India Squadron, created in 1822, crushed the pirates. The outlaws were relentlessly ferreted out from uncharted bays and lagoons by sailors manning open boats for extended periods through storm and intense heat. To the danger of close-quarter combat was added the constant exposure to yellow fever and malaria in the arduous tropical duty.

The Navy's persistent and aggressive assault against the freebooters achieved the desired results. Within 10 years, Caribbean piracy was all but extinguished, and an invaluable service had been rendered to humanity and the shipping interests of all nations.

That's what I'm talking about.

Thanks to geek00l and mboman, whose discussion of pirates in #snort-gui inspired this post.


CALEA is the Communications Assistance for Law Enforcement Act. I wrote about CALEA three years ago in Excellent Coverage of Wiretapping:

CALEA requires telecommunications carriers to allow law enforcement "to intercept, to the exclusion of any other communications, all wire and electronic communications carried by the carrier" and "to access call-identifying information," among other powers.

A lot has happened since then. Basically, all facilities-based broadband access providers and interconnected VoIP service providers must be CALEA-compliant by 14 May 2007. This means a lot of companies, of all sizes, are scrambling to deploy processes and tools to collect information in accordance with the law, as well as filing the right reports with the FCC.

If you're affected by CALEA I don't think you'll learn much from this post. However, those who do not work for ISPs might like to know a little bit about what is happening. (Note: I am not personally affected, so this post is based on some research I did this morning.) This post CALEA Mediation provides a lot of details and links, and the Wikipedia entry is good (as long as no one makes crazy changes). WISPA's mailing lists have carried several extended threads on CALEA compliance for wireless ISPs. The definitive blog on CALEA appears to be Demystifying Lawful Intercept and CALEA, by Scott Coleman, Director of Marketing for Lawful Intercept at SS8 Networks.

What started me looking at CALEA again was the story Solera Networks' CALEA Compliance Device, which talked about this Solera Networks appliance. The article mentioned OpenCALEA, which was new to me.

I checked out OpenCALEA via SVN from its OpenCALEA Google Code site. Jesse Norell was helpful in #calea. I installed the code on two FreeBSD 6.x boxes, cel433 (the "sensor") and poweredge (the box a Fed might use to collect data).

First I started a collector on the "Fed" box.

poweredge:/usr/local/opencalea_rev38/bin# ./lea_collector -t /tmp/cmii.txt
-u richard -f /tmp/cmc.pcap

Next I started a "tap" on the sensor to watch port 6667 traffic.

cel433:/usr/local/opencalea_rev38/bin# ./tap -x x -y y -z z -f "port 6667"
-i dc0 -d -c

As I typed traffic in an IRC channel on a connection watched by the tap...

13:25 < helevius> This is another CALEA test

...the tap sent traffic to the Fed box.

13:26:28.795644 IP > UDP, length 265
0x0000: 4500 0125 80ca 0000 4011 cdf8 0a01 0a02 E..%....@.......
0x0010: 0a01 0d02 f470 1a0a 0111 44ce 7800 0000 .....p....D.x...
0x0020: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0030: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0040: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0050: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0060: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0070: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0080: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0090: 0000 0000 0000 0000 0000 0000 3230 3037 ............2007
0x00a0: 2d30 342d 3139 5431 373a 3236 3a32 382e -04-19T17:26:28.
0x00b0: 3430 3600 015c 22aa c200 02b3 0acd 5e08 406..\".......^.
0x00c0: 0045 0000 64c3 8f40 003f 0635 8245 8fca .E..d..@.?.5.E..
0x00d0: 1c8c d3a6 0380 331a 0b4f bb43 bfc4 6a95 ......3..O.C..j.
0x00e0: e080 187f ffe4 cc00 0001 0108 0a52 0b91 .............R..
0x00f0: ad05 c1a5 e150 5249 564d 5347 2023 736e .....PRIVMSG.#sn
0x0100: 6f72 742d 6775 6920 3a54 6869 7320 6973 ort-gui.:This.is
0x0110: 2061 6e6f 7468 6572 2043 414c 4541 2074 .another.CALEA.t
0x0120: 6573 740d 0a est..
13:26:28.795810 IP > UDP, length 423
0x0000: 4500 01c3 80cb 0000 4011 cd59 0a01 0a02 E.......@..Y....
0x0010: 0a01 0d02 d418 1a0b 01af 3d00 7900 0000 ..........=.y...
0x0020: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0030: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0040: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0050: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0060: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0070: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0080: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0090: 0000 0000 0000 0000 0000 0000 7a00 0000 ............z...
0x00a0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x00b0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x00c0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x00d0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x00e0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x00f0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0100: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0110: 0000 0000 0000 0000 0000 0000 3230 3037 ............2007
0x0120: 2d30 342d 3139 5431 373a 3236 3a32 382e -04-19T17:26:28.
0x0130: 3430 3678 0000 0000 0000 0000 0000 0000 406x............
0x0140: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0150: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0160: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0170: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0180: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0190: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x01a0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x01b0: 0000 00bf 0080 0508 1cca 8f45 03a6 d38c ...........E....
0x01c0: 8033 1a .3.

The traffic on port 6666 UDP is the content and the traffic on port 6667 UDP is a connection record of some kind.

After shutting down the tap and collector, I checked the files the collector created.

poweredge:/usr/local/opencalea_rev38/bin# cat /tmp/cmii.txt
x, y, z, 2007-04-19T17:26:28.406,,, 32819, 6656
x, y, z, 2007-04-19T17:26:28.514,,, 6667, 32768
x, y, z, 2007-04-19T17:26:34.195,,, 6667, 32768
x, y, z, 2007-04-19T17:26:34.196,,, 32819, 6656

CMII is Communications Identifying Information. Here's the content, which is saved in Libpcap form.

poweredge:/usr/local/opencalea_rev38/bin# tcpdump -n -r /tmp/cmc.pcap -X
reading from file /tmp/cmc.pcap, link-type EN10MB (Ethernet)
13:26:28.406000 IP >
P 1337672639:1337672687(48) ack 3295319520 win 32767

0x0000: 4500 0064 c38f 4000 3f06 3582 458f ca1c E..d..@.?.5.E...
0x0010: 8cd3 a603 8033 1a0b 4fbb 43bf c46a 95e0 .....3..O.C..j..
0x0020: 8018 7fff e4cc 0000 0101 080a 520b 91ad ............R...
0x0030: 05c1 a5e1 5052 4956 4d53 4720 2373 6e6f ....PRIVMSG.#sno
0x0040: 7274 2d67 7569 203a 5468 6973 2069 7320 rt-gui.:This.is.
0x0050: 616e 6f74 6865 7220 4341 4c45 4120 7465 another.CALEA.te
0x0060: 7374 0d0a st..

Jesse told me there's a lot of work to be done with this open source suite. The idea is to give businesses that can't afford a commercial CALEA solution the option of open source.

I plan to keep an eye on the OpenCALEA mailing list and try new versions as they are released.

Wednesday, April 18, 2007

War in the Third Domain

Recently I wrote Taking the Fight to the Enemy Revisited that mentioned air power concepts as they relate to information warfare. The Air Force Association just published a story by Hampton Stephens titled War in the Third Domain. I found several points quoteworthy.

When the Air Force formed Air Force Space Command in 1982, it marked formal recognition that space was a distinct operating arena. The first commander, Gen. James V. Hartinger, said, “Space is a place. ... It is a theater of operations, and it was just a matter of time until we treated it as such..."

The Air Force has come to recognize cyberspace, like “regular” space, as an arena of human activity—including armed activity. It is, to reprise Hartinger, a theater of operations...

Though Cyber Command has not yet reached full major command status, it already is providing combat capabilities in cyberspace to the unified US Strategic Command and combatant commanders, according to Air Force officials.

Cyber Command has in place systems and capabilities for integrating cyber operations into other Air Force global strike options. All that is lacking, according to one top official, are the “organizational and operational constructs” to integrate cyber ops with those of air and space operations.

The Air Force believes it must be able to control cyberspace, when need be, as it at times controls the air. The goal is to make cyberspace capabilities fully available to commanders.
(emphasis added)

This last point is crucial. I believe I described it earlier, but you should recognize the significance of this statement. You can complain to whatever degree you like that it's unfair, unjust, whatever -- the fact remains that it is the USAF's plan to be able to control this latest domain. I don't think civilians appreciate the way military planners think about capabilities. Reading the National Military Strategy (.pdf) will provide some background. Apparently a National Military Strategy for Cyberspace Operations (published 2006) exists, but it is not public.

“Almost everything I do is either on an Internet, an intranet, or some type of network—terrestrial, airborne, or spaceborne,” said Gen. Ronald E. Keys, head of Air Combat Command, Langley AFB, Va. “We’re already at war in cyberspace -- have been for many years.”

There's another commander stating the US is already fighting wars in cyberspace.

According to Lani Kass, special assistant to the Chief of Staff and director of the Chief’s Cyberspace Task Force:

Simply put, cyberspace has become major bad-guy territory. Air Force officials say it never has been easier for adversaries—whether terrorists, criminals, or nation-states—to operate with cunning and sophistication in the cyber domain.

Kass said there is “recognition by our leadership that ... cyberspace is a domain in which our enemies are operating, and operating extremely effectively because they’re operating unconstrained.”

At the moment I don't see a way for the USAF to accomplish its goals. I don't know if they even have the capability to execute their vision for space operations, never mind cyberspace. We're only now acquiring and exercising the capabilities to execute in the air domain. I remember being a cadet when the Air Force devised its "Global Reach, Global Power" vision. That idea came to fruition in 1996, when B-52s from Louisiana flew all the way to Iraq and back, over 14,000 miles. It's definitely not a normal method of operations, but the capability exists.

My point is that these sorts of guiding principles are not just rhetoric. It will be interesting to see how they materialize at some point in the future.

Why UTM Will Win

We know how many words a picture is worth. The figure at left, from Boxed In by Information Security magazine, shows why Unified Threat Management appliances are going to replace all the middleboxes in the modern enterprise. At some point the UTM will be the firewall, so the gold UTM box above will also disappear. In some places even the firewall will disappear and all network security functions will collapse into switches and/or routers.

I'd like to show one other diagram from the story.

Figures like these, showing products and the "features" they check off, are another reason UTM will replace point product middleboxes. "Hey, I read in this magazine that product X checks 7 boxes, but product Y only checks 3. Let's look at product X." These are the sorts of figures that appeal to people who are not security experts and are neither interested in nor capable of assessing security products.

Just because I think this is going to happen (or is happening -- look at what your Cisco router can do) doesn't mean I like it. The more functions a box performs, the greater the likelihood that all of those functions will be performed at a mediocre level. Mediocrity is an improvement over zero security protection for some sites, but elsewhere it will not be sufficient.

I should say that the top diagram has its merits, with simplicity being the primary advantage. With so many networks having multiple "moving parts," it can be tough to stay operational and understand what's working or not working. Moving all those moving parts onto a single platform may not yield all the simplicity one might expect, however!

One way to address the weaknesses of these UTMs is to deploy stand-alone devices performing network forensics, so they record exactly what happens on the network. Using that data, one can investigate security incidents as well as measure the effectiveness of the UTM. I do not foresee network forensics collapsing into security switches/routers due to the data retention requirements and reconstruction workload required for investigations.

To survive I think network security inspection/interdiction vendors either need to be in the "meta-security" space (SIM/SEM) or in the do-it-all space (UTM). If your favorite vendor is in neither space, expect them to be acquired or go out of business.

Threat Advantages

My post Fight to Your Strengths listed some of the advantages a prepared enterprise might possess when facing an intruder. I thought it helpful to list a few advantages I see for intruders.

  • Initiative: By virtue of being on the offensive, intruders have the initiative. Unless threats are being apprehended, prosecuted, and incarcerated, intruders are free to pick the victim, the time and nature of the attack, the means of command and control (if desired), and many other variables. Defenders can limit the enemy's freedom of maneuver, but the intruder retains the initiative.

  • Flexibility: Intruders have extreme flexibility. Especially on targets where stealth is not a big deal, intruders can experiment with a variety of exploitation and control tools and tactics. Defenders, on the other hand, have to take special care when applying patches, performing memory- or host-based forensics, and other administrative duties. Defenders have to conform to organizational policies and user demands. Intruders (to the degree they don't want to be noticed) are much freer.

  • Asymmetry of Interest: This may be controversial, but in my experience intruders are much more interested in gaining and retaining control (or accomplishing their mission, whatever it is) than defenders may be in stopping the attack. A dedicated attacker can inflict damage, withdraw for two weeks while defenders scramble to assess and repair, and then return when "incident fatigue" has degraded the incident response team and system administrators. Defenders usually have a lot on their plate besides incident handling, whereas intruders can be obsessively focused on attacking and controlling a target.

  • Asymmetry of Knowledge: This may also be controversial, but skilled intruders (not script kiddies) may know more about target software and applications than some of the developers who write them, never mind the administrators who deploy them. This is especially true of incident handlers, who are supposed to be "experts in everything," but are lucky to at least be "conversant" in victimized applications and systems. Often the first time security staff learn of a new service is when that service is compromised.

Notice these last two intruder strengths come from having the flexibility to decide what to attack. This is particularly true of targets of opportunity. When an incident involves a specific target, the playing field may be more level. The intruder has to exploit whatever is available, not that in which he or she may have specialized experience.

Again, comments with other ideas are appreciated.

Update: From Hackers get free reign to develop techniques says Microsoft security chief:

"Part of the picture is bleak. In the online world, cyber criminals can do their research for as long as they want in absolute security and secrecy then when they're done they can take their exploit, find a way to automate it and post it on a Web site where thousands or millions of other criminals can download it," said Scott Charney, vice president of Trustworthy Computing at Microsoft, in Redmond, Wash...

Charney, speaking at the Authentication and Online Trust Alliance Summit, said that technology and procedures for defeating online attacks and finding hackers has advanced by leaps and bounds since his days at the Department of Justice in the 1990s. But, he added that in some respects the fight against online criminals is not a fair one. The attackers have all the time in the world, the cooperation of other hackers and a virtually limitless number of potential targets. Law enforcement agents, meanwhile, are governed by strict guidelines and in many cases are hampered by a lack of available data once a crime has been committed.

Another challenge for security specialists and law enforcement is the patchwork of state and federal laws in the United States, and the lack of any cybercrime laws in a number of foreign countries. Given the global nature of cybercrime and the fact that hackers often attack systems in a number of different countries at once, these hurdles can often stop promising investigations before they really get started.

USENIX HotBots Papers Posted

If you want to read good recent research on botnets, visit the USENIX HotBots workshop site. They've posted all the speakers' papers for visitors to read for free. Several look very interesting.

Fight to Your Strengths

Recently I mentioned the History Channel show Dogfights. One episode described air combat between fast, well-turning, lightly-armored-and-gunned Japanese Zeroes and slower, poor-turning, heavily-armored-and-gunned American F6F Hellcats. The Marine Top Gun instructor/commentator noted the only way the Hellcat could beat the Zero was to fight to its strengths and not fight the sort of battle the Zero would prefer. Often this meant head-to-head confrontations where the Hellcat's superior armor and guns would outlast and pummel the Zero.

When I studied American Kenpo in San Antonio, TX, my instructor Curtis Abernathy expressed similar sentiments. He said "Make the opponent fight your fight. Don't try to out-punch a boxer. Don't try to out-kick a kicker. Don't try to wrestle a grappler." And so on.

I thought about these concepts today waiting in another airport. I wondered what sorts of strengths network defenders might have, and if we could try forcing the adversary into fighting our fight and not theirs.

Here are some preliminary thoughts on strengths network defenders might have, and how they can work against intruders.

  • Knowledge of assets: An intruder pursuing a targeted, server-side attack will often try to locate a poorly-configured asset. The act of conducting reconnaissance to locate these assets results in the opponent fighting your fight -- if you and/or your defensive systems possess situational awareness. It is not normal for remote hosts to sweep address space for active hosts or to probe individual hosts for listening services. Defenders who manually or automatically react when observing such reconnaissance can implement blocks that will at least frustrate the observed source IP.
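To make the scan-detection idea concrete, here is a minimal sketch in Python. It assumes you already collect connection records as (source, destination, destination port) tuples from a sensor; the function name and thresholds are illustrative, not drawn from any particular product:

```python
from collections import defaultdict

def find_scanners(conn_records, port_threshold=20, host_threshold=50):
    """Flag source IPs whose behavior looks like reconnaissance:
    touching many distinct ports on one host (a port scan) or many
    distinct hosts (a sweep). Thresholds are illustrative."""
    ports_per_pair = defaultdict(set)   # (src, dst) -> distinct dst ports
    hosts_per_src = defaultdict(set)    # src -> distinct dst hosts
    for src, dst, dport in conn_records:
        ports_per_pair[(src, dst)].add(dport)
        hosts_per_src[src].add(dst)
    scanners = set()
    for (src, dst), ports in ports_per_pair.items():
        if len(ports) >= port_threshold:
            scanners.add(src)
    for src, hosts in hosts_per_src.items():
        if len(hosts) >= host_threshold:
            scanners.add(src)
    return scanners

# Example: one host probing 25 ports on a single target stands out
records = [("10.1.1.9", "192.168.1.5", p) for p in range(1, 26)]
records += [("10.1.1.2", "192.168.1.5", 80), ("10.1.1.3", "192.168.1.5", 443)]
print(find_scanners(records))  # {'10.1.1.9'}
```

In practice the flagged sources would feed a firewall block list or an analyst console; the point is simply that reconnaissance is statistically unusual for most networks and therefore detectable.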

  • Knowledge of normal behavior: An intruder who compromises an asset will try to maintain control of that asset. This may take the form of an outbound IRC-based command-and-control channel, an inbound or outbound encrypted channel, or many other variations. To the extent that the intruder does not use a C&C channel that looks like normal behavior for the victim, the intruder is fighting your fight. Whenever you constrain network traffic by blocking, application-aware proxying, and throttling, you force the intruder into using lanes of control that you should architect for maximum policy enforcement and visibility.

  • Diversity: Targets running Windows systems or PHP-enabled Web applications are much more likely to be compromised and manipulated by intruders. Attack tools and exploits for these platforms are plentiful and well-understood by the enemy. If you present a different look to the intruder, you are making him fight your fight. An intruder who discovers a target running an unknown application on an unfamiliar OS is, at the very least, going to spend some time researching and probing that target for vulnerabilities. If you possess situational awareness, diversity buys time for defensive actions.

  • Situational awareness: A well-instrumented network will possess greater knowledge of the battlespace than an intruder. A network architected and operated with visibility in mind provides greater information on activity than one without access to network traffic. Unless the intruder implements his own measures to expand his visibility (compromising a switch to enable a SPAN port, controlling a router, etc.), the defender will know more about the scope of an attack than the intruder. Of course, the intruder will have absolute knowledge of his activities because he is executing them, possibly via an encrypted channel.

These are some initial ideas recorded in an airport. I may augment them as time permits.

Notice that if you don't know your assets or normal behavior, if you run the same vanilla systems as the rest of the world, and you don't pay attention to network activity, you have zero strengths in the fight beyond (hopefully) properly configured assets. We all have those, right?

At the risk of involving myself in a silly debate, I'd like to briefly mention how these factors affect the decision to run OpenSSH on a nonstandard port. Apparently several people with a lot of free time have been vigorously arguing that "security through obscurity" is bad in all its forms, period. I don't think any rational security professional would argue that relying only upon security through obscurity is a sound security policy. However, integrating security through obscurity with other measures can help force an intruder to fight your fight. Here's an example.

I'm sure you've seen many brute force login attacks against OpenSSH services over the past year or two. I finally decided I'd seen enough of these on my systems, so I moved sshd to nonstandard ports. Is that security through obscurity? Probably. Have I seen any more brute force attacks against sshd since changing the port? Nope. As far as I'm concerned, a defensive maneuver that took literally 5 seconds per server has been well worth it. My logs are not filling with records of these attacks. I can concentrate on other issues.
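If you want to measure that log noise before deciding whether a port change is worthwhile, a few lines of scripting will do. This sketch assumes OpenSSH-style "Failed password" auth-log lines; the exact format varies by system and sshd version:

```python
import re
from collections import Counter

# Matches lines like:
#   sshd[123]: Failed password for invalid user admin from 203.0.113.9 ...
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def brute_force_sources(log_lines):
    """Count failed SSH logins per source IP from auth-log text."""
    counts = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            counts[m.group(2)] += 1   # group(2) is the source address
    return counts

log = [
    "Apr 30 10:00:01 host sshd[123]: Failed password for invalid user admin from 203.0.113.9 port 4022 ssh2",
    "Apr 30 10:00:03 host sshd[124]: Failed password for root from 203.0.113.9 port 4025 ssh2",
    "Apr 30 10:05:00 host sshd[130]: Accepted password for rich from 192.0.2.7 port 51515 ssh2",
]
print(brute_force_sources(log))  # Counter({'203.0.113.9': 2})
```

Run that against a day of auth logs before and after moving the port and you have a before/after metric, not just a feeling that the noise went away.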

Now, what happens if someone really takes an interest in one or more of my servers? In order to find sshd, he needs to port scan all 65535 TCP ports. That activity is going to make him fight my fight, because scanning is way outside the normal profile for activity involving my servers. Will he eventually find sshd? Yes, unless my systems automatically detect the scan and block it. Are there ways to make the intruder's ability to connect to sshd even more difficult? Sure -- take a look at Mike Rash's Single Packet Authorization implementations. The bottom line is that a defensive action which cost me virtually nothing has increased the amount of work the intruder must perform to attack sshd.

If my action to change sshd's port could be discovered by the intruder with minimal effort (perhaps he has visibility of the change via illicit monitoring), then the obscurity is lost and the change is not worthwhile.

As a final thought, it's paramount to consider cost when making security decisions. If altering the sshd port had required buying new software licenses, hardware, personnel training, etc., it would not have been worth the effort.

I would be interested in hearing your thoughts on ways to get the intruder to fight your fight. These are all strictly defensive measures, since offense is usually beyond the rules for most of us.

Tuesday, April 17, 2007

When FISMA Bites

After reading State Department to face hearing on '06 security breach I realized when FISMA might actually matter: combine repeated poor FISMA scores (say three F's and one D+) with publicly reported security breaches, and now Congress is investigating the State Department:

In a letter sent to Secretary of State Condoleezza Rice on April 6, committee Chairman Bennie Thompson asked the department to provide specific information regarding how quickly department security specialists detected the attack, whether the department knows how long the attackers had access to the network and what other systems may have been compromised during the attack. The three-page letter also asks the department to provide evidence that it completely eliminated any malicious software the attackers may have planted, as well as documentation of all of the communications between State and the Department of Homeland Security regarding the incident.

I'm going to keep an eye on the Subcommittee on Emerging Threats, Cybersecurity, and Science and Technology to see what is published on these matters. It's ironic that FISMA scores really have nothing to do with State's problems, and no aspect of FISMA can answer any of the questions cited above.

Management by Fact: Flight Data Recorder for Windows

Whenever I fly I use the time to read ;login: magazine from USENIX. Chad Verbowski's article The Secret Lives of Computers Exposed: Flight Data Recorder for Windows in the April 2007 issue was fascinating. (Nonmembers can't access it until next year -- sorry.) Chad describes FDR:

Flight Data Recorder (FDR) collects events with virtually no system impact, achieves 350:1 compression (0.7 bytes per event), and analyzes a machine day of events in 3 seconds (10 million events per second) without a database. How is this possible, you ask? It turns out that computers tend to do highly repetitive tasks, which means that our event logs (along with nearly all other logs from Web servers, mail servers, and application traces) consist of highly repetitive activities. This is a comforting fact, because if they were truly doing 28 million distinct things every day it would be impossible for us to manage them.
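The compression figure rests on exactly that repetitiveness. Here's a toy illustration of the general idea (my sketch, not FDR's actual on-disk format): store each distinct event once and log only small references to it.

```python
def dictionary_encode(events):
    """Toy version of why repetitive event logs compress so well:
    keep one copy of each distinct event in a table, then record the
    stream as small integer references into that table."""
    table = {}      # distinct event -> id
    encoded = []    # stream of ids, one per original event
    for ev in events:
        if ev not in table:
            table[ev] = len(table)
        encoded.append(table[ev])
    return table, encoded

# A "machine day" of nearly a million events but only three distinct
# activities (hypothetical event strings for illustration)
events = ["read HKLM\\...\\Run", "open C:\\app\\config.ini", "exec svchost.exe"] * 333_333
table, encoded = dictionary_encode(events)
print(len(events), len(table))  # 999999 events collapse to 3 table entries
```

If most of a machine's 28 million daily events are repeats, the table stays tiny and the per-event cost approaches the size of one reference, which is how sub-byte-per-event figures become plausible.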

Ok, that's cool by itself. However, the insights gained from these logs are what I'd like to highlight.

Before investigating my own computer’s sordid life, I wanted to understand the state of what ought to be well-managed and well-maintained systems. To understand this I monitored hundreds of MSN production servers across multiple different properties. My goal was to learn how and when changes were being made to these systems, and to learn what software was running. Surely machines in this highly controlled environment would closely reflect the intentions of their masters? However, as you’ll see in the following, we found some of them sneaking off to the back of the server room for a virtual cigarette.

When I read this I remembered what I said in my recent Network Security Monitoring History post. The Air Force in the early 1990s thought it was pretty squared away. The idea behind deploying ASIM sensors was to "validate" the common belief that the Air Force network was "secure." When ASIM started collecting data, AFIWC and AFCERT analysts realized reality was far different.

In my post Further Thoughts on Engineering Disasters I mentioned management by belief (MBB) vs management by fact (MBF). With MBB you make decisions based on what you assume is happening. With MBF you make decisions based on what you measure to be happening. It's no accident the M in ASIM stands for Measurement.

This is exactly what Chad is doing with FDR -- moving from MBB to MBF:

To avoid problems, administrators form a secret pact they call lockdown, during which they all agree not to make changes to the servers for a specific period of time. The theory is that if no changes are made, no problems will happen and they can all try to enjoy their time outside the hum of the temperature-controlled data center.

Using FDR, I monitored these servers for over a year to check the resolve of administrators by verifying that no changes were actually made during lockdown periods. What I found was quite surprising: Each of the five properties had at least one lockdown violation during one of the eight lockdown periods. Two properties had violations in every lockdown period.

We’re not talking about someone logging in to check the server logs; these are modifications to core Line-Of-Business (LOB) and OS applications. In fact, looking across all the hundreds of machines we monitored, we found that most machines have at least one daily change that impacts LOB or OS applications.
(emphasis added)

That is an ITIL or Visible Ops nightmare. It gets better (or worse):

We would all expect server environments to be highly controlled: The only thing running should be prescribed software that has been rigorously tested and installed through a regulated process.

Using the FDR logs collected from the hundreds of monitored production servers, I learned which processes were actually running. Without FDR it is difficult to determine what is actually running on a system, which is quite different from what is installed. It turns out that only 10% of the files and settings installed on a system are actually used; consequently, very little of what is installed or sitting on the hard drives is needed.

Brief aside -- what a great argument for building a system up from scratch instead of trying to strip out unnecessary components!

Reviewing a summary of the running processes, we found several interesting facts. Fully 29% of servers were running unauthorized processes. These ranged from client applications such as media players and email clients to more serious applications such as auto-updating Java clients. Without FDR, who can tell from where the auto-updating clients are downloading (or uploading?) files and what applications they run? Most troubling were the eight processes that could not be identified by security experts.

Again, facts show the world is not as it was assumed. Now remediation can occur.

Chad's closing thoughts are helpful:

For the past 20 years, systems management has been more of a “dark art” than a science or engineering discipline because we had to assume that we did not know what was really happening on our computer systems. Now, with FDR’s always-on tracing, scalable data collection, and analysis, we believe that systems management in the next 20 years can assume that we do know and can analyze what is happening on every machine. We believe that this is a key step to removing the “dark arts” from systems management.

The next step is to get some documentation posted on how to operationally use FDR, which is apparently in Vista. Comments are appreciated!

Update: MBB and MBF are concepts I learned from Visible Ops.

Saturday, April 14, 2007

Exaggerated Insider Threats

I got a chance to listen to Adam Shostack's talk at ShmooCon. When I heard him slaughter my name my ears perked up. (It's "bate-lik".) :) I hadn't seen his slides (.pdf) until now, but I noticed he cited my post Of Course Insiders Cause Fewer Security Incidents where I questioned the preponderance of insider threats. I thought Adam's talk was good, although it didn't really support its title. It seemed more like "security breaches won't really hurt you" rather than breaches benefiting you. That's fine though.

When he mentioned my post he cited a new paper titled A Case of Mistaken Identity? News Accounts of Hacker and Organizational Responsibility for Compromised Digital Records, 1980–2006 by Phil Howard and Kris Erickson. Adam highlighted this excerpt

60 percent of the incidents involved organizational mismanagement

as a way to question my assertion that insiders account for fewer intrusions than outsiders.

At the outset let me repeat how my favorite Kennedy School of Government professor, Phil Zelikow, would address this issue. He would say, "That's an empirical question." Exactly -- if we had the right data we could know if insiders or outsiders cause more intrusions. I would argue that projects like the Month of 0wned Corporations give plenty of data supporting my external hypothesis, but let's take a look at what the Howard/Erickson paper actually says.

First, what are they studying?

Our list of reported incidents is limited to cases where one or more electronic personal records were compromised through negligence or theft... For this study, we look only at incidents of compromised records that are almost certainly illegal or negligent acts. For the purposes of this paper, we define electronic personal records as data containing privileged information about an individual that cannot be readily obtained through other public means.

So, they are studying disclosure of personal information. They are not analyzing theft of intellectual property like helicopter designs. They are not reviewing cases of fraud, like $10 million of routers and other gear shipped to Romania. They are not reviewing incidents where hosts became parts of botnets and the like. All of that activity would put weight in the external column. That's not included here.

Let's get back to that 60% figure. It sounds like my hypothesis is doomed, right?

Surprisingly, however, the proportion of incident reports involving hackers is smaller than the proportion of incidents involving organizational action or inaction. While 31 percent of the incidents reported clearly identify a hacker as the culprit, 60 percent of the incidents involve missing or stolen hardware, insider abuse or theft, administrative error, or accidentally exposing data online.

Now we see that the 60% figure includes several categories of "organizational action or inaction". Hmm, I wonder how big the insider abuse or theft figure is, since that to me sounds like the big, bad "insider threat." If we look at this site we can access the figures and tables for the report. Take a look at Figure 2. (It's too wide to print here.) The Insider Abuse or Theft figure accounts for 5% of the incident total while Stolen - Hacked accounts for 31%. Sit down, insider threat.

Wait, wait, insider threat devotees might say -- what about Missing or Stolen Hardware, which is responsible for 36% of incidents? I'll get to that.

The numbers I cited were for incidents. You would probably agree you care more about the number of records lost; who cares if ten companies each lose one hundred records when an eleventh loses one hundred thousand?

If you look at the corresponding percentages for numbers of records lost (instead of number of incidents), Insider Abuse or Theft accounts for 0%, Missing or Stolen Hardware counts for 2% while Stolen - Hacked rings in at 91%. Why are we even debating this issue?
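The incident-versus-record distinction is easy to see with the hypothetical numbers I used above: ten breaches of one hundred records each, plus one breach of one hundred thousand.

```python
# Hypothetical numbers: ten small breaches of 100 records each,
# plus one large hacking breach of 100,000 records.
incidents = [("small", 100)] * 10 + [("hacked", 100_000)]

total_records = sum(n for _, n in incidents)
small_by_count = sum(1 for cause, _ in incidents if cause == "small") / len(incidents)
small_by_records = sum(n for cause, n in incidents if cause == "small") / total_records

print(f"small breaches: {small_by_count:.0%} of incidents")  # 91% of incidents
print(f"small breaches: {small_by_records:.1%} of records")  # 1.0% of records
```

By incident count the small breaches dominate; by records lost they nearly vanish. That is why I focus on the records columns in the report, not the incident counts.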

Wait again, insider fans will say. Why don't you listen when the authors exclude Acxiom from the data? They must be an "outlier," so ignore them. And now ignore TJX, and... anyone else who skews the conclusion we're trying to reach.

The authors state:

Regardless of how the data is broken down, hackers never account for even half of the incidents or the volume of compromised records.

To quote Prof Zelikow again, "True, but irrelevant." I'll exclude Acxiom too by looking at previous periods. In 1980-1989 Stolen - Hacked accounts for 96% of records and 43% of incidents, while Unspecified accounts for the remaining 4% of records, along with 43% of incidents. Insider Abuse or Theft was 14% of incidents and zero records, meaning no breach. In 1990-1999 Stolen - Hacked accounts for 45% of incidents but the number of records is dwarfed by the number of records lost in Unspecified Breach.

In brief, this report defends the insider threat hypothesis only in name, and really only when you cloak it in "organizational ineptitude" rather than dedicated insiders out to do the company intentional harm.

I recommend reading the report to see if you find the same conclusions buried behind the numbers. It's entirely possible I'm missing something, but I don't see how this report diminishes the external threat.

Friday, April 13, 2007

Bejtlich Teaching at USENIX Annual

USENIX just posted details on USENIX Annual 2007 in Santa Clara, CA, 17-22 June 2007. I'll be teaching Network Security Monitoring with Open Source Tools and TCP/IP Weapons School (layers 2-3) day one and day two. I will most likely teach layers 4-7 for USENIX at USENIX Security in Boston, MA, 6-10 August 2007. Register before 1 June to get the best deal. I hope to see you there or at another training event this year!

It Takes a Thief

Yesterday I watched an episode of the Discovery Channel series It Takes a Thief. This is the essence of the show:

  1. Business or homeowners agree to have the physical security of their property tested.

  2. Former thieves case the target, then rob it blind.

  3. Victims review videotape showing how thieves accomplished their task.

  4. Victims exhibit shock and awe.

  5. Hosts help victims improve the physical security of their property.

  6. Former thieves conduct a second robbery to assess the improved security measures.

I have mixed feelings about the show. First, I'm not thrilled by the attention given to the former thieves. Reading this question and answer session with them made me uneasy. I justify watching the show and mentioning it here because the lessons for security are helpful. However, it seems to be rewarding criminal behavior and glorifying theft. I would feel better if these guys acted more like Frank Abagnale (who has had to deal with controversy in our industry). Mr. Abagnale always expresses great regret for his crimes and has worked tirelessly for decades to improve security.

Second, I was disappointed to see how naive the business owners were with respect to security. They expected a door latch weaker than the one pictured at left to "secure" a door on their property. The host of the show just pulled hard on the door and yanked the latch right off the frame! This reminded me of Web site owners expecting a hidden directory to hide files not meant for the public. I don't mean security by obscurity; I mean expecting ridiculously weak measures to have any effect beyond those who simply follow the rules. My take-away was that security measures effective against law-abiding citizens, but no one else, aren't really security measures at all.

Third, it took a "penetration test" -- step 2 -- to demonstrate the weak security posture of the property. The owners were shocked to see the intrusion occur on videotape. What's worse, their reaction still emphasized not inconveniencing their guests. What?!? Why should I want to even stay at their hotel or do business with them if my personal information, property, or safety could be so easily compromised? The reality is I feel safer knowing the business takes security seriously, and it doesn't take bars on windows or guards with guns to improve security postures.

Fourth, I was really glad to see a strong emphasis on monitoring as part of the new security plan. The host and team deployed nine video cameras across the property, along with improved door locks and the like. Also note that it took reviewing videotape of the original (staged) intrusion to understand the property's weaknesses. Sure, a "vulnerability test" could have enumerated all or most weaknesses, but knowing how the criminals in the case actually operate can be more valuable. When the second pen test happened, the property owner detected the intrusion attempt and confronted the testers. (In real life the police might have been called instead.)

If you want other thoughts on this show, read Marcin's post.

Brief Thoughts on Security Education

Once in a while I get requests from blog readers for recommendations on security education. I am obviously biased because I offer training independently, in private and public forums. However, I've attended or spoken at just about every mainstream security forum, so I thought I would provide a few brief thoughts on the subject.

First, decide if you want to attend training, briefings, or classes. I consider training to be an event of at least 1/2 day or longer. Anything less than 1/2 day is a briefing, and is probably part of a conference. Some conferences include training, so the two topics are not mutually exclusive. Classes include courses offered by .edu's.

Training events focus on a specific problem set or technology, for an extended period of time. Training is usually a stand-alone affair. For example, when I prepared for my CCNA, I took a week-long class from Global Net Training. If I choose to pursue the CCNP I will return to GNT for more training. I seldom attend training because I do not usually need in-depth discussions of a single topic.

Briefings also focus on specific problems or technologies, but their scope is usually narrow due to their time constraints. The content is typically fresher because it takes less work to prepare a briefing compared to a 1/2 day or longer training session. Briefings are also more likely to contain marketing material; you can be halfway through a talk before realizing it's a pitch piece. I attend briefings more often than training because they tend to fit my schedule and I can quickly learn something new.

Classes are courses offered by institutions over an extended period of time. Traditional colleges and universities provide classes, although some non-traditional teaching vehicles exist. I've never taken any of these, although I would like to pursue my PhD at some point soon.

With that background, here are a few thoughts on popular education venues:

  • USENIX: USENIX is my favorite venue. USENIX offers 1/2, 1, and 2-day training, plus briefings. I usually train at the three major conferences they offer: Annual, Security, and LISA (Large Installation System Administration). Training tends to be very practical, with strong preferences for operational information for system administrators. The briefings especially tend to be more academic, with lots of research by students and/or professors. People-wise, I tend to like USENIX for connecting with the university community.

  • Black Hat: Black Hat is the best place to learn the newest public attack tools and techniques. Defense is usually secondary. Black Hat offers 1 and 2-day training, plus briefings. I've trained through Foundstone at Black Hat, and I'll be training at Black Hat in Las Vegas this summer. If you want to get very technical information on attacks (and some countermeasures), Black Hat is a great venue. People-wise, I've decided to begin attending Black Hat regularly because the most interesting people are there.

  • SANS: SANS offers a wide variety of material, through training, briefings, classes, newsletters, and webcasts. I taught the SANS IDS track in 2002 and 2003, then returned to teach Enterprise Network Instrumentation late last year. I'll be back teaching ENI at SANSFIRE 2007. In my opinion some SANS training is woefully out-of-date, while other training is very good. SANS tracks are usually six days. SANS also offers shorter training like the log management summit I attended last year. Other times SANS offers very short briefings on a single topic, like the SANS Software Security Institute. People-wise, SANS tracks tend to involve more people at the beginning of their security careers.

  • RSA: I mention RSA because it's big and people might want to know more about it. I spoke at RSA 2006. That was enough for me. RSA is the place to be if you're a vendor, but otherwise I found the talks less inspiring than other venues. If you're a cryptographer you might find RSA's cryptography track to be helpful, since that subject is usually not emphasized elsewhere. People-wise, I met lots of people trying to attract business at RSA last year.

  • Niche Public Events: A lot of other venues fill this space. Among those I've attended or spoken at, CanSecWest is one leader. I delivered a Lightning Talk there in 2004. The best part of CSW is the fact it's a single track. By the end of the event, some sense of community has been built. ShmooCon is similar to CSW, although it has multiple tracks. Techno Security and Techno Forensics are two great sources of education, generally heavy on Feds and forensics. I'll be teaching at Security and probably later at Forensics this year. If you're in Europe take a look at CONFidence in Poland.

  • Niche Government or Government-Centric Events: I include conferences usually sponsored or mainly attended by law enforcement, government, and military audiences here. FIRST and GFIRST fit this bill. I speak there more to meet people and less to hear about what's happening. The Telestrategies ISS World events are similar. For those of you in Australia, AusCERT looks like a good bet; I'll be there this year.

That's all I have time to discuss now. Good luck spending your security education dollars.

FISMA Dogfights

My favorite show on The History Channel is Dogfights. Although I wore the US Air Force uniform for 11 years I was not a pilot. I did get "incentive" rides in T-37, F-16D, and F-15E jets as a USAFA cadet. Those experiences made me appreciate the rigor of being a fighter pilot. After watching Dogfights and learning from pilots who fought MiGs over North Vietnam, one on six, I have a new appreciation for their line of work.

All that matters in a dogfight is winning, which means shooting down your opponent or making him exit the fight. A draw happens when both adversaries decide to fight another day. If you lose a dogfight you die or end up as a prisoner of war. If you're lucky you survive ejection and somehow escape capture.

Winning a dogfight is not all about pilot skill vs pilot skill. Many of the dogfights I watched involved American pilots who learned enemy tactics and intentions from earlier combat. Some of the pilots also knew the capabilities of enemy aircraft, like the fact that the MiG 17 was inferior to the F-8 in turns below 450 MPH. Intelligence on enemy aircraft was derived by acquiring planes and flying them. In some cases the enemy reverse engineered American weapons, as happened with the K-13/AA-2 Atoll -- a copy of the Sidewinder.

All of this relates to FISMA. Imagine if FISMA was the operational theme guiding air combat. Consultants would spend a lot of time and money documenting American aircraft capabilities and equipment. We'd have a count of every rivet on every plane, annotated with someone's idea that fifty rivets per leading edge is better than forty rivets per leading edge. Every plane, every spare part, and every pilot would be nicely documented after a four to six month effort costing millions of dollars. Every year a report card would provide grades on fighter squadrons' FISMA reports.

What would happen to these planes when they entered combat? The FISMA crowd would not care. American aircraft could be dropping from the sky and it would not matter to FISMA. All of the FISMA effort creates a theoretical, paper-based dream of how a "system" should perform in an environment. When that system -- say, a jet fighter -- operates under real life combat conditions, it may perform nothing like what the planners envisioned. Planners range from generals setting requirements for a new plane to engineers designing it to tacticians imagining how to use it in combat.

Perhaps the guns jam in high-G turns. Perhaps the missiles never acquire lock and always miss their targets. Maybe the enemy has stolen plans for the aircraft (or an actual aircraft!) and knows the jet cannot match the enemy plane in a vertical rolling scissors.

Furthermore, the enemy may not act like the planners imagined. This is absolutely crucial. The enemy may have different equipment or tactics, completely overpowering friendly capabilities.

Maybe FISMA would address these issues in three years, the next time a FISMA report is due. Meanwhile, the US has lost all its pilots and aircraft, along with control of its airspace.

Maybe this analogy will help explain the problems I have with FISMA. I already tried an American football analogy in my post Control-compliant vs Field-Assessed Security. My bottom line is that FISMA involves control compliance. That is a prerequisite for security, since no one should field a system known to be full of holes. However, effective, operational security involves field assessment. That means evaluating how a system performs in the real world, not in the mind of a consultant. Field-assessed security is absolutely missing in FISMA. Don't tell me the tests done prior to C&A count. They're static, controlled, and do not reflect the changing environment found on real networks attacked by real intruders.

Incidentally, I also really liked the BBC series Battlefield Britain and I may check out the other History Channel series Shootout!.

Thursday, April 12, 2007

Month of Owned Corporations

Thanks to Gadi Evron for pointing me towards the 30 Days of Bots project happening at Support Intelligence. SI monitors various data sources to identify systems conducting attacks and other malicious activity. Last fall they introduced their Digest of Abuse (DOA) report which lists autonomous system numbers of networks hosting those systems.

SI published the latest DOA report Monday and they are now using that data to illustrate individual companies hosting compromised systems. They started with 3M, then moved to Thomson Financial, AIG, and now Aflac. For these examples SI cites corporate machines sending spam, among other activities. Brian Krebs reported on other companies exhibiting the same behavior based on his conversations with SI.

This is the kind of metric I like to see. Who cares about the percentage of machines with anti-virus, blah blah. Instead, consider these: is my company -- or agency -- listed on the SI DOA report? If so, how high? Is that ranking higher this week than last? And so on... Metrics for AV coverage are like reporting the number of band-aids on a fencer who keeps getting poked by an opponent.

FISMA 2006 Scores

There are FISMA scores for 2006, along with 2005, 2004, and 2003 -- some of which I discussed previously. What I wrote earlier still stands:

Notice that these grades do not reflect the effectiveness of any of these security measurements. An agency could be completely 0wn3d (compromised in manager-speak) and it could still receive high scores. I imagine it is difficult to grade effectiveness until a common set of security metrics is developed, including ways to count and assess incidents.

I still believe FISMA is a joke and a jobs program for so-called security companies without the technical skills to operationally defend systems.

The only benefit I've seen from FISMA is that low-scoring agencies are being embarrassed into doing more certification and accreditation. C&A is a waste of time and money. However, if security staff can redirect some of that time and money into technical security work that really makes a difference, then FISMA is indirectly helping agencies with poor scores. Agencies with high scores are no more secure than agencies with low scores. High-scoring agencies just write good reports, because FISMA is a giant paperwork exercise that makes no difference on the security playing field.

If you believe otherwise you're welcome to your opinion. You're also welcome to the lack of a future job when the FISMA consulting boondoggle ends and report jockeys are left without any marketable technical skills. If you want to know more about this, reading my old FISMA posts is sufficient. I don't need to restate my arguments when they're archived.

If I sound bitter, it's because I've seen my tax dollars wasted for the past five years while various unauthorized parties have their way with these agencies. FISMA is not working.

Wednesday, April 11, 2007

Bejtlich Speaking at Secure Development World

On 13 September 2007 at 0915 I will discuss the Self-Defeating Network at the Secure Development World conference in Alexandria, VA. I was invited to speak even though I am not exactly "active" in the secure programming arena. The conference organizers asked me to speak from the operational point of view so developers understand what end users want and need. The list of speakers already looks good -- check it out.

Training an IDS

Thanks to the newly named Threat Level I read Women at Love Field 'Acting Suspiciously' and Airport Watch Figure Confirms Terrorist Tie. You can obviously make up your own mind about these two, but I'm glad the police were alert enough to grab them. Here are a few choice quotes. I promise to tie this to digital security.

"I'm a trained sniper and proud of it," Ms. Al-Homsi said in an interview Thursday after first refusing to comment on whether she has any terrorism ties. She then said no.

Unless this is a lie, I doubt this lady received training in the US military. So where else would she be trained to be a sniper?

She said that she practices her rifle skills at the Alpine Shooting Range in Fort Worth. An employee confirmed that she's been going there for years.

"In all the Muslim garb, shooting an assault weapon, it seemed at first like she was trying to draw attention," said Dave Rodgers. "But then she came out so much, it became normal."

Hmm, like that back door installed before you started looking for it? Assuming the "sniper" really is a threat, it sounds like she trained shooting range employees to accept her as normal simply by being a frequent customer -- like that regular 2 am data transfer out of your site. It must be an authorized backup activity, right? It's always happening. That makes it normal... I hope?

Burning CDs on Ubuntu

Sometimes this blog is just a place for me to take notes on tasks I want to repeat in the future, like burning CDs. In this case I'm running Ubuntu and using the new portable Sony DRX-S50U Multi-Format DVD Burner I bought to accompany my Thinkpad x60s on the road.

First I created an .iso of the files I wanted on the CD-R.

richard@neely:/var/tmp$ mkisofs -J -R -o /data/shmoocon2007hack.iso shmoocon2007/
INFO: UTF-8 character encoding detected by locale settings.
Assuming UTF-8 encoded filenames on source filesystem,
use -input-charset to override.
Using shmoo000.pca;1 for /shmoocon_hack_rd2_timeadj.pcap (shmoocon_hack_rd1_timeadj.pcap)
1.68% done, estimate finish Wed Apr 11 21:23:45 2007

Second I asked cdrecord to find the burner.

richard@neely:/var/tmp$ sudo cdrecord -scanbus
Cdrecord-Clone 2.01.01a03 (i686-pc-linux-gnu) Copyright (C) 1995-2005 Joerg Schilling
NOTE: this version of cdrecord is an inofficial (modified) release of cdrecord
and thus may have bugs that are not present in the original version.
Please send bug reports and support requests to .
The original author should not be bothered with problems of this version.

cdrecord: Warning: Running on Linux-2.6.17-11-generic
cdrecord: There are unsettled issues with Linux-2.5 and newer.
cdrecord: If you have unexpected problems, please try Linux-2.4 or Solaris.
Linux sg driver version: 3.5.33
Using libscg version 'debian-0.8debian2'.
cdrecord: Warning: using inofficial version of libscg (debian-0.8debian2 '@(#)scsitransp.c
1.91 04/06/17 Copyright 1988,1995,2000-2004 J. Schilling').
0,0,0 0) 'ATA ' 'TOSHIBA MK6032GS' 'AS31' Disk
0,1,0 1) *
0,2,0 2) *
0,3,0 3) *
0,4,0 4) *
0,5,0 5) *
0,6,0 6) *
0,7,0 7) *
4,0,0 400) 'Optiarc ' 'DVD RW AD-7540A ' '1.D0' Removable CD-ROM
4,1,0 401) *
4,2,0 402) *
4,3,0 403) *
4,4,0 404) *
4,5,0 405) *
4,6,0 406) *
4,7,0 407) *

Third I burned them to the CD-R.

richard@neely:/var/tmp$ sudo cdrecord -v dev=4,0,0 driveropts=burnfree -eject
-data /data/shmoocon2007hack.iso
cdrecord: No write mode specified.
cdrecord: Asuming -tao mode.
cdrecord: Future versions of cdrecord may have different drive dependent defaults.
cdrecord: Continuing in 5 seconds...
Cdrecord-Clone 2.01.01a03 (i686-pc-linux-gnu) Copyright (C) 1995-2005 Joerg Schilling
NOTE: this version of cdrecord is an inofficial (modified) release of cdrecord
and thus may have bugs that are not present in the original version.
Please send bug reports and support requests to .
The original author should not be bothered with problems of this version.

cdrecord: Warning: Running on Linux-2.6.17-11-generic
cdrecord: There are unsettled issues with Linux-2.5 and newer.
cdrecord: If you have unexpected problems, please try Linux-2.4 or Solaris.
TOC Type: 1 = CD-ROM
scsidev: '4,0,0'
scsibus: 4 target: 0 lun: 0
Linux sg driver version: 3.5.33
Using libscg version 'debian-0.8debian2'.
cdrecord: Warning: using inofficial version of libscg (debian-0.8debian2 '@(#)scsitransp.c
1.91 04/06/17 Copyright 1988,1995,2000-2004 J. Schilling').
Driveropts: 'burnfree'
SCSI buffer size: 64512
atapi: 1
Device type : Removable CD-ROM
Version : 0
Response Format: 2
Capabilities :
Vendor_info : 'Optiarc '
Identifikation : 'DVD RW AD-7540A '
Revision : '1.D0'
Device seems to be: Generic mmc2 DVD-R/DVD-RW.
Current: 0x0009
Profile: 0x002B
Profile: 0x001B
Profile: 0x001A
Profile: 0x0016
Profile: 0x0015
Profile: 0x0014
Profile: 0x0013
Profile: 0x0012
Profile: 0x0011
Profile: 0x0010
Profile: 0x000A
Profile: 0x0009 (current)
Profile: 0x0008 (current)
Profile: 0x0002
cdrecord: This version of cdrecord does not include DVD-R/DVD-RW support code.
cdrecord: See /usr/share/doc/cdrecord/README.DVD.Debian for details on DVD support.
Using generic SCSI-3/mmc CD-R/CD-RW driver (mmc_cdr).
Supported modes: TAO PACKET SAO SAO/R96R RAW/R96R
Drive buf size : 890880 = 870 KB
FIFO size : 4194304 = 4096 KB
Track 01: data 583 MB
Total size: 670 MB (66:23.13) = 298735 sectors
Lout start: 670 MB (66:25/10) = 298735 sectors
Current Secsize: 2048
ATIP info from disk:
Indicated writing power: 5
Is not unrestricted
Is not erasable
Disk sub type: Medium Type A, high Beta category (A+) (3)
ATIP start of lead in: -11634 (97:26/66)
ATIP start of lead out: 359846 (79:59/71)
Disk type: Short strategy type (Phthalocyanine or similar)
Manuf. index: 3
Manufacturer: CMC Magnetics Corporation
Blocks total: 359846 Blocks current: 359846 Blocks remaining: 61111
Starting to write CD/DVD at speed 24 in real TAO mode for single session.
Last chance to quit, starting real write 0 seconds. Operation starts.
Waiting for reader process to fill input buffer ... input buffer ready.
BURN-Free is ON.
Performing OPC...
Starting new track at sector: 0
Track 01: 583 of 583 MB written (fifo 100%) [buf 100%] 8.3x.
Track 01: Total bytes read/written: 611805184/611805184 (298733 sectors).
Writing time: 523.078s
Average write speed 7.8x.
Min drive buffer fill was 100%
Fixating time: 42.065s
BURN-Free was never needed.
cdrecord: fifo had 9637 puts and 9637 gets.
cdrecord: fifo was 0 times empty and 9555 times full, min fill was 79%.

Last I checked the files on the CD.

richard@neely:/var/tmp$ ls -alh /media/cdrom0/
total 584M
drwxr-xr-x 2 richard richard 2.0K 2007-03-26 16:27 .
drwxr-xr-x 6 root root 1.0K 2007-04-05 15:33 ..
-rw-r--r-- 1 richard richard 149M 2007-03-26 16:19 shmoocon_hack_rd1_timeadj.pcap
-rw-r--r-- 1 richard richard 435M 2007-03-26 16:27 shmoocon_hack_rd2_timeadj.pcap

Looks good!
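One extra step I didn't show: verifying that the burned copies actually match the originals, rather than just eyeballing file sizes. A quick way to do that is with md5sum. Here is a minimal sketch using throwaway files; in practice you would substitute the shmoocon2007/ source directory and the /media/cdrom0/ mount point from above.

```shell
# Sketch: verify that copied files match the originals via checksums.
# The directories below are stand-ins for the source directory and the
# mounted CD in the workflow above.
mkdir -p /tmp/orig /tmp/copy
printf 'capture data\n' > /tmp/orig/sample.pcap
cp /tmp/orig/sample.pcap /tmp/copy/

# Record checksums of the originals...
( cd /tmp/orig && md5sum *.pcap > /tmp/orig.md5 )

# ...then verify the copies against that list; md5sum -c reports
# "OK" per file, or flags any mismatch.
( cd /tmp/copy && md5sum -c /tmp/orig.md5 )
```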

Network Security Monitoring History

Recently a network forensics vendor was kind enough to spend some time on a WebEx-type session describing their product. I try to stay current with technology so I can offer suggestions to clients with budgets for commercial products.

During the talk the presenter was very excited by his company's capability to collect all traffic and examine it later for troubleshooting and security purposes. He implied this was a "new capability in this space," so I asked if he had read any of my books. He said no, but he did read my blog. It occurred to me that it might be helpful to reprint the history of NSM I wrote for Tao of Network Security Monitoring.

I'm doing this for three reasons. First, I want people to know that the ideas I've been publicly evangelizing since 2002 actually date back 10, perhaps 13 years earlier. I take credit for paying attention to smart people with whom I worked when I first started in this field. I don't take credit for inventing the idea that we need high quality network traffic to perform security investigations!

Second, I want to provide a public record of these historical capabilities. As I talk to more vendors I don't want them to think I'm "stealing their ideas," since many of "their ideas" were invented before some of their programmers graduated from elementary school.

Third, one day (perhaps in 2008 or 2009) I would like to blog again and link back to this post. Hopefully I'll have commercial tools providing these capabilities to anyone who wants them, and plenty of companies will be declaring themselves the "world's first blah" and "pioneers of blah" and so forth. I'll be happy that customers will finally have the data they need to understand what is happening in their enterprise, whatever weird, long, and contentious road was followed.

I can testify to the following history of network security monitoring because I participated in these events or have spoken directly with the participants who made the events happen. I base my understanding of the early days of NSM on information learned from Todd Heberlein and on my work with pioneers like Larry Shrader and Roberto Garcia.

NSM began as an informal discipline with Todd Heberlein’s development of the Network Security Monitor. The Network Security Monitor was the first intrusion detection system to use network traffic as its main source of data for generating alerts. Heberlein and others worked at the University of California at Davis from 1988 through 1995 on the Network Security Monitor, although by 1991 initial Network Security Monitor system research and development was complete.

The Air Force Computer Emergency Response Team (AFCERT) was the first organization to informally follow NSM principles. The AFCERT was created on October 1, 1992, partially as a result of the 1988 Morris Worm. The team began work as part of the Air Force Cryptologic Support Center at Kelly Air Force Base in San Antonio, Texas. When the Air Force Information Warfare Center (AFIWC) was activated on September 10, 1993, the AFCERT joined that unit. The AFCERT’s mission during the 1990s was to conduct Computer Network Defense (CND) operations to secure and protect the global Air Force communication and computer (C2) weapon system.

The Air Force had long recognized the need for intrusion detection systems, initially funding the Haystack host-based audit trail intrusion detection system. In 1993 the AFCERT worked with Heberlein to deploy a version of the Network Security Monitor as an Automated Security Incident Measurement (ASIM) system. The Air Force’s intent was to measure the level of malicious activity on its networks as a way to perform threat assessment. By gaining an accurate idea of the capabilities and intentions of its adversaries, the AFCERT could position itself to acquire the funding, personnel, and responsibilities needed to properly monitor Air Force networks.

In the mid-1990s the Air Force’s network consisted of well over 100 Internet points-of-presence, but by the end of 1995 the AFCERT monitored only 26 installations. By the end of 1996 coverage had doubled to 52 Air Force bases and three “Joint” or multi-service locations. By mid-1997 ASIM sensors watched all officially sanctioned Air Force Internet points-of-presence. (Like any large organization, the AFCERT struggled to deal with local base commanders, or “management,” who bypassed authorized Internet connections by installing their own Internet links.) In 1998 the AFCERT added the Wheel Group’s NetRanger sensors to its toolbox, using them at the request of Central Command to monitor its forward locations in the Middle East.

The AFCERT implemented network security monitoring through products, people, and processes. ASIM was the tool used to generate indications and warnings. AFCERT analysts worked in real-time or batch cells, either reviewing near-real-time alerts or daily session records. Both teams had access to full content or transcript data collected by ASIM for certain high-value services, such as Telnet, rlogin, FTP, HTTP, and other protocols. Analysts escalated evidence of suspected intrusions to the Incident Response Team (IRT), which validated and investigated intrusions. After the Melissa virus hit in March 1999, the AFCERT formed a dedicated virus team to specifically handle malware outbreaks.

In late 2000, Ball Aerospace & Technologies Corporation (BATC) asked Robert “Bamm” Visscher and me to help transition intrusion detection techniques to the commercial sector. Bamm and I had worked with Larry Shrader in the AFCERT, and we set about creating an NSM operation from scratch. Working on a tight budget, and realizing available commercial IDS products didn’t suit our needs, Bamm developed the Snort Personal Real-time Event GUI (SPREG).

SPREG began its life as a Tcl/Tk program to watch attacks on Bamm’s cable modem connection. As I trained analysts to take on 24 by 7 monitoring duties, Bamm refined SPREG to meet our NSM needs. SPREG relied on Snort for its alert and full content data. John Curry, acting as a consultant, wrote code to collect session data. All three elements were integrated, and by the spring of 2001 BATC offered the first true commercial NSM operation to nongovernment customers. Our 12 analysts interpreted alert, session, and full content data to discover intruders.

In June 2001 I “hacked” a copy of Congressman Lamar Smith’s Web page while Bamm demonstrated our monitoring capability. On July 13, 2001, one of our analysts, LeRoy Crooks, detected the Code Red worm -- six days before it struck the general Internet population. I posted his findings to the SecurityFocus Incidents list on July 15, 2001.

In April 2002, I left BATC to become a consultant with Foundstone. While performing incident response duties I employed emergency NSM to investigate intrusions against several Fortune 100 companies. I began using Argus to collect session data because I no longer had access to the proprietary code BATC bought to collect session data. I began teaching NSM principles to students of Foundstone’s “Incident Response” and “Ultimate Hacking” classes. I also taught NSM to two sessions’ worth of SANS intrusion detection track attendees who responded to my request to abandon the formal material in favor of something more relevant.

On December 4, 2002, Bamm and I presented a Webcast titled “Network Security Monitoring.” This presentation offered the first formal definition of NSM as “the collection, analysis, and escalation of indications and warnings (I&W) to detect and respond to intrusions.” At the time I was only theorizing about the use of statistical information and limited NSM to event, session, and full content data. (I began using the term “alert” rather than “event” data when writing this book in fall 2003.)

In late 2002 Bamm began work on an open source NSM product called the Snort GUI for Lamerz (SGUIL). (Sguil’s name was born in an IRC session and was not designed with marketing in mind!) Bamm registered and announced Sguil’s initial availability in January 2003. At the time the most popular open source GUI for Snort was ACID. Throughout 2003 Sguil gained momentum, and it appeared in a second NSM Webcast on August 21, 2003. During 2003 the fourth edition of Hacking Exposed was published. It featured a case study I wrote, which included the NSM definition and this nod to the “father of NSM”:

“Inspired in name by Todd Heberlein’s ‘Network Security Monitor,’ NSM is an operational model based on the Air Force’s signals intelligence collection methods. NSM integrates IDS products, which generate alerts; people, who interpret indications and warning; and processes, which guide the escalation of validated events to decision makers.”

I'd like to add a few more points to that original script. First, in 1999-2000, I remember using the AFCERT's Common Intrusion Detection (CID) Java console to right-click and call Ethereal to decode Libpcap data for alerts or sessions of interest. The Libpcap data was collected by our ASIM (Automated Security Incident Measurement) sensors independent of the alerts or sessions. This year you are going to see IDS/IPS vendors tying into network forensic appliance application programming interfaces to do this same trick, only eight years later.

I may try to add to this as I remember more details. Any old Air Force guys out there with memories to add, please feel free to leave comments. Thank you.