NoVA Sec Meeting Memory Analysis Notes

On 24 April we were lucky to have Aaron Walters of Volatile Systems speak to our NoVA Sec group on memory analysis.

I just found my notes so I'd like to post a few thoughts. There is no way I can summarize his talk. I recommend seeing him the next time he speaks at a conference.

Aaron noted that the PyFlag forensics suite has integrated the Volatility Framework for memory analysis. Aaron also mentioned FATkit and VADtools.

In addition to Aaron speaking, we were very surprised to see George M. Garner, Jr., author of Forensic Acquisition Utilities and KnTTools with KnTList. George noted that he wrote FAU at the first SANSFIRE, in 2001 in DC (which I also attended), after hearing there was no equivalent way to copy Windows memory using dd, as one could with Unix.
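
To make the dd analogy concrete, here is a minimal Python sketch of that Unix-style acquisition -- a block-for-block copy of the physical memory device, hashed as it is read. This is my illustration, not George's code; it assumes a readable /dev/mem (which modern kernels largely restrict), and the output filename is hypothetical.

    # Sketch of a dd-style copy of Unix physical memory (illustration only).
    # Assumes /dev/mem is readable by the current user; modern kernels
    # restrict this, which is partly why dedicated acquisition tools exist.
    import hashlib

    SRC = "/dev/mem"   # physical memory device
    DST = "mem.img"    # hypothetical output file
    BLOCK = 4096       # copy block size, like dd's bs= option

    digest = hashlib.sha256()
    with open(SRC, "rb") as src, open(DST, "wb") as dst:
        while True:
            block = src.read(BLOCK)
            if not block:
                break
            dst.write(block)
            digest.update(block)  # hash while acquiring, for later verification

    print("acquired", DST, "sha256:", digest.hexdigest())

FAU grew out of the absence of any Windows equivalent to this simple idiom.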

George sets the standard for software used to acquire memory from Windows systems, so using his KnTTools to collect memory for analysis by KnTList and/or Volatility Framework is a great approach.

While Aaron's talk was very technical, George spent a little more time on forensic philosophy. I was able to capture more of this in my notes. George noted that any forensic scenario usually involves three steps:

  1. Isolate the evidence, so the perpetrator or others cannot keep changing the evidence

  2. Preserve the evidence, so others can reproduce analytical results later

  3. Document what works and what does not


At this point I had two thoughts. First, this work is tough and complicated. You need to rely upon a trustworthy party for tools and tactics, but you must also test your results to see if they can be trusted. Second, as a general principle, raw data is always superior to anything else, because it can be subjected to a variety of tools and techniques far into the future. Processed data has lost some or all of its granularity.

George confirmed my first intuition by stating that there is no truly trustworthy way to acquire memory. This reminded me of statements made by Joanna Rutkowska. George noted that any method he might use to acquire memory, even one running as a kernel driver, could be hooked by an adversary already in the kernel. It's a classic arms race: the person trying to capture evidence from within a compromised system must find a way to get that data without being fooled by the intruder.

George talked about how nVidia and ATI have brought GPU programming to the developer world, and that there is no safe way to read GPU memory. Apparently intruders can sit in the GPU, move memory to and from the GPU and system RAM, and disable code signing.

I was really floored to learn the following. George stated that a hard drive is a computer. It has error correction algorithms that, while "pretty good," are not perfect. In other words, you could encounter a situation where you cannot obtain the same "image" of a hard drive from one acquisition to the next. He contributed an excellent post here which emphasizes this point:

One final problem is that the data read from a failing drive actually may change from one acquisition to another. If you encounter a "bad block," that means that the error rate has overwhelmed the error correction algorithm in use by the drive. A disk drive is not a paper document. If a drive actually yields different data each time it is read, is that an acquisition "error"? Or have you accurately acquired the contents of the drive at that particular moment in time? Perhaps you have as many originals as acquired "images." Maybe it is a question of semantics, but it is a semantic that goes to the heart of DIGITAL forensics.

Remember that hashes do not guarantee that an "image" is accurate. They prove that it has not changed since it was acquired.


I just heard the brains of all the cops-turned-forensic-guys explode.
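
To see why that distinction matters, here is a hedged Python sketch (my illustration; the image filenames are hypothetical). A hash recorded at acquisition time proves an image has not changed since you took it -- yet two acquisitions of the same failing drive can each pass that check while still differing from each other.

    # Sketch: a hash proves an image is unchanged since acquisition,
    # not that the acquisition was "accurate." Filenames are hypothetical.
    import hashlib

    def sha256_file(path, block=1 << 20):
        """Hash a file in fixed-size blocks to avoid loading it into memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(block):
                h.update(chunk)
        return h.hexdigest()

    # Hashes recorded at acquisition time, kept with chain-of-custody notes.
    acquired = {path: sha256_file(path) for path in ("acq1.img", "acq2.img")}

    # Later: each image still matches its own acquisition-time hash...
    for path, digest in acquired.items():
        assert sha256_file(path) == digest, f"{path} changed since acquisition"

    # ...yet two reads of the same marginal drive may not match each other.
    if acquired["acq1.img"] != acquired["acq2.img"]:
        print("Two verified 'originals' that disagree -- exactly George's point.")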

This post has more technical details.

So what's a forensic examiner to do? It turns out that one of the so-called "foundations" of digital forensics -- the "bit-for-bit copy" -- is no such foundation at all, at least if you're a "real" forensic investigator. George cited Statistics and the Evaluation of Evidence for Forensic Scientists by C. G. G. Aitken and Franco Taroni to refute "traditional" computer forensics. Forensic reliability isn't derived from a bit-for-bit copy; it comes from increasing the probability that your findings are correct. You don't have to rely on a bit-for-bit copy. Increase reliability by increasing the number of evidence samples -- preferably using multiple methods. (Roughly speaking, if each of three independent sources errs 10% of the time, the chance that all three agree in error is about 0.1 x 0.1 x 0.1, or one in a thousand.)

What does this mean in practice? George said you build a robust case, for example, by gathering, analyzing, and integrating ISP logs, firewall logs, IDS logs, system logs, volatile memory, media, and so on. Wait, what does that sound like? You remember -- it's how Keith Jones provided the evidence to prove Roger Duronio was guilty of hacking UBS. It gets better; this technique is also called "fused intelligence" in my former Air Force world. You trust what you are reporting when it is independently corroborated by multiple sources.
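
As a toy illustration of that fused-intelligence idea -- not anyone's actual tool, with every source name and record below invented -- you might refuse to report a finding until independent sources corroborate it:

    # Toy sketch of "fused intelligence": report a finding only when it is
    # independently corroborated. All sources and records are hypothetical.
    from datetime import datetime, timedelta

    # Each independent source yields (timestamp, suspect IP) observations.
    observations = {
        "isp_logs":      [(datetime(2008, 6, 1, 2, 14), "10.1.2.3")],
        "firewall_logs": [(datetime(2008, 6, 1, 2, 15), "10.1.2.3")],
        "ids_alerts":    [(datetime(2008, 6, 1, 2, 14), "10.1.2.3")],
        "system_logs":   [],  # this source saw nothing -- that is fine
    }

    def corroborating_sources(ip, when, window=timedelta(minutes=5)):
        """Return the independent sources that saw `ip` near time `when`."""
        return [src for src, events in observations.items()
                if any(seen_ip == ip and abs(ts - when) <= window
                       for ts, seen_ip in events)]

    sources = corroborating_sources("10.1.2.3", datetime(2008, 6, 1, 2, 14))
    if len(sources) >= 2:  # require independent corroboration before reporting
        print(f"Corroborated by {len(sources)} sources: {', '.join(sources)}")

Real correlation is messier, but the design point stands: no single source, however carefully hashed, carries the case by itself.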

If this all sounds blatantly obvious, it's because it is. Unfortunately, when you're stuck in a world where the process says "pull the plug and image the hard drive," it's hard to introduce some sanity. What's actually forcing these dinosaurs to change is their inability to handle 1 TB hard drives and multi-TB SANs.

As you can tell I was pretty excited by the talks that night. Thanks again to Aaron and George for presenting.

Comments

H. Carvey said…
If this all sounds blatantly obvious, it's because it is.

I was just thinking the same thing myself...

...when you're stuck in a world where the process says "pull the plug and image the hard drive," it's hard to introduce some sanity.

How so? I really think that this argument is losing steam. Who are we trying to force to change? Guys like George and Aaron have done a fantastic job of showing the community what's possible, and many have jumped on board, realizing the vast potential of things like physical memory analysis.

Are you trying to say that "these dinosaurs" are...who...exactly? Anyone who lacks the ability to change is going to go the way of the dodo and (apparently) the Caribbean monk seal.
Richard Bejtlich said…
Hi Harlan,

Hanging out at Techno Security reminded me that a lot of people only think in terms of "bit-for-bit" and little else.
hogfly said…
Indeed, and most of them dwell on the gov't and LE side of the house. I spoke with several folks from that side and they all conveyed the same thing.
H. Carvey said…
Gents...

Why do you think that is? For those who maintain that way of thinking, why do you think they cling to it the way they do?
jth said…
@keydet89: I would guess Richard was referring to corporate policy-slash-IT/helpdesk, not the forensic analysts.

You're still faced, though, with the challenge of keeping the running system intact and untouched until a trained responder can arrive and acquire the volatile evidence. If your user population is spread out across a state, doing so becomes extremely difficult. That is part of why you still see the "pull & acquire" tactic in use. At least you know they didn't perform any extraneous actions on the system in that situation.
Ronald Weiss said…
Great topic, and here are some thoughts (as former LE and a trainer of gov types):

Why does the pull the plug mentality exist?

I think it comes down to several factors:

1. Courts and the process of accepting evidence... having a whole system image makes the lawyers and everyone feel more "secure," especially when it is a crucial piece of evidence. If there are missing parts to the evidence, it creates holes that can be used by opposing parties.

2. The perspective on digital evidence in the criminal justice system is derived from the collection of evidence at an actual physical crime scene. So the ideas taught to much of law enforcement are about "control the crime scene" and "protect the evidence," and this boils down to stopping the system in its tracks.

Now, in the time I have been doing this (since the late '90s), agents have always been taught to grab volatile data... just to different levels of sophistication.

3. "Building a Robust Case"

I think this is an excellent point. A digital case, or even a digital security investigation that will never go to criminal trial, should depend on multiple points of data to ensure accuracy and reliability. This is generally already done.


So why does the LE pull the plug mentality persist? Because the standard government model is to push people into these positions with a limited amount of training. But I see it in the private sector too... it is not just law enforcement.

It is also lawyers and juries, who have a vague understanding of the technology and seem to expect this method to "perfectly freeze" any digital evidence.

Everyone wants the smoking gun when I work cases... sometimes there is one. But often the case is made up of multiple pieces of evidence.

I think the answer is teaching investigators (LE or private sector) how to assess the matter they are investigating, identify all the potential sources of evidence, and then choose the best collection methods for those sources in a way that maximizes the reliability of each piece of the case they collect.
DanPhilpott said…
Can anyone recommend a reliable, up-to-date book or legal reference on the current rules of evidence related to digital evidence? I keep getting roped into doing incident response type forensics but am not interested in supporting civil or criminal investigations. Which means I need to know when to take my hands off a system, how to document what I have done up to that point and how to pass it off to someone more focused on supporting investigations. I seem to recall sea changes occurring in the rules of evidence as relates to digital media in the past few years so an up to date book or reference would be best.
Anonymous said…
Part of the mentality of pull the plug IS the training. We are taught by places such as NW3C to pull the plug. Not their fault. They provide excellent training and they give a caveat on the loss of potential evidence. They say develop your own agency policy. So the investigator looks at the brass and the brass says do what you were taught. Pull the plug.

Besides your arrays and servers, what about BitLocker?

Another problem is: who will stand up and say, "I know how to analyze volatile memory and processes"? Not in your world, but in the world of the local LE.

Court opinions can be changed. Computer forensics is evolving and someone (the prosecutor) needs to step up and educate the Courts on the changes. But which Prosecutor?

I believe that early on in your Incident Response Planning, a decision should be made on when to involve LE. When the time comes, call them in and pass on what you got. Then let them acquire the rest. Besides obtaining the data or evidence, there is chain-of-custody. That, IMHO, is more important than how you got it. You can explain to the Courts the reasons for not following the "Dinosaurs," but not having a chain-of-custody will get your stuff tossed.
