Triaging Malware Incidents

Tuesday, September 24, 2013 Posted by Corey Harrell
Triage is the assessment of a security event to determine if there is a security incident, its priority, and the need for escalation. As it relates to potential malware incidents, the purpose of triaging may vary. A few potential questions triaging may address are: is malware present on the system, how did it get there, and what was it trying to accomplish? Answering these questions should not require a deep-dive investigation that ties up resources and systems. Remember, someone needs to use the system in question to conduct business, and telling them to take a 2 to 4 hour break unnecessarily will not go over well. Plus, taking too much time to triage may result in the business side not being happy (especially if it occurs often), the IT department wanting to just re-image the system and move on, and less time for you to look into other security events and issues. In this post I'm demonstrating one method to triage a system for a potential malware incident in less than 30 minutes.

The triage technique and the tools to use are something I've discussed before. I laid out the technique in my presentation slides Finding Malware Like Iron Man. The presentation also covered the tools, as has my blog. The Unleashing auto_rip post explains the RegRipper auto_rip script, and the Tr3Secure Data Collection Script Reloaded post outlines a script to collect data, thus avoiding the need to take the entire hard drive. The information may not be new (except for one new artifact), but I wanted to demonstrate how one can leverage the technique and tools I discussed to quickly triage a system suspected of being infected.

The Incident


As jIIr is my personal blog, I'm unwilling to share any casework related to my employer. However, this type of sharing isn't even needed since the demonstration can be conducted on any infected system. In this instance I purposely infected a system using an active link I found on URLQuery. Infecting the system in this manner mirrors how systems are compromised every day, which makes the simulation worthwhile for demonstration purposes.

Responding to the System


There are numerous ways for a potential malware incident to be detected. A few include IDS alerts, antivirus detections, employees reporting suspicious activity, or IT discovering the incident while trying to resolve a technical issue. Regardless of the detection mechanism, one of the first things that has to be done is to collect data so it can be analyzed. The data is not limited to what is on the system, since network logs can provide a wealth of information as well. My post’s focus is on the system's data since I find it to be the most valuable for triaging malware incidents.

Leverage the Tr3Secure Data Collection Script to collect the data of interest from the system. The command below assigns a case number of 9-20, collects both volatile and non-volatile data (default option), and stores the collected data to the drive letter F (this can be a removable drive or a mapped drive).

tr3-collect.bat 9-20 F

The second script to leverage is the TR3Secure Data Collection Script for a User Account, which collects data from the user profile of interest. Most of the time it's fairly easy to identify the user profile of interest. The detection mechanism may indicate a user (e.g. antivirus logs), the person who uses the system may have reported the issue, the IT folks may know, and lastly whoever is assigned the computer probably contributed to the malware infection. The command below collects the data from the administrator and stores it in the folder with the other collected data.

tr3-collect-user.bat F:\Data-9-20\WIN-556NOJB2SI8-09.20.13-11.40 administrator

The benefit of running the above collection scripts instead of taking the entire hard drive is twofold. First, the collection scripts are faster than removing and possibly imaging the hard drive. Second, they limit the impact on the person who uses the system in question until the malware incident is confirmed.

Triaging the System


For those who haven't read my presentation slides Finding Malware Like Iron Man, I highly recommend doing so to fully understand the triage technique and what to look for. As a reminder, the triage technique involves the following analysis steps:

        - Examine the Programs Ran on the System
        - Examine the Auto-start Locations
        - Examine File System Artifacts

When completing those steps there are a few things to look for to identify artifacts associated with a malware infection. These aren't IOCs but artifacts that occur due to either the malware's characteristics or malware running in the Windows environment. Below are the malware indicators to look for as the analysis steps are performed against the data collected from the system (a quick way to sweep the parsed reports for these strings is sketched after the list).

        - Programs executing from temporary or cache folders
        - Programs executing from user profiles (AppData, Roaming, Local, etc)
        - Programs executing from C:\ProgramData or All Users profile
        - Programs executing from C:\RECYCLER
        - Programs stored as Alternate Data Streams (e.g. C:\Windows\System32:svchost.exe)
        - Programs with random and unusual file names
        - Windows programs located in the wrong folders (e.g. C:\Windows\svchost.exe)
        - Other activity on the system around suspicious files
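
Much of this review can be done by eye, but a quick first pass is to sweep the parsed reports for these strings. Below is a minimal sketch as a batch command; the report folder is an assumption, so point it at wherever the auto_rip and WinPrefetchView output was saved:

REM Sketch: flag any report lines containing paths that match the indicators above
findstr /s /i /l /c:"Temporary Internet Files" /c:"AppData\Local\Temp" /c:"ProgramData" /c:"RECYCLER" "C:\triage\reports\*.txt"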

Examine the Programs Ran on the System


The best way to identify unknown malware on a system is by examining the program execution artifacts. For more information about these artifacts refer to my slide deck, Harlan's HowTo: Determine Program Execution post, and Mandiant's Did It Execute? post. To parse most of the program execution artifacts, run Nirsoft's WinPrefetchView against the collected prefetch files and auto_rip along with RegRipper against the collected registry hives. Note: the analysis should be performed on another system and not the system being analyzed. Below is the command for WinPrefetchView:

winprefetchview.exe /folder H:\Data-9-20\WIN-556NOJB2SI8-09.20.13-11.40\preserved-files\Prefetch

Below is the command for auto_rip to parse the program execution and auto-start artifacts:

auto_rip.exe -s H:\Data-9-20\WIN-556NOJB2SI8-09.20.13-11.40\nonvolatile-data\registry -n H:\Data-9-20\WIN-556NOJB2SI8-09.20.13-11.40\nonvolatile-data\registry\lab -u H:\Data-9-20\WIN-556NOJB2SI8-09.20.13-11.40\nonvolatile-data\registry\lab -c execution,autoruns

Reviewing the parsed prefetch files revealed a few interesting items. As shown below there was one executable named 5UAW[1].EXE executing from the temporary Internet files folder and another executable named E42MZ.EXE executing from the temp folder.


Looking at the loaded modules for the 5UAW[1].EXE prefetch file showed a reference to the E42MZ.EXE executable, thus tying these two programs together.


Looking at the loaded modules for the E42MZ.EXE prefetch file showed references to other files including ones named _DRA.DLL, _DRA.TLB, and E42MZ.DAT.


The items identified in the prefetch files are highly suspicious and likely malware. Before moving on to other program execution artifacts, the prefetch files were sorted by last modified time to show the system activity around 09/20/2013 15:34:46. As shown below, nothing else of interest turned up.


The parsed program execution artifacts from the registry are stored in the 06_program_execution_information.txt report produced by auto_rip. Reviewing the report identified the same programs (E42MZ.EXE and 5UAW[1].EXE) in the Shim Cache as shown below.

C:\Users\lab\AppData\Local\Microsoft\Windows\Temporary Internet Files\Content.IE5\I87XK24W\5uAw[1].exe
ModTime: Fri Sep 20 15:34:46 2013 Z
Executed

C:\Users\lab\AppData\Local\Temp\7zS1422.tmp\e42Mz.exe
ModTime: Fri Sep 20 15:34:37 2013 Z
Executed

So far the program execution artifacts revealed a great deal of information about the possible malware infection. However, there are still more program execution artifacts on a Windows system that are rarely discussed publicly. One of these artifacts is something I have been using for some time, and there is next to nothing about this file on the Internet (not counting the few people who mention it in relation to malware infections). The artifact I'm talking about is the C:\Windows\AppCompat\Programs\RecentFileCache.bcf file on Windows 7 systems. I'm still working on trying to better understand what this file does, how it gets populated, and the data it stores. However, the file path indicates it's for the Windows application compatibility feature and its contents reflect executables that were on the system. The majority of the time, the executables I find in this artifact are ones that executed on the system. The Tr3Secure Data Collection Script preserves this file, and viewing the file with a hex editor shows a reference to the 5uAw[1].exe file.
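
A hex editor works fine, but the same check can be scripted. Below is a rough sketch using Sysinternals strings.exe, which extracts Unicode strings by default; the path to the preserved copy of the file is an assumption about the collection folder layout, so adjust it to match your output:

REM Sketch: list any executable paths recorded in the preserved RecentFileCache.bcf copy
strings.exe "H:\Data-9-20\WIN-556NOJB2SI8-09.20.13-11.40\preserved-files\RecentFileCache.bcf" | findstr /i /l /c:".exe"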



Examine the Auto-start Locations


The first analysis step of looking at the program execution artifacts provided a good indication the system is infected and some leads about the malware involved. Specifically, the step identified the following items:

- C:\USERS\LAB\APPDATA\LOCAL\MICROSOFT\WINDOWS\TEMPORARY INTERNET FILES\CONTENT.IE5\I87XK24W\5UAW[1].EXE
- C:\USERS\LAB\APPDATA\LOCAL\TEMP\7ZS1422.TMP\E42MZ.EXE
- C:\USERS\LAB\APPDATA\LOCAL\TEMP\7ZS1422.TMP\_DRA.DLL
- C:\USERS\LAB\APPDATA\LOCAL\TEMP\7ZS1422.TMP\_DRA.TLB
- C:\USERS\LAB\APPDATA\LOCAL\TEMP\7ZS1422.TMP\E42MZ.DAT

The next analysis step to perform in the triage process is to examine the auto-start locations. When performing this step one should not only look for the malware indicators mentioned previously but also for the items found in the program execution artifacts and the activity on the system around those items. To parse most of the auto-start locations in the registry, run auto_rip against the collected registry hives. The previous auto_rip command parsed both the program execution and auto-start locations at the same time. The parsed auto-start locations from the registry are stored in the 07_autoruns_information.txt report produced by auto_rip. Reviewing the report identified the following beneath the browser helper objects registry key:

bho

        Microsoft\Windows\CurrentVersion\Explorer\Browser Helper Objects
        LastWrite Time Fri Sep 20 15:34:46 2013 (UTC)

        {BE3CF0E3-9E38-32B7-DD12-33A8B5D9B67A}
                Class     => savEnshare
                Module    => C:\ProgramData\savEnshare\_dRA.dll
                LastWrite => Fri Sep 20 15:34:46 2013

This item stood out for two reasons. First, the key's last write time is around the same time when the programs of interest (E42MZ.EXE and 5UAW[1].EXE) executed on the system. The second reason was that the file name _dRA.dll was exactly the same as the DLL referenced in the E42MZ.EXE prefetch file (C:\USERS\LAB\APPDATA\LOCAL\TEMP\7ZS1422.TMP\_DRA.DLL).
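
For a focused re-check of just this key, RegRipper's bho plugin can also be run directly against the collected Software hive (a hedged example; the hive's exact file name and path depend on how the collection script stored it):

rip.exe -r H:\Data-9-20\WIN-556NOJB2SI8-09.20.13-11.40\nonvolatile-data\registry\SOFTWARE -p bho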

Examine File System Artifacts


The previous analysis steps revealed a lot of information about the potential malware infection. They flagged executables in the temp folders and a DLL in the ProgramData folder, and identified a potential persistence mechanism (a browser helper object). The last analysis step in the triage process uses these leads to identify any remaining malware or files associated with malware on the system. This step is performed by analyzing the file system artifacts, specifically the master file table ($MFT). There are a range of programs to parse the $MFT, but for this post I'm using TZWorks NTFSWalk. Below is the command for NTFSWalk. Note: the -csvl2t switch formats the output as a timeline.

ntfswalk.exe -mftfile H:\Data-9-20\WIN-556NOJB2SI8-09.20.13-11.40\nonvolatile-data\ntfs\$MFT -csvl2t > mft-timeline.csv

Reviewing the $MFT timeline provides a more accurate picture of the malware infection. After importing the csv file into Excel, searching on the keyword 5UAW[1].EXE brought me to the following portion of the timeline.
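
If Excel isn't handy, the same keyword search can be run from the command line against the csv produced by the ntfswalk command above (a quick sketch):

findstr /i /l /c:"5uAw[1].exe" mft-timeline.csv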


The cool thing about the above entry is that the 5UAW[1].EXE file is still present on the system and it was the initial malware dropped onto it. Working my way through the timeline showed what occurred next.


Numerous files were created in the C:\ProgramData\savEnshare folder. The file names are exactly the same as those referenced in the E42MZ.EXE prefetch file. The last entries of interest in the timeline are below.


These entries show the program execution artifacts already identified.

Confirming the Malware Infection


The triage technique confirmed the system in question does appear to be infected. However, the last remaining task was to confirm whether any of the identified items were in fact malicious. The TR3Secure Data Collection Script for a User Account collected a ton of data from the system in question. This data can be searched to determine if any of the identified items are present. In this instance, the ProgramData folder was not collected and the temp folder didn't contain the E42MZ.EXE file. However, the collected Temporary Internet Files folder did contain the 5UAW[1].EXE file.

The VirusTotal scan against the file confirmed it was malicious with a 16 out of 46 antivirus scanner detection rate. The quick behavior analysis on the file using Malwr not only shows the same activity found on the system (keep in mind Malwr ran the executable on XP while the system in question was Windows 7) but also provides information, including hashes, about the files dropped into the ProgramData folder.
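
A small side note: hashing the collected copy first lets you search VirusTotal by hash before deciding whether to upload anything. A minimal sketch using the certutil utility built into Windows, run from the folder holding the collected file:

certutil -hashfile "5uAw[1].exe" SHA256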

Malware Incidents Triaging Conclusion


In this post I demonstrated one method to triage a system for a potential malware incident. The entire triage process takes less than 30 minutes to complete (keep in mind the user profile collection time depends on how much data is present). This is even faster than a common technique people use to find malware (conducting antivirus scans), as I illustrated in my post Man Versus Antivirus Scanner. The demonstration may have used a test system, but the process, techniques, tools, and scripts are exactly the same ones I've used numerous times. Each time the end result is very similar to what I demonstrated. I'm able to answer the triage questions: is malware present on the system, how did it get there, what's the potential risk to the organization, and what are the next steps in the response.

Tr3Secure Data Collection Script Reloaded

Sunday, September 15, 2013 Posted by Corey Harrell
There are a few movies I saw in my childhood that had an impact on me. One of those movies was Back to the Future. To this day I still have vivid memories of leaving the theater after watching it, filled with wonder and excitement. The final scene in the movie is relevant to the discussion about triage scripts. In the scene, Doc reversed his time-traveling DeLorean onto the road. Marty, sitting in the passenger seat, says "hey Doc you better back up we don't have enough road to get up to 88". Marty's comment was based on his previous experience with the DeLorean. The car had to reach a speed of 88 mph to time travel, and reaching that speed required enough road to drive on. Doc said to Marty in response "Roads? Where we're going we don't need roads". Then the time-traveling DeLorean lifted off of the road and flew back at the screen. Whenever I think about triage scripts I paraphrase Doc to myself, saying "Hard drives? Where we're going we don't need hard drives". My updated Tr3Secure collection script makes this statement a reality for triaging systems; it makes it possible to go in a direction where we "don't need hard drives".

Re-introducing the Tr3Secure Volatile Data Collection Script


Some time ago I released the Tr3Secure Volatile Data Collection Script and accompanied the release with the blog post Dual Purpose Volatile Data Collection Script describing it. The script's focus was on the collection of volatile data only and it served a dual purpose. "First and foremost it had to properly preserve and acquire data from live systems." "The second required function was the tool had to help with training people on examining volatile data." The script served its dual purpose but it had its limitations. As a result, I overhauled the script with a focus on improving its capability as a triage tool. In the process, as it evolved, its name changed to properly reflect what the tool is: meet the Tr3Secure Data Collection Script.

Tr3Secure Data Collection Script


It's probably easier to say what in the script remained the same than it is to say what is new. For practical usage, the script retained its flexibility, organized output, documentation in a collection log, and preservation according to RFC 3227. For the training usage, the script retained the ordered output reports and references pointing to the books Windows Forensic Analysis, 2nd edition and Malware Forensics: Investigating and Analyzing Malicious Code for the volatile data collection. Before going into the changes I have to give a shout out to Troy Larson. Some of the new functionality in this script was inspired by his ideas, and the wicked cool For loop to grab the user profile registry hives is his. Now let's move on to the changes, starting with the minor updates followed by the significant upgrade.

Minor Updates


The first noticeable modification is the way the script executes. I dropped the need to interact with the script; it now executes with command-line syntax for complete automation. Now you can enter one command to collect volatile data, non-volatile data, or image the memory. Speaking about imaging memory leads me to my next change. I dropped Memoryze and went with the winpmem program. The last minor update I wanted to highlight is an addition to the preservation activities. When the script runs it tries to preserve certain data to prevent evidence from being overwritten. I added the collection of two more items, one of which is the NTUSER.DAT registry hive of the user account running the script. For the other minor updates refer to the change_log.txt accompanying the scripts.

Significant Upgrade


The original Tr3Secure Volatile Data Collection Script focused on collecting volatile data such as open files, network connections, and running processes. The one area that I felt was lacking was the script's ability to collect non-volatile data. When I approached upgrading the script I asked myself one simple question: what data would I want from the hard drive if I couldn't have the entire hard drive? The end result is telling, given my paraphrase of the Back to the Future quote: "Hard drives? Where we're going we don't need hard drives". Below is a highlight of the new data collected by the Tr3Secure Data Collection Script.

        - Grabs the partition information
        - Images the MBR (to help with MBR infectors)
        - Images the hard drive from the MBR to the sector of the first partition (to help with MBR infectors)
        - Collects all registry hives. By all I mean the ones in the config folder, the RegBack folder (for Windows 7), and the hives from every loaded user profile (see the sketch after this list)
        - Grabs select Windows event logs and, on Windows 7, the entire event log folder
        - Grabs the scheduled tasks
        - Grabs the NTFS artifacts $MFT and $LogFile. I opted to go with RawCopy from my post Tools to Grab Locked Files
        - Grabs the group policies applied to the system
        - Grabs the McAfee logs and quarantine folder (this is for demo purposes and should be customized for your environment)
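
To give a feel for the per-profile registry hive collection, below is a stripped-down sketch of the kind of For loop Troy's idea is based on (saved in a batch file). It is not the actual script: the real collection uses a locked-file copier since plain copies fail on hives belonging to logged-on users, and the output path here is just an example.

REM Sketch only: walk every profile under C:\Users and grab its NTUSER.DAT hive
REM In practice a raw-copy tool (e.g. RawCopy) is needed for hives that are in use
for /d %%p in ("C:\Users\*") do (
    if exist "%%p\NTUSER.DAT" xcopy "%%p\NTUSER.DAT" "F:\Data\registry\%%~nxp\" /h /y
)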

Tr3Secure Data Collection Script Syntax


Viewing the script with a text editor shows the syntax on how to use the script and all of my detailed comments. Below is the syntax to run the script:

tr3-collect.bat [case number] [drive letter for storing collected data] [menu selection #]

[case number] = the unique identifier for the case

[drive letter for storing collected data] = drive letter of where the collected data is to be stored

[menu selection] = optional field and can be used to collect the following:

        1 = Acquire Memory Forensic Image
        2 = Acquire Volatile Data
        3 = Acquire Non-Volatile Data
        4 = Acquire Volatile and Non-Volatile Data (default)
        5 = Acquire Memory Forensic Image, Volatile, and Non-Volatile Data

e.g.

tr3-collect.bat 2012-09-14_1 F
tr3-collect.bat 2012-09-14_1 F 3

A cool thing to keep in mind: the drive letter used to store the collected data can be either removable media attached to the system or a drive mapped to a network share.
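
For the mapped drive option, something along these lines works (a hedged example; the server, share, and case number are made up):

net use Z: \\server\triage-share
tr3-collect.bat 2012-09-14_1 Z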

Tr3Secure Data Collection Script for User Account


In my talk Finding Malware Like Iron Man I walked through a mock scenario responding to a system and triaging it for malware. One of the comments I made was that it is faster and more efficient to collect data either by going over the wire or by using a collection script. For an incident responder time is of the essence, so taking the time to remove and image a hard drive takes too long. Some may see the new functionality in the Tr3Secure Data Collection Script and say to themselves: wait a second, you aren't collecting certain data, so the hard drive is still needed. Those who said this to themselves are correct, and my response to them is to check out the new script that accompanies the Tr3Secure Data Collection Script. The Tr3Secure Data Collection Script for User Account collects data from a specified user profile on the system. Below is a highlight of the data collected.

- Grabs the Recent folder contents, including LNK files and jump lists (a rough sketch of this style of folder copy follows the list)
- Grabs the LNK files in the Office Recent folder
- Grabs the Network Recent folder contents
- Grabs the entire temp folder (great location to find attack vector artifacts)
- Grabs the entire Temporary Internet Files folder
- Grabs the PrivacIE folder (to see why check out my post Malware Root Cause Analysis)
- Grabs the Cookie folder
- Grabs the Java Cache folder contents (Java anyone)

One thing I wanted to be clear about is why this second script was needed. In corporate environments, and to a certain extent on systems used by home users, there are multiple loaded user profiles on a system. On pretty much every single examination I've done over the last five years, my interest has only been in one or two user profiles. The other profiles were old and left on the system. Trying to collect the above data from every loaded user profile is not only inefficient but takes way too much time, time that is better spent responding to the system as opposed to waiting for the collection script to finish. As such, I put the collection of the user profile data in a separate script so it can be run against the one or two user profiles of interest.

Tr3Secure Collection Script for User Account Syntax


Viewing the script with a text editor shows the syntax on how to use the script and all of my detailed comments. Below is the syntax to run the script:

tr3-collect-user.bat [path to store collected data] [user profile name]

[path to store collected data] = the path to store the collected data without any quotes or spaces

[user profile name] = the user account's profile name to collect data from

e.g.
tr3-collect-user.bat F:\Data-demo2\computername-08.12.13-19.14 jsmith

Similar to the Tr3Secure Data Collection Script, the path to store the collected data can either be an attached removable media or a mapped network share.

Where Are We Going


When I made my comment in my Finding Malware Like Iron Man presentation, it was because of the capability I have with these triage scripts. I first run the Tr3Secure Data Collection Script to grab the volatile and non-volatile data, followed by the Tr3Secure Data Collection Script for User Account to collect the user data. Both scripts are pretty fast and they provide me with all of the data I would need to triage a system. I can even leverage the triage technique I outlined in the presentation against the collected data to find malware and do root cause analysis in less than 20 minutes. Not bad, and hopefully my Back to the Future reference now makes a little more sense: "Hard drives? Where we're going we don't need hard drives".


You can download the TR3Secure Data Collection Script from the following download site.



Tools to Grab Locked Files

Thursday, September 12, 2013 Posted by Corey Harrell
Some time ago I released my Tr3Secure Volatile Data Collection Script, which is a dual purpose triage script. The script can not only be leveraged “to properly preserve and acquire data from live systems” but can also help to train people on examining volatile data. I have completely overhauled the Tr3Secure collection script, including adding the collection of non-volatile data. I wanted to release the updated script to the community but I encountered a small issue.

At the time my updated script was collecting locked files using HBGary’s FGET tool. FGET is a handy little tool. It can collect locked files such as registry hives both locally and remotely. It can natively collect a set of files such as the registry hives, or it can collect any file or NTFS artifact specified by file path. The best part about FGET was the ability to use it in scripts. FGET was freely available, at first downloadable from the HBGary website and then from the registered users’ portion of the HBGary website. Unfortunately, FGET is no longer available for download and this was my small issue. How could I release a script that depended on a tool no longer available? I can’t, so I set out to find a FGET replacement that gives me the ability to collect locked files and NTFS artifacts while also scripting with it in a Windows batch file. This post outlines the items I came across as I searched for my replacement.

Invoke-NinjaCopy

The first item up came from a recommendation by Jon Turner (@z4ns4tsu). Invoke-NinjaCopy is a PowerShell script that, according to its GitHub home, “copies a file from an NTFS partitioned volume by reading the raw volume and parsing the NTFS structures. This bypasses file DACL's, read handle locks, and SACL's”. The clymb3r blog post Using PowerShell to Copy NTDS.dit / Registry Hives, Bypass SACL’s / DACL’s / File Locks explains why the author created the script and demonstrates how they were able to grab the NTDS.dit (aka Active Directory) off a live system. Out of everything I came across, Invoke-NinjaCopy was the only script/tool capable of grabbing locked files either locally or remotely like FGET could. Towards the top of my to-do list is to take a closer look at Invoke-NinjaCopy since I think it could be helpful in incident response activities in addition to pen testing.

RawCopy

Lately it seems like if I need anything related to the NTFS file system, I first check Joakim Schicht’s mft2csv website. Joakim’s site is a gold mine and anyone doing forensics on the NTFS file system should become familiar with it. One of his available tools is RawCopy, which is an “application that copy files off NTFS volumes by using low level disk reading method”. RawCopy can copy out either the data ($DATA) or all attributes from the file’s MFT entry. It can copy files using either the file path or the MFT record number. Download RawCopy from here.
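
As a rough illustration only (the argument layout has changed across RawCopy releases, so treat this as an assumption and check the usage text of the version you download), copying a locked Software hive out to a collection folder looks something like:

RawCopy.exe C:\Windows\System32\config\SOFTWARE E:\output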

TZWorks NTFSCopy

Next up is a tool from the folks over at TZWorks called NTFSCopy. NTFSCopy is a “tool that can copy any file (or alternate data stream) from a NTFS file system. This can be either from a live system or from an imaged NTFS volume or drive”. Similar to the other items, the tool is able to bypass locks and permissions to grab files and it can copy NTFS artifacts. To copy a file you can specify the file name, cluster, or MFT record number. NTFSCopy works as described and can quickly copy NTFS artifacts and locked files from live systems. Anyone wanting to copy files from a live system should take a close look at NTFSCopy (download the tool from here). Just keep in mind the free version is for non-commercial use only, but there is a commercial version available.

ircollect

The next tool up is a Python script developed by David Kovar. ircollect “is a Python tool designed to collect files of interest in an incident response investigation or triage effort”. David’s blog post IRcollect – collect incident response information via raw disk reads and $MFT parsing provides additional information about the script. I think this is an interesting project since everything is done using Python and it’s one I’m going to keep my eye on.

OSTriage

The last item may be overkill as a FGET replacement since it is a complete triage tool. Eric Zimmerman’s OSTriage is still in development and I was afforded the opportunity to test it. The tool is able to parse artifacts and presents a range of information. Some of the presented information includes: P2P, network information (ARP cache and open ports), basic system information, browser history, browser searches, and USB devices. OSTriage even has the capability to image RAM. This is a tool to be on the lookout for.


Those wondering what I ended up deciding to replace FGET with will have to wait until my next post, when I release the new and improved TR3Secure collection script.

My Journey into Academia

Monday, September 2, 2013 Posted by Corey Harrell
The frequency of my blog posts was slowly decreasing until I finally reached the point when I decided to take a hiatus from jIIr. My decision to stop blogging wasn’t because my heart is no longer in it, I ran out of ideas, or I lost interest in sharing with others. My decision was the result of a time management issue. I’ve been focused on another endeavor that has left me with very little time for blogging. This endeavor has been my journey into academia. As I recently reached a milestone on this journey (developed my first course) I wanted to take the time to talk about why I went from DFIR practitioner to DFIR educator.

Why Even Bother with Academia


To be honest academia wasn’t even on my radar. An opportunity presented itself and after careful consideration I decided to pursue it. However, before saying what my final deciding factor was for starting this journey it’s necessary to reflect on our DFIR field and how academia supports it.

There has been an issue within our field that seems to be growing with each passing year. The issue is obvious for those who are active on DFIR forums, mailing lists, and conducting interviews to fill positions. Eric Huber (A Fistful of Dongles) addressed this issue in his post Ever Get The Feeling You’ve Been Cheated? Eric made a lot of great points in the post so it’s well worth the read. I wanted to pull out two quotes to highlight the issue I referenced.

“During the early years, it was rare to see applicants who had degrees in digital forensics, but I’m finding it increasingly common in recent years. One of the things that I have been struck by is how poorly most of these programs are doing in preparing students to enter the digital forensics fields.”

“One of the core issues that I see with the programs that aren’t turning out prepared students are the people who are teaching them.”

The issue is some academic programs are not preparing their students for a career in the digital forensic and incident response fields. I’m not talking about skills such as students not being able to run tool XYZ since this can be easily addressed through training. The deeper issue is students not being able to analyze and evaluate DFIR problems to come up with solutions. Like Eric, I don’t fault the students to a certain degree. The blame goes to the academic programs that are hastily putting together information security and digital forensics programs to jump on the bandwagon.

As practitioners in this field we have a choice to make. We can continue on with not hiring students coming out of these programs, ignoring their requests for homework answers in forums, and being irritated about those doing a disservice to our field by being unqualified and doing casework. Or we can do something different; we can try to change it by being involved with academia and sharing our insight and expertise to improve the curriculum. When I was presented with the opportunity, this is what my decision came down to. My choice was simple: to use my ability to put together a course that helps students in their careers in the digital forensic and incident response fields. In the words of Jon Rajewski about why this should matter to all of us: “they are the future generation of digital forensic / incident responders”.

Why Academia and Not Training


My decision to start my journey into academia wasn’t solely to help those entering the DFIR field. I also wanted to help provide curriculum to benefit those already in the field. At the time I had an idea about why training wasn’t an option but I couldn’t quite put my finger on it. That was until I started looking into the differences between education and training. The difference is illustrated in Peter Fabri’s story when he went back to graduate school. He contrasted the two by saying “training is concerned with acquiring a skill” while “the aim of education is broader than training”. He went on to say education “strives to prepare learners to be analytical thinkers and problem solvers by facilitating the learning of principles, concepts, rules, facts, and associated skills and values/attitudes”.

It might be more helpful to put the difference between education and training in the context of DFIR. The paper Computer Forensics: Training and Education compared the two, saying training "has the goal of training students for an occupation within the computer forensics field". The paper further states “training is also limited in that it focuses students’ attention on current techniques and methods rather than processes”. On the other hand, the paper explained education “destines to educate students on the needed capabilities but goes a step further in attempting to teach the students a greater level of detail on the goings on behind the scene”.

Continuing to explore this difference is the article Education versus Training: Selecting the Right Lifelong Learning Experience (I highly recommend reading this article). As it relates to training, the article explains:

“The bottom line is to seek training to acquire skills and knowledge for short-term advantage. Training brings the learner up to the level of others in the industry and will tend to make them the same as the experts they seek to emulate”

As it relates to education the article says:

“Education is different. It should be used to acquire a mindset not currently owned or to deepen a mindset already possessed”

“Education broadens the learner, makes him different from everyone else and helps him think in his own way to solve problems that have not been solved before. Of course educational programs include training in the skills and knowledge of the discipline, but they go further to develop thinking abilities, attitudes and behavior patterns that might be classified as a mindset. In this sense, training programs do not include education but education programs often include training.”

The key difference between education and training, as it relates to digital forensics and incident response, is that one’s goal is to equip the learner with the skills, techniques, and methods to tackle a known problem while the other’s goal is to develop the learner into an analytical problem solver able to tackle any problems they may face. To illustrate this point it might be helpful to share two experiences I’ve seen in my career. Numerous people in DFIR have attained most of their skills and knowledge through training, and they weren't developed into analytical thinkers through a formal education. At times this puts them at a disadvantage.

One day I was leading a local forensic group meeting, walking attendees through an analysis of a test image. I wanted everyone to participate so I provided an option to use free or open source digital forensic tools. As I was going through the analysis, someone in attendance said “I could do this if I had [insert commercial forensic tool here]”. This person wasn’t approaching the analysis as a problem solver asking what tools could help carry out their process. Instead they fell back on their training, and without the tool they were trained on they were helpless.

Another example is one I see online. In these instances it’s people who are new to finding malware on systems but they have recently completed some training on the topic. They have a system where they must find malware. In an effort to use their newfound memory forensic skills they try to virtualize the system, dump the memory, and then try to analyze the memory to find the malware. This is a good technique but they never take a step back to look at the problem they must solve and the process to use to solve the problem. Again, they fall back on their training to try to solve what they are faced with.

This key difference is why I felt more aligned with academia: trying to educate others into the DFIR mindset as opposed to instructing them on a specific skill. As the Education versus Training: Selecting the Right Lifelong Learning Experience article states, I wanted the learners to be “acting after deep thought and analysis; broad” instead of “acting out of new habits and skills; narrow”. I wanted the end result to be “makes you different from others, thoughtful and mindful, educated” and not “make you the same as others with the same training, measure up”.

These were the two primary reasons why I started my journey into academia; why I’m using my DFIR practitioner mindset and skill set to be a DFIR educator. The other perks such as research resources and extra income were just icing on the cake.