Write-up Magnet Weekly CTF

It is time for some fun and time to sharpen up my mobile forensics skills. Magnet Forensics has decided to organize a weekly CTF: every Monday during the last quarter of 2020 a new challenge is published. This gives everyone a week to work on a challenge before it is closed and a new one is published. I really like this setup, as it is a lot easier to combine with work life. More information about the CTF can be found on the Magnet website. I will use and update this article to write down my methodology for solving each challenge, and hopefully the answer as well.

CTF Setup 

For the month of October a mobile Android image is used (download here).
For the month of November a Linux image is used (download here).
For the month of December a Windows memory image is used (download here).


I used several tools for this CTF:
  • Magnet Axiom, thanks to Jessica 
    (Twitter) and Trey (Twitter) for the trial. 
  • Autopsy, which can be downloaded here 
  • ALEAPP by Alexis Brignoni (Twitter), which can be downloaded here
  • Volatility Framework can be downloaded here

Week 1

Question
What time was the file that maps names to IP's recently accessed?
(Please answer in this format in UTC: mm/dd/yyyy HH:MM:SS)

Methodology
After giving this question some thought, my first idea was DNS, because that is what a DNS system does, right: mapping (host)names to IP's. I performed several keyword searches related to DNS, which resulted in lots of hits, but nothing seemed to be related to the question. After some more Googling and thinking I stumbled upon the hosts file, which is another file that can be used to map names to IP-addresses. This turned out to be a winner: when searching for the hosts file I got several hits in the Downloads section of the Android device as well as in the /etc directory. Both files had the same timestamp, so the last step was converting the timestamp to the UTC time format.
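For those unfamiliar with it: the hosts file is plain text with one mapping per line; on Android it typically only contains the loopback entries, so any extra entry stands out immediately.

$cat /etc/hosts
127.0.0.1       localhost
::1             ip6-localhost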





Answer

Week 2

Question
What domain was most recently viewed via an app that has picture-in-picture capability?

Methodology
A very interesting question this week around the use of picture-in-picture (PIP) capabilities by Android applications. Because this was a new topic for me, I had to do some research on it. Some references that were useful for me: the official documentation by Android Developers, and a list of apps that support PIP capabilities as of April 2020, which can be found here.

The next step was finding out which domains were recently viewed. I used the 'Chrome Web History' and 'Chrome Web Visits' views in Axiom.




Both artefact views showed that the malliesae.com domain was most recently viewed.


At this point I wasn't really sure that this was the correct answer; it felt a bit too easy. So I took some extra steps to prove/disprove my current line of thinking.
In Android there are several locations that are really useful for finding out what recently happened on a device. The following locations are of interest for our investigation:


Recent Images (data\system_ce\0\recent_images): contains screenshots/snapshots of recently used applications; apps can opt out of this in case sensitive data is captured on the screen. Files are in .png format.
Recent Tasks (data\system_ce\0\recent_tasks): contains tasks that were recently executed on the device. Files are in .xml format and can be analyzed for further information, such as the app associated with a task.
Snapshots (data\system_ce\0\snapshots): contains snapshots of apps that were recently moved to the background. Three files are present per snapshot: the snapshot itself in .jpg, a reduced/compressed version, and a proto file that can be used to find out which application the snapshot belongs to.
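Once these directories are exported from the image, a quick way to see which app belongs to which task is to pull the package names out of the task XMLs. A sketch, assuming the per-task files are named like <task_id>_task.xml (the exact XML attributes vary per Android version, but the package name is always in there):

$grep -o 'com\.[a-z0-9._]*' data/system_ce/0/recent_tasks/*_task.xml | sort -u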

Let's test the above artefacts. Looking at Recent Images, you will find that the captured images start with a certain (random) number; on this device we can see several images starting with 329, as shown below.


At that point you can use the information in Recent Tasks to find out which task relates to a certain image. The tasks are captured with a number as well, and that number is the same across the various artefacts.

So if we want to find out what task a certain image belongs to, we can open the associated XML file. Let's look at task 329; it contains the following information:

Which shows that the images we found in the Recent Images are associated with the Twitter application. 

Lastly we will look at the Snapshots, which will show us an actual screenshot of the activity. Each snapshot contains 3 files, the snapshot itself, a reduced version of it and a proto file.

The snapshot shows that this person was in a Twitter DM conversation with a person named Alan Brunswick.



That was a short lesson on the recent activity that you can find on an Android device. But how does this relate to the Weekly Challenge question, you might ask?

Well, if we follow the same process for the task with task_id 320, we can see in the associated task XML that this task relates to Chrome activity.

And the Snapshot for the task with task_id 320.


The Snapshot shows that Chrome was used to open the domain 'malliesae.com'.

Answer

Week 3

Question
Which exit did the device user pass by that could have been taken for Cargo?

Methodology
This CTF has been a lot of fun so far, and for Week 3 the Magnet Team came up with a very interesting question. The starting point for this week was the hint that came with the question, pointing to a recent talk by the Magnet Team, which you can find here. The talk discusses different types of evidence on Android and iOS devices.
It was an interesting talk, and one of the topics discussed was Google Maps artefacts and how you can basically plot on the map the route someone's device took in the past. This was in line with my thoughts on how to solve this challenge, so I started looking at the images and the associated EXIF data to find out where certain images were taken. I already knew that the device and its user had travelled to Norway, so that was my starting point, and because of the cargo exit I suspected it had something to do with airports. I made a short timeline of the different images, searched for the coordinates on Google Maps and quickly found the airport.

Google Maps analysis Part I

I started plotting the coordinates and looking around in Google Maps. That didn't lead to immediate success. Then I rewatched some parts of the webinar and learned something new: on Android devices, pictures with the prefix MVIMG_ are actually Motion Photos, which include a brief video embedded in the image file. The next step was digging into that and finding out how to extract the videos from the pictures. I used this script to extract all videos from the images.
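If you want to do the extraction by hand instead of with the script, the embedded MP4 can be carved with standard tools. A sketch, assuming the first 'ftyp' hit belongs to the embedded video: the MP4 container starts 4 bytes before that marker (those 4 bytes are the box size field), so find the marker's byte offset and carve from there.

$OFFSET=$(grep -abo ftyp MVIMG_20200307_130221.jpg | head -1 | cut -d: -f1)
$dd if=MVIMG_20200307_130221.jpg of=video.mp4 bs=1 skip=$((OFFSET - 4))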

Timelining

Next step was making a short timeline of the device, to understand the direction it took and which roads were taken. 

IMG_20200307_053704.jpg, taken on 07/03/2020 10:37:06, shows a picture from inside a plane, without geo information available. Most likely Oslo Airport, based on the next pictures.
MVIMG_20200307_130221.jpg, taken on 07/03/2020 12:02:24, shows a short moving image of someone travelling, with a train track on the right side (keep that in mind). The coordinates are 60°11'38.7"N 11°5'46.65"E, which is at Oslo Airport.
MVIMG_20200307_130237.jpg, taken on 07/03/2020 12:02:39, shows a short moving image of someone travelling in a so-called Flybussen, a shuttle bus that can be taken from Oslo Airport; the coordinates are still the same. More info on the Flybussen can be found here.
MVIMG_20200307_130326.jpg, taken on 07/03/2020 12:03:28, shows a short moving image of someone travelling, with cars passing in the other direction; the coordinates are still the same.
IMG_20200307_185206.jpg, taken on 07/03/2020 17:52:08, is a picture in Oslo itself, with the coordinates 59°55'26.47"N 10°47'39.86"E.
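Google Maps accepts the DMS notation above directly, but if you ever need decimal degrees, the conversion is simply degrees + minutes/60 + seconds/3600; for the airport latitude:

$echo "60 + 11/60 + 38.7/3600" | bc -l
60.19408333333333333333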

This gave me an idea for the direction that the user and device took.

Google Maps analysis Part II

Next up I opened Google Maps again, started at 60°11'38.7"N 11°5'46.65"E and followed the highway in the right direction. Then I saw this picture:

Now if you compare that to the short video in MVIMG_20200307_130221.jpg:



For me this was evidence that I was on the right track (no pun intended). Following this highway on Google Maps an exit comes up rather soon:


If you follow this exit you will find the following sign:


So I thought the answer was 2, or the name on the sign; however, on the 3rd try I managed to put in the right answer :)

Answer


Week 4

Question 
Chester likes to be organized with his busy schedule. Global Unique Identifiers change often, just like his schedule but sometimes Chester enjoys phishing. What was the original GUID for his phishing expedition?

Methodology
When you first read the above, this might sound like a weird question, but if you have been working on this case it makes more sense. For this challenge we are analyzing the phone of Chester. There are several Twitter DMs where Chester is discussing with his hacker friend Alan Brunswick that they are going to attack Mallie Sae. As part of the discussions, Alan instructed Chester to write a phishing email to the CEO of Mallie Sae. On 24-03-2020 at 00:15:01, in a Twitter DM, Chester shared the following phishing text.



Based on this text I continued my search and found that the Evernote application was used to write the phishing text. I found 3 unique Evernote Notes in the Evernote database located at:

/data/data/com.evernote/databases/user213777210-1585004951163-Evernote

I opened the database with my SQLite DB Browser and found that there were 3 notes in the 'notes' table:

The first thing I saw was that the notebook GUID was the same for all notes, which led me to believe that the starting note was modified. If you convert the created times, which are in epoch format, the order is the same as represented in the above picture.
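The same check can be done from the command line with sqlite3. A sketch, assuming the created column holds milliseconds since epoch and that the column names match what DB Browser showed (verify with .schema notes):

$sqlite3 user213777210-1585004951163-Evernote "SELECT guid, title, datetime(created/1000, 'unixepoch') FROM notes ORDER BY created;"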

First I tried the common notebook GUID ebe60554-09c2-4583-bd9b-170930d1a5aa
My second try was the GUID belonging to the phishing attempt c80ab339-7bec-4b33-8537-4f5a5bd3dd25.
Then I did some more digging and found the oldest GUID, 0a826c39-ba5c-4772-944d-a96dd0e90eeb, which belongs to the first note.

This was the end of Week 4 for me: you only get 3 attempts, and unfortunately my answers were incorrect. Later on I also found the table 'guid_updates', which contains old GUIDs for notes.


You will see that there is an old_guid for the phishing attempt with GUID c80ab339-7bec-4b33-8537-4f5a5bd3dd25. So it might very well be that this was the correct answer. 
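Pulling those mappings out is a one-liner as well; old_guid is visible in the screenshot, but the name of the column holding the current GUID is my assumption, so check .schema guid_updates first:

$sqlite3 user213777210-1585004951163-Evernote "SELECT old_guid, guid FROM guid_updates;"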

Answer
As mentioned, I did not solve this week's challenge; however, for completeness' sake I've added the correct answer here based on other people's write-ups.


Week 5

Question
What is the original filename for block 1073741825?

Methodology 
This week's question was the first one on the Linux image that will be used for the upcoming challenges. The images were created by Ali Hadi (Twitter) (Github), and for this challenge we will use the images related to case 2, 'Compromised HDFS Cluster', which was also presented at OSDFCon19 (slides here).

When I saw this question, the first thing I thought was that I needed to brush up my filesystem forensics skills. The question sounded like you would need to take several steps to convert a block ID to an actual name. I knew that the filesystem for this challenge was EXT4, so I read through several resources that were very useful here; some examples: 1, 2, 3, 4, 5, 6.
After reading and studying those references, I thought it was time to look at some of the filesystem artefacts, but first I needed to mount the various images. This turned out to be the main challenge for this week. We received three E01 files, which can be easily mounted with ewfmount; I used the SANS SIFT workstation, a Linux distribution that ships with lots of forensic tools.

The mounting process I used consists of two steps; the first step is mounting the images that are still in the E01 format. I used the following commands to mount all three images:

$ewfmount HDFS-Master.E01 /mnt/ewf_master/
$ewfmount HDFS-Slave1.E01 /mnt/ewf_slave1/
$ewfmount HDFS-Slave2.E01 /mnt/ewf_slave2/
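Note that the mount points have to exist before ewfmount can use them, so create them first if needed:

$mkdir -p /mnt/ewf_master /mnt/ewf_slave1 /mnt/ewf_slave2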

All three of the above directories will contain the image, which you can then inspect with 'mmls' to determine what partitions are present, as shown below:



The 'mmls' output for the two slave drives is very similar, with the same partitions and offsets. The normal next step would be to mount the Linux partition and include an offset to the start of the partition. You can calculate the offset by multiplying the number of bytes in a sector (512) with the start sector of the partition (2048); the result of that calculation is 1048576. The resulting command would look like this:

$mount -t ext4 -o ro,loop,offset=1048576 /mnt/ewf_master/ewf1 /mnt/linux_master 

The -t flag is used to set the filesystem type, which in our case is ext4.
The -o flag is used to pass mount options, in this case ro for read-only.
The loop option is used to create a loop device.

The above command failed for me, and after troubleshooting and trying out different settings I found that the norecovery option was required to successfully mount the images (this is typically needed when the filesystem has a dirty journal, since a read-only mount cannot replay it). The commands that I used to mount all three images:

$mount -t ext4 -o ro,norecovery,loop,offset=1048576 /mnt/ewf_master/ewf1 /mnt/linux_master
$mount -t ext4 -o ro,norecovery,loop,offset=1048576 /mnt/ewf_slave1/ewf1 /mnt/linux_slave1
$mount -t ext4 -o ro,norecovery,loop,offset=1048576 /mnt/ewf_slave2/ewf1 /mnt/linux_slave2

The next step was searching for the answer, which turned out to be pretty straightforward: I did a simple grep search for the block number that was part of the question:

$grep -i -R 1073741825 /mnt/linux* 

Which resulted in several hits on the Hadoop logs as shown below:


The master node of a Hadoop cluster contains a so-called namenode master log, which holds information about the allocation status of blocks and files. I studied the output of the grep command again and saw that a filename was allocated to the block. This turned out to be the correct answer.

Grep FTW!

Answer

Week 6

Question Part One 
Hadoop is a complex framework from Apache used to perform distributed processing of large data sets. Like most frameworks, it relies on many dependencies to run smoothly. Fortunately, it's designed to install all of these dependencies automatically. On the secondary nodes (not the MAIN node) your colleague recollects seeing one particular dependency failed to install correctly. Your task is to find the specific error code that led to this failed dependency installation. [Flag is numeric]

Methodology 
The first thought that came to my mind with this question was: how do I find out more about installations and dependencies? And of course the answer was the APT logs. Next I opened up the APT logs, which are stored in both Linux slave images at the following location: /var/log/apt

The directory contains two files:
- history.log
- term.log

The history log stores information about packages installed with APT.
The term log stores information about what happens during the installation of a package and its dependencies.

I searched both files for 'error'; there are several hits for Java/Hadoop in the history log. In the term.log there is a very specific and famous error code that was thrown when the user tried downloading Oracle Java.
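Run against the slave images mounted in Week 5, the search is a one-liner:

$grep -i error /mnt/linux_slave1/var/log/apt/term.log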


On to part 2...

Question Part Two
Don't panic about the failed dependency installation. A very closely related dependency was installed successfully at some point, which should do the trick. Where did it land? In that folder, compared to its binary neighbors nearby, this particular file seems rather an ELFant. Using the error code from your first task, search for symbols beginning with the same number (HINT: leading 0's don't count). There are three in particular whose name share a common word between them. What is the word?

Methodology 
It might be because English is not my first language, but it took me a very long time to understand this question. I'm glad that Magnet changed the wording; the original question was even harder to understand.

When looking at something like this, my methodology is to split up the question and answer the individual parts to get to the final answer.

2.1 - First piece of the puzzle
Don't panic about the failed dependency installation. A very closely related dependency was installed successfully at some point, which should do the trick. Where did it land?

We can see in the term.log that, a few minutes after the failed download, a newer version of Java was successfully downloaded and installed.


It seems both Java version 8 and 9 were downloaded. To answer the question 'Where did it land?' I searched for jdk* in the root directory of the Linux slave images and found several hits. One interesting hit was in the .bash_history file in the /home/hadoop/ directory, which contains commands executed by the user. I found several entries related to the downloading/installation/unpacking of the Java package, pointing towards /usr/local/jdk1.8.0_151.

Further inspection of this directory is part of the second piece..

2.2 - Second piece of the puzzle
In that folder, compared to its binary neighbors nearby, this particular file seems rather an ELFant. 

There are some hints/puns in this piece of the puzzle. The first one is 'binary neighbours', which I think refers to the bin directory in /usr/local/jdk1.8.0_151. Opening up that directory, we find a lot of ELF files, which relates to the 'ELFant' part of the question.

For those of you that don't know what an ELF file is: ELF stands for Executable and Linkable Format, and it is the file format for Linux executables. If you want to learn more about ELF files, this is a fantastic resource.
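Since 'ELFant' hints at both the file type and the file size, two quick commands narrow it down: file confirms which binaries are ELF executables, and ls -lS sorts the directory by size so the oversized one floats to the top (paths assume the image is still mounted as in Week 5):

$file /mnt/linux_slave1/usr/local/jdk1.8.0_151/bin/* | grep ELF
$ls -lS /mnt/linux_slave1/usr/local/jdk1.8.0_151/bin | head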

At this point I knew I was on the right track, but there was still a piece missing..

2.3 - Final piece of the puzzle
Using the error code from your first task, search for symbols beginning with the same number (HINT: leading 0's don't count). There are three in particular whose name share a common word between them. What is the word?

To solve the final piece you will probably need some prior knowledge of ELF files. This year I started teaching cyber security courses part-time, and by total coincidence we had just wrapped up a chapter on malware analysis, one of the topics being the analysis of malicious ELF files. Therefore, I knew how to analyze ELF files and their components. For the analysis of an ELF file I used a very handy utility called readelf. With the following command I extracted all symbol tables from the ELF files in the /usr/local/jdk1.8.0_151/bin directory and searched for the error code:

$readelf -s * | grep 404

When you run this command you will get the following output, which should lead you to the answer. 


One of the most important things I learned from this week's challenge: when you get a question like this, take your time, dissect the question into pieces and solve it part by part.
  



Week 7

Questions
This week's challenge consists of 3 separate questions; each sub-question was unlocked after solving the previous one.

Question Part One
What is the IP address of the HDFS primary node?

Methodology
I remembered from the previous weeks that the bash_history file contained several IP-addresses, so naturally this is where I started my investigation on the master node. I opened the image in FTK Imager and navigated to the location of the (hidden) bash_history file, which is located at /home/hadoop/.bash_history.
Closer inspection of the typed commands led me to the following interesting entries:



It seems like the user was setting up networking; the interfaces file is used in various Linux distributions to assign IP-addresses to network interfaces. The file is located at /etc/network/interfaces and contains the following information:

I have marked the IP-address in yellow. Question 1 solved, on to the next one..

Question Part Two
What is the interface name for the primary HDFS node?

Methodology
The interfaces file also displays the interface name, marked in yellow below:


That was easy, on to the next part....

Question Part Three
Is the IP address on HDFS-Primary dynamically or statically assigned?

Methodology
Again it's right there in the interfaces file :)
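For reference, a statically configured entry in /etc/network/interfaces follows this general shape (values here are illustrative, not the ones from the screenshots): the address line holds the IP from Part One, the name after 'iface' is the interface from Part Two, and the 'static' keyword (versus 'dhcp') answers Part Three.

# /etc/network/interfaces (illustrative values)
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1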


And that was it: all three challenges solved for this week.




The challenges for this week turned out to be fairly easy, but isn't that always the case when you know where to look?

Week 8

Question Part One
What package(s) were installed by the threat actor? Select the most correct answer!

Methodology
If you have been following along with the write-ups: in Week 6 we had another question about installed packages/dependencies on the slave nodes. This is where I started my search, looking at the installation logs in /var/log/apt on the main node. Based on the installation times of the packages, there was one package that stood out.



The correct answer is in the above picture, can you spot it? 

Question Part Two
Why? 
  1. Hosting a database
  2. Serving a webpage
  3. To run a php webshell
  4. Create a fake systemd service
Methodology
At this point we know PHP was installed by the threat actor, but why was that required? We've been given 4 options. Just looking through them, at first glance I thought it must be option 3 or 4, because those would make the most sense from a threat actor's perspective. However, I didn't want to guess too much, because we only have 2 attempts, so time to dig into the PHP artefacts.

Next I returned to one of my favourite artefacts, which I've been using throughout this CTF to solve challenges: the user's bash_history. Let's see if we can find some activity related to PHP. The history file is located at /home/hadoop. There is one hit on 'php':



Let's inspect this 'cluster.php' file a bit further; the file is located at /usr/local/hadoop/bin/cluster.php


Now I'm no PHP expert, but after Googling a bit on the socket_bind and shell_exec functions, I quickly learned that we might be looking at a webshell. I thought this was the answer to the question, but it was incorrect.

Only 1 attempt left, so back to the drawing board for me. I searched for 'cluster.php' on the whole image of the master node and found more hits. One that immediately sparked my interest was 'cluster.service', which was using the cluster.php file.



The file is located in /etc/systemd/system/ and has the following content:


For the people that are not Linux experts (like me): systemd is an init system used in a variety of Linux systems. Below is a shameless copy-paste that describes two of the purposes of systemd:

The fundamental purpose of an init system is to initialize the components that must be started after the Linux kernel is booted (traditionally known as “userland” components). The init system is also used to manage services and daemons for the server at any point while the system is running. With that in mind, we will start with some basic service management operations. Source

If you look at some additional documentation for ExecStart, you will find the following:

Commands with their arguments that are executed when this service is started. The value is split into zero or more command lines according to the rules described below (see section "Command Lines" below).

So this means that when the service starts, systemd executes PHP with cluster.php (a.k.a. the webshell) as its argument. So the reason for having PHP installed, from a threat actor's perspective, is to have the webshell run under a fake systemd service called 'Deamon Cluster Service'.
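The screenshots above show the actual unit file; in general such a service looks like the sketch below, reconstructed from the artefacts discussed (treat the php binary path as illustrative):

[Unit]
Description=Deamon Cluster Service

[Service]
ExecStart=/usr/bin/php /usr/local/hadoop/bin/cluster.php
Restart=always

[Install]
WantedBy=multi-user.target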




This week's challenge was the final one for Linux. Because of this challenge I learned a lot about Hadoop, package management, systemd services and more. On to memory forensics!

Week 9 

Time to get the party started with memory forensics; this week's challenge had seven (7!!) parts. The good thing is they are all related, so let's dive into it.

Question Part One
The user had a conversation with themselves about changing their password. What was the password they were contemplating changing too. Provide the answer as a text string.

Methodology
This part took me a while to figure out, mostly because I was overthinking it. I started with looking at the processes in Volatility and thought this had something to do with Slack (the messaging app), because the question mentions the user talking to themselves. However, that rabbit hole didn't yield any results. In the end I thought: why not just run strings on the memory file and grep through the output looking for 'password'? Sure enough this gives a lot of hits, but one stood out (shown below).
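The search itself is a one-liner, assuming GNU strings is available:

$strings memdump.mem | grep -i password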


Question Part Two
What is the md5 hash of the file which you recovered the password from?

Methodology
This one took me the longest. I remembered that you can do something cool with strings and Volatility: there is a plugin called strings, which you supply with the output of the strings utility, and it will map each string to a process. The command I used to do this was:

$volatility_2.6_win64_standalone.exe -f C:\Users\Korstiaan\Downloads\memdump.mem --profile=Win7SP1x64 strings -s C:\Users\Korstiaan\Downloads\strings.txt --output-file=win7_strings.txt

Here the strings.txt file is the one containing the output from the strings application, and win7_strings.txt is the version generated by Volatility with the associated processes. This took quite some time to generate, and you end up with a huge file :)
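For anyone reproducing this: the strings.txt input has to contain decimal byte offsets for the plugin to map them back to processes. Assuming GNU strings (e.g. via WSL), it can be generated like this; the second pass also picks up 16-bit little-endian (UTF-16) strings:

$strings -a -t d memdump.mem > strings.txt
$strings -a -t d -e l memdump.mem >> strings.txt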

Then I grepped for 'wow_this_is_an_uncrackable_password' 

Several hits were found for the Winword.exe process, based on the PID 3180 shown in the above output. This was very useful: now I knew which process this was about. I then used a tool that I had never used before, MemProcFS, to mount the memory image and look for the files. I really like the tool, and it's super easy to navigate around processes because it mounts the memory as a filesystem. This is what it looks like for PID 3180.


What I did next was grep for 'wow_this_is_an_uncrackable_password', and I found that it belonged to the file named 'fffffa80326de810-AutoRecovery save of Document1'. The first part of the name is autogenerated by Volatility/MemProcFS. If you open the file you'll see the password in there. Next I calculated the MD5 hash; at first it wasn't accepted as the answer, but with the help of the Magnet Forensics team (thanks Jessica) this was corrected.
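Mounting with MemProcFS is a single command; a sketch, assuming the Windows build (the memory image then shows up as a drive letter that you can simply browse):

$MemProcFS.exe -device C:\Users\Korstiaan\Downloads\memdump.mem -mount M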

Question Part Three
What is the birth object ID for the file which contained the password?

Methodology
This question was a little bit easier. I answered it by parsing the MFT from the memory image with the following command:

$volatility_2.6_win64_standalone.exe -f C:\Users\Korstiaan\Downloads\memdump.mem --profile=Win7SP1x64 mftparser --output-file=mft_parser.txt  

Then I searched for the filename 'AutoRecovery'; the birth object ID is part of the MFT record's $OBJECT_ID attribute, and the answer is right there:



Question Part Four
What is the name of the user and their unique identifier which you can attribute the creation of the file document to? 

Methodology
Based on the location of the file, I assumed it was created by Warren. Then I dumped the Security Identifiers with the Volatility plugin getsids, using the following command:

$volatility_2.6_win64_standalone.exe -f C:\Users\Korstiaan\Downloads\memdump.mem --profile=Win7SP1x64 getsids

Then I filtered for Warren, which showed that Winword.exe was run by Warren.

For the answer you only need the last part of the SID; more info on SIDs can be found here.

Question Part Five
What is the version of software used to create the file containing the password? 

Methodology
RegRipper is a tool that can be used to parse registry information; I really like it and have used it before. For this challenge I used the Software hive from Warren's system. MemProcFS had already made a listing of all the hives, so loading one into RegRipper was very straightforward. RegRipper creates a report, and if you search for Word you can easily find the version, as shown below:



Question Part Six
What is the virtual memory address offset where the password string is located in the memory image? 

Methodology
An interesting question, but then I realized I already had this information from the strings run. Remember question 2, where I created a strings output mapped to processes? I dumped only the strings with the password string in them:

If you look at the strings plugin documentation you will find:

physical_address [kernel_or_pid:virtual_address] string
So if we look at the output, we need to focus on the address on the right, the virtual address. Lucky for me, the first hit was also the correct answer :)

Question Part Seven
What is the physical memory address offset where the password string is located in the memory image? 

Methodology
This one took me a while, because I thought the answer had to be the address on the left in the picture, which should be the physical address. It is indeed the physical address, but it's not the offset in the memory image file. Once I realized that (which took a while :O), the actual offset was easy to find by loading the memory image in a hex editor. Searching for the password string gives you the address and offset, as shown below in a small utility called hexedit:


Conclusion
Wow, what a week: very interesting challenges, and I learned a lot, especially about addresses and offsets. Thanks again for creating this, Magnet Forensics and Aaron Sparling (@OSINTlabworks).










Week 10

Question Part One
At the time of the RAM collection (20-Apr-20 23:23:26 - Imageinfo) there was an established connection to a Google Server.
What was the Remote IP address and port number? format: "xxx.xxx.xx.xxx:xxx"

Methodology
When I see a network-related question, the first thing I do is run the netscan plugin in Volatility. I ran the plugin and wrote the output to a file with the following command:

$volatility_2.6_win64_standalone.exe -f C:\Users\Korstiaan\Downloads\memdump.mem --profile=Win7SP1x64 netscan > netscan.txt

There are multiple connections with Google IP-addresses, and I did some manual WHOIS searches, but then I looked at the question again: the magic word here is established. When you filter for established connections, you will only get one with a Google IP address and port.
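Filtering the saved output on the connection state makes this quick:

$findstr ESTABLISHED netscan.txt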


Question Part Two
What was the Local IP address and port number? same format as part 1

Methodology
This is a follow-up question, and the answer is in the same output as the one above, just the left part of it :)

Question Part Three
What was the URL?

Methodology
This question took me quite some time. I started by looking for the process responsible for the connection, which I found using the netstat output that MemProcFS creates by default.
Opening the netstat output and searching for the IP from Part One shows the following:



I focussed on the artefacts from this process; in MemProcFS you can find all the handles associated with a process, which showed several browsing artefacts (Cookies, Safe Browsing).

I opened the Cookies database with DB Browser for SQLite, which I had used earlier in this challenge, and found a lot of hits for Google-related websites:

Which makes sense, since the IP address is owned by Google. I tried several of the hits without luck; this turned out to be due to formatting, because the correct answer is simply the Google website.

Question Part Four
What user was responsible for this activity based on the profile?

Methodology
I didn't perform any additional analysis; I knew this had to be 'Warren', based on the browser analysis for this user and the location of the files.

Question Part Five
How long was this user looking at this browser with this version of Chrome? format: X:XX:XX.XXXXX Hint: down to the last second

Methodology
This was the hardest one by far for me; I went down a very deep rabbit hole around Google Chrome web forensics. I parsed the session files for the user to see whether there were artefacts on how long the session lasted, and almost went into a full timeline analysis to make statements on that. I found a cookie and calculated the time difference between the created and last accessed times. In the end I gave up and bought a hint.

The hint:
Solving this challenge takes some FOCUS & time :)

Which was an immediate giveaway: there are artefacts called focus time and run count in forensics. These values are stored as part of the UserAssist key in the registry; some references: 1, 2. For this challenge I had already extracted the UserAssist data, but for completeness, this is how you can get the UserAssist values with Volatility and write them to a file:

$volatility_2.6_win64_standalone.exe -f C:\Users\Korstiaan\Downloads\memdump.mem --profile=Win7SP1x64 userassist > userassist.txt

Opening the userassist output and searching for Chrome will provide you with the answer:




I've said it before: if you know where to look, it's so easy :) In this case I needed a hint to tell me where to look. Great challenge again and thanks for the fun.







Week 11

It is already the eleventh week of this challenge; time flies when you're having fun. A relatively short challenge this week with 'only' 2 questions.

Question Part One
What is the IPv4 address that myaccount.google.com resolves to?

Methodology
This one took me a lot of time, and honestly I didn't really know where to start. I looked at the netscan and netstat output that was created earlier, but no match could be made between an IP and the hostname.

Then I searched through my strings output for hits on 'myaccount.google.com' and found hits in basically all Google processes, including PIDs 3604, 3384 and 1160. This also led me nowhere; I did some analysis of the handles for each process, but I couldn't find any more leads.

From there on it got a bit messy, to the point where I searched through my strings output looking for IP-addresses starting with '172'. I did this because last week we were also asked for a Google IP-address. After a few incorrect Google IP-address guesses I found the right one. To be honest, this was a bit lucky, and I'm looking forward to seeing how this challenge can be solved in a proper way.

Question Part Two
What is the canonical name (cname) associated with Part 1?

Methodology
This required some creative thinking, which took me a while. I first started researching some DNS basics: what is a CNAME, and how can you see it for a domain? I learned a lot from this, so it was useful. I stumbled upon some tooling that could help me do all kinds of DNS requests and get me the required info, such as Whatsmydns and a browser-based version of Dig hosted on Google Toolbox. However, using the IP-address and the hostname yielded no results for the CNAME.

Eventually I thought: what if I just do a DNS request myself and monitor the traffic? This turned out to be exactly what I needed. I performed several DNS lookups with 'nslookup' on Windows with a Wireshark capture running. Afterwards I simply filtered for 'dns' in Wireshark and got the following results.
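(As an aside: nslookup's own output also reveals the CNAME; when the answer is a CNAME record, the canonical name ends up on the 'Name:' line, while the name you queried is listed under 'Aliases:'.)

$nslookup myaccount.google.com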






Week 12

The final challenge of the awesome #MagnetWeeklyCTF is here; I'm going to miss writing the blogs against the Monday deadline.

Question Part One
What is the PID of the application where you might learn "how hackers hack, and how to stop them"?

Format: #### Warning: Only 1 attempt allowed!

Methodology
Seems like a straightforward question to answer, and it is. First I searched for the string "how hackers hack" in my generated strings file and wrote the output to a file. Note the /C: flag: without it, findstr treats the space-separated words as separate search terms; /I makes the match case-insensitive. The command used:
$findstr /I /C:"How Hackers Hack" win7_strings.txt > hackers.txt

The challenge here is that there are hits for two different processes as shown below:

There are hits for 4480 (iexplore.exe) and 2672 (explorer.exe), and we only get one chance, so I couldn't just try both answers to see which one got accepted. Upon further inspection of the strings I saw that the hits related to explorer.exe looked like web links, whereas the hits on the Internet Explorer process were the more obvious fit.

Question Part Two
What is the product version of the application from Part 1?

Format: XX.XX.XXXX.XXXXX

Methodology
This was probably the most frustrating question of the whole challenge, which was largely due to the formatting of the answer. It took me two days and approximately 100 wrong guesses to finally come to the realization that I was probably looking in the wrong direction.

Based on the previous question I started to look at artefacts related to Internet Explorer. I started with registry analysis to figure out the version; some good research can be found in one of these places. It turns out that version information is stored in the following registry location:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer

Using AccessData's Registry Viewer, we can see the following information:

Here is where it gets interesting: according to the official documentation for the ProductVersion property, the format for ProductVersion is major.minor.build, which is definitely not the format requested for this challenge. Another official Microsoft source says that version information is stored in svcVersion; looking at the above picture this would be 11.0.9600.18860, which is almost the right format, so I added an extra '0', but no luck. Because I was sure the right information was contained somewhere in this registry location, I tried all possible combinations, and after a few more hours of double checking and researching I was very confused and left it there for a day. I saw that a few people had solved it, so I knew the answer had to be there.

The next day I went to the Volatility Command Reference page and just started looking for plugins that could help me solve this challenge, and I found the verinfo plugin. The official documentation for the plugin states:

"To display the version information embedded in PE files, use the verinfo command. Not all PE files have version information, and many malware authors forge it to include false data, but nonetheless this command can be very helpful with identifying binaries and for making correlations with other files."

I had never used this plugin before, but this sounded like an interesting approach: extracting the version info from the actual executables. I ran the plugin and wrote the output to a file with the following command:

$volatility_2.6_win64_standalone.exe -f C:\Users\Korstiaan\Downloads\memdump.mem --profile=Win7SP1x64 verinfo > verinfo.txt

Opening the output file, I searched for the process 'iexplore.exe' and got 5 hits on two distinct Internet Explorer versions:


You can see in the output that there is an executable in the normal Program Files directory, but also one in the x86 directory, which Windows uses for the 32-bit variants of executables. Inspection of the two executables shows the following output:




The first picture shows the ProductVersion we already discovered with the registry analysis. The second picture, however, shows the ProductVersion for the 32-bit variant, which is the correct one and in the right format.



Conclusion

That was the end of the MagnetWeeklyCTF, and what a great challenge it was. I learned a lot and ended up 5th in the world, so I'm also really happy with the end result.
