Saturday, March 16, 2013

A Cold Day in E-Commerce - Guest Post


This guest post appears courtesy of one of my teammates, Jonathan Spruill, and shows some of the extremely cool work we get to do in our Incident Response practice at Trustwave's SpiderLabs.

Recently an investigation led me to research the vulnerabilities described in the following CVEs: CVE-2013-0625, CVE-2013-0629, CVE-2013-0631, and CVE-2013-0632. From the Adobe Security Bulletin (http://www.adobe.com/support/security/bulletins/apsb13-03.html): "vulnerabilities that could permit an unauthorized user to remotely circumvent authentication controls, potentially allowing the attacker to take control of the affected server". This sounds particularly ominous, and in a moment we will see just how damaging the attack was in this case. As with most e-commerce cases, this investigation relied very heavily on the review of web access logs. Using basic tools to analyze the log files, I quickly found the entries where the compromise occurred.
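If you want to run the same kind of first pass against your own logs, pulling every request that touches the ColdFusion administrative paths is a quick way to surface this activity. A minimal sketch, assuming a standard Apache access log named access.log (the filename is a placeholder):

# first pass: every request that touches the ColdFusion admin directory
grep -i "/CFIDE/" access.log

# narrow it down to the administrative component abused in this attack
grep -i "administrator.cfc" access.log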





Here the attacker checked to see whether the site had already been compromised.  I found this source: https://www.it.cornell.edu/services/alert.cfm?id=2419&back=security, which indicated that h.cfm, help.cfm, and others were variations of the name used for the webshell in this attack.


Here the attacker accessed the page at the heart of this exploit.  When the attacker requested the administrator.cfc page with specific options, the server returned the base64 encoded version of the administrator password hash.
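If you come across a similar encoded value in your own logs, decoding it only takes a second. A quick sketch; the string below is a made-up placeholder, not the value from this case:

# decode a base64 value captured from the logs (placeholder string shown)
echo 'QUJDMTIzNEVGNTY3OC4uLg==' | base64 -d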


The attacker then passed this encoded and hashed password back to the server.  The server allowed access to the mappings.cfm page without complaint.  I was puzzled for a bit as to why the attacker would go directly to this page.  In recreating the exploit, I discovered that this page holds the path information the attacker would need in order to upload his malicious files.


The attacker accessed the page scheduleedit.cfm and created the task which would upload the malicious files.


In this step of the attack we presume the attacker generated the task that would upload the webshell.  We have to presume this because there is no record of the successful task creation in the web access logs, or any other log for that matter.  By default, no logging is enabled for scheduled tasks.


An interesting feature of these scheduled tasks is that they can be run either on a time schedule or launched directly.  ColdFusion can be configured to use these tasks to generate web content from static data sources.   However, an attacker can arbitrarily load any file from any location.  In this case the file uploaded was a webshell.


Here the attacker forced the newly created task to run.


And in the next transaction, the attacker deleted the task.  This very effectively covered his tracks within the ColdFusion management interface, since, as mentioned previously, logging of scheduled tasks is not enabled by default.
   

In the last step in this sequence, the attacker accessed the newly uploaded webshell.  In the postmortem analysis of the site’s codebase, I was able to find and further examine the webshell.

The webshell was not encoded or obfuscated as we typically see in our investigations.  It includes a fair bit of functionality: upload and download files, search for filename or file content, and issue SQL queries.  If you are interested in the code used in the webshell, I found a copy on pastebin: http://pastebin.com/2v3PMx4M

The conclusion of the investigation showed that a customized database scraping script had been uploaded and had successfully downloaded a significant amount of cardholder data.

Not quite satisfied with seeing the attack in the logs, I wanted to further understand how this exploit worked.  I was able to duplicate the attack in a test environment using a browser, and with the help of my new favorite proxy tool, ZAP from OWASP, I could see in better detail the key data elements passed from browser to server and back again.  Still, I couldn’t help but feel that this was not the best representation of the attack seen in the logs.  I first searched all the typical places for a proof of concept to try in my test environment.  I exhausted my Google-Fu with no success. Left with no other option, I decided I would write it myself.

So why would a forensics guy want to write an exploit?  First, like folks who climb Mt. Everest, because it’s there.  Second, since I couldn’t find an example of the exploit code, I wanted to see if there was something else that could be learned and identified as an Indicator of Compromise (IOC) for future investigations.  Third, it gave me the opportunity to work on my scripting skills and learn a few things along the way.

Rather than post the script itself for all the kiddies to sponge up, here's a video demo of the exploit against a default install of ColdFusion 9.0.2.




The script duplicates the most important behavior in this attack, which is the POST request to scheduleedit.cfm, forcing a task to run, followed by immediately deleting the task.  These IOCs, along with unusual filenames such as h.cfm, a.cfm, help.cfm, or similar files in the /CFIDE directory, should cause any web administrator to investigate their logs more closely.  An interesting IOC missing from this attack is directory traversal, as seen in other ColdFusion exploits.
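If you want to sweep your own environment for these indicators, a few quick searches cover the basics. A sketch only; the log filename and webroot path are placeholders:

# POSTs to the task scheduler page, the core of this attack sequence
grep -i "scheduleedit.cfm" access.log | grep "POST"

# short, oddly named .cfm files being requested under /CFIDE
grep -iE "/CFIDE/(h|a|help)\.cfm" access.log

# the same filenames sitting on disk in the webroot
find /path/to/webroot/CFIDE \( -iname "h.cfm" -o -iname "a.cfm" -o -iname "help.cfm" \)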

As for mitigating the vulnerabilities, Adobe recommends setting a password for the RDS user account, preventing access to the CFIDE path, and applying the patch per APSB13-03.  In my testing, setting the RDS password was sufficient to prevent this attack.
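Once the CFIDE restrictions are in place, it's worth verifying them from the outside. A quick sketch using curl (the hostname is a placeholder, and the exact paths depend on your install); anything other than a 403 or 404 coming back for the admin paths deserves a closer look:

# check whether the ColdFusion admin interfaces are still reachable externally
curl -s -o /dev/null -w "%{http_code}\n" http://www.example.com/CFIDE/administrator/
curl -s -o /dev/null -w "%{http_code}\n" http://www.example.com/CFIDE/adminapi/administrator.cfc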

Tuesday, February 5, 2013

The End Game: Part 1

Last week I posted about some of the reconnaissance tools that attackers are using against E-Commerce sites, then about what some of the evidence looks like in the logs. Now I want to go over what they are doing with their ill-gotten access.

Attackers aren't just in it for the fun anymore. While we still see our share of political defacements and attacks that are pulled off just to prove a point, most of the cases that forensics firms like mine are working involve the theft of data. Attackers are stealing Personally Identifiable Information and selling it to crooks who use it to defraud Medicare/Medicaid and other social programs. The same data can be used to commit classic "Identity Theft" and open accounts under other people's names.

Even easier is the theft of Cardholder Data; there is a sophisticated black market built around the sale of credit card numbers. I talk about it in my conference presentation "Hunting Carders for Fun and Profit" (coming to a con near you in 2013), and it really blows people away how readily available the hardware, plastics, and card numbers are. It's really easy for an attacker to gather card numbers and sell them in bulk to a middleman who specializes in parting out these "dumps" for a set price.

All of this data capture and sale really is the "End Game". It's how they get there that I want to talk about.

The top way I see data being exfiltrated is SQL injection. I talked about this in my last post and put up a quick example. I usually see an attacker hammer away at a site for a couple of days with different tools, but once they find that vulnerable page, it's over in a matter of minutes or hours. This is a very direct kind of attack. They poke around until they find a way to directly access your DB and just suck all the records right out. It's very effective but not terribly sophisticated (usually, see Hunting Carders for a very sophisticated attack).

Saturday, February 2, 2013

New Year, New Look, New Post: How did they find me? Part 2.

Last post we went through some of the free utilities available to attackers for reconnaissance purposes.  The utilities I talked about in that post are all things that I have seen used over and over again in successful attacks. What I did not touch on was what these attacks look like in Apache and IIS log-files.

Let's start with some basic search methodology. The idea here is to "read" through a log file and search it for specific terms. You can use grep by itself, or sed, awk, gawk, or a dozen other commands. If you use a Linux workstation or the Windows ports of Linux utilities, it will look something like this:

grep -i "keyword" -r *

If the output doesn't look the way you want it to or you are having trouble targeting specific files with grep alone, you can refine somewhat by stacking commands like so:

strings -s *.log | grep -i "keyword"


I guess the big secret here is the keywords. They will vary slightly from case to case but, generally speaking, SQL injection can be identified by searching for union select, xp_cmdshell, concat, and also by looking for specific database table names in the logs. The last of these is especially true if you know what type of data is at risk and where it resides. One of my favorite PCI-related searches is to look for "cvv" in the logs, or "cc_number".  If you are concerned about data being snatched from a particular database, grab the table names and run a search. It's very common to see fields like "First_Name, Last_Name, Address".
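Putting those keywords together into a single pass, and then summarizing which addresses are doing the probing, looks something like this. A sketch only; the log name is a placeholder and the keyword list should be tuned to your own schema:

# one pass over the log for the usual SQL injection and PCI-relevant keywords
grep -iE "union select|xp_cmdshell|concat|cvv|cc_number" access.log

# count hits per source IP to see who is doing the probing (field 1 in combined-format logs)
grep -iE "union select|xp_cmdshell|concat|cvv|cc_number" access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head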


 OUCH!

Friday, May 25, 2012

How did they find me?

I wanted to learn more about E-commerce and the types of breaches that take place, so I volunteered to take the bulk of the E-comm cases for my team.  Over the last 18 months I went from zero to "go-to guy" and I learned a lot. Now it's time to share.

From what I've seen, there are 3 main phases to a successful website breach:

1. Reconnaissance - An attacker singles out your site and begins to hammer away with port scans, Nessus plugins, automated SQL injection attacks, etc.

2. Infiltration - This is the actual attack. They exploit a vulnerability to upload code, bypass credentials, or brute-force their way into an admin console or SSH, etc.

3. Exfiltration - Attackers access your data and take what they want. In my line of work I see a lot of financial data gathered and stolen, but I have also worked defacements, theft of Personally Identifiable Information (PII), and breaches of copyrighted information.

I'm going to tackle these 3 points 1 blog post at a time. The first one on reconnaissance is below.

Sunday, February 12, 2012

Pulling timelines from unsupported filesystems.

The conference season is closing in and workload is finally easing up enough to put together a couple of blog posts. This one has been in draft status for months.

If you use timelines for investigative purposes as much as I do, you have no doubt run into occasions where fls does not support the filesystem you are working with.  I have been working a lot of E-commerce lately and bumping heads with several different variants of Linux filesystems.


Here's the list of filesystems supported by fls:
 

Notice that ext4, xfs and several others are not in the list.
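If you want to check the list on your own install, fls will print the filesystem types it supports:

fls -f list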

I had to go digging for a way to do this for an xfs filesystem that I was working with. Fortunately, I have a decent library of books on filesystems, forensics, security, etc. I remembered reading about using the find command to generate timeline data and I was able to find it eventually in "Incident Response & Computer Forensics" (Mandia, Prosise & Pepe). Thanks guys!

They described the process of using the "find" command to generate the timeline file, but not all the steps to get there.




The steps:

1. Mount your filesystem read-only. Here's the command on a *nix box (this can be done for some filesystems in Windows with a range of different utilities):

mount -t xfs -o ro,loop,noexec /media/USB/image.001 /mnt/apachesdc


2. Change directories into your newly mounted fs:

 cd /mnt/apachesdc


3. Run your command and output it to a file for later use:

find . -printf '%T+ , %A+ , %C+ , %p \r' > /path/to/output_file.csv

This may require a little massaging to get it to work with your particular distro. The man pages are your friend.




This find command will output Mtime, Atime, Ctime, and the full path and filename, as well as end each line with a carriage return. The commas in between add a common delimiter for later use with Excel.

Using the capital T, A, and C with the find command allows you to specify the time/date output format you want. In this case I wanted yyyy-mm-dd+hh:mm:ss. This format is natively understood by Excel and makes the spreadsheet much easier to sort.

Open the CSV file with Excel and set “comma” as the delimiter. It will neatly organize each row with MAC times and filename.  Add a row at the top and label each field, lock that row, and you have a spreadsheet that is sortable by M, A, C, or filename.
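If you would rather stay on the command line than bounce out to Excel, the same file sorts cleanly with sort. A small sketch, using the output filename from the step above:

# sort the timeline by the first comma-delimited field (mtime)
sort -t, -k1,1 /path/to/output_file.csv > /path/to/output_file_sorted.csv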

You don’t get all the extras that you get from using fls and mactime. No GUID, *, *. But what you do get is a very functional timeline for multiple unsupported filesystems.


I've decided to write a talk that focuses a little more heavily on E-commerce this year. My co-workers at Trustwave have the POS malware well covered, and DEFCON/SecTor were fantastic events last year.

My working title is "Ask me about hunting carders for fun and profit" and I plan to include some stupid hacker tricks, some malware, and a quick look at data reduction using the command line.

Stay tuned for a series of posts on some E-comm breaches. I have drafts in the works.


On a more personal and non-technical note, my whole family has been involved with the St. Baldrick's charity for 5 years now. We co-organize a head shaving event to raise money for childhood cancer research. My little man has been shaving his head (totally voluntarily) since he was 4 years old. Do me a favor and take a look at the charity, see if there are any local events you can support or drop a couple bucks in one of our donation boxes.

My donation page.
Max's donation page.


Thanks,
Grayson



Saturday, October 15, 2011

MAC(b) Daddy at SecTor

I'm proud to announce that I was invited to deliver I'm your MAC(b) Daddy at SecTor 2011, as well as take part in a full day of training for the Royal Canadian Mounted Police.  If you haven't heard about SecTor, read here.  It's Canada's largest security conference and is described as "The Canadian DEFCON".



I feel honored to have my talk accepted and I'm looking forward to meeting new peeps.

Hope to see you there!

Tuesday, August 16, 2011

I'm your MAC(b) Daddy at DEFCON 19


It's hard for me to believe that I haven't updated this blog since March 26th. The last 5 months may have been the busiest of my entire life.  Three members of my team (including me) worked a nationwide breach spanning over 80 locations, all using the same Point of Sale software.  I also submitted to, and was accepted at, two conferences: DEFCON 19 and SecTor in Toronto.  I have been working diligently to make sure my presentation slides were clear and up to date, and to make sure I was ready to get up there and speak in front of hundreds of people. I'm making excuses for myself here; the better thing is probably just to get on with it.

For a little background on Timestomping and why attackers are doing it, see Chris's post "Timestomping is for Suckers".


I presented a talk on Supertimelines and identifying anti-forensics at DEFCON this year.  Aside from some minor issues trying to pull off a live demo, the talk went pretty well.  I had the unceremonious duty of sharing a time slot with Dan Kaminsky, so I’m very happy that I managed to fill 2/3 of the room.  I’ve already started receiving a number of questions, links to others’ research, and other queries related to MAC(b) Daddy.  Keep them coming, I’m more than happy to help out where I can.

I am presenting MAC(b) Daddy again at SecTor in October. Once that conference is over I will post the full content here on my blog.

The first communication I received was a link to a blog that was exploring different Timestomping methods using Windows PowerShell and the timestomp utility I mentioned in my talk.  There is some great research and concise info about manipulating timestamps.  The link was sent to me as a “Hey, this guy was able to modify the $MFT, and you said that couldn’t be done.”  Well, sort of. What I’m saying is that the $File_Name attribute can’t be modified by anything known, and the blog above proves my point. Manipulation of the $Standard_Info attribute is, dare I say, easy at this point.  The second set of attributes in the $MFT is still untouched by timestomp, PowerShell, Perl scripts…anything.  By comparing the two you can see the changes made to the system by these measures.
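One easy way to eyeball both sets of timestamps for a given file is istat from The Sleuth Kit, which prints the $STANDARD_INFORMATION and $FILE_NAME times for an NTFS entry. A sketch only; the image name, file path, and MFT entry number are placeholders:

# find the MFT entry number for the file of interest
ifind -f ntfs -n "windows/system32/suspect.dll" image.dd

# print the full record, including both $STANDARD_INFORMATION and $FILE_NAME times
istat -f ntfs image.dd 12345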

Directly after the talk I was approached by two Chinese gentlemen who had a number of questions about trying to modify the $MFT with a kernel mode driver. I asked, “Why, what are you working on?” They replied with a smile and said “nothing”.  This is DEFCON after all.  This is certainly an interesting project, but one that would require extreme caution. Modifying the $MFT on the fly could be extremely detrimental to a system. Move the file table entries by one block and you just turned your system into a brick.  Not to mention that you would have to query a protected file, make the change, leave the sequence number undisturbed, and release the $MFT before the system itself tried to write to it again. Past experience suggests that you have 10-15 milliseconds to perform these actions.

Third, and maybe the most fulfilling for me, I was contacted by another forensicator who is working on a homegrown utility for parsing the $MFT and auto-comparing the entries for time anomalies.  This functionality is included in the latest version of Log2Timeline as well. I have not used this particular module yet, but I plan to in the next couple of weeks. His questions related to some anomalies in a number of the core OS files (like ntldr).  I am by no means an expert here, but the way I understand the anomalies in these files follows:  One of the only ways to actually pull off any “manipulation” of the $F_N attribute is to create a file on a second volume (D:\), modify the $S_I timestamps, and then move that file to the main volume (C:\).  In this case the M attribute of $F_N will match the M attribute of the moved file.  The same does not hold true for the B attribute, which creates a whole other anomaly in and of itself.  When we are talking about these core OS files, I think there are two things going on here.
1)   The system isn’t a system yet; it has not gotten to the point where the system time has been determined. This is the same reason that you see registry entries in a supertimeline start in 1969 and 1970. The system has no baseline to set those registry write times to, and the possibility exists for the same issue with the $MFT.
2)   These files are not being created out of thin air; they are being moved from another volume (the install CD/DVD), and some of the timestamps from when this code was written are being maintained.

I promised the crowd to start updating my blog more frequently in support of MAC(b) Daddy. So here I am, a man of my word.

More as the questions and comments flow in.