Thursday, September 27, 2012

Antibiotic-Resistant Security

I was recently reading an article about how some of the sterilization requirements in factory farms actually encourage more damaging infections, which got me thinking about antibiotic-resistant strains of disease popping up due to overuse of antibiotics. That finally led me to think about the similarities in computer security.

Since I started officially working in security around 1996, a number of us have suffered from a Cassandra complex: issuing warnings and gloomy predictions, which have usually come true, and being generally ignored. Now, over a decade later, it's too late to do some of what we should have done back then. Everything is owned. We have to retrofit now instead of building security in from the ground up. It's MUCH more expensive and difficult today than if we had started then.

Among the predictions I was making back in the early 2000s were the following:

  • We should move away from standardized IT environments where everything is centralized and the same
  • We should stop trying so hard to stop the 80% of low-sophistication attackers and focus on the 20% of attackers we really care about, who can really hurt us

Recently I have been doing a lot of incident response work, and every organization I have dealt with is suffering from bullet number one. Everything authenticates centrally, everyone is running the same OS image, usernames are conventionalized and standardized, networks are flat, and everything is hacked. I consistently see an attacker take over an entire network because once they had one machine, they had them all. Does a scientist need the same environment as a secretary? Should the sales department's Windows desktop be able to touch the production SQL database? Don't know, don't care, everyone gets the standard image. (And an attack spreads massively further as a result.)
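The containment effect of enclaves can be sketched with a toy model. This is an illustrative assumption on my part, not data from any real engagement: 100 hosts, where in the flat network shared credentials let any host reach any other, while in the segmented network reachability stops at an enclave boundary of 10 hosts.

```python
# Toy model: how far does one compromised host spread?
# The topology and numbers here are hypothetical, purely for illustration.

def spread(hosts, reachable):
    """Breadth-first search from host 0: every host the attacker can
    reach with the shared credentials is considered compromised."""
    compromised, frontier = {0}, [0]
    while frontier:
        h = frontier.pop()
        for n in reachable(h, hosts):
            if n not in compromised:
                compromised.add(n)
                frontier.append(n)
    return len(compromised)

N = 100

# Flat network: same image, same credentials, every host can touch every other.
flat = spread(range(N), lambda h, hosts: [n for n in hosts if n != h])

# Enclaves of 10: connectivity and credentials stop at the enclave boundary.
enclave = spread(range(N),
                 lambda h, hosts: [n for n in hosts if n // 10 == h // 10 and n != h])

print(flat)     # 100 -- one foothold owns everything
print(enclave)  # 10  -- damage contained to one business unit
```

The model ignores everything interesting about real post-exploitation, but it captures the point: in the flat case a single foothold is total loss, while segmentation caps the blast radius at one enclave.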

That the industry has tried hard to solve the low-hanging 80% of attacks is obvious from the "solutions" on offer: IDS, AV, firewalls, failure logging, scan-exploit-report penetration tests, and so on. These have done a decent job of stopping scans, worms, and mass malware for the most part, and have failed miserably at stopping the remaining 20%. So why is this a problem? 80% is pretty good, right?

Well, let's look at the differences between the two types of attackers:

The 80% (low-sophistication attackers):
  • Goals
    • Might steal your SSN or CC
    • Might use your system as a bot in a DDOS
    • Might redirect you to advertisements
    • Might strip your WoW character
    • Might deface your website / embarrass you
  • Techniques
    • Mass scans
    • 1-day exploits (a patch is often available)
    • Exploiting poor web coding 
    • SQL injection
    • Mass malware

The 20% (sophisticated attackers):
  • Goals
    • Will try to steal your intellectual property and use it for strategic advantage
    • Will gather intelligence against you to gain an edge in negotiations, legislation, bids, etc.
    • Will destroy the master boot record of all your desktops to financially damage your country
    • Will use you to attack your customers to achieve the above
    • Will steal your source code to find 0day, insert backdoors or sell it to competitors
  • Techniques
    • 0day
    • Targeted spear phishing
    • Sophisticated post exploitation & persistence
    • Covert channels
    • Anti-analysis & evasion
    • Malicious insiders, supply chain, implanted hardware
    • Mass data exfiltration
    • Crypto key stealing
    • Trust relationship hijacking

So what we have effectively done is build an environment where all target hosts are uniformly the same, and ensure that the only "germs" who can get in are the ones we can't detect, can't stop, and can't deal with. Superbugs.

What's worse is that the more we get compromised and hurt by the 20%, the more money and resources we throw at trying to solve the 80%, and the deeper we bury our heads in the sand about the attackers who really want to hurt us and are good at doing it. We've pushed the motivated attackers away from the easy-to-deal-with techniques and toward the ones we can't solve very well and that are very expensive to counter.
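The selection effect above can be put in rough numbers. Assume, purely for illustration (these block rates are my invented figures, not measurements), that 80% of incoming attacks are low-skill and 20% are sophisticated, and ask what fraction of the attacks that *succeed* come from the sophisticated group:

```python
# Back-of-the-envelope model of the "superbug" selection effect.
# All rates below are hypothetical assumptions for illustration only.

def frac_sophisticated(block_low, block_high, low_share=0.8):
    """Fraction of *successful* intrusions that come from sophisticated
    attackers, given block rates against each attacker population."""
    low_ok = low_share * (1 - block_low)          # low-skill attacks that get through
    high_ok = (1 - low_share) * (1 - block_high)  # sophisticated attacks that get through
    return high_ok / (low_ok + high_ok)

# No defenses: the breach population mirrors the attacker population.
print(round(frac_sophisticated(0.0, 0.0), 2))    # 0.2

# Defenses stop 95% of mass attacks but only 10% of targeted ones.
print(round(frac_sophisticated(0.95, 0.10), 2))  # 0.82
```

Under those assumed numbers, defenses aimed at the low end flip the mix: roughly four out of five successful intrusions now come from the attackers we can't handle, even though they were only one in five of the attempts.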

There are a few possible solutions:
  • Build active response capabilities (offense). This is messy and will cause a lot of problems, but no one ever won a war with high walls and defense alone. (Maginot Line, anyone?)
  • Start throwing money and resources at the 20% problem. PCI is not going to do it. Compliance pen tests are not going to do it. Research into virtualizing every process, location-aware document formats, degradation of service for anomalous connections, better intelligence, data sharing, and correlation: in short, making things increasingly expensive for the sophisticated attacker is what we should be looking at.
We have to stop popping antibiotics and figure out how to cut out the flesh eating bacteria.



The Ubiquitous Mr. Lovegroove said...

IT standardisation reduces costs and increases efficiency and often creates stronger controls, e.g. everyone in AD has to change password every 30 days.
It just may leave more money for security.

valsmith said...

Yeah, that's the line I always hear from corporate security people, and you definitely have a point. It comes down to your particular business needs. You can standardize in enclaves rather than across the whole enterprise: scientists can be standardized one way, HR staff another, and execs a third. However, every Fortune 100/500 I've penetrated that was flat across everything, I've taken over completely in ~2 days or less. Every one I've attacked that had mixed and segregated business units took me much longer and was more likely to detect me. Security may not be everyone's #1 business priority, and that's a valid risk assessment to make.