Last Friday, I had the opportunity to introduce some aspects of software security and threat modeling to the UT Student Software Engineering Group, which included a mix of undergraduate and graduate students as well as faculty. The presentation format was more of an open discussion: I answered questions as I spoke, and we engaged in conversation about the topic of each question. I enjoy this format because the presentation evolves with the group rather than with the presenter. The presentation is up on the site and located here.
During the course of the presentation, several interesting questions came up that I was not prepared to answer completely, and I felt it would be worthwhile to research them and then share the questions and my thoughts with others. One question concerned the economics of software security: is the security integrated into the software development life cycle worth the effort? Another asked whether threat modeling and software security are effective in reducing vulnerabilities and other unwanted issues such as bugs. The final question was specific to software developers: what can be done besides writing good code? There are a number of great references on the topic of software security, and my comments may only scratch the surface. If you want to learn more, I have provided some URLs and books I have used to get you started. As a starting point, I used a presentation given by Chris Peterson on Microsoft Windows 7 Security at XCon, put on by XFocus [1].
The first topic that came up during the presentation concerned metrics and how security helps improve the software engineering process. There were also questions about cost, specifically whether security makes the software engineering process more expensive. My answer to these questions is: it depends.
First off, security is one ingredient in the software engineering process. If everything in the process is done correctly and security is integrated into it, as the SDL describes, then over time I expect the cost of the software to be lower than it would be without security. Cost can be driven down in a number of ways. In 2002, RTI published a report on poor testing standards and their impact on the economy [2], estimating both the costs of inadequate QA and testing and the potential savings. With that in mind, let's look at QA and testing. These are some of the hardest and most laborious tasks in the software engineering process, outside of the actual development. When tools such as threat modeling or fuzzing are employed, these costs can be lowered. Threat modeling can be used to identify how the application will be used and abused (e.g., test cases and abuse cases), along with identifying the more sensitive and critical areas of the software and the areas where automated testing can be performed. One inherent benefit is identifying and testing the areas that need it most, rather than testing the entire product equally.
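To make that concrete, here is a minimal sketch (in Python, with hypothetical components, threats, and risk scores) of how abuse cases captured during threat modeling can be recorded as data, so that testing effort can be focused on the riskiest areas first:

```python
# A minimal sketch: abuse cases captured during threat modeling, recorded as
# data so testing effort can be focused on the riskiest components rather
# than spread evenly across the product. All entries here are hypothetical.
from dataclasses import dataclass

@dataclass
class AbuseCase:
    component: str    # part of the system the threat targets
    threat: str       # STRIDE category, e.g. "Tampering"
    description: str  # how an attacker might abuse the feature
    risk: int         # 1 (low) to 5 (critical), assigned during TM review

ABUSE_CASES = [
    AbuseCase("login form", "Spoofing",
              "replay a captured session token", 5),
    AbuseCase("file upload", "Tampering",
              "upload a crafted archive that escapes the extraction directory", 4),
    AbuseCase("report viewer", "Information Disclosure",
              "request another user's report by guessing its ID", 4),
]

# Allocate testing effort to the highest-risk areas first.
for case in sorted(ABUSE_CASES, key=lambda c: c.risk, reverse=True):
    print(f"[risk {case.risk}] {case.component}: {case.description}")
```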
Automated testing frameworks can also be developed or augmented to meet the automated testing needs of the project. From the TM, test patterns and cases can arise, and these can then be fed into the testing framework. This helps save money because machine time is cheaper than human hours. Additionally, the framework and test patterns can be kept in a library for future use, so the fuzzing investment can be reused and even built into other projects. There may be higher overhead up front due to threat modeling and the development of an automated security testing framework, but there is also a potential savings over the life of the project, and of other projects as well. As for hard figures on the savings from these activities, I do not have any. A question did arise about the cost of a security breach, and I found a figure of about $202 per record [3,4], but there is no comparable metric for money saved. There are other places where money can be saved, such as a streamlined patching process or reliability gained as a byproduct of security, but for brevity we will continue on to the other questions that arose.
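As an illustration, here is a minimal sketch of what such a reusable fuzzing harness might look like. Everything here is hypothetical: parse_record() stands in for the real code under test, and the seed patterns represent the small library of inputs carried between projects.

```python
# A minimal sketch of a reusable fuzzing harness. parse_record() is a
# hypothetical stand-in for the code under test; PATTERNS is the seed
# library that can be reused and grown across projects.
import random

PATTERNS = [b"", b"A" * 4096, b"%n%n%n", b"\x00" * 64, b"\xff" * 64]

def mutate(seed: bytes) -> bytes:
    """Flip a few random bytes in a seed input."""
    data = bytearray(seed or b"\x00")
    for _ in range(random.randint(1, 8)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def parse_record(data: bytes) -> None:
    """Stand-in for the real parser; contains a planted 'bug' for the demo."""
    if data[:2] == b"\xff\xff":
        raise ValueError("unhandled header")

def fuzz(iterations: int = 10_000) -> None:
    for i in range(iterations):
        case = mutate(random.choice(PATTERNS))
        try:
            parse_record(case)
        except ValueError as exc:
            print(f"iteration {i}: input {case[:8]!r}... triggered {exc}")
            break

if __name__ == "__main__":
    fuzz()
```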
While my first point suggests that better security is possible, it does not prove it with empirical data. The second discussion we had was about improved product security. Since Microsoft began using the SDL in 2002, they have seen a sharp decrease in the number of critical vulnerabilities in their operating systems [5]. The following figure is excerpted from the H1 2008 Desktop OS Vendor Report.
[Figure: Vulnerabilities by Product and Severity (Reduced Linux Configuration), from the H1 2008 Desktop OS Vendor Report, p. 13]
The figure shows vulnerabilities (critical, medium, and low) by OS, comparing Windows Vista, Windows XP SP2, Mac OS X, Ubuntu, and RHEL, with Windows Vista showing far fewer vulnerabilities than the other desktop platforms. Additionally, Microsoft’s Malware Protection Center publishes a graph (page 15 of [6]) showing the infection rates of each of their operating system platforms, with Windows Vista’s rate much lower than most of the others. The only OS with fewer infections is Microsoft Server 2003 SP2, which could be for more than one reason: 1) it has fewer deployments, 2) it is not used for the everyday activities that expose consumer desktops to threats (e.g., no changes to default security settings), or 3) it is more secure. Windows Vista is one of the flagship products of the SDL process. When Windows XP and Windows Vista are compared on vulnerabilities and infection rates, one can conclude that a well-executed SDL can help build a more secure product.
Another issue that came up is the insider threat and how to model it, more specifically the Byzantine user who has some motive to do harm. Insider threats are among the most expensive and dangerous problems a security system must address. Here, threat modeling can help identify critical assets, data, systems, etc., and identify mitigation strategies. First, the Principle of Least Privilege should be applied. This knocks down many of the most significant risks, because users are granted only the privileges that the role they fill requires; for example, a banking clerk should not be given administrative access to their host. Technical and human checks and balances (i.e., controls) should be integrated into architectures, designs, and implementations. Controls might require multiple authoritative personnel to sign off before critical changes are allowed. Controls might also come in the form of policy, but a rogue user with ulterior motives may circumvent the system to meet their own objectives, so logging and review should be heavily integrated into the system as well. As an example, any work a network engineer does should be checked by a peer to prevent malicious or catastrophic events. In one case, a major corporation was spared millions by a review of server management scripts [7], though its security policies should still be heavily scrutinized and rewritten. There are as many ways to circumvent security measures as there are to mitigate them. TM will help assess the risk, place a value on how much mitigation is necessary, and determine where the mitigation needs to be employed to thwart these types of attackers. In any environment, Defense-in-Depth is key to ensuring overlapping security coverage and safe failure. As in software engineering, there is no silver bullet.
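As a sketch of one such control, the following hypothetical dual-control ("two-person rule") check refuses self-approved critical changes and logs every decision for later review. The names, roles, and change types are invented for illustration:

```python
# A minimal sketch of a dual-control ("two-person rule") check with audit
# logging. AUTHORIZED_APPROVERS and the change descriptions are hypothetical;
# the point is that a critical change requires sign-off from a second,
# authorized person, and that every decision is logged for review.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
AUTHORIZED_APPROVERS = {"alice", "bob", "carol"}  # e.g., senior engineers

def apply_critical_change(change: str, requester: str, approver: str) -> bool:
    if approver == requester:
        logging.warning("DENIED %r: %s tried to self-approve", change, requester)
        return False
    if approver not in AUTHORIZED_APPROVERS:
        logging.warning("DENIED %r: %s is not an authorized approver", change, approver)
        return False
    logging.info("APPLIED %r: requested by %s, approved by %s",
                 change, requester, approver)
    return True

apply_critical_change("push server management script", "dave", "dave")   # denied
apply_critical_change("push server management script", "dave", "alice")  # allowed
```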
One final topic that came up in the presentation was technology: specifically, what else is there besides secure code? On the developer side of the fence, there are a number of exercises that can be performed to ensure code security, from development up to deployment of the product; these might include code analysis (e.g., static or dynamic analysis), code reviews, policies regarding unsafe APIs, input validation, code signing, and obfuscation. There are also technologies and policies that can supplement code security once the software is in production. As I mentioned, Defense-in-Depth is the key to a successful security plan, and technologies and policies need to be chosen to accommodate and secure software products. For example, when a product reaches RTM, all debugging symbols should be stripped and stack checking should be enabled to prevent arbitrary control from stack overflows. The deployment platform should be secure by default, meaning features such as DEP and memory randomization are enabled. Depending on the deployment scenario, other steps may be taken: if this is a large IT project, the systems involved can be reviewed against secure configuration guidelines, network technologies for logging and access control can be used, and so on. There is an entire laundry list of things that can be done outside the code level.
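For instance, here is a minimal sketch of allow-list input validation, one of the code-level practices mentioned above. The username policy is hypothetical; the principle is to accept only inputs matching a known-good pattern rather than trying to enumerate every bad one.

```python
# A minimal sketch of allow-list input validation. The username rule is a
# hypothetical policy: lowercase letter first, then 2-31 more characters
# drawn from lowercase letters, digits, and underscore.
import re

USERNAME_RE = re.compile(r"[a-z][a-z0-9_]{2,31}")

def validate_username(raw: str) -> str:
    """Return the input unchanged if it matches the known-good pattern."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError(f"rejected input: {raw!r}")
    return raw

for candidate in ["jsmith", "j", "admin'; DROP TABLE users;--"]:
    try:
        print("accepted:", validate_username(candidate))
    except ValueError as exc:
        print(exc)
```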
Security is all about engineering. There are a vast number of things that can be done to ensure a successful and secure product development cycle, and I feel that a TM is the keystone of this success. It helps everyone understand the goals of the product and of each component, leading up to identifying and understanding how to contain or handle threats. I equate threat models for software engineers to a battle plan for war fighters: the threat model provides insight into the security landscape, it helps flesh out logistical and strategic details, and everyone should come out with an understanding of what they need to do to make the project a success. I could go on about this topic, because I love to discuss it and educate others about security. I have been doing this for a while, and I really do have much to share, but given that this has all been said at one time or another, I will simply present some links of interest.
Links to OWASP regarding information and application security:
OWASP
OWASP Security Principles (for developers and designers, and not just software folks)
OWASP How-to Articles
Microsoft SDL and Software Security Information:
Microsoft’s SDL Home Page
Microsoft’s Threat Modeling Tool
Microsoft’s Security Intelligence Reports and Malware Protection Group
Here are just a few books I have read or keep available in my library:
Coding Standards and SDL Practices
M. Howard and D. LeBlanc, Writing Secure Code, ed. 2. Redmond: Microsoft Press, 2003.
M. Howard and S. Lipner, The Security Development Lifecycle. Redmond: Microsoft Press, 2006.
G. McGraw, Software Security: Building Security In. Upper Saddle River: Addison Wesley, 2006.
Software Testing and Assessment
G. Hoglund and G. McGraw, Exploiting Software: How to Break Code. Boston: Addison Wesley, 2004.
M. Sutton, A. Greene, and P. Amini, Fuzzing: Brute Force Vulnerability Discovery. Upper Saddle River: Addison Wesley, 2007.
M. Dowd, J. McDonald, and J. Schuh, The Art of Software Security Assessment: Identifying and Preventing Vulnerabilities. Upper Saddle River: Addison Wesley, 2007.
Bibliography:
1. C. Peterson, “Windows 7 Security Overview,” XCon2008 XFocus Information Security Conference, November 2008.
2. RTI, “Planning Report 02-3: The Economic Impacts of Inadequate Infrastructure for Software Testing (2009),” NIST [Online]. Available: http://www.nist.gov/director/prog-ofc/report02-3.pdf
3. Walt, “Cost of a Security Breach (2009),” PCI DSS News and Information [Online]. Available: http://www.treasuryinstitute.org/blog/index.php?itemid=227
4. “2008 Annual Study: Cost of a Data Breach (2009),” Ponemon Institute [Online]. Available: http://www.encryptionreports.com/2008cdb.html
5. J. Jones, “H1 2008 Desktop OS Vendor Report (2009),” Blogs.Technet.com [Online]. Available: http://blogs.technet.com/security/attachment/3140955.ashx
6. “Microsoft Security Intelligence Report, Volume 5 (January–June 2008) (2009),” Microsoft [Online]. Available: http://www.microsoft.com/downloads/details.aspx?FamilyId=B2984562-47A2-48FF-890C-EDBEB8A0764C
7. K. Poulsen, “Fannie Mae Logic Bomb Would Have Caused Weeklong Shutdown (2009),” Wired [Online]. Available: http://blog.wired.com/27bstroke6/2009/01/fannie.html
Post originates from here.