Testing in the SDL
“You can’t test quality in.” It’s a truism coined long ago and an accepted fact of software development. Yet, for security, testing is arguably the most talked-about aspect of the Security Development Lifecycle (SDL). When we get security wrong, the first criticism we almost always hear is, “Didn’t you guys test this thing?” It is no great stretch to say that many of the most famous industry security folks made their reputations by finding vulnerabilities (through, no doubt, testing). You simply can’t avoid the subject of testing when you talk about security, and you can’t be sure you’re secure without testing.
We often get questions about SDL-required security testing, and too often these questions deal exclusively with fuzz testing. But equating fuzz testing with security testing couldn’t be further from how security testing is actually treated inside Microsoft. With this post, I want to shed some light on what Microsoft actually does for security testing. In a follow-up post on this blog, Rob Roberts will talk about our privacy testing practices.
To begin, it is difficult to confine testing activity within a single SDL phase. At Microsoft, we don’t try. Testers are involved in architecture reviews, security design reviews, threat modeling, code reviews, and many other things that happen both before and after the actual testing phase. In each of these instances, testers bring a valuable how-I-would-break-this slant to the endeavor. This contribution has been valuable enough to spawn a big push around the company to move testing activity to earlier phases of the lifecycle, and, though some might not agree, I think the practice of threat modeling can be ascribed to this movement. The idea of thinking through threats and understanding attack vectors has been our focus in security testing for years, and threat modeling represents the extraction of this process as its own standalone entity.
Our overall goal is clear: whenever an engineer designs or writes code, we want that person to think about how the code might be exploited. When attack scenarios, threats, and test cases are swirling around in developers’ minds as they architect, design, or write code, chances are they will write more secure code and plan better defenses. Clearly there is an overwhelming amount to think about, which calls for a healthy measure of due caution and discourse with teammates and outside experts. Being careful and consulting colleagues is rarely a bad thing!
But, no matter how successful we are in spreading testing wisdom throughout the SDL, at some point we need to check that such wisdom actually made its way into the shipping product. I trust developers to do the right thing, but as a tester myself, you better believe I’m going to check that they actually did it.
Microsoft uses a three-pronged approach to security testing. During these tests we may refer to a threat model or security design review document, but we may also choose to ignore these documents for an independent assessment of an application or service. This decision is at the discretion of the security test lead, and depends on how independent he or she wants the test team to be.
1. Attacks against the application’s environment.
The environment (the sum total of all OS components, runtime libraries, environment variables, network activity, file system configurations, registry keys, and so forth) is probably the biggest unknown when fielding an application. In some environments the application will work securely; in others it may fail miserably. We train our testers to map out the environment, identify components subject to modification or variation, and test as many configurations of these as possible. These attack scenarios recognize that our applications work in unpredictable environments, where we have to work out the trust relationships very carefully. It takes only one insecure component to put an entire machine or network at risk, and we need to ensure that our own applications work securely despite the presence of these environmental insecurities.
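To make this concrete, here is a minimal sketch in Python of what an environment-variation harness might look like. The application name, input file, and specific mutations below are hypothetical stand-ins for whatever a tester’s environment map identifies; a real harness would be driven by that map rather than a hard-coded list.

    import os
    import subprocess

    # Hypothetical environment mutations: each entry removes or corrupts one
    # piece of the environment the application might implicitly trust.
    MUTATIONS = [
        ("PATH", ""),                    # empty executable search path
        ("TEMP", r"Z:\no\such\dir"),     # nonexistent temp directory
        ("APPDATA", "A" * 4096),         # oversized variable value
        ("LANG", "xx_XX.INVALID"),       # unexpected locale
    ]

    def run_with_mutation(name, value):
        env = os.environ.copy()
        env[name] = value
        # The application should fail safely under a hostile environment:
        # no crash, no hang, no silent fallback to an insecure default.
        result = subprocess.run(
            ["myapp.exe", "--input", "sample.dat"],  # placeholder app and input
            env=env, capture_output=True, timeout=30,
        )
        return result.returncode

    for name, value in MUTATIONS:
        try:
            print(f"{name}: exit code {run_with_mutation(name, value)}")
        except subprocess.TimeoutExpired:
            print(f"{name}: hang (possible denial of service)")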
2. Direct attacks against the application itself.
Inputs are dangerous, and inputs that cross trust boundaries are crucial targets of this class of security testing. Our testers must build and maintain lists of illegal, ill-formed, and improper inputs that are consumed by their application’s interfaces. Code, scripts, SQL queries, special characters, long strings, and the like must be gathered in large numbers and used to pummel the application under test mercilessly. Large-scale automated testing comes into play here in a big way. Our goal is for our applications to withstand targeted and sustained attacks, whether from a regression suite of past and potential exploits or from fuzz testing using both random and format-aware logic. These tests are crucial to prevent repeat exploits and to cover targeted attack scenarios.
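As an illustration of the shape such automation takes, here is a minimal fuzzing sketch in Python. The parser binary, seed file, and crash heuristic are placeholders; a production fuzzer would add format-aware mutation, coverage feedback, and proper triage.

    import random
    import subprocess

    # Classic hostile inputs, plus random mutation of a known-good seed file.
    SPECIAL_CASES = [b"'", b'"', b"<script>alert(1)</script>",
                     b"%n%n%n%n", b"\x00" * 16, b"../" * 32, b"A" * 65536]

    def mutate(seed: bytes) -> bytes:
        # "Dumb" random fuzzing: flip a few bytes of a well-formed input.
        # Format-aware fuzzing would instead parse the format and corrupt
        # individual fields, lengths, and offsets deliberately.
        data = bytearray(seed)
        for _ in range(random.randint(1, 8)):
            data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    seed = open("valid_sample.dat", "rb").read()  # placeholder seed input
    for i in range(10_000):
        case = mutate(seed) if i % 2 else random.choice(SPECIAL_CASES)
        with open("fuzz_case.dat", "wb") as f:
            f.write(case)
        proc = subprocess.run(["parser.exe", "fuzz_case.dat"],
                              capture_output=True)
        # Crash detection is platform-specific: negative return codes mean
        # death by signal on POSIX; on Windows, look for NTSTATUS values
        # such as 0xC0000005 (access violation).
        if proc.returncode < 0 or proc.returncode == 0xC0000005:
            with open(f"crash_{i}.dat", "wb") as f:
                f.write(case)  # save the input so the crash can be reproduced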
3. Indirect attacks against the application’s functionality.
Application features need to be cataloged for potential bad effect. All features clearly have intended functionality for good effect or they wouldn’t be features; our concentration as security testers is to understand the ways those features can be misused to the misery or inconvenience of our users. We must look at our application’s functionality and ask whether any of it can be ‘turned against itself.’ Are there ways that the software can be easily misconfigured? Can security features be circumvented? Is there some function whose purpose is benign and even useful, but that under certain circumstances has undesirable consequences? A feature-by-feature assessment is necessary to ensure we’ve covered all the bases.
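To show what a feature-misuse finding can harden into, here are two hypothetical regression tests in Python. The Session API, ConfigError exception, and feature names are invented for illustration; the point is that each “can this feature be turned against itself?” question becomes a permanent, automated check.

    import unittest
    from app import Session, ConfigError  # hypothetical application API

    class FeatureMisuseTests(unittest.TestCase):
        def test_guest_cannot_disable_audit_logging(self):
            # Turning off the audit log is legitimate administration, but it
            # must not be reachable from an unprivileged session.
            session = Session(user="guest")
            with self.assertRaises(ConfigError):
                session.set_config("audit_logging", False)

        def test_export_cannot_write_outside_sandbox(self):
            # 'Export report' is benign and useful, unless a crafted path
            # turns it into an arbitrary file write.
            session = Session(user="guest")
            with self.assertRaises(ConfigError):
                session.export_report(r"..\..\Windows\System32\evil.dll")

    if __name__ == "__main__":
        unittest.main()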
Security testing has been, and will always be, about assurance: assuring that the product as built and shipped has been thoroughly tested for potential vulnerabilities. Bug detection and removal will continue to be important until the day comes when we can deploy provably secure systems. However, we’re unlikely ever to reach such a future without learning the lessons that testing is teaching us right now. Anyone can write a system and call it secure; it’s only through watching real systems fail in real ways that we learn to get better. Moral of the story: testing is by far the best way to show us what we’re doing wrong in software development.