SDL Team, Author at Microsoft Security Blog

Secure Credential Storage
http://approjects.co.za/?big=en-us/security/blog/2012/01/16/secure-credential-storage/ (Mon, 16 Jan 2012)

Pop security quiz: What’s the most secure way to store a secret?

a) Encrypt it with a strong symmetric cryptographic algorithm such as AES, using a 256-bit key.

b) Encrypt it with a strong asymmetric cryptographic algorithm such as RSA, using a 4096-bit key.

c) Encrypt it using a cryptographic system built into your platform, like the Data Protection API (DPAPI) for Windows.

Have you made your choice? The correct answer is actually:

d) Don't store the secret at all!

OK, it was a trick question. But the answer is valid: thieves can't steal what you don't store. Let's apply this principle to the action of authentication – that is, logging into a web site. If a site never stores its users' passwords, then even if the site is breached, those passwords can't be stolen. But how can a site authenticate users without storing their passwords? The answer is for the site to store (and subsequently compare) cryptographic hashes of the passwords instead of the plaintext passwords themselves. (If you're unfamiliar with the concept of hashes, we recommend reading http://msdn.microsoft.com/en-us/library/92f9ye3s.aspx#hash_values before continuing.) By comparing hashes rather than plaintext, the site can still validate that the user does indeed know his or her password – otherwise, the hashes wouldn't match – but it has no need to ever actually store that password. It's an elegant solution, but there are a few design considerations you'll need to address to ensure you don't inadvertently weaken the strength of the system.

The first design issue is that simply hashing the passwords isn't enough protection: you also need to add a random salt to each password before you compute its hash value. Remember that for a given hash function, an input value will always hash to the same output value. With enough time, an attacker could compute a table of plaintext strings and their corresponding hash values. In fact, many of these tables (known as "rainbow tables") already exist and are freely downloadable on the Internet. Armed with a rainbow table, an attacker who managed to gain access to the list of password hashes on the web site by any means could use that table to easily determine the original plaintext passwords. When you salt hashes, you take this weapon out of the attackers' hands. It's also important to generate (and store) a unique salt for every user – don't just use the same salt for everyone. If you always used the same salt, an attacker could build a new rainbow table using that single salt value and eventually extract the passwords.
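
To make this concrete, here is a minimal sketch of the salt-and-hash scheme described so far, in Python. The function names and storage format are illustrative, not a prescribed API; a real site would persist the salt and digest per user in its account store.

```python
import hashlib
import secrets

def hash_password(password: str) -> tuple[str, str]:
    """Create a new credential record: a unique random salt plus the
    SHA-256 digest of salt and password combined."""
    salt = secrets.token_hex(16)  # 128 bits of randomness, unique per user
    digest = hashlib.sha256((salt + password).encode("utf-8")).hexdigest()
    return salt, digest

def verify_password(password: str, salt: str, stored_digest: str) -> bool:
    """Recompute the salted hash and compare it to the stored value."""
    candidate = hashlib.sha256((salt + password).encode("utf-8")).hexdigest()
    # compare_digest avoids leaking the match position through timing
    return secrets.compare_digest(candidate, stored_digest)
```

At registration you'd store both values returned by hash_password; at login you'd look them up and call verify_password with the password the user typed.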

The next important design decision is to use a strong cryptographic hash algorithm. MD5 may be a popular choice, but cryptographers have demonstrated weaknesses in it, and it has been considered an unsafe, "broken" algorithm for years. SHA-1 is stronger, but it is beginning to show cracks, and cryptographers now recommend avoiding it as well. The SHA-2 family of hash algorithms is currently considered the strongest, and it is the only family of hash algorithms approved for use in Microsoft products under the Microsoft Security Development Lifecycle (SDL) cryptographic standards policy.

Instead of hardcoding your application to use SHA-2, an even better approach is to implement "cryptographic agility," which allows you to change the hash algorithm even after the application has been deployed to production. After all, cryptographic algorithms go stale over time: cryptographers find weaknesses, and computing power increases to the point where brute-force approaches become feasible. Someday SHA-2 may be considered just as weak as MD5, so planning for this eventuality early may save you a lot of trouble down the road. An in-depth look at hashing agility is beyond the scope of this post, but you can read more about a proposed solution in the MSDN Magazine article Cryptographic Agility. And just as the SDL mandates the use of strong cryptographic algorithms in Microsoft products, it also encourages product teams to use crypto agility where feasible, so that teams can more nimbly migrate to new algorithms in the event that a currently strong algorithm is broken.
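
One way to sketch an agile design, assuming a record layout of our own invention: tag each stored record with the algorithm that produced it, so verification keeps working for old records while new ones pick up the current default.

```python
import hashlib
import secrets

CURRENT_ALGORITHM = "sha256"  # change here when the default must move on

def hash_password(password: str, algorithm: str = CURRENT_ALGORITHM) -> dict:
    """Hash a password and record which algorithm produced the digest."""
    salt = secrets.token_hex(16)
    digest = hashlib.new(algorithm, (salt + password).encode("utf-8")).hexdigest()
    return {"alg": algorithm, "salt": salt, "hash": digest}

def verify_password(password: str, record: dict) -> bool:
    """Verify against whatever algorithm the record was created with."""
    candidate = hashlib.new(
        record["alg"], (record["salt"] + password).encode("utf-8")
    ).hexdigest()
    return secrets.compare_digest(candidate, record["hash"])
```

A record created under an older algorithm can then be upgraded transparently at the user's next successful login, the one moment the plaintext is briefly available to re-hash.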

So far, we've talked about what to hash (the password and a random, unique salt value) and how to hash (using a cryptographically strong hash algorithm in the SHA-2 family, preferably configurable to allow for future change), but we haven't talked about where to hash. You might think that performing the hashing on the client tier would be a significant improvement in security, since you'd only need to send the hash over the wire to the server and never the plaintext password itself. However, this doesn't buy you as much benefit as you'd think. If an attacker has a means of sniffing network traffic, he could still intercept the call and pass the hash to the server himself, thus spoofing the user and taking over his session. At this point, the hash essentially becomes the plaintext password. The only real benefit of this approach is that if the victim is using the same password on multiple web sites, the attacker won't be able to compromise the victim's accounts on those other sites as well, since knowing the hash of a password tells you nothing about the password itself. A better defense against this attack is to perform the hashing on the server side and ensure that the password and all credential tokens, such as session cookies, are always transmitted over SSL/TLS. We'll explore the topic of secure credential transmission (and other aspects of password management, such as password complexity and expiration) in future blog posts.

By following a few simple guidelines, you can help to ensure that your application’s users’ credentials remain secure, even if your database is compromised:

  • Always store and compare hashes of passwords, never the plaintext passwords themselves.
  • Apply a random, unique salt value to each password before hashing.
  • Use a cryptographically strong hash algorithm such as one from the SHA-2 family.
  • Allow for potential future algorithm changes by implementing a cryptographically agile design.
  • Hash on the server tier and be sure to transmit all passwords and credential tokens over HTTPS.

Writing Fuzzable Code
http://approjects.co.za/?big=en-us/security/blog/2010/07/07/writing-fuzzable-code/ (Wed, 07 Jul 2010)

Adam Shostack here. One of the really exciting things about being in the Microsoft Security Engineering Center is all of the amazing collaborators we have around the company. People are always working to make security engineering easier and more effective. When we talk about security testing, we often focus on what it can't do: "You can't test security in," and "testing will never find everything." But much like there's code that's easy to get wrong, there's code that's hard to test. Writing code to be testable has a long history, and it's one we don't often talk about in security. Today's post is from Hassan Sultan, who's responsible for one of our internal fuzzing tools. We hope it inspires you to think about the question, "How can I make the security of my code more easily tested?"

And here’s Hassan:

Security testing is an integral part of the software development lifecycle. At Microsoft, most of our security testing is done through a technique called fuzz testing: sending unexpected input to the product and checking whether it behaves in an acceptable way (i.e., it doesn't crash, hang, leak memory, and so on). We also use other techniques, such as static source code analysis, but today we're going to focus on fuzz testing and how you can best make use of it.
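
As a minimal sketch of that idea, the loop below feeds seeded random payloads to a parser and reports unhandled exceptions. The target (Python's email parser) and the payload sizes are stand-ins chosen for the demo, not anything from the tool described in this post.

```python
import random
from email.parser import BytesParser  # stand-in consumer for the demo

def random_payload(rng: random.Random) -> bytes:
    """Build one blob of unexpected input: up to 4 KB of random bytes."""
    return bytes(rng.randrange(256) for _ in range(rng.randrange(1, 4096)))

def fuzz(iterations: int = 1000) -> None:
    for seed in range(iterations):
        rng = random.Random(seed)  # seeded, so any failure is reproducible
        payload = random_payload(rng)
        try:
            BytesParser().parsebytes(payload)
        except Exception as exc:
            # an unhandled exception is one kind of unacceptable behavior;
            # crashes, hangs, and leaks need separate monitoring
            print(f"seed {seed}: {type(exc).__name__}: {exc}")

if __name__ == "__main__":
    fuzz()
```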

Almost every software company and every software project has to perform within constraints. They can be financial (the project has to be completed within a set budget) or time-driven (the project has to ship within a specific timeframe). The corollary is that the product must be of the highest quality possible within those constraints. How, then, can you perform efficient, quick, and cheap security testing?

One approach we have started using at Microsoft is to change our engineering and test engineering practices to make fuzz testing easier. It's a little bit of additional upfront work, but it yields great savings in time and resources over the life of the project.

There are two popular approaches to fuzz testing, viewed in terms of the data exchanged between a producer (the software sending data) and a consumer (the target software processing the data):

  • Generation fuzzing: You create unexpected input from scratch and send it to the target; the fuzzer is effectively the producer.
  • Man-in-the-middle (MITM) fuzzing: You intercept the data as it flows from an existing producer to the consumer and modify it on the fly before it reaches the consumer.

A couple of things are obvious when comparing these two approaches:

  • Generation fuzzing requires quite a bit of work, because you must create the entire data exchange yourself and the generated data must conform well enough to be accepted by the target. In return, you have complete control and the freedom to generate anything you want, so you control the fuzzing quality end to end.
  • MITM fuzzing requires little upfront effort but depends on existing producers to generate data that can be modified, so the quality of the fuzz testing depends on the producer and the data it generates.

The approach I'm going to talk about is based on MITM fuzzing. The goal is to develop functionality tests in a way that makes them easily reusable as producers for MITM fuzz testing, and to provide a bit of functionality in the actual product to make fuzz testing more efficient. This approach makes security testing much cheaper to implement, is quite efficient, and lets you improve the fuzzing over time without having to rewrite numerous security tests.
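
To illustrate the shape of a MITM fuzzer, here is a minimal sketch of a proxy that the functionality tests (the producers) would be pointed at instead of the real consumer. The addresses, mutation rate, and simple bit-flipping strategy are placeholder assumptions for the demo, not the internal tool this post describes.

```python
import random
import socket
import threading

LISTEN_PORT = 8080            # functionality tests (producers) connect here
TARGET = ("127.0.0.1", 9000)  # the consumer under test; both are placeholders

def mutate(data: bytes, rate: float = 0.01) -> bytes:
    """Flip one random bit in roughly `rate` of the bytes passing through."""
    out = bytearray(data)
    for i in range(len(out)):
        if random.random() < rate:
            out[i] ^= 1 << random.randrange(8)
    return bytes(out)

def pump(src: socket.socket, dst: socket.socket, fuzz: bool) -> None:
    """Forward bytes from src to dst, mutating them only when fuzz is True."""
    while chunk := src.recv(4096):
        dst.sendall(mutate(chunk) if fuzz else chunk)
    dst.close()

def main() -> None:
    listener = socket.create_server(("127.0.0.1", LISTEN_PORT))
    while True:
        client, _ = listener.accept()
        upstream = socket.create_connection(TARGET)
        # fuzz only producer -> consumer traffic; pass responses through intact
        threading.Thread(target=pump, args=(client, upstream, True), daemon=True).start()
        threading.Thread(target=pump, args=(upstream, client, False), daemon=True).start()

if __name__ == "__main__":
    main()
```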

MITM fuzzing using functionality tests has the following drawbacks:

  • Tests tend not to behave correctly when they receive an unexpected response from the target following a fuzzed exchange, as they were never meant to deal with such a situation; they can hang or crash
  • Tests rely on a long timeout to receive a response, and thus sometimes block for many seconds when data is fuzzed and the consumer drops the request without responding
  • Tests bail out at the first failure they encounter and do not execute the following test cases, so the fuzz testing cannot be applied to those exchanges
  • Test setup code is mixed with test code, which causes the fuzzer to corrupt the data exchange required for setup, thus preventing the actual test from running
  • Modifying data on the fly efficiently ranges from cumbersome to impossible when the exchanged data contains:
    • Encrypted data
    • Compressed data
    • Checksums & signatures

The approach here is thus to fix all these problems at the source. We have listed the steps required along with each step's priority; obviously, the more you do, the better, but if you're in a crunch, start from the top of the list and go down as far as you can.

Test engineering changes for functionality tests
  • P1: Harden functionality tests to handle unexpected responses from the consumer gracefully
  • P1: Have configurable timeouts that can be shortened when the tests are used for fuzzing (both P1 items are sketched right after this list)
  • P2: Isolate setup code in tests so that the setup phase can be signaled to the fuzzer, which will then pause fuzzing until setup completes
  • P2: Tests containing multiple test cases should allow running an arbitrary test case individually, without the need to run them all in sequence
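
As a sketch of the two P1 items above, the helper below reads its timeout from the environment and converts any network failure into a value the test can check, rather than a hang or a crash. The variable name and function signature are assumptions for illustration.

```python
import os
import socket

# a functionality test run normally waits the full default; the fuzzing
# harness sets this variable to something short (the name is an assumption)
TIMEOUT = float(os.environ.get("TEST_TIMEOUT_SECONDS", "30"))

def send_request(host: str, port: int, payload: bytes) -> bytes | None:
    """Send one request and return the response, or None on any failure.

    Returning None instead of raising keeps a fuzzed run moving when the
    consumer drops the connection or replies with something unexpected.
    """
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT) as conn:
            conn.sendall(payload)
            return conn.recv(65536)
    except OSError:  # covers timeouts, resets, and refused connections
        return None
```
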
Engineering changes in the product
  • P1: Provide a test hook in the product to disable checksum validation in components
  • P1: Provide a test hook to turn compression and decompression routines into no-ops: what comes in goes out unmodified, the communication occurs correctly using uncompressed data, and the fuzzer can thus modify the content easily and efficiently
  • P2: Provide a test hook to disable signature validation
  • P2: Provide a test hook to turn encryption and decryption routines into no-ops

(A test hook is a configuration option that modifies the product's behavior when set; it can be removed before the product ships if needed.)
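
A minimal sketch of what such hooks might look like, using zlib as a stand-in codec; the environment-variable switch and all names here are assumptions for illustration, not a prescribed mechanism.

```python
import os
import zlib

# test hook: one configuration switch (the name is illustrative) turns
# the codecs into no-ops so the fuzzer sees plaintext it can mutate
FUZZ_HOOKS = os.environ.get("ENABLE_FUZZ_HOOKS") == "1"

def compress(data: bytes) -> bytes:
    return data if FUZZ_HOOKS else zlib.compress(data)

def decompress(data: bytes) -> bytes:
    return data if FUZZ_HOOKS else zlib.decompress(data)

def checksum_ok(data: bytes, expected: int) -> bool:
    # under the hook, mutated payloads aren't rejected before they
    # reach the parsing code the fuzzing is meant to exercise
    return True if FUZZ_HOOKS else zlib.crc32(data) == expected
```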

These modifications to the way tests and products are engineered are minor and cheap to implement when planned early on and will produce tremendous benefits by:

  • Enabling security testing to be applied early on in products, which allows teams to find and remedy issues early in the product cycle
  • Significantly reducing the cost of security testing, since it can now be performed by reusing functionality tests; you can improve security testing simply by improving your functionality tests
  • Allowing all security tests to improve automatically as the fuzzing engine gets smarter, since the security tests are decoupled from the fuzzer itself; you can thus stage your investment in security testing gradually by adding more intelligence to your fuzzer
  • Reducing the number of vulnerabilities, and thus reducing servicing costs

Ultimately, using both generation fuzzing and MITM fuzzing would be ideal, as generation fuzzing provides some benefits that MITM fuzzing can't attain (the ability to create very specific scenarios, for example). But when you're dealing with time and resource constraints, the MITM approach allows for efficient fuzzing that can be improved over time at minimal cost.

The Trouble with Threat Modeling
http://approjects.co.za/?big=en-us/security/blog/2007/09/26/the-trouble-with-threat-modeling/ (Wed, 26 Sep 2007)

Adam Shostack here.

I said recently that I wanted to talk more about what I do. The core of what I do is help Microsoft's product teams analyze the security of their designs by threat modeling. So I'm very concerned about how well we threat model, and how to help the folks I work with do it better. I'd like to start by talking about some of the things that make the design analysis process difficult, and then about what we've done to address those things. As each team starts a new product cycle, it has to decide how much time to spend on the tasks involved in security. There's competition for the time and attention of various people within a product team. Human nature is that if a process is easy or rewarding, people will spend time on it. If it's not, they'll do as little of it as they can get away with. So the process evolves, because, unlike Dr. No, we want to be aligned with what our product groups and customers want.

There have been a lot of variants of things called "threat modeling processes" at Microsoft, and a lot more in the wider world. People sometimes want to argue because they think Microsoft uses the term "threat modeling" differently than the rest of the world. This is only a little accurate. There is a community that uses questions like "what's your threat model?" to mean "which attackers are you trying to stop?" Microsoft uses threat model to mean "which attacks are you trying to stop?" There are other communities whose use is more like ours. (In this paragraph, I'm attempting to mitigate a denial-of-service threat, where prescriptivists try to drag us into a long discussion of how we're using words.) The processes I'm critiquing here are the versions of threat modeling presented in the Writing Secure Code, Threat Modeling, and The Security Development Lifecycle books.

In this first post of a series on threat modeling, I'm going to talk a lot about problems we had in the past. In the next posts, I'll talk about what the process looks like today, and why we've made the changes we've made. I want to be really clear that I'm not critiquing the people who have been threat modeling, or their work. A lot of people have put a tremendous amount of work in, and gotten some good results. There are all sorts of issues that our customers will never experience because of that work. I am critiquing the processes and saying we can do better; in places we are doing better, and I intend to ensure we continue to do better.

We ask feature teams to participate in threat modeling, rather than having a central team of security experts develop threat models. There's a large trade-off associated with this choice. The benefit is that everyone thinks about security early. The cost is that we have to be very prescriptive in how we advise people to approach the problem. Some people are great at "think like an attacker," but others have trouble. Even for the people who are good at it, putting a process in place is great for coverage, assurance, and reproducibility. But experts don't expose the cracks in a process the way asking everyone to participate does.

Getting Started

The first problem with 'the threat modeling process' is that there are a lot of processes. People eager to threat model had a number of them to choose from, which led to confusion. If you're a security expert, you might be able to select the right process. If you're not, judging and analyzing the processes might be a lot like analyzing cancer treatments. Drugs? Radiation? Surgery? It's scary, complex, and the wrong choice might lead to a lot of unnecessary pain. You want expert advice, and you want the experts to agree.

Most of the threat modeling processes previously taught at Microsoft were long and complex, having as many as 11 steps.  That’s a lot of steps to remember.  There are steps which are much easier if you’re an expert who understands the process.  For example, ‘asset enumeration.’  Let’s say you’re threat modeling the GDI graphics library.  What are the assets that GDI owns?  A security expert might be able to answer the question, but anyone else will come to a screeching halt, and be unable to judge if they can skip this step and come back to it.  (I’ll come back to the effects of this in a later post.)

I wasn’t around when the processes were created, and I don’t think there’s a lot of value in digging deeply into precisely how it got where it is.  I believe the core issue is that people tried to bring proven techniques to a large audience, and didn’t catch some of the problems as the audience changed from experts to novices.

The final problem people ran into as they tried to get started was an overload of jargon and terms imported from security. We toss around terms like repudiation as if everyone should know what they mean, and sometimes imply that people are stupid if they don't. (Repudiation is claiming that you didn't do something. For example, "I didn't write that email!" or "I don't know what got into me last night!" You can repudiate something you really did, and you can repudiate something you didn't do.) Using jargon sent several unfortunate messages:

  1. This is a process for experts only
  2. You’re not an expert
  3. You can tune out now
  4. We don’t really expect you to do this well

Of course, that wasn’t the intent, but it often was the effect.

The Disconnected Process

Another set of problems is that threat modeling can feel disconnected from the development process.  The extreme programming folks are fond of only doing what they need to do to ship, and Microsoft shipped code without threat models for a long time.  The further something is from the process of building code, the less likely it is to be complete and up to date.  That problem was made worse because there weren’t a lot of people who would say “let me see the threat model for that.”   So there wasn’t a lot of pressure to keep threat models up to date, even if teams had done a good job up front with them.  There may be more pressure with other specs which are used by a broader set of people during development.

Validation

Once a team had started threat modeling, they had trouble knowing if they were doing a good job. Had they done enough? Was their threat model a good representation of the work they had done, or were planning to do? When we asked people to draw diagrams, we didn't tell them when they could stop, or what details didn't matter. When we asked them to brainstorm about threats, we didn't guide them as to how many they should find. When they found threats, what were they supposed to do about them? This was easier when there was an expert in the room to provide advice on how to mitigate the threat effectively. How should they track them? Threats aren't quite bugs: you can never remove a threat, only mitigate it. So perhaps it didn't make sense to track them like bugs, but that left threats in limbo.

“Return on Investment”

The time invested often didn't seem like it was paying off. Sometimes it really didn't pay off. (David LeBlanc makes this point forcefully in "Threat Modeling the Bold Button is Boring.") Sometimes it just felt that way; Larry Osterman made that point, unintentionally, in "Threat Modeling Again, Presenting the PlaySound Threat Model," where he said "Let's look at a slightly more interesting case where threat modeling exposes an issue." Youch! But as I wrote in a comment on that post, "What you've been doing here is walking through a lot of possibilities. Some of those turn out to be uninteresting, and we learn something. Others (as we've discussed in email) were pretty clearly uninteresting." It can be important to walk through those possibilities so we know they're uninteresting. Of course, we'd like to reduce the time it takes to look at each uninteresting issue.

Other Problems

Larry Osterman lays out some other reasons threat modeling is hard in a blog post: http://blogs.msdn.com/larryosterman/archive/2007/08/30/threat-modeling-once-again.aspx

One thing that was realized very early on is that our early efforts at threat modeling were quite ad-hoc. We sat in a room and said "Hmm, what might the bad guys do to attack our product?" It turns out that this isn't actually a BAD way of going about threat modeling, and if that's all you do, you're way better off than if you'd done nothing.

Why doesn't it work? There are a couple of reasons:

It takes a special mindset to think like a bad guy. Not everyone can switch into that mindset. For instance, I can't count the number of times I had to tell developers on my team, "It doesn't matter that you've checked the value on the client; you still need to check it on the server, because the client that's talking to your server might not be your code."

Developers tend to think in terms of what a customer needs. But many times, the things that make a product really cool for a customer also provide a superhighway for the bad guy to attack your code.

It's ad-hoc. Microsoft asks every single developer and program manager to threat model (because they're the ones who know what the code is doing). Unfortunately, that means they're not experts on threat modeling. Providing structure helps avoid mistakes.

With all these problems, we still threat model, because it pays dividends.  In the next posts, I’ll talk about what we’ve done to improve things, what the process looks like now, and perhaps a bit about what it might look like either in the future, or adopted by other organizations.
