Microsoft Security Blog – Expert coverage of cybersecurity topics

Governments recognize the importance of TPM 2.0 through ISO adoption
http://approjects.co.za/?big=en-us/security/blog/2015/06/29/governments-recognize-the-importance-of-tpm-2-0-through-iso-adoption/
Mon, 29 Jun 2015 17:31:21 +0000

Earlier today, the Trusted Computing Group (TCG) announced in a press release that the Trusted Platform Module (TPM) 2.0 Library Specification was approved by the ISO/IEC Joint Technical Committee (JTC) 1 and will be available later in the year as ISO/IEC 11889:2015. This landmark accomplishment is set to encourage worldwide adoption of TPM 2.0, which is critical for improving trust in information technology products and services.

TPM 2.0 builds on the achievements of its predecessor, ISO/IEC 11889:2009, playing an important role in enhancing security by combining hardware and software features. It improves the secure generation of cryptographic keys and the control of their use. It includes a privacy-protecting mechanism that enables remote trust verification of the software used to boot a particular system. Most importantly, TPM 2.0 supports cryptographic agility, allowing for effective management of cryptographic algorithms, including easier migration when a major weakness is found in an algorithm. Under the same technical framework, it also expands the use of additional publicly available algorithms based on market requirements for TPM applications.
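Cryptographic agility means, in practice, that code selects algorithms through a single point of indirection rather than hard-coding one, so a broken algorithm can be retired without touching every caller. A toy sketch of that idea (using Python's hashlib rather than any TPM API; the policy names are invented for illustration):

```python
import hashlib

# Algorithm choices live in one policy table: when a weakness is found
# in an algorithm (as happened with SHA-1), only this table changes,
# not every call site. Policy names here are invented for illustration.
APPROVED = {
    "default": "sha256",
    "legacy-verify-only": "sha1",
}

def digest(data: bytes, policy: str = "default") -> bytes:
    """Hash data with whatever algorithm the named policy currently maps to."""
    return hashlib.new(APPROVED[policy], data).digest()

print(digest(b"boot measurement").hex())
```

Swapping "sha256" for a successor algorithm is then a one-line change, which is the migration property the specification is after.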

The fact that the standard was supported by a large number of countries, including Australia, Belgium, Canada, China, the Czech Republic, Denmark, Finland, France, Ghana, Ireland, Italy, Japan, the Republic of Korea, Lebanon, Malaysia, the Netherlands, Nigeria, Norway, the Russian Federation, South Africa, the United Arab Emirates, the United Kingdom and the United States, underlines the growing level of concern around cybersecurity among both developed and emerging economies. It also stems from the inclusive and collaborative development process led by the TCG, which reflects its commitment to finding open and vendor-neutral technology solutions that address industry, consumer and government security requirements.

Microsoft, along with other technology companies, is an active participant in the TCG and over the years has invested in the innovation and promotion of commercial adoption of trusted computing standards, including by developing TPM features in Windows Vista, Windows 7, Windows 8, Windows 8.1, and most recently Windows 10. These actions and our customers’ feedback significantly advanced our understanding of trusted computing technology, which in turn helped us deliver timely, market-driven solutions.

However, we recognize that we have to go further to address the security challenges posed by the explosive growth of mobile devices, society’s increasing reliance on wireless networks and the Internet of Things. To this end, Microsoft is providing more TPM functions in Windows 10 and enabling easier deployment of the TPM to achieve “secure by default” objectives for devices such as mobile devices and servers. TPM 2.0 implementations will include more algorithms and processes, such as key generators, as well as onboard storage of cryptographic system measurements for validation and digital certificates. Moreover, Windows 10 hardware requirements enable tailored TPM 2.0 deployments for organizations, ensuring greater flexibility where needed.

In our view, TPM 2.0 represents a significant step forward, as it effectively combines best practices from leading industry providers while also ensuring complete transparency of the specification through an open public review and consultation process. However, there is more to be done. The approval of TPM 2.0 offers a rare opportunity for countries to embrace and promote wider commercial adoption of this trusted computing technology in the near term. As technology evolves quickly, new standards will be needed. The development of the TPM 2.0 standard, implemented through the PAS Transposition Process in ISO/IEC JTC 1, provides a template for future collaborative security standards adoption. It provides ample opportunity for security experts to collaborate and reach an international consensus, one which will ensure user security and privacy and maintain trust in the Internet as a foundational platform of commerce and well-being in the long run.

SDL Training
http://approjects.co.za/?big=en-us/security/blog/2008/05/29/sdl-training/
Thu, 29 May 2008 11:22:00 +0000

Hi everyone, Shawn Hernan here. Being a security guy is incredibly rewarding because you get to look at virtually any part of a product, from kernel drivers to web services to user education to sales and servicing. You have to do that because a failure in one of those areas can endanger the security of our customers. Microsoft’s SDL process reflects that reality. The process is structured so that you really do have to look at each piece before you can sign off. But sometimes when others want to emulate the success of the SDL, they want to skip steps. They try to boil the SDL down into its component parts, like training, or tooling, or security response. Maybe the most common form of that mistake is training, but you see that same thinking applied to code scanning, security response, and just about every phase of the SDL. “Let’s just train everyone, and all our security problems will go away.” If only it were so easy. I’d like to take a few minutes to try to explain why it’s not really that easy from my own experience.

Have you ever sat through corporate training? Some of it is good, some is bad, but did you ever say, “Man, I can’t wait for training today”? What about mandatory training? What about mandatory training in a subject that you really don’t think is your area? What if you had to do it every year, and got harassed if you didn’t? What if you were, say, an audio engineer and were dragged into a security class?

I ran the SDL training program at Microsoft for a long time, and developed and taught a big chunk of the training. I spent hundreds of hours in front of thousands of developers, testers, and program managers. I got some really good reviews (and a few bad ones) on the classes I offered, and I did a lot of things to try to make them interesting. I handed out dozens of fresh peaches in an early class on fuzz testing, for example. The room smelled really nice after that, and there are probably still a few people around Microsoft who think of fuzz testing when they see a peach.

But even on my best day, I was under no illusion that the majority of the audience was excited to be there, and I was certain that they weren’t going to go back to their offices and spend weeks applying the lessons from the class, setting aside other things that are causing present and immediate problems in favor of something that is far off into the future.

You have to work at getting people’s attention – especially as it relates to security and privacy. From time to time, I would see people reading their mail in class, and I would point to them and ask them a question. That did not endear me to the audience as much as the peaches, but embarrassment is always fresh and in season. :)

One student wrote of one of my classes, “the basics for secure design – could be replaced by non-anonymous site-wide exam with open material.”  He was not alone, I assure you.

Is that an indication that our training, or any training, is pointless? Hardly, but training alone is not a change agent.

Richard Derwent Cooke wrote, “It is a first principle of Change Management that people will act in what they perceive as being their best interests.”

At best, training can provide people with insight into what they need to do to solve a security problem if they believe that solving that security problem is in their best interests.

To be effective, training needs to happen in an environment:

  • Where expectations are clearly set (the SDL sets specific minimum requirements).
  • Where people have appropriate incentives and consequences (security is a great career path at Microsoft, and nobody wants to be the one holding up a ship schedule for failure to meet a security requirement).
  • Where tools and resources to accomplish the goals are available (we build a whole variety of tools that map to the SDL requirements).
  • Where management models the behavior (recall the original BillG TWC memo).
  • Where the environment reflects and supports the values presented in the training (apparent in everything Microsoft does).

Don’t make the mistake of thinking that a bunch of training, even really high quality training done periodically, will result in actual behavior change. It won’t. You have to build an environment where people perceive solving security problems as being in their best interests. You have to make security their problem – not in the sense of passing the buck, but in the sense of changing their behavior so they will bring security problems to you.

To illustrate further, I’ll cite two examples. First, fuzz testing. Fuzz testing has been a success story here at Microsoft. Tools arise spontaneously to solve new fuzzing challenges, written by people who believe the challenges are their challenges. There are people who feel ownership of our fuzzing strategy and ongoing research; there are specific goals and requirements; we have training (remember the peaches?); internally developed fuzzers have won prestigious awards within the company, handed out by members of the executive staff; and all of this gets revisited periodically as part of the SDL.

By contrast, I’ll choose a less successful area – defect estimation. Of my own volition, I created (based mostly on some excellent material from Microsoft Research) and taught a class called “Defect Estimation and Management” and added it to the SDL curriculum. Microsoft is a great place to work in that regard. It was pretty close to the best-reviewed class I taught. But we have not yet been able to establish a set of tools to estimate security defect density effectively, a fair set of expectations, incentives, and consequences, or even a decision about what we would do if we had the data. We discovered some things, though. For example, based on what I observed (which should not be construed as rigorous research), the density of general defects does not appear to correlate closely with the density of security defects. And Microsoft Research found that higher code coverage in testing correlates with higher bug rates in the field.

And so even though people like the idea of defect estimation, and we’ve got some interesting and surprising data, we’ve not yet been successful in changing people’s behavior. Generally speaking, an individual test manager does not feel that establishing a high-quality estimate of defect density is in his or her best interests, as compared to, say, improving the time in which an established series of tests can be performed.
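The correlation observation above can be made concrete with a few lines of code: compute the Pearson correlation between general and security defect counts per component. The numbers below are invented for illustration, not Microsoft data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-component defect counts (illustrative only).
general_defects  = [120, 45, 200, 80, 150, 60]
security_defects = [3, 9, 2, 11, 4, 8]

# A value near zero (or negative) says general defect density is a poor
# predictor of security defect density.
print(f"r = {pearson(general_defects, security_defects):+.2f}")
```

Having a number like this is the easy part; as the post argues, turning it into changed behavior is the hard part.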

We need to build an environment that has the tools, training, rewards and incentives, and expectations and consequences to change people’s behavior. Not that we’re not trying. But training won’t solve it alone, nor would tools, trophies, rants, testing, code review, or some edict from on high. The SDL is as much about changing the culture and influencing the behavior of individual engineers as it is anything else.

I’m convinced that Microsoft’s SDL process works because it addresses the end-to-end problem – from training through servicing, and provides a complete environment where people feel ownership of their part of the security problem and have the resources to solve it.

So the next time you find yourself sitting in some mandatory training, remember the lessons of the SDL (and most of the research on human performance management): training alone won’t cut it. If you want real behavior change, there have to be things outside the lecture room to influence people to change their behavior.

Training People on Threat Modeling
http://approjects.co.za/?big=en-us/security/blog/2008/03/14/training-people-on-threat-modeling/
Sat, 15 Mar 2008 02:11:12 +0000

Adam Shostack here. Blogger Ian Grigg has an interesting response to my threat modeling blog series, and I wanted to respond to it. In particular, Ian says, “I then would prefer to see the threat – property matrix this way:”

I wanted to share an additional table from our training, and talk about repudiation a bit more.

Actually, I’d like to repudiate the term “repudiation.” It’s an awful name that most people never run into in day-to-day life. It doesn’t hit the simplification bar the way, say, “denial” would. Unfortunately, STDIDE (Spoofing, Tampering, Denial, Information Disclosure, Denial of Service, Elevation of Privilege) doesn’t make a very memorable acronym. Memorable is important when training people. Our reviewers have raised this as an issue, and I’d love to get feedback from our readers. How can we ensure that the software we build has the right level of logging and auditability? What evocative words can we use, and can you help us come up with a word or phrase that starts with R? Let us know!

And then, here’s the chart:

| Threat | Property | Definition | Example |
| --- | --- | --- | --- |
| Spoofing | Authentication | Impersonating something or someone else. | Pretending to be any of billg, microsoft.com or ntdll.dll |
| Tampering | Integrity | Modifying data or code | Modifying a DLL on disk or DVD, or a packet as it traverses the LAN. |
| Repudiation | Non-repudiation | Claiming to have not performed an action. | “I didn’t send that email,” “I didn’t modify that file,” “I certainly didn’t visit that web site, dear!” |
| Information Disclosure | Confidentiality | Exposing information to someone not authorized to see it | Allowing someone to read the Windows source code; publishing a list of customers to a web site. |
| Denial of Service | Availability | Deny or degrade service to users | Crashing Windows or a web site, sending a packet and absorbing seconds of CPU time, or routing packets into a black hole. |
| Elevation of Privilege | Authorization | Gain capabilities without proper authorization | Allowing a remote internet user to run commands is the classic example, but going from a limited user to admin is also EoP. |

(Ian’s post is here: https://financialcryptography.com/mt/archives/001013.html. IE users will see a warning about certificate authorities when visiting this site. As I wrote this, Gunnar Peterson added commentary at “Threats, Mechanisms and Standards.”)

STRIDE chart
http://approjects.co.za/?big=en-us/security/blog/2007/09/11/stride-chart/
Wed, 12 Sep 2007 02:18:00 +0000

Adam Shostack here.

I’ve been meaning to talk more about what I actually do, which is help the teams within Microsoft who are threat modeling (for our boxed software) to do their jobs better.  Better means faster, cheaper or more effectively.  There are good reasons to optimize for different points on that spectrum (of better/faster/cheaper) at different times in different products.   One of the things that I’ve learned is that we ask a lot of developers, testers, and PMs here.  They all have some exposure to security, but terms that I’ve been using for years are often new to them.

Larry Osterman is a longtime MS veteran, currently working in Windows audio.  He’s been a threat modeling advocate for years, and has been blogging a lot about our new processes, and describes in great detail the STRIDE per element process.  

I wanted to chime in and offer up this handy chart that we use. It’s part of how we teach people to go from a diagram to a set of threats. We used to ask them to brainstorm, and have discovered that it works a lot better with some structure.

| Property | Threat | Definition | Example |
| --- | --- | --- | --- |
| Authentication | Spoofing | Impersonating something or someone else. | Pretending to be any of billg, microsoft.com or ntdll.dll |
| Integrity | Tampering | Modifying data or code | Modifying a DLL on disk or DVD, or a packet as it traverses the LAN. |
| Non-repudiation | Repudiation | Claiming to have not performed an action. | “I didn’t send that email,” “I didn’t modify that file,” “I certainly didn’t visit that web site, dear!” |
| Confidentiality | Information Disclosure | Exposing information to someone not authorized to see it | Allowing someone to read the Windows source code; publishing a list of customers to a web site. |
| Availability | Denial of Service | Deny or degrade service to users | Crashing Windows or a web site, sending a packet and absorbing seconds of CPU time, or routing packets into a black hole. |
| Authorization | Elevation of Privilege | Gain capabilities without proper authorization | Allowing a remote internet user to run commands is the classic example, but going from a limited user to admin is also EoP. |
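The STRIDE-per-element process Larry describes applies only a subset of these six threats to each element type in a data flow diagram. A minimal sketch of that lookup in Python (the element-to-threat mapping shown is my illustrative approximation, not the official SDL matrix):

```python
# STRIDE-per-element: each data flow diagram element type gets checked
# against a subset of the six threats, rather than all six every time.
STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information Disclosure",
    "D": "Denial of Service",
    "E": "Elevation of Privilege",
}

# Illustrative element-to-threat mapping (an approximation, not the
# official SDL matrix).
APPLICABLE = {
    "external_entity": "SR",
    "process": "STRIDE",
    "data_store": "TRID",
    "data_flow": "TID",
}

def threats_for(element_type):
    """Return the STRIDE threat names to brainstorm for one element."""
    return [STRIDE[c] for c in APPLICABLE[element_type]]

for name, etype in [("browser", "external_entity"), ("web app", "process")]:
    print(f"{name}: {', '.join(threats_for(etype))}")
```

The point of the structure is exactly what the post argues: a per-element checklist beats open-ended brainstorming for engineers who are new to the terminology.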

Testing in the SDL
http://approjects.co.za/?big=en-us/security/blog/2007/05/24/testing-in-the-sdl/
Thu, 24 May 2007 16:13:00 +0000

“You can’t test quality in.” It’s a truism coined long ago and an accepted fact of software development. Yet, for security, testing is arguably the most talked about aspect of the Security Development Lifecycle (SDL). When we get security wrong, the first criticism we almost always hear is, “Didn’t you guys test this thing?” It is no great stretch to say that many of the most famous industry security folks made their reputation by finding vulnerabilities (through, no doubt, testing). You simply can’t avoid the subject of testing when you talk about security, and you can’t be sure you’re secure without testing.

We often get questions about SDL-required security testing, and too often these questions deal exclusively with fuzz testing. But equating fuzz testing with security testing couldn’t be further from how testing is treated inside Microsoft. With this post, I want to shed some light on what Microsoft actually does for security testing. In a follow-up post on this blog, Rob Roberts will talk about our privacy testing practices.

To begin, it is difficult to confine testing activity within a single SDL phase. At Microsoft, we don’t try. Testers are involved in architecture review, security design reviews, threat modeling, code reviews and many other things that happen both before and after the actual testing phase. In each of these instances, testers bring a valuable how-I-would-break-this slant to these endeavors. This contribution has been valuable enough to spawn a big push around the company to move testing activity to earlier phases of the lifecycle and, though some might not agree, I think the practice of threat modeling can be ascribed to this movement. The idea of thinking through threats and understanding attack vectors has been our focus in security testing for years, and threat modeling represents the extraction of this process as its own standalone entity.

Our overall goal is clear – whenever an engineer designs or writes code, we want that person to think about how the code might be exploited. When attack scenarios, threats and test cases are swirling around in developers’ minds as they architect, design or write code, chances are they will write more secure code and plan better defenses. Clearly there is an overwhelming amount of stuff to think about, requiring a healthy amount of due caution and discourse with teammates and outside experts. Being careful and consulting colleagues is rarely a bad thing!

But, no matter how successful we are in spreading testing wisdom throughout the SDL, at some point we need to check that such wisdom actually made its way into the shipping product. I trust developers to do the right thing, but as a tester myself, you better believe I’m going to check that they actually did it.

Microsoft uses a three-pronged approach to security testing. During these tests we may refer to a threat model or security design review document, but we may also choose to ignore these documents for an independent assessment of an application or service. This decision is at the discretion of the security test lead, and depends on how independent he or she wants the test team to be.

1. Attacks against the application’s environment.

The environment – the sum total of all OS components, runtime libraries, environment variables, network activity, file system configurations, registry keys and so forth – is probably the biggest unknown when fielding an application. In some environments the application will work securely; in others it may fail miserably. We train our testers to map out the environment, identify components subject to modification or variation, and test as many configurations of these as possible. These attack scenarios recognize that our applications work in unpredictable environments where we have to work out the trust relationships very carefully. It takes only one insecure component to put an entire machine or network at risk. We need to ensure that our own applications work securely despite the presence of these environmental insecurities.
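One way to make that environment mapping systematic is to enumerate a matrix of hostile values for the variables the application depends on and exercise the target once per combination. The sketch below assumes a hypothetical ./app_under_test binary; the variables and values are illustrative:

```python
import itertools
import os
import subprocess

# Environment variables to vary, with a few hostile values for each.
# "./app_under_test" below is a hypothetical stand-in for the real target.
VARIATIONS = {
    "PATH": ["", "/tmp", "." + os.pathsep + "/usr/bin"],
    "TMPDIR": ["/nonexistent", "/tmp"],
    "LANG": ["C", "xx_XX.UTF-8"],
}

def env_matrix():
    """Yield one environment dict per combination of variable values."""
    keys = list(VARIATIONS)
    for values in itertools.product(*(VARIATIONS[k] for k in keys)):
        env = dict(os.environ)
        env.update(zip(keys, values))
        yield env

def run_under(env):
    """Run the target once in the given environment; return its exit code."""
    return subprocess.run(["./app_under_test"], env=env,
                          capture_output=True, timeout=30).returncode

# for env in env_matrix():
#     record run_under(env) and flag any unexpected exit code or hang
```

Real environment testing covers far more than environment variables (registry, file system, network), but the matrix idea generalizes.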

2. Direct attacks against the application itself.

Inputs are dangerous, and inputs that cross trust boundaries are crucial targets of this class of security testing. Our testers must build and maintain lists of illegal, ill-formed and improper inputs that are consumed by their application’s interfaces. Code, scripts, SQL queries, special characters, long strings and the like must be gathered in large numbers and used to pummel the application under test mercilessly. Large-scale automated testing comes into play here in a big way. Our goal is for our applications to be able to withstand targeted and sustained attacks – whether it’s a regression suite of past and potential exploits or fuzz testing using both random and format-aware logic. These tests are crucial to prevent repeat exploits and to test against targeted attack scenarios.
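A mutation fuzzer of the "random logic" variety mentioned above can be sketched in a few lines: corrupt a valid seed input and record any case that makes the target blow up. The parse function here is a toy stand-in for a real input-consuming interface:

```python
import random

def mutate(seed: bytes, flips: int = 8) -> bytes:
    """Return a copy of seed with a few randomly chosen bytes replaced."""
    data = bytearray(seed)
    for _ in range(min(flips, len(data))):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(parse, seed: bytes, iterations: int = 1000) -> list:
    """Hammer parse() with mutated inputs; return the inputs that raised."""
    crashes = []
    for _ in range(iterations):
        case = mutate(seed)
        try:
            parse(case)
        except Exception:
            crashes.append(case)
    return crashes

# Toy target: a "parser" that insists on a 4-byte magic header.
def parse(data: bytes) -> None:
    if data[:4] != b"MAGI":
        raise ValueError("bad magic")

found = fuzz(parse, b"MAGI" + b"\x00" * 60)
print(len(found), "crashing inputs recorded")
```

Format-aware fuzzers replace the blind byte flips with mutations that respect the input grammar, which reaches code paths random corruption rarely touches.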

3. Indirect attacks against the application’s functionality.

Application features need to be cataloged for potential bad effect. All features clearly have intended functionality for good effect or they wouldn’t be features; our concentration as security testers is to understand the ways those features can be misused to the misery or inconvenience of our users. We must look at our application’s functionality and ask whether any of it can be ‘turned against itself.’ Are there ways that the software can be easily misconfigured? Can security features be circumvented? Is there some function whose purpose is benign and even useful that under certain circumstances has undesirable consequences? A feature-by-feature assessment is necessary to ensure we’ve covered all the bases.

Security testing has been – and will always be – about assurance: assuring that the product as built and shipped has been thoroughly tested for potential vulnerabilities. Bug detection and removal will continue to be important until the day we can deploy provably secure systems. However, we’re likely never to get to such a future without learning the lessons that testing is teaching right now. Anyone can write a system and call it secure – it’s only through watching real systems fail in real ways that we learn to get better. Moral of the story: testing is by far the best way to show us what we’re doing wrong in software development.
