Jeff Jones, Author at Microsoft Security Blog http://approjects.co.za/?big=en-us/security/blog – Expert coverage of cybersecurity topics

Beginner’s Guide to BYOD (Bring Your Own Device) http://approjects.co.za/?big=en-us/security/blog/2012/07/17/beginners-guide-to-byod-bring-your-own-device/ Tue, 17 Jul 2012 23:19:16 +0000

The era of IT departments mandating specific hardware, operating systems, or technologies is quickly eroding.  In its place a new culture is growing where employees are granted more autonomy—and given more responsibility—for their own technology.

If you’ve been to enough parties you’re probably familiar with the term BYOB—a common acronym of the phrase “bring your own beer”. Well, a similar acronym has emerged in recent years as one of the hottest buzzwords in technology: BYOD, or “bring your own device”. Let’s take a deeper look at BYOD, what it is, and the forces that are driving it.

Bring Your Own Definition

The first question to ask is simply, “what is BYOD?”

In a nutshell, BYOD is the idea of allowing employees to use their own laptops, smartphones, tablets, or other devices in a work environment.  Instead of the IT department mandating specific hardware or technologies, users are free to use the platforms and gadgets they prefer.

BYOD vs. Consumerization of IT

BYOD is often confused with another trend – the consumerization of IT. Though related, the two have different focal points.  Consumerization refers to consumer technology crossing over into the workplace, with product features and functions optimized for consumer needs. Broadly, this means that IT departments must manage devices that were never designed for enterprise management requirements.

BYOD is part of consumerization in that it involves using consumer technologies in a work setting, but the focus is on the employee using devices originally purchased for personal use.  Because the devices are not employer purchased or owned, it raises significant questions about maintenance, as well as some tough policy questions concerning data and applications on the device.

Origins of BYOD

The popular perception is that the BYOD revolution was sparked by the advent of Apple’s iPhone. The iPhone, and subsequently the iPad, are certainly catalysts that have accelerated the adoption of BYOD policies in many organizations, but the concept of users wanting to choose their own devices, or to use their own personal PCs to get work done, predates these devices – it is just that the percentage of such devices in use has recently grown significantly.  Corporate philosophy has had much to do with driving BYOD as well.

Companies’ IT support policies have been pushing employees to be more independent and autonomous for decades.  For years, IT pros have opted to upgrade sooner and self-manage in order to get the benefits of new product versions. It is frustrating for employees to know that a given task could be accomplished faster or more easily with a different web browser, operating system, or application, yet to be handicapped by the list of “supported products” dictated by the IT department.

In the wake of those traditional policies, mobility entered the picture for information workers.  Instead of being tethered to a desk in a cubicle, workers are increasingly getting work done remotely – from home offices, corner coffee shops, airports, and hotel rooms. Users outside the office don’t have the same access to IT resources or support, and that has further fostered the need to be self-reliant.

Even in organizations where the IT department still mandates specific operating systems, hardware platforms, and mobile devices, rogue employees have worked around those requirements to get the job done. Nomadic employees embrace the concept of being independent and autonomous, and manifest it by sometimes ignoring company policy and choosing the tools that help them be more effective, and work more efficiently.

Pros and Cons

BYOD comes with distinct advantages, as well as unique drawbacks for both organizations and individuals. From the standpoint of the IT department, BYOD is generally seen as a cost-cutting measure because the burden of supplying the equipment is shifted to the employees. Some organizations subsidize BYOD policies with a per diem to offset the costs for users, but it still results in lower costs by relieving IT of its traditional role of maintenance and support.

Another advantage of BYOD is that individuals tend to upgrade and embrace new platforms and technologies much faster than businesses. The organization benefits from being able to take advantage of cutting edge tools and features without the pain of deploying a hardware refresh to the entire company.

From the user’s perspective, BYOD means using devices and applications that are familiar and comfortable. Being able to choose their own hardware and platforms makes for more satisfied and productive workers.

There are also some significant downsides to consider, though. The organization has to address the fact that rogue devices outside of the control of the IT department might connect with corporate data and network resources, and the users have to accept the fact that BYOD comes with some policies that may limit their freedom with their own device.

BYOD Risks

There are some hurdles that organizations need to clear in order to implement BYOD effectively. The risks associated with allowing users to bring their own computers or mobile devices into the work environment vary depending on geographic region, the industry a company operates in, and even the specific job role within a company.

Businesses that operate in specific industries—like healthcare or finance—fall under strict regulatory compliance mandates. SOX, HIPAA, GLBA, PCI-DSS, and other compliance frameworks outline which data must be protected, and provide basic guidelines for how that data should be protected. The obligation to comply with these directives doesn’t change just because the data is moved from company-owned equipment to employee-owned devices in a BYOD situation.

There are frequent reports of sensitive customer or employee data being compromised because a laptop was taken from an unlocked car, or because an employee left a smartphone in a taxi. IT admins need to have BYOD policies in place to protect data no matter where it resides – even on devices that aren’t owned or managed by the company.
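
To make that last point concrete, here is a minimal sketch in Python of the kind of compliance gate such a policy implies: a personally owned device gets access to corporate data only if it meets a security baseline. It is purely illustrative – the attribute names and rules are invented for this example, not taken from any particular management product.

    from dataclasses import dataclass

    @dataclass
    class Device:
        owner: str
        encrypted: bool    # device/storage encryption enabled
        screen_lock: bool  # PIN or password required to unlock
        remote_wipe: bool  # enrolled for remote wipe if lost or stolen
        os_patched: bool   # OS at or above the minimum patch level

    def may_access_corporate_data(device: Device) -> bool:
        """Grant access to corporate mail and files only if the device
        meets every baseline requirement in the (hypothetical) policy."""
        return all([
            device.encrypted,
            device.screen_lock,
            device.remote_wipe,
            device.os_patched,
        ])

    phone = Device("alice", encrypted=True, screen_lock=True,
                   remote_wipe=True, os_patched=False)
    print(may_access_corporate_data(phone))  # False - patch level too old

In practice this logic lives in a mobile device management system rather than a script, but the policy idea – encryption, screen lock, remote wipe, patch level – is the same.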

The challenges of BYOD are not necessarily a reason to ban the practice altogether, though. The trend has significant momentum, and there are a number of benefits for both companies and users. The trick is for both to understand the advantages, as well as the issues, and to employ BYOD in a way that works for everyone.

Join me for the next part of this BYOD series in a few days, when I dig into a deeper look at BYOD from the employee perspective.

Best regards, Jeff (@securityjones)

Windows 8 Release Preview Available for Download http://approjects.co.za/?big=en-us/security/blog/2012/05/31/windows-8-release-preview-available-for-download/ Thu, 31 May 2012 20:28:00 +0000

Today on the Building Windows 8 blog, Microsoft announced the availability of the Windows 8 Release Preview.  (Read the press release here.)

There are a couple of items of note to us here in the land of Trustworthy Computing:

  • New Family Safety features and enriched privacy and security controls when browsing online; Internet Explorer 10 is the first browser to turn Do Not Track “on” by default, giving customers more choice and control over their privacy

The Release Preview itself is available for download at http://preview.windows.com.

Finally, here are some other resource links for you.

Developers

Business

I’ll be downloading this release and installing it myself on a machine at home tonight. (I am so excited!)

Profile of A Global Cybercrime Business – Innovative Marketing http://approjects.co.za/?big=en-us/security/blog/2010/03/25/profile-of-a-global-cybercrime-business-innovative-marketing/ Thu, 25 Mar 2010 17:10:23 +0000

(Reuters) – Hundreds of computer geeks, most of them students putting themselves through college, crammed into three floors of an office building in an industrial section of Ukraine’s capital Kiev, churning out code at a frenzied pace. They were creating some of the world’s most pernicious, and profitable, computer viruses.

According to court documents, former employees and investigators, a receptionist greeted visitors at the door of the company, known as Innovative Marketing Ukraine. Communications cables lay jumbled on the floor and a small coffee maker sat on the desk of one worker.

As business boomed, the firm added a human resources department, hired an internal IT staff and built a call center to dissuade its victims from seeking credit card refunds. Employees were treated to catered holiday parties and picnics with paintball competitions.

Top performers got bonuses as young workers turned a blind eye to the harm the software was doing. “When you are just 20, you don’t think a lot about ethics,” said Maxim, a former Innovative Marketing programmer who now works for a Kiev bank and asked that only his first name be used for this story. “I had a good salary and I know that most employees also had pretty good salaries.”

In a rare victory in the battle against cybercrime, the company closed down last year after the U.S. Federal Trade Commission filed a lawsuit seeking its disbandment in U.S. federal court.

An examination of the FTC’s complaint and documents from a legal dispute among Innovative executives offer a rare glimpse into a dark, expanding — and highly profitable — corner of the internet.

Innovative Marketing Ukraine, or IMU, was at the center of a complex underground corporate empire with operations stretching from Eastern Europe to Bahrain; from India and Singapore to the United States. A researcher with anti-virus software maker McAfee Inc who spent months studying the company’s operations estimates that the business generated revenue of about $180 million in 2008, selling programs in at least two dozen countries. “They turned compromised machines into cash,” said the researcher, Dirk Kollberg.

The company built its wealth pioneering scareware — programs that pretend to scan a computer for viruses, and then tell the user that their machine is infected. The goal is to persuade the victim to voluntarily hand over their credit card information, paying $50 to $80 to “clean” their PC.

Scareware, also known as rogueware or fake antivirus software, has become one of the fastest-growing, and most prevalent, types of internet fraud. Software maker Panda Security estimates that each month some 35 million PCs worldwide, or 3.5 percent of all computers, are infected with these malicious programs, putting more than $400 million a year in the hands of cybercriminals. “When you include cost incurred by consumers replacing computers or repairing, the total damages figure is much, much larger than the out of pocket figure,” said Ethan Arenson, an attorney with the Federal Trade Commission who helps direct the agency’s efforts to fight cybercrime.

Groups like Innovative Marketing build the viruses and collect the money but leave the work of distributing their merchandise to outside hackers. Once infected, the machines become virtually impossible to operate. The scareware also removes legitimate anti-virus software from vendors including Symantec Corp, McAfee and Trend Micro Inc, leaving PCs vulnerable to other attacks.

When victims pay the fee, the virus appears to vanish, but in some cases the machine is then infiltrated by other malicious programs. Hackers often sell the victim’s credit card credentials to the highest bidder.

Read the Full Article on Reuters.com

Common Objections – Comparing Linux Distros with Windows http://approjects.co.za/?big=en-us/security/blog/2007/01/29/common-objections-comparing-linux-distros-with-windows/ Mon, 29 Jan 2007 21:32:24 +0000

Once again, my effort to explore common misperceptions (more recently exploring unpatched statistics) has brought out some of the common objections from those that don’t necessarily like the results.  Very rarely do I get comments that can find a substantive problem with the analysis – instead the arguments tend to be detailed variations of “your comparison is not fair.”  Now, never mind that the “common perception” I am typically exploding was an even less fair comparison… ah, but let’s not digress.

First, let me say that these types of objections are not new.  This past year, when I did year-to-date vulnerability comparisons – Windows vs Linux – Server – 1H06 and later Windows vs Linux – Workstation Comparison – Q3 2006 – similar objections arose.  This is one reason that I keep a link to Apples, Oranges and Vulnerability Metrics on my home page, which is a decent introduction to the issues.  Also, note that in the Q3 comparison, I did take pains to filter out “optional” components in the Linux distros and included only packages that had a comparable counterpart on Windows.
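
For those curious what that filtering step looks like mechanically, it boils down to something like the Python sketch below. The data is invented – hypothetical advisory IDs and an arbitrary package mapping, not my actual dataset or scripts.

    # Hypothetical records: (advisory_id, package, patched?)
    advisories = [
        ("CVE-2006-0001", "kernel",  True),
        ("CVE-2006-0002", "openssh", False),
        ("CVE-2006-0003", "gimp",    False),  # optional app, no Windows counterpart
        ("CVE-2006-0004", "firefox", True),
    ]

    # Count only packages with a comparable component in the Windows product.
    comparable = {"kernel", "openssh", "firefox"}

    in_scope  = [a for a in advisories if a[1] in comparable]
    unpatched = [a for a in in_scope if not a[2]]

    print(f"in scope: {len(in_scope)}, unpatched: {len(unpatched)}")
    # in scope: 3, unpatched: 1

The point of the filter is simply to avoid counting vulnerabilities in optional packages that have no counterpart in the product on the other side of the comparison.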

One comment I want to address came from Mike Howard’s blog, where he had given my recent analysis a shout-out.  Here are Joe’s comments, each followed by my response:

Jeff’s numbers are off.

The numbers are not off – they are as accurate as possible, given the assumptions and methodology that are laid out as part of the analysis.  Joe’s further comments appear to try and justify this statement, so we continue.

Linux distributions are made up of third party software. Thus the number of vulnerabilities are not RedHat vulnerabilities, but vulnerabilities in third party software. RedHat didn’t *create* these vulnerabilities. However, since they ship the software, they have to provide updates.

Accurate statements in isolation, but I don’t see the relevance, nor do I see how they support the assertion that the numbers are “off”.  Linux distributions are made up of many components originally developed and sometimes maintained by different groups.  So?  Red Hat has their own criteria for pulling together an Enterprise Product offering and then committing to their customers to support it for seven years.  Does an end customer *care* who developed a RHEL component any more than they care which internal group, or set of external contractors, developed a Windows component?  In my experience, they do not.  More importantly, it doesn’t change the facts about disclosed and unpatched issues on the deployed OS in terms of contribution to risk.

Imagine a customer looking at a decision to standardize their desktop on a single platform for the next 5 years between Windows XP and RHEL4WS.  Assume Enterprise support is important for that five year period, which is why they’re looking at RH and not Debian (if you need an explanation).  If they’d like to get an accurate view of which product has more publicly disclosed, but unpatched issues, my original assertion was that the easily accessible data was misleading (thus the analysis and articles exploring this issue).

So comparing Windows with Linux doesn’t work. Microsoft does not issue patches for Adobe, Sun’s Java, Winzip, Quicktime, Firefox, Nero, Roxio, Cisco, etc.

I hold that it does work.  I will agree that if a particular user wanted a more detailed comparison of his exact and full software stack, more work would be required.  However, pointing out that more detailed work could be done does not invalidate a basic analysis with simpler assumptions concerning the vendor-supplied and supported product.

The core of a linux distribution is the kernel, which is written by Linus. The kernel is useless without the userland and 3rd party software.

So to summarize:

RedHat is a collection of 3rd party software on top of the linux kernel, most of which is not written by RedHat.

Microsoft is a complete OS, all software shipped is written by Microsoft.

Apples and oranges.

The core of these comments seems to hinge on whether a single vendor wrote the code.  Again, I grant the accuracy of the underlying statements (i.e., the kernel is useless without the third-party components), but I don’t see the relevance.  I was not comparing vendors, but supported products.

No, they are not exactly apples to apples, due to vendor choices.  Honestly, a comparison of Ubuntu LTS and RHEL4WS also would not be apples to apples.  However, does the fact that one vendor might by default deploy a bunch of software that many folks don’t need *add* to the security argument for that product or *detract* from it?  One might argue that the inclusion of many of those “optional” components in a “default” installation is a vendor choice that affects customer security … in the same way that the inclusion of IIS in the default install of Windows 2000 affected customer security 5 years ago.

Read Apples, Oranges and Vulnerability Metrics for some more thoughts about comparisons.

If you really want to compare apples and apples, one should compare Microsoft and FreeBSD or OpenBSD, since these ship a complete base OS, like Microsoft does.

I find this comment to be the most humorous one of all.  Really humorous, the more I think about it.  Microsoft is a company; FreeBSD and OpenBSD are OSes.  But getting past that, Windows XP vs FreeBSD is no more an apples-to-apples comparison than Windows XP vs RHEL4WS, or RHEL4WS vs FreeBSD.  Each of them has a different feature set and different value/benefits that it promotes to customers, and each has to accept both the positives and the negatives of the product choices made by the vendor.  One would have to make a set of assumptions to do an analysis, and readers would need to be aware of the context of the comparison in order to interpret it – exactly the same situation as here.

Moving on, I’ll respond to a different comment, this one from JNF:

Interesting, but misleading, while I don’t doubt your numbers in the least, I think its inaccurate to not point out that the majority of all of those bugs reported in windows affects everyone running windows, whereas a minority of those affecting RHEL affects everyone running RHEL.

Furthermore, you need to also ask how many patches has MS released for other peoples products? How many has RH released? How many of the bugs left unpatched in RHEL are for products created by RH or products that RH has a significant interest in (i.e. linux kernel [how many linux kernel developers work for RH?]). How many of those unpatched bugs in RHEL are being actively exploited? How many of those unpatched bugs are being actively exploited in MS products (i.e. msjet40.dll), How many of those products that RHEL has not patched are produced by third party vendors when there are no patches released by the vendor, so on and so forth.

That isn’t to say RH is not responsible for releasing patches, I’m just saying that this post is misleading because of the metrics it leaves out in its analysis- of course, all of these types of articles normally are (regardless of which side of the debate the author is on)

JNF – I think some of the discussion above helps set context here too, but primarily, I’d say that I think my results are less misleading than the easily accessible data that was available prior to my analysis efforts.  I have heard people specifically point out that Secunia shows RHEL has “zero unpatched issues” and then ask why Microsoft can’t achieve the same.  That is not just misleading, but specifically inaccurate.

I do think that several of your other questions are interesting and might be interesting to pursue as separate research, but I don’t think the fact that I didn’t answer questions that I wasn’t trying to answer invalidates the results for the question I was focusing on – accuracy of unpatched data for Linux distros as represented by RHEL.

Best regards and thanks for taking the time to comment!    ~Jeff

Linus’s Law aka “Many Eyes Make All Bugs Shallow” http://approjects.co.za/?big=en-us/security/blog/2006/06/07/linuss-law-aka-many-eyes-make-all-bugs-shallow/ Wed, 07 Jun 2006 07:37:00 +0000

How many of you have heard “many eyes make all bugs shallow”?  My guess is that many of you have, and that it may have been in conjunction with an argument supporting why Linux and Open Source products have better security.  For example, Red Hat publishes a document at www.redhat.com/whitepapers/services/Open_Source_Security5.pdf, commissioned from TruSecure (www.trusecure.com), which has a whole section called “Strength in Numbers: The Security of ‘Many Eyeballs’” and says:

The security benefits of open source software stem directly from its openness. Known as the “many eyeballs” theory, it explains what we instinctively know to be true – that an operating system or application will be more secure when you can inspect the code, share it with experts and other members of your user community, identify potential problems and create fixes quickly.

It reads pretty well, but there are a few small problems.  First, nothing really ties the second sentence (the key one) back to the first.  Second, the ability to inspect the code (“can”) does not confirm that it actually gets inspected.  Let me emphasize the point by applying similar marketing-speak to a similar claim for closed source:

 

The security benefits of closed source software stem directly from its quality processes. Known as quality assurance, it explains what we instinctively know to be true – that an operating system or application will be more secure when qualified persons do inspect the code, [deleted unnecessary] identify potential problems and create fixes quickly.

I would argue that both statements are equally true or false, depending on the reality behind the implied assumptions.  For example, if qualified people are inspecting all parts of the open source with the intent of finding and fixing security issues, it is probably true.  For the latter, if a closed source org does have a good quality process, they are likely finding and fixing more security issues than if they did not have that process.

 

Going Back to the Source:  The Cathedral and the Bazaar

 

Now I’ll ask a different question – how many of you have actually read The Cathedral and the Bazaar (CATB) by Eric S. Raymond (henceforth referred to as ESR)?  Shame on you, if you have not.  It is really interesting, and to me, it asks more interesting questions than it answers … though I’ll try not to digress too much or too far.  Keeping to the core idea I want to discuss, let’s look at lesson #8 in CATB, as quoted:

Linus was directly aiming to maximize the number of person-hours thrown at debugging and development, even at the possible cost of instability in the code and user-base burnout if any serious bug proved intractable. Linus was behaving as though he believed something like this:

 

8. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.

 

Or, less formally, “Given enough eyeballs, all bugs are shallow.” I dub this: “Linus’s Law”.

 

Even these statements have some implicit assumptions (i.e., that the code churn doesn’t cause new problems quicker than the old ones are solved), but as I read through the lead-in context and rule #8, I can’t find anything to disagree with.  What I will note is that nothing in this limits his observation to Open Source.  Because many later references use the less formal “given enough eyeballs” paraphrase, it does mentally prompt one to think about visual inspection; however, the original lesson doesn’t refer to visual inspection at all!

 

Though ESR was making observations and drawing lessons from Linus’ Linux experience and his own fetchmail experience, I assert that his lessons can be applied more broadly to any software.  Going a bit further in the text, we find another important part of the discussion:

My original formulation was that every problem “will be transparent to somebody”. Linus demurred that the person who understands and fixes the problem is not necessarily or even usually the person who first characterizes it. “Somebody finds the problem,” he says, “and somebody else understands it. And I’ll go on record as saying that finding it is the bigger challenge.”

 

So, in finding and fixing issues, you need:

  • “Many eyes” identifying the issues, or a large enough beta-tester base (to take from lesson #8) so that almost every problem will be characterized, and

  • Enough developers working on fixing issues so that a fix can be developed and deployed

 

ESR chronicles a lot of interesting stuff in CATB and enumerates his lessons, but one key enabler he does not elaborate upon is the ability to communicate cheaply and quickly with his users/co-developers.  At the time, he used a mailing list.  UUCP news and public file servers were also available for communication and for sharing code and files.  What did this allow?  It allowed him to pretty easily find and connect with the roughly 300 people in the Western developed nations who shared his interest in an improved POP client, fetchmail.  Even 10 years earlier, this would have been much more difficult.  But I digress too much … suffice it to say that cheap and easy communication and sharing made a distributed, volunteer, virtual team possible.

 

Applying the “Many Eyes” Lessons To Commercial Software

 

8. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.

 

ESR contrasted two testing models.  Rather than paraphrase, it seems simplest to quote what he says next in CATB:

In Linus’s Law, I think, lies the core difference underlying the cathedral-builder and bazaar styles. In the cathedral-builder view of programming, bugs and development problems are tricky, insidious, deep phenomena. It takes months of scrutiny by a dedicated few to develop confidence that you’ve winkled them all out. Thus the long release intervals, and the inevitable disappointment when long-awaited releases are not perfect.

 

In the bazaar view, on the other hand, you assume that bugs are generally shallow phenomena—or, at least, that they turn shallow pretty quickly when exposed to a thousand eager co-developers pounding on every single new release. Accordingly you release often in order to get more corrections, and as a beneficial side effect you have less to lose if an occasional botch gets out the door.

 

Now, in my experience with commercial products, I can honestly say I never thought of development problems as either deep or shallow.  I thought of flaws as lying across a spectrum, where some were simpler and easier to find and others might be deeper, with more challenging preconditions to replicate (e.g. timing, state).  I think that would apply to open or closed source.

 

So, ultimately, my analysis of what ESR describes is different in that I see the key difference as time and resources.  The Bazaar model (as he described) created a situation where more resources for both finding and fixing bugs were applied in parallel.  The Cathedral model (as he described) had (by implication) fewer resources that (therefore) needed to work over a longer period of time to achieve a similar level of quality.  This resource analysis makes sense to me, especially if you leave the models out of the equation for a moment.

 

Let’s step back.  What if you had an Open Source project working on a product where there were 5 core developers and about 20 co-developing users?  What if you had a comparable Closed Source project with 50 developers and 50 testers?  Assume both products have 500 active users over a one-year period reporting problems and requesting enhancements.  Does it seem likely that the Open Source project will find and fix more bugs simply because it is Open Source?  No.  The number of “eyes” matters, but so does the number of actively contributing developers.  This is consistent with what ESR says (…a large enough beta-tester and co-developer base…), but is not consistent with the common usage of the “many eyes” theory as quoted frequently in the press.
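
To make the resource argument concrete, here is a toy throughput model in Python. The rates are invented for illustration – this is my sketch, not anything from CATB – but it captures the idea that each period testers surface bugs and developers fix the ones that have been characterized.

    def bugs_resolved(testers: int, developers: int, periods: int,
                      find_rate: float = 0.2, fix_rate: float = 1.5,
                      backlog: int = 200) -> int:
        """Toy model: testers surface bugs from the remaining backlog,
        developers fix from the pile of characterized bugs; throughput
        is limited by whichever resource is scarcer."""
        unknown, known, fixed = backlog, 0, 0
        for _ in range(periods):
            found = min(unknown, int(testers * find_rate))
            unknown -= found
            known += found
            done = min(known, int(developers * fix_rate))
            known -= done
            fixed += done
        return fixed

    # 5 core devs + 20 co-developing users vs. 50 devs + 50 testers
    print(bugs_resolved(testers=20, developers=5, periods=12))   # 48
    print(bugs_resolved(testers=50, developers=50, periods=12))  # 120

Notice that nothing in the model is “open” or “closed” – only the tester and developer counts differ, and with these made-up numbers the better-resourced project resolves far more bugs.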

 

How can commercial companies apply this?  First, set up a process that facilitates reasonably frequent releases to large numbers of active users who will find and report problems.  Next, ensure that you have enough developers to fix the reported issues that meet your quality bar.  CATB also identifies the need for problem reports to be communicated efficiently enough that the problems are easy to replicate and developers can solve them quickly.  Finally, there are several more rules about being customer-focused, which any product manager would endorse:

7. Release early. Release often. And listen to your customers.

10. If you treat your beta-testers as if they’re your most valuable resource, they will respond by becoming your most valuable resource.

11. The next best thing to having good ideas is recognizing good ideas from your users. Sometimes the latter is better.

 

The “Many Eyes” of Microsoft

 

Finally, I would like to think about these issues in the context of how Microsoft currently releases products.

 

First, let’s take the core “many eyes” idea and consider “Given a large enough beta-tester and co-developer base…”  In CATB, ESR mentions that at its high point he had over 300 people on his active mailing list contributing to the feedback process.  There are multiple levels at which Microsoft’s product lifecycle works towards achieving many eyes.

 

Furthest from the development process are the millions and millions of users.  Take a product like Windows Server 2003, the next generation of Windows 2000 Server, and you find it has benefited from every bug report from every user in terms of (what Linus described as the harder problem) making bugs shallow.  In the more recent product generations, the communication process has been advanced in a technical way by Windows Error Reporting (WER, aka Watson) and Online Crash Analysis (OCA).  Vince Orgovan and Will Dykstra gave a good presentation on the benefits of WER/OCA at WinHEC 2004.  OCA also addresses another problem raised by CATB – that of communicating sufficient detail so a developer can properly diagnose a problem.  One might argue that a large percentage of users do not choose to send crash details back to Microsoft for analysis, and that brings us to the next item – Betas.

 

Microsoft releases Beta versions of products that see very high levels of day-to-day usage before final release.  When Windows XP SP2 was developed and released on a shortened one-year schedule, it benefited from over 1 million Beta users during the process – each one using and installing their own combination of shareware, local utilities, custom-developed applications and legacy applications on thousands of combinations of hardware.  I’ve been running Windows Defender (aka Antispyware), along with many other users, for about 1.5 years now, through several releases.

 

Even before the external Beta stage, Microsoft employees are helping the product teams “release early and often” by dogfooding products internally.  Incidentally, I am writing this entry using Office 12 Beta running on Windows Vista Beta 2.  Dogfood testers may not seem like a lot until you consider that there are 55,000 employees and well over half of them will probably dogfood most major products.  High numbers of dogfood testers will certainly exercise OCA, and will also run the internally deployed stress tools to try to help shake bugs out of the products.

 

There are other mechanisms I won’t go into in detail like customer councils, focus groups, customer feedback via support channels, feature requests, not to mention the Product Development and Quality Assurance teams themselves utilizing a variety of traditional and modern tools to find and fix issues.  The core process has even been augmented as described in The Trustworthy Computing Security Development Lifecycle to include threat modeling and source code annotation.

 

I could go on, but I think you get the picture.  Hopefully, this will stir some folks to think beyond the superficial meaning of “many eyes make all bugs shallow” the next time someone throws it out as a blind attempt to assert the superior security of Open Source.

 

Jeff

The Importance of the “Evaluated Configuration” in Common Criteria Evaluations http://approjects.co.za/?big=en-us/security/blog/2006/05/24/the-importance-of-the-evaluated-configuration-in-common-criteria-evaluations/ Wed, 24 May 2006 21:56:00 +0000

How many of you have heard of the Common Criteria?  If you’ve ever done security work with government, you probably have.  If not, then possibly not.  Either way, read on and I’ll give you my own view, including some of the barnacles clinging to the hull of the general program.

Common Criteria Background

Way back in the depths of computing history, government departments used to issue requests for proposal (RFPs) for computers having certain specific security requirements.  Commercial-off-the-shelf (COTS) systems could not meet these requirements.  This resulted in very expensive proposals for building (largely unsupported) custom systems.  Worse, from a security perspective, the security requirements weren’t always necessarily self-consistent or supportive of actually maintaining good security.  Finally, worst of all, do you think the support for these custom beasts was all that great, compared to the general systems serving millions of customers?  Not so much…

In an effort to alleviate the problems related to this process, various governments came up with schemes to try to get vendors to build security into their “normal” offerings.  In the US, this resulted in the Orange Book, aka the Trusted Computer Systems Evaluation Criteria, as well as an NSA-managed process for getting systems evaluated.  Other countries had similar, but not identical, schemes and criteria, such as the Canadian Trusted Computer Product Evaluation Criteria (CTCPEC) and the European Information Technology Security Evaluation Criteria (ITSEC).  Fast forward, with a lot of international cooperation between security groups in various governments (as described in this 1998 NIST newsletter), and you get the Common Criteria, a Mutual Recognition Agreement, and at least 22 officially participating countries.  This is unprecedented security goodness and has resulted in, if nothing else, many ex-government folks that received some good basic computer security training.

Assurance and Features

One of the keys to solving the historical custom-computer problem was bringing security experts together to define the necessary set of minimal internally consistent requirements needed for better security.  For example, what good was an audit system if an admin could alter it to hide shenanigans?  This resulted in a criteria defining two key concepts: security assurance and security features. 

In the Orange Book, the evaluation levels tied these together.  A high assurance system would have mandatory access controls evaluated, or it would fail, for example.  The ITSEC did not follow this paradigm and separated features from assurance.  In theory, this would allow a very simple, high-assurance system to be designed, developed and evaluated.

The Common Criteria took a further step forward by allowing separate Protection Profiles to be developed for different types of products, and Security Targets for individual evaluations.  This meant you could evaluate the assurance and features for a wide variety of products – for example, smart cards – as long as an accepted Protection Profile was developed by the participating authorities.  By separating assurance from features, the newer program gained a lot of flexibility.

Flexibility and “Playing the System”

Advances frequently come with trade-offs, though.  Let’s run through a theoretical scenario.  Let’s say I develop my own OS distribution, JeffOS,  and I would like to sell to governments that require a Common Criteria certification.  This is a cost of doing business to me.  I want to minimize that cost in order to maximize my profit.  How might I minimize my costs?

1. Pick the easiest process/country.  There are now several countries, each with slightly different oversight of the process.  Are they all equally rigorous?  Might I find one that is a little “easier” to work with than the others?  Perhaps not, but I owe it to my investors to check, yes?

2. Pick the cheapest evaluation team.  These teams are in business, yes?  Can I get one of them to commit to a fixed-price contract?  Won’t they be less likely to add new requirements if they’re on a fixed contract?  Seems like a good approach.

3. Finally, maybe I should evaluate fewer components.  The cost of generating evaluation evidence probably scales with this.  Also, imagine if they needed me to change a design to comply with security requirements – that could be costly.

I think I’ll do all three of these; it only makes sense.  The last one, though, shows real promise.  At the end of the day, as long as some of JeffOS is evaluated, I’ll get a certificate and can market the certification, right?  I think I’ll just evaluate the kernel and the bare minimum set of drivers and utilities.  No graphics.  No complicated network protocols, just the basics.

This is Really How it Works

If your mind boggles at the above imaginary scenario, I feel your pain, but it makes total sense from a vendor perspective, doesn’t it?  I can tell you that in my early days as an evaluator, every vendor we worked with came to us at some point and said “just tell me the minimum changes I have to make in order to pass the evaluation.”

The real weakness is if all vendors play the system this way, then none of them have to do better.  I mean, I don’t have to have my implementation of DNS evaluated as part of JeffOS if the Red Hat and SUSE evaluated systems don’t include it either.  They don’t have it, I don’t have it, so the customer has an equal choice either way, and will have an approximate equal starting point for their site Certification and Accreditation process.

Advantage through Evaluated Configuration

Recognizing how this works, a really smart vendor might decide to change the game to their advantage.  How?  By doing the extra work and investing the extra expense to evaluate more useful systems – in other words, by specifying a more useful evaluated configuration.  Look at the following two (theoretically) evaluated systems:

  • JeffOS Evaluated Configuration: kernel, shell, basic networking, X-Windows, DHCP, DNS, Apache 2.0, MySQL

  • Red Hat Evaluated Configuration: kernel, shell, basic networking

If a customer intended to deploy a Web Server in their environment, assuming assurance level and protection profiles were equal, wouldn’t JeffOS have a real advantage?  It seems like it would to me. 
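
The customer’s question reduces to a simple coverage check.  Here is a sketch in Python – the component lists are the theoretical ones from above, nothing more:

    evaluated = {
        "JeffOS":  {"kernel", "shell", "basic networking", "X-Windows",
                    "DHCP", "DNS", "Apache 2.0", "MySQL"},
        "Red Hat": {"kernel", "shell", "basic networking"},
    }

    # Components a typical web server deployment would rely on.
    deployment = {"kernel", "shell", "basic networking", "DNS", "Apache 2.0"}

    for product, config in evaluated.items():
        outside = deployment - config
        if outside:
            print(f"{product}: outside the evaluation: {sorted(outside)}")
        else:
            print(f"{product}: deployment fully covered by the evaluation")

For the web server scenario, the second configuration leaves DNS and the web server itself outside the evaluation – exactly the gap the customer would have to address in their own site Certification and Accreditation.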

If I were a customer, I’d be telling my vendors about this and getting them to compete for my business not just by evaluating systems, but by evaluating systems with useful configurations.

Think Security ~ Jeff

Washington Post – A Time to Patch III: Apple http://approjects.co.za/?big=en-us/security/blog/2006/05/05/washington-post-a-time-to-patch-iii-apple/ Fri, 05 May 2006 20:45:00 +0000

You’ve probably already read Brian Krebs’s article A Time to Patch III: Apple, but if you haven’t, I encourage you to read it along with the various responses he received – the responses run the gamut of

  • Linux advocates (“You do understand that Mac OS X is not a version of Linux, and is not an open source OS in the usual sense of the word?”),
  • conspiracy theorists (“…This sounds much more like Microsoft propaganda…”),
  • open source advocates (“… finally pointing out that Apple is a company that’s even more protective of its intellecual property than Microsoft …”)
  • existentialists (“… In fact, I have been using Macintoshes heavily since 1984 and I’ve never had a single security problem.”)
  • allegoricists (“…Potentially, an envelope I lick to seal could have LSD on it.”)
  • poor analogies (“…Over the years in a far away country, fires have increasingly ravaged …”)
  • better analogies (“…Imagine someone traveling to a small town and learning …”)

and many, many more.  Good reading and entertaining at the same time.  Brian even provides spreadsheets with his data and links to sources.

When I read this, I thought to myself “What if this article was about Microsoft?” – would the responses have been different?  “What if the article was about Linux?”  Sun?  Oracle?  I think it is clear from the emotional responses that the data matters less to some people than their belief system – and that’s not good for security!

Here’s the question I ask myself.  If I had one system that housed my critical business information (say customer credit cards) and I believed there were attackers who might target me to get that information, then wouldn’t I want to know how many vulnerabilities there are and how long a vendor might leave them unpatched?  I would.  If I was basing a 5-10 year business decision in part on security criteria, I certainly would (among many other things…). 
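
The “how long unpatched” part is straightforward to compute once you have disclosure and patch dates, which is essentially what Krebs tabulated.  Here is a sketch in Python with invented dates (not Brian’s data):

    from datetime import date

    # Hypothetical (disclosed, patched) pairs for one vendor's product;
    # a patch date of None means the issue is still unpatched.
    issues = [
        (date(2006, 1, 10), date(2006, 2, 14)),
        (date(2006, 2, 3),  date(2006, 4, 11)),
        (date(2006, 3, 20), None),
    ]

    today = date(2006, 5, 5)

    # An unpatched issue keeps accruing days of exposure until it is fixed.
    windows = [((fixed or today) - disclosed).days for disclosed, fixed in issues]

    print(f"average days of exposure: {sum(windows) / len(windows):.0f}")   # 49
    print(f"unpatched right now: {sum(1 for _, f in issues if f is None)}")  # 1

This is also why a vendor’s “zero unpatched issues” snapshot on any given day can look much better than its average time to patch.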

Of course, I would also consider the threat of a virus and the threat of a targeted attack as two discrete risk issues and not muddle them together… but that’s for another day.
