New research, tooling, and partnerships for more secure AI and machine learning http://approjects.co.za/?big=en-us/security/blog/2023/03/02/new-research-tooling-and-partnerships-for-more-secure-ai-and-machine-learning/ Thu, 02 Mar 2023 16:00:00 +0000

Today we’re on the verge of a monumental shift in the technology landscape that will forever change the security community. AI and machine learning may embody the most consequential technology advances of our lifetime, bringing huge opportunities to build, discover, and create a better world.

Brad Smith recently pointed out that 2023 will likely mark the inflection point for AI going mainstream, the same way we think of 1995 for browsing the internet or 2007 for the smartphone revolution. And while Brad outlines some major opportunities for AI across industries, he also calls out the deep responsibility involved for those who develop these technologies. One of the biggest opportunities is also a core responsibility for us at Microsoft – building a more secure digital future. AI has the incredible potential to reshape our security landscape and protect organizations and people in ways we have been unable to do in the past.

With all of AI’s potential to empower people and organizations, it also comes with risks that the security community must address. It is imperative that we as an industry and global technology community get this journey right, and that means looking at AI with diverse perspectives, taking it slowly, working with our partners across government and industry – and sharing what we’re learning.

At Microsoft, we’ve been working on the challenges and opportunities of AI for years. Today we’re sharing some recent developments so that the community can be better informed and better equipped for a new world of AI exploration:

  • New research: A dedicated AI Security Red Team within Microsoft Threat Intelligence explored how traditional software threats affect AI and how security professionals, developers, and machine learning engineers should think about securing and monitoring AI and machine learning models. This team will continue to research and test security in AI and machine learning as we learn more as a company and as an industry.
  • New tools for defenders: Microsoft recently released Counterfit, an open-source automation tool for security testing of AI systems. The tool is designed to help organizations conduct AI security risk assessments and ensure that the algorithms used in their businesses are robust, reliable, and trustworthy. As of today, Counterfit is part of MITRE’s new Arsenal plug-in.
  • Industry collaboration to help secure the AI supply chain: We worked with Hugging Face, one of the most popular machine learning model repositories, to mitigate threats to AI and machine learning frameworks by collaborating on an AI-specific security scanner. This tool will help the security community to better secure their software supply chain when it comes to AI and machine learning.

AI brings new capabilities – and familiar risks

AI and machine learning can provide remarkable efficiency gains for organizations and lift the burden from a workforce overwhelmed by data.

As an example, these capabilities can be particularly helpful in cybersecurity. There are more than 1,200 brute-force password attacks per second, and according to McKinsey, many organizations have more than 100 security tools in place, each with its own portal and alerting system to be checked daily. AI will change the way we defend against threats by improving our ability to protect and respond at the speed of an attack.

This is why AI is popular right now across industries: it provides a way to solve sophisticated problems involving complex data relationships simply by having humans label examples of inputs and outputs. It uses the inherent advantages of computing to lift the burden of massive data and speed our path to insights and discoveries.

Diagram comparing traditional programming and the AI paradigm

But with its capabilities, AI also brings some risks that organizations may not be considering. Many businesses are pulling existing models from public AI and machine learning repositories as they work to apply AI models to their own operations. But often, either the software used to build AI systems or the AI models housed in the repositories have not been moderated. This creates the risk that anyone can put up a tampered model for consumption, which can poison any system that uses the model.

There is a misconception in the security community that attacking AI and machine learning systems involves exotic algorithms and advanced knowledge of machine learning. But while machine learning may seem like math and magic, at the core it runs on bits and bytes, and like all software, it can be vulnerable to security issues.

Within the Microsoft Threat Intelligence team, we have a group that focuses on understanding these risks. The AI Security Red Team is an interdisciplinary group of security researchers, machine learning engineers, and software engineers whose goal is to proactively identify failure points in AI systems and help remediate them. The AI Security Red Team works to see how attackers approach AI and how they might be able to compromise an AI or machine learning model, so we can understand those attacks and how to get ahead of them.

The research: Old threats take on new life with AI

Recently the AI Security Red Team investigated how easy it would be for an attacker to inject malicious code into AI and machine learning model repositories. Their central question was, how can an adversary with current-day, traditional hacking skills cause harm to AI systems? This question led us to prove that traditional software attack vectors can indeed be a threat.

The security community has long known about Python serialization threats, but not in the context of AI systems. Academic researchers have warned about the lack of security practices in machine learning software, and recently there has been a wave of research looking at serialization threats specifically in the context of machine learning. MITRE ATLAS, the ATT&CK-style framework for adversarial machine learning, specifically calls out machine learning supply chain compromise. Even AI frameworks’ security documentation explicitly points out that machine learning model files are designed to store generic programs.

What has been less clear is how far attackers could take this, which is what the Microsoft AI Security Red Team explored. The AI Security Red Team routinely emulates a range of adversaries, from script kiddies to advanced attackers, to understand attack vectors against AI and machine learning systems. To answer our question, we assumed the role of an adversary whose goal is to compromise machine learning systems using only traditional hacking tools and methodology. In other words, our adversary knew nothing about specifically hacking AI.

Our exercise allowed us to assess the impact of poor encryption in machine learning endpoints, improperly configured machine learning workspaces and environments, and overly broad permissions in the storage accounts containing the machine learning model and training data – all of which can be thought of as traditional software threats.

The team found that these traditional software threats can be particularly impactful in the context of AI systems. We looked at two of the AI frameworks most widely used by machine learning engineers and data scientists. These frameworks provide a convenient way to write mathematical expressions to transform data into the required format before running it through an algorithm. The team was able to repurpose one such function, the Keras Lambda layer, to inject arbitrary code.

The security community is aware of how Python’s pickle module, which is used for serialization and deserialization of a Python object, can be abused by adversaries. Our work, however, shows that machine learning model file formats, which may not use the pickle format, are still flexible enough to store generic programs and can be abused. This also reduces the number of steps the adversary needs to include a backdoor in a model released to the internet or a popular repository.
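To make the pickle risk concrete, here is a minimal textbook-style sketch (not our red team’s tooling): deserializing attacker-controlled bytes is enough to run attacker-chosen code, with a harmless echo command standing in for an arbitrary payload.

```python
import os
import pickle

class Malicious:
    # pickle calls __reduce__ when serializing; the callable it returns
    # is invoked with the given arguments at *deserialization* time.
    def __reduce__(self):
        return (os.system, ("echo code execution on load",))

blob = pickle.dumps(Malicious())

# A victim who deserializes untrusted bytes runs the attacker's callable.
pickle.loads(blob)  # runs: echo code execution on load
```

In other words, loading a pickled artifact from an untrusted source is equivalent to running untrusted code.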

In our proof of concept, we were able to repurpose the mathematical expression processing function to load malware. An added advantage to the adversary: the attack is self-contained and stealthy; it does not require loading extra custom code prior to loading the model itself.
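As a simplified sketch of the same class of issue (not the team’s actual proof of concept), a Keras Lambda layer serializes the Python function it wraps along with the model, and that function executes when the loaded model is used. The file name and function below are invented for illustration:

```python
import tensorflow as tf

def sneaky_identity(x):
    # Any Python could live here; a real attacker would stage malware instead.
    print("attacker-controlled code running during model inference")
    return x

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Lambda(sneaky_identity),  # function body is saved with the model
])
model.save("innocuous_model.h5")  # legacy H5 format marshals the function's bytecode

# On the victim's machine, loading and using the model executes the function.
# Note: recent Keras releases refuse to deserialize Lambda layers unless the
# caller explicitly opts in (safe_mode=False), precisely because of this risk.
restored = tf.keras.models.load_model("innocuous_model.h5")
restored.predict(tf.zeros((1, 4)))
```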

New tools with Counterfit, CALDERA, and ATLAS

In security, we are constantly investing and innovating to learn about attacker behaviors and bring that human-led intelligence to our products. Our mission is to combine the diversity of thinking and experience from our threat hunters and companies we’ve integrated with (like RiskIQ and CyberX), so our customers can benefit from both hyper-scale threat intelligence as well as AI.

With our announcement today that Microsoft Counterfit is now integrated into MITRE CALDERA, security professionals can now build threat profiles to probe how an adversary can attack AI systems both via traditional methods as well as through novel machine learning techniques.

This new tool integration brings together Microsoft Counterfit, MITRE CALDERA (the de facto tool for adversary emulation), and MITRE ATLAS to help security practitioners better understand threats to ML systems. This will enable security teams to proactively look for weaknesses in AI and machine learning models and fix them before an attacker can take advantage. Now security professionals can get a holistic and automated security assessment of their AI systems using a tool that they are already familiar with.

“With the rise in real world attacks on machine learning systems that we’ve seen through the MITRE ATLAS collaboration, it’s more important than ever to create actionable tools for security professionals to prepare for these growing threats across the globe. We are thrilled to release a new adversary emulation tool, Arsenal, in partnership with Microsoft and their Counterfit team. These open-sourced tools will enhance the ability of security professionals and ML engineers across the community to test the vulnerability of their ML models through the MITRE CALDERA tools they already know and love.”

Doug Robbins, VP Engineering & Prototyping, MITRE

Investment and innovation with partners

In theory, once a machine learning model is embedded with malware, it can be posted in popular ML hosting repositories for anyone to download. An unsuspecting ML engineer could then download the backdoored ML model, which could lead to the adversary gaining a foothold in the organization’s environment.

To help prevent this, we worked with Hugging Face, one of the most popular ML model repositories, to mitigate such threats by collaborating on an AI-specific security scanner.

We also recommend a Software Bill of Materials (SBOM) for AI systems. We have amended the package URL (purl) specification to include Hugging Face, as well as MLflow. Software Package Data Exchange (SPDX) and CycloneDX, the leading SBOM standards that leverage the purl spec, allow tracking of ML models. Now any Azure ML, Databricks, or Hugging Face user leveraging Microsoft’s recommended SBOM will have the option to track ML models as part of supply chain security.
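To illustrate what that looks like in practice (the model coordinates below are hypothetical), a purl for a Hugging Face model pins the artifact to a specific revision, which is what lets SBOM tooling track it like any other dependency. A sketch using the packageurl-python library:

```python
from packageurl import PackageURL  # pip install packageurl-python

# Hypothetical model coordinates, pinned to a commit hash for illustration.
model_purl = PackageURL(
    type="huggingface",
    namespace="example-org",
    name="example-model",
    version="559062ad13d311b87b2c455e67dcd5f1c8f65111",
)

print(model_purl.to_string())
# pkg:huggingface/example-org/example-model@559062ad13d311b87b2c455e67dcd5f1c8f65111
```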

Threat Intelligence in this space will continue to be a team sport, which is why we have partnered with MITRE and 11 other organizations to empower security professionals to track these novel forms of attack via the MITRE ATLAS initiative.

Given we distribute hundreds of millions of ML models every month, corrupted artifacts can cause great harm as well as damage the trust in the open-source community. This is why we at Hugging Face actively develop tools to empower users of our platform to secure their artefacts, and greatly appreciate Microsoft’s community contributions in advancing the security of ML models.

Luc Georges, ML Engineer, Hugging Face

It’s imperative that we as an industry and global technology community are thoughtful and diligent in our approach to securing AI and machine learning systems. At Microsoft, this is core to our focus on AI and our security culture. Because emerging technology is exactly that – emerging – there are many unknowns, which is why we constantly invest and innovate to learn about attacker behaviors and bring that human-led intelligence to our products.

The reason we invest in research, tools, and industry partnerships like those we’re announcing today is so we can understand the nature of what those attacks would entail, do our best to get ahead of them, and help others in the security community do the same. There is still so much to learn about AI, and we are continuously investing across our platforms and in red-team-like research to learn about this technology and to help inform how it will be integrated into our platform and products.

Recommendations and resources

The following recommendations for security professionals can help minimize the risks for AI and ML systems:

  1. Encourage ML engineers to inventory, track and update ML models by leveraging model registries. This will help with keeping track of the models in an organization and their software dependencies.
  2. Apply existing security best practices to AI systems. This includes sandboxing the environment running ML models via containers and virtual machines, network monitoring, and firewalls. We have outlined guidance to help you get started. By doing this, we treat AI assets as yet another crown jewel that security teams should protect from adversaries.
  3. Leverage MITRE ATLAS to understand threats to AI systems, and emulate them using Microsoft Counterfit via MITRE CALDERA. This will help security analysts ground their effort in a realistic, numbers-driven approach to protecting AI systems.

This proof of concept that we pursued is part of a broader investment at Microsoft to empower the wide range of stakeholders who play an important role in securely developing and deploying AI systems:

  • For security analysts to orient themselves with threats against AI systems, Microsoft, in collaboration with MITRE, released the Adversarial ML Threat Matrix, an ATT&CK-style framework complete with case studies of attacks on production machine learning systems, which has evolved into MITRE ATLAS.
  • For security professionals, Microsoft open-sourced Counterfit to help with assessing the posture of AI systems.
  • For security incident responders, we released a bug bar to systematically triage attacks on ML systems.
  • For ML engineers, we released a checklist to complete AI risk assessment.
  • For developers, we released threat modeling guidance specifically for ML systems.
  • For engineers and policymakers, Microsoft, in collaboration with Berkman Klein Center at Harvard University, released a taxonomy documenting various machine learning failure modes.
  • For the broader security community, Microsoft hosted the annual Machine Learning Evasion Competition.
  • For Azure machine learning customers, we provided guidance on enterprise security and governance.

Contributors: Ram Shankar Siva Kumar with Gary Lopez Munoz, Matthieu Maitre, Amanda Minnich, Shiven Chawla, Raja Sekhar Rao Dheekonda, Lu Zhang, Charlotte Siska, Sudipto Rakshit.

Join us at InfoSec Jupyterthon 2022 http://approjects.co.za/?big=en-us/security/blog/2022/11/22/join-us-at-infosec-jupyterthon-2022/ Tue, 22 Nov 2022 18:00:00 +0000

Notebooks are gaining popularity in InfoSec. Used interactively for investigations and hunting or as scheduled processing jobs, notebooks offer plenty of advantages over traditional security operations center (SOC) tools. Sitting somewhere between scripting/macros and a full-blown development environment, they offer easy entry to data analyses and visualizations that are key to modern SOC engagements.
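As a toy example of that low barrier to entry, the kind of ad-hoc triage a notebook makes easy might look like the following (the CSV file and its column names are hypothetical stand-ins for whatever your SIEM exports):

```python
import pandas as pd

# Hypothetical sign-in export; column names will differ per data source.
logons = pd.read_csv("signin_logs.csv", parse_dates=["TimeGenerated"])
failed = logons[logons["ResultType"] != 0]  # non-zero result code = failed sign-in

summary = (
    failed.groupby("UserPrincipalName")
          .agg(attempts=("ResultType", "size"),
               first_seen=("TimeGenerated", "min"),
               last_seen=("TimeGenerated", "max"))
          .sort_values("attempts", ascending=False)
)
summary.head(10)  # renders inline as a table; one more line gets you a chart
```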

Join our community of analysts and engineers at the third annual InfoSec Jupyterthon 2022, where you’ll meet and engage with security practitioners using notebooks in their daily work. This is an online event taking place on December 2 and 3, 2022. It is organized by our friends at Open Threat Research, together with folks from Microsoft Security research teams and the Microsoft Threat Intelligence Center (MSTIC).

Although this is not a Microsoft event, our Microsoft Security teams are delighted to be involved in helping organize it and deliver talks. Registration is free and it will be streamed on YouTube Live both days from 10:30 AM to 5:00 PM Eastern Time. We’ll also have a dedicated Discord channel for discussions and session Q&A.

Do you have a cool notebook or some interesting techniques or technology to talk about? There are still openings for talks and mini talks (30-minute, 15-minute, and 5-minute sessions). 

For more information, visit the InfoSec Jupyterthon page at: https://infosecjupyterthon.com

We’re looking forward to seeing you there!

Collaborative innovation on display in Microsoft’s insider risk management strategy http://approjects.co.za/?big=en-us/security/blog/2020/12/17/collaborative-innovation-on-display-in-microsofts-insider-risk-management-strategy/ Thu, 17 Dec 2020 22:00:04 +0000

The disrupted work environment, in which enterprises were forced to find new ways to enable their workforce to work remotely, changed the landscape for operations as well as security. One of the top areas of concern is managing insider risks, a complex undertaking even before the pandemic, and even more so in the new remote or hybrid work environment.

Because its scope goes beyond security, insider risk management necessitates diverse perspectives and thus inherently requires collaboration among key stakeholders in the organization. At Microsoft, our insider risk management strategy was built on insights from legal, privacy, and HR teams, as well as security experts and data scientists, who use AI and machine learning to sift through massive amounts of signals to identify possible insider risks.

It was also important for us to extend this collaboration beyond Microsoft. For example, for the past few years, Microsoft has partnered with Carnegie Mellon University to bring in their expertise and experience in insider risks and provide insights about the nature of the broader landscape. (Read: Using Endpoint Signals for Insider Threat Detection [PDF].)

Our partnership with Carnegie Mellon University has helped shape our mindset and influenced our Insider Risk Management product, a Microsoft 365 solution that enables organizations to leverage machine learning to detect, investigate, and act on malicious and unintentional activities. Partnering with organizations like Carnegie Mellon University allows us to bring their rich research and insights to our products and services, so customers can fully benefit from our breadth of signals.

This research partnership with Carnegie Mellon University experiments with innovative ways to identify indicators of insider risk. The outputs of these experiments become inputs to our research-informed product roadmap. For example, our data scientists and researchers have been looking into using threat data from Microsoft 365 Defender to gain insights that can be used for managing insider risks. Today, we’d like to share our progress on this research in the form of Microsoft 365 Defender advanced hunting queries, now available in a GitHub repo:

  1. Detecting exfiltration to competitor organization: This query helps enterprises detect instances of a malicious insider creating a file archive and then emailing that archive to an external “competitor” organization. Effective query use requires prior knowledge of email addresses that may pose a risk to the organization if data is sent to those addresses.
  2. Detecting exfiltration after termination: This query explores instances in which a terminated individual (i.e., one who has an impending termination date, but has not left the company) downloads many files from a non-domain network address.
  3. Detecting steganography exfiltration: This query detects instances of malicious users who attempt to create steganographic images and then immediately browse to a webmail URL. It requires additional investigation to determine whether the co-occurrence of a) generating a steganographic image and b) browsing to a webmail URL indicates a malicious event.

As these queries demonstrate, industry partnerships allow us to enrich our own intelligence with other organizations’ depth of knowledge, helping us address some of the bigger challenges of insider risks through the product, while bringing scientifically proven solutions to our customers more quickly through this open-source library.

Microsoft will continue investing in partnerships like the one with Carnegie Mellon University to learn from experts and deliver best-in-class intelligence to our customers. Follow our insider risk podcast and join us in our Insider Risk Management journey!

TLS version enforcement capabilities now available per certificate binding on Windows Server 2019 http://approjects.co.za/?big=en-us/security/blog/2019/09/30/tls-version-enforcement-capabilities-now-available-certificate-binding-windows-server-2019/ Mon, 30 Sep 2019 16:00:00 +0000

At Microsoft, we often develop new security features to meet the specific needs of our own products and online services. This is a story about how we solved a very important problem and are sharing the solution with customers. As engineers worldwide work to eliminate their own dependencies on TLS 1.0, they run into the complex challenge of balancing their own security needs with the migration readiness of their customers. Microsoft faced this as well.

To date, we’ve helped customers address these issues by adding TLS 1.2 support to older operating systems, by shipping new logging formats in IIS for detecting weak TLS usage by clients, as well as providing the latest technical guidance for eliminating TLS 1.0 dependencies.

Now Microsoft is pleased to announce a powerful new feature in Windows to make your transition to a TLS 1.2+ world easier. Beginning with KB4490481, Windows Server 2019 now allows you to block weak TLS versions from being used with individual certificates you designate. We call this feature “Disable Legacy TLS” and it effectively enforces a TLS version and cipher suite floor on any certificate you select.

Disable Legacy TLS also allows an online or on-premises web service to offer two distinct groupings of endpoints on the same hardware: one that allows only TLS 1.2+ traffic and another that accommodates legacy TLS 1.0 traffic. The changes are implemented in HTTP.sys and, in conjunction with the issuance of additional certificates, allow traffic to be routed to the new endpoint with the appropriate TLS version. Prior to this change, deploying such capabilities would have required an additional hardware investment, because such settings were only configurable system-wide via the registry.

For a deep dive on this important new feature and implementation details and scenarios, please see Technical Guidance for Disabling Legacy TLS. Microsoft will also look to make this feature available in its own online services based on customer demand.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Deep learning rises: New methods for detecting malicious PowerShell http://approjects.co.za/?big=en-us/security/blog/2019/09/03/deep-learning-rises-new-methods-for-detecting-malicious-powershell/ Tue, 03 Sep 2019 16:00:03 +0000

Scientific and technological advancements in deep learning, a category of algorithms within the larger framework of machine learning, provide new opportunities for the development of state-of-the-art protection technologies. Deep learning methods are impressively outperforming traditional methods on tasks such as image and text classification. With these developments, there’s great potential for building novel threat detection methods using deep learning.

Machine learning algorithms work with numbers, so objects like images, documents, or emails are converted into numerical form through a step called feature engineering, which, in traditional machine learning methods, requires a significant amount of human effort. With deep learning, algorithms can operate on relatively raw data and extract features without human intervention.

At Microsoft, we make significant investments in pioneering machine learning that inform our security solutions with actionable knowledge through data, helping deliver intelligent, accurate, and real-time protection against a wide range of threats. In this blog, we present an example of a deep learning technique that was initially developed for natural language processing (NLP) and now adopted and applied to expand our coverage of detecting malicious PowerShell scripts, which continue to be a critical attack vector. These deep learning-based detections add to the industry-leading endpoint detection and response capabilities in Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP).

Word embedding in natural language processing

Keeping in mind that our goal is to classify PowerShell scripts, we briefly look at how text classification is approached in the domain of natural language processing. An important step is to convert words to vectors (tuples of numbers) that can be consumed by machine learning algorithms. A basic approach, known as one-hot encoding, first assigns a unique integer to each word in the vocabulary, then represents each word as a vector of 0s, with 1 at the integer index corresponding to that word. Although useful in many cases, the one-hot encoding has significant flaws. A major issue is that all words are equidistant from each other, and semantic relations between words are not reflected in geometric relations between the corresponding vectors.
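A quick sketch makes the flaw easy to see: under one-hot encoding, every pair of distinct words has exactly the same distance and zero similarity, so no notion of relatedness survives:

```python
import numpy as np

vocab = ["madrid", "spain", "firewall"]
one_hot = np.eye(len(vocab))  # one row per word

for i, j in [(0, 1), (0, 2), (1, 2)]:
    dist = np.linalg.norm(one_hot[i] - one_hot[j])  # always sqrt(2) ~ 1.414
    cos = one_hot[i] @ one_hot[j]                   # always 0.0
    print(vocab[i], vocab[j], round(dist, 3), cos)
```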

Contextual embedding is a more recent approach that overcomes these limitations by learning compact representations of words from data under the assumption that words that frequently appear in similar context tend to bear similar meaning. The embedding is trained on large textual datasets like Wikipedia. The Word2vec algorithm, an implementation of this technique, is famous not only for translating semantic similarity of words to geometric similarity of vectors, but also for preserving polarity relations between words. For example, in Word2vec representation:

Madrid – Spain + Italy ≈ Rome

Embedding of PowerShell scripts

Since training a good embedding requires a significant amount of data, we used a large and diverse corpus of 386K distinct unlabeled PowerShell scripts. The Word2vec algorithm, which is typically used with human languages, provides similarly meaningful results when applied to the PowerShell language. To accomplish this, we split the PowerShell scripts into tokens, which then allowed us to use the Word2vec algorithm to assign a vectorial representation to each token.
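A minimal sketch of that pipeline using the gensim implementation of Word2vec (the three scripts and the naive whitespace tokenizer below are placeholders; the real corpus was 386K scripts processed with a PowerShell-aware tokenizer):

```python
from gensim.models import Word2Vec

# Placeholder corpus: each script becomes a list of tokens.
scripts = [
    "IEX (New-Object Net.WebClient).DownloadString($url)",
    "Invoke-Expression $command",
    "Get-ChildItem -Path C:\\Users -Recurse",
]
tokenized = [script.lower().split() for script in scripts]

# Train the embedding; each token gets a 100-dimensional vector.
model = Word2Vec(sentences=tokenized, vector_size=100, window=5,
                 min_count=1, workers=4)

# Nearby vectors suggest tokens used in similar contexts.
print(model.wv["invoke-expression"][:5])           # first few vector components
print(model.wv.most_similar("invoke-expression"))  # neighbors in vector space
```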

Figure 1 shows a 2-dimensional visualization of the vector representations of 5,000 randomly selected tokens, with some tokens of interest highlighted. Note how semantically similar tokens are placed near each other. For example, the vectors representing -eq, -ne and -gt, which in PowerShell are aliases for “equal”, “not-equal” and “greater-than”, respectively, are clustered together. Similarly, the vectors representing the allSigned, remoteSigned, bypass, and unrestricted tokens, all of which are valid values for the execution policy setting in PowerShell, are clustered together.

Figure 1. 2D visualization of 5,000 tokens using Word2vec

Examining the vector representations of the tokens, we found a few additional interesting relationships.

Token similarity: Using the Word2vec representation of tokens, we can identify commands in PowerShell that have an alias. In many cases, the token closest to a given command is its alias. For example, the representations of the token Invoke-Expression and its alias IEX are closest to each other. Two additional examples of this phenomenon are the Invoke-WebRequest and its alias IWR, and the Get-ChildItem command and its alias GCI.

We also measured distances within sets of several tokens. Consider, for example, the four tokens $i, $j, $k and $true (see the right side of Figure 2). The first three are usually used to represent a numeric variable and the last naturally represents a Boolean constant. As expected, the $true token mismatched the others – it was the farthest (using the Euclidean distance) from the center of mass of the group.

More specific to the semantics of PowerShell in cybersecurity, we checked the representations of the tokens: bypass, normal, minimized, maximized, and hidden (see the left side of Figure 2). While the first token is a legal value for the ExecutionPolicy flag in PowerShell, the rest are legal values for the WindowStyle flag. As expected, the vector representation of bypass was the farthest from the center of mass of the vectors representing all other four tokens.

Figure 2. 3D visualization of selected tokens

Linear Relationships: Since Word2vec preserves linear relationships, computing linear combinations of the vectorial representations results in semantically meaningful results. Below are a few interesting relationships we found:

high – $false + $true ≈ low
-eq – $false + $true ≈ -ne
DownloadFile – $destfile + $str ≈ DownloadString
Export-CSV – $csv + $html ≈ ConvertTo-html
Get-Process – $processes + $services ≈ Get-Service

In each of the above expressions, the sign ≈ signifies that the vector on the right side is the closest (among all the vectors representing tokens in the vocabulary) to the vector that is the result of the computation on the left side.

Detection of malicious PowerShell scripts with deep learning

We used the Word2vec embedding of the PowerShell language presented in the previous section to train deep learning models capable of detecting malicious PowerShell scripts. The classification model is trained and validated using a large dataset of PowerShell scripts that are labeled “clean” or “malicious,” while the embeddings are trained on unlabeled data. The flow is presented in Figure 3.

Figure 3. High-level overview of our model generation process

Using GPU computing in Microsoft Azure, we experimented with a variety of deep learning and traditional ML models. The best performing deep learning model increases the coverage (for a fixed low FP rate of 0.1%) by 22 percentage points compared to traditional ML models. This model, presented in Figure 4, combines several deep learning building blocks such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN). Neural networks are ML algorithms inspired by biological neural systems like the human brain. In addition to the pretrained embedding described here, the model is provided with character-level embedding of the script.

Figure 4. Network architecture of the best performing model
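Purely to illustrate how those building blocks compose (the layer sizes, vocabulary, and sequence length below are invented, not the production architecture in Figure 4), a token-embedding-plus-CNN-plus-LSTM classifier in Keras might look like this:

```python
import tensorflow as tf

VOCAB_SIZE, EMBED_DIM, MAX_TOKENS = 50_000, 100, 2_000  # illustrative values

inputs = tf.keras.Input(shape=(MAX_TOKENS,), dtype="int32")
# The embedding layer could be seeded with pretrained Word2vec weights.
x = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)
x = tf.keras.layers.Conv1D(128, kernel_size=5, activation="relu")(x)  # local patterns
x = tf.keras.layers.MaxPooling1D(pool_size=4)(x)
x = tf.keras.layers.LSTM(64)(x)  # long-range sequence context
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # P(malicious)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```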

Real-world application of deep learning to detecting malicious PowerShell

The best performing deep learning model is applied at scale to the PowerShell scripts observed by Microsoft Defender ATP through the Antimalware Scan Interface (AMSI), using Microsoft ML.NET technology and the ONNX format for deep neural networks. This model augments the suite of ML models and heuristics used by Microsoft Defender ATP to protect against malicious usage of scripting languages.
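For a sense of what consuming an ONNX model looks like (the production path uses ML.NET rather than Python, and the file name and tensor shape below are hypothetical), a minimal scoring sketch with ONNX Runtime:

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("powershell_classifier.onnx")  # hypothetical model file

input_name = session.get_inputs()[0].name
tokens = np.zeros((1, 2000), dtype=np.int32)  # an encoded script would go here

outputs = session.run(None, {input_name: tokens})
print("malicious probability:", outputs[0])
```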

Since its first deployment, this deep learning model detected with high precision many cases of malicious and red team PowerShell activities, some undiscovered by other methods. The signal obtained through PowerShell is combined with a wide range of ML models and signals of Microsoft Defender ATP to detect cyberattacks.

The following are examples of malicious PowerShell scripts that deep learning can confidently detect but can be challenging for other detection methods:

Figure 5. Heavily obfuscated malicious script

Figure 6. Obfuscated script that downloads and runs payload

Figure 7. Script that decrypts and executes malicious code

Enhancing Microsoft Defender ATP with deep learning

Deep learning methods significantly improve the detection of threats. In this blog, we discussed a concrete application of deep learning to a particularly evasive class of threats: malicious PowerShell scripts. We have developed, and will continue to develop, deep learning-based protections across multiple capabilities in Microsoft Defender ATP.

Development and productization of deep learning systems for cyber defense require large volumes of data, computations, resources, and engineering effort. Microsoft Defender ATP combines data collected from millions of endpoints with Microsoft computational resources and algorithms to provide industry-leading protection against attacks.

Stronger detection of malicious PowerShell scripts and other threats on endpoints using deep learning means richer and better-informed security through Microsoft Threat Protection, which provides comprehensive security for identities, endpoints, email and data, apps, and infrastructure.

Shay Kels and Amir Rubin
Microsoft Defender ATP team

Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft Defender ATP community.

Read all Microsoft security intelligence blog posts.

Follow us on Twitter @MsftSecIntel.

From unstructured data to actionable intelligence: Using machine learning for threat intelligence http://approjects.co.za/?big=en-us/security/blog/2019/08/08/from-unstructured-data-to-actionable-intelligence-using-machine-learning-for-threat-intelligence/ Thu, 08 Aug 2019 16:30:12 +0000

The security community has become proficient in using indicators of compromise (IoC) feeds for threat intelligence. Automated feeds have simplified the task of extracting and sharing IoCs. However, IoCs like IP addresses, domain names, and file hashes are in the lowest levels of the threat intelligence pyramid; they are relatively easy to access and consume, but they’re also easy for attackers to change to evade detection. IoCs are not enough.

Tactics, techniques, and procedures (TTPs) can enable organizations to extract valuable insights like patterns of attack on an enterprise or industry vertical, or trends of attacker techniques in the overall ecosystem. However, TTPs are at the highest level of the threat intelligence pyramid; this information often comes in the form of unstructured texts like blogs, research papers, and incident response (IR) reports, and the process of gathering and sharing these high-level indicators has remained largely manual.

Automating the processing of unstructured text for threat intelligence can benefit threat analysts and customers alike. At my Black Hat session “Death to the IOC: What’s Next in Threat Intelligence“, I presented a system that automates this process using machine learning and natural language processing (NLP) to identify and extract high-level patterns of attack from unstructured text.

Figure 1. Basic structure of system

Trained on documentation of known threats, this system takes unstructured text as input and extracts threat actors, attack techniques, malware families, and relationships to create attacker graphs and timelines.

Data extraction and machine learning

In natural language processing, named entity extraction is a task that aims to classify phrases into pre-defined categories. This is usually a preprocessing step for other more complex tasks like identifying aliases, relationship extraction between actors and TTPs, etc. In our use case, the categories we want to identify are threat actors, malware families, attack techniques, and relationships between entities.

To train our model, we used a corpus of about 2,700 publicly available documents that describe the actions, behaviors, and tools of various threat actors. On average, each document in this corpus contained about two thousand tokens.

Figure 2. Training data distributions

We also see that the proportion of tokens that fall into one of our predefined categories is very low: on average, only 1% of the tokens are relevant entities. This tells us that we have class imbalance in our data.

Therefore, in addition to using traditional features that are common to natural language processing tasks (for example, lemma, part of speech, orthographic features), we experimented with using custom word embeddings, which allow the identification of relationships between two words that mean the same thing or are used in similar contexts.

Word embeddings are vector representations of words such that the semantic context in which a word appears is captured in the numeric vector. If two words mean the same thing, or are used in the same context frequently, then we would expect the cosine similarity of their word embedding vectors to be high. In other words, in a graphical representation, datapoints for words that mean the same thing or are used in the same context frequently would be relatively close together.

For example, we looked at some clusters of points formed around APT28 and found that the four closest points to it were either aliases (Sofacy, TG-4127) of the threat or were related by attribution (APT29, Dymalloy).

Figure 3. Tensorboard visualization of custom trained embeddings

We experimented with several models suited to a sequence labelling problem and measured performance in two ways—on the full test dataset and on only the unseen tokens in the test dataset. We found that models using conditional random fields (CRFs) trained on traditional and word embedding features performed best in both scenarios.
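For readers unfamiliar with CRF tooling, a compact sketch of this kind of training loop with the sklearn-crfsuite package (the feature function and two-sentence corpus are toy stand-ins for the traditional and embedding features described above):

```python
import sklearn_crfsuite

def word_features(sent, i):
    # Toy features; the real system adds lemma, part of speech,
    # orthographic features, and word-embedding-derived features.
    word = sent[i]
    return {
        "lower": word.lower(),
        "is_title": word.istitle(),
        "prev": sent[i - 1].lower() if i > 0 else "<s>",
    }

sents = [["APT28", "used", "spear-phishing"],
         ["Emotet", "performs", "process", "hollowing"]]
labels = [["B-ACTOR", "O", "B-TECHNIQUE"],
          ["B-MALWARE", "O", "B-TECHNIQUE", "I-TECHNIQUE"]]  # aligned BIO tags

X = [[word_features(s, i) for i in range(len(s))] for s in sents]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X, labels)
print(crf.predict(X))
```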

Figure 4. Architecture of training pipeline for extractor system

Machine learning for insightful, actionable intelligence

Using the system we developed, we automatically extracted the techniques known to be used by Emotet, a prominent commodity malware family, as well as a spread of APT actors that public documents refer to as Saffron Rose, Snake, and Muddy Water, and generated the following graph, which shows that there is a significant overlap between some techniques used by commodity malware and those used by APTs.

Figure 5. Overlaps in techniques used by commodity malware and APTs

In this graph, we can see that techniques like obfuscated PowerShell, spear-phishing, and process hollowing are not restricted to APTs, but are prevalent in commodity malware. Insights like this can be used by organizations to guide security investments. Organizations can place defensive choke points to detect or prevent these attacker techniques so that they can stop not only annoying commodity malware, but also high-profile targeted attacks.

At Microsoft, we are continuing to push the boundaries on how machine learning can improve the security posture of our customers. The output of machine learning-backed threat intelligence will show up in the effectiveness of the protection we deliver through Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP) and the broader Microsoft Threat Protection.

In recent months, we have extensively discussed how we’re using machine learning to continuously innovate protections in Microsoft Defender ATP, particularly in hardening against evasion and adversarial attacks. In this blog we showed another application of machine learning: processing the vast amounts of threat intelligence that organizations receive and identifying high-level patterns. More importantly, we’re sharing our approaches so organizations can be inspired to explore more applications of machine learning to improve overall security.

Bhavna Soman (@bsoman3)
Microsoft Defender ATP Research

Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft Defender ATP community.

Read all Microsoft security intelligence blog posts.

Follow us on Twitter @MsftSecIntel.

DART: the Microsoft cybersecurity team we hope you never meet http://approjects.co.za/?big=en-us/security/blog/2019/03/25/dart-the-microsoft-cybersecurity-team-we-hope-you-never-meet/ Tue, 26 Mar 2019 00:12:11 +0000

If you spent 270 days away from home, not on vacation, you’d want it to be for a good reason. When boarding a plane, sometimes having been pulled out of bed to leave family for weeks on end, it’s because one of our customers is in need. It means there is a security compromise and they may be dealing with a live cyberattack.

As the Microsoft Detection and Response Team (DART), our job is to respond to compromises and help our customers become cyber-resilient. This is also our team mission. One we take very seriously. And it’s why we are passionate about what we do for our customers.

Our unique focus within the Microsoft Cybersecurity Solutions Group allows DART to provide onsite reactive incident response and remote proactive investigations. DART leverages Microsoft’s strategic partnerships with security organizations around the world and with internal Microsoft product groups to provide the most complete and thorough investigation possible. Our response expertise has been leveraged by government and commercial entities around the world to help secure their most sensitive, critical environments.

How DART works with Microsoft customers

Our team works with customers globally to identify risks and provide reactive incident response and proactive security investigation services to help our customers manage their cyber-risk, especially in today’s dynamic threat environment.

In one recent example, our experts were called in to help several financial services organizations deal with attacks launched by an advanced threat actor group that had gained administrative access and executed fraudulent transactions, transferring large sums of cash into foreign bank accounts.

When the attackers realized they had been detected, they rapidly deployed destructive malware that crippled the customers’ operations for three weeks. Our team was on site within hours, working around the clock, side-by-side with the customers’ security teams to restore normal business operations.

Incidents like these are a reminder that trust remains one of the most valuable assets in cybersecurity and the role of technology is to empower defenders to stay a step ahead of well-funded and well-organized adversaries.

Overlooking a single security threat can create a serious event that could severely erode community and consumer confidence, tarnish reputation and brand, negatively impact corporate valuations, provide competitors with an advantage, and create unwanted scrutiny.

That’s why our DART team also offers the Security Crisis and Response Exercise, a hands-on, two-day, custom interactive experience focused on understanding security crisis situations and how to respond in the event of a cybersecurity incident. We examine our customers’ security posture and implement proactive readiness training with the objective of helping customers prepare for incident response through practice exercises.

The simulation is based on real-life scenarios from recent cybersecurity incident response engagements. The exercise focuses on topics such as ransomware, Office 365 compromises, and compromises via industry-specific malware with complex backdoor software. Each scenario focuses on the key areas of cybersecurity (Identify, Protect, Detect, Respond, and Recover) and covers a broad ecosystem, including supply chain vulnerabilities such as software vendors, IT service vendors, and hardware vendors.

DART basic recommendations

To help you become more cyber-resilient, below are a few recommendations from our team based on our experiences of what customers can be doing now to help harden their security posture.

Standardize—The cost of security increases as the complexity of the environment increases. To reduce the total cost of ownership (TCO), standardization is key. It also reduces the number of secure configurations the organization must maintain.

  • Domain controllers should be nearly identical to each other in both the operating system (OS) level and the apps running on them.
  • Member server groups should be standardized based on other similar or same functions.
    • File servers on the same OS with the same apps.
    • SQL servers on the same OS with the same apps.
    • Exchange servers on the same OS with the same apps.
  • Reduce the number of disjoined security products.
    • It is not possible to manage the security of an enterprise from 15 different security consoles that are not integrated.
    • Find a partner that covers multiple layers of security with integrated products.

Modernize—Consider this analogy: In WWII, the battleship was a fearsome ship bristling with guns, big and small, and built to take a hit. Today, a single missile cruiser could sink an entire fleet of WWII battleships. Technology evolves quickly. If you put off modernizing your environment, you could be missing critical technologies that protect your organization.

  • Accelerate adoption plans for Server 2016 and Windows 10.
    • Start with Domain Controllers and workstations of admins/VIPs.
    • Follow on with line of business (LOB) member servers and easy win upgrades like file servers.
    • Finalize with all other member servers and workstations.
  • Accelerate cloud adoption plans, while understanding the shared-risk model between you and your cloud vendors and the retained risk you must continue to manage.
  • Evaluate security tools based on their ability to succeed in the modern threat landscape. Cloud-enabled security solutions need to base capability on four key pillars:
    • Endpoint telemetry—Windows, Android, iOS, Linux, etc. are the initial points from which data is collected.
    • Compute—Datacenter power. This is the compute power needed to organize all the endpoint telemetry.
    • Machine learning and artificial intelligence (AI)—Once we have all this endpoint telemetry organized, we use machine learning and AI to make sense of it.
    • Threat intelligence—Generated from the combination of the three previously mentioned pillars, the human interaction/feedback loop (the DART team) is used to make this data actionable and can help product groups course correct the machine learning and AI algorithms when needed.

Develop a comprehensive patching strategy

  • Update both Microsoft and all third-party apps.
  • Employ a software inventory solution like System Center Configuration Manager (SCCM).
  • Reboot after patching.
  • Where possible, avoid policy exceptions that exempt business units from patching.
    • Short term: Enforce vulnerable machine/application isolation.
    • Long term: Adjust the acquisitions process to include a new vendor for the needed functionality.

Develop a comprehensive backup strategy

  • Always have a backup policy in place.
  • Test to ensure backups work.
  • Check to see if successful backups are online. If so, ensure they are not vulnerable to online threats.

Credential hygiene

  • Most modern attacks are identity based.
  • Read the Pass-the-Hash white papers, which explain the exposure of privileged credentials on lower trusted tier systems.
  • Run through a Security Development Lifecycle (SDL) on internally developed apps to look for vulnerabilities and/or hard coded credentials.
  • Look for privileged accounts that are being used as service accounts.
    • At the very least, change them manually on a regular basis.
    • If you upgrade to Windows Server 2012 R2 or higher, you can use managed service accounts (MSAs) where supported.

As the DART team, we have engaged with the most well-run IT environments in the world. Yet, even these networks get penetrated from time to time. The challenge of cybersecurity is one we must face together. While we hope you never have to call on our DART team, we are a trusted partner ready to help.

Learn more

To learn more about DART, our engagements, and how they are delivered by experienced cybersecurity professionals who devote 100 percent of their time to providing cybersecurity solutions to customers worldwide, please contact your account executive. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Microsoft AI competition explores the next evolution of predictive technologies in security http://approjects.co.za/?big=en-us/security/blog/2018/12/13/microsoft-ai-competition-explores-the-next-evolution-of-predictive-technologies-in-security/ Thu, 13 Dec 2018 19:00:54 +0000

Predictive technologies are already effective at detecting and blocking malware at first sight. A new malware prediction competition on Kaggle will challenge the data science community to push these technologies even further—to stop malware before it is even seen.

The Microsoft-sponsored competition calls for participants to predict if a device is likely to encounter malware given the current machine state. Participants will build models using 9.4GB of anonymized data from 16.8M devices, and the resulting models will be scored by their ability to make correct predictions. Winning teams get $25,000 in total prizes.

The competition provides academics and researchers with varied backgrounds an opportunity to work on a real-world problem using a fresh set of data from Microsoft. Results from the contest will help us identify opportunities to further improve Microsoft’s layered defenses, focusing on preventative protection. Not all machines are equally likely to get malware; competitors will help build models for identifying devices that have a higher risk of getting malware so that preemptive action can be taken. A sketch of what a baseline approach might look like follows below.
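The sketch below is illustrative only: the file name, label column, and featurization are assumptions about the task’s shape, not the actual competition schema.

```python
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Illustrative schema: one row of machine-state telemetry per device,
# plus a binary label indicating a later malware encounter.
train = pd.read_csv("train.csv")
y = train.pop("HasDetections")     # hypothetical label column
X = train.select_dtypes("number")  # toy featurization: numeric columns only

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

clf = HistGradientBoostingClassifier()
clf.fit(X_tr, y_tr)

val_scores = clf.predict_proba(X_val)[:, 1]
print("validation AUC:", roc_auc_score(y_val, val_scores))
```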

Cybersecurity is the central challenge of our digital age. Today, Windows Defender Advanced Threat Protection (Windows Defender ATP) uses intelligent systems to protect millions of devices against cyberattacks every day. Machine learning and artificial intelligence drive cloud-delivered protections that catch and predict new and emerging threats.

We also believe in the power of working with the broader research community to stay ahead of threats. Microsoft’s 2015 malware classification competition on Kaggle was a huge success, with the dataset provided by Microsoft cited in more than 50 research papers in multiple languages. To this day, the 0.5TB dataset from that competition is still used for research and continues to produce value for Microsoft and the data science community. This new competition is organized by the Windows Defender ATP Research team, in cooperation with Northeastern University and Georgia Institute of Technology as academic partners, with the goal of bringing new ideas to the fight against malware attacks and breaches.

Kaggle is a platform for data scientists to create data science projects, download datasets, and participate in contests. Microsoft is happy to use the Kaggle platform to engage a rich community of amazing thinkers. We think this collaboration will result in better protection for Microsoft customers and the internet at large. Stay tuned for the results; we can’t wait to see what the data science community comes up with!

Join the competition on the Kaggle website.

Chase Thomas and Robert McCann
Windows Defender Research team

Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft community and Windows Defender Security Intelligence.

Follow us on Twitter @WDSecurity and Facebook Windows Defender Security Intelligence.

Partnering with the industry to minimize false positives http://approjects.co.za/?big=en-us/security/blog/2018/08/16/partnering-with-the-industry-to-minimize-false-positives/ Thu, 16 Aug 2018 17:00:27 +0000

The post Partnering with the industry to minimize false positives appeared first on Microsoft Security Blog.

]]>
Every day, antivirus capabilities in Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP) protect millions of customers from threats. To effectively scale protection, Microsoft Defender ATP uses intelligent systems that combine multiple layers of machine learning models, behavior-based detection algorithms, generics, and heuristics to reach a verdict on suspicious files, most of the time in a fraction of a second.
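
As a rough illustration of how a layered verdict pipeline can be structured, here is a minimal sketch; the layer names, signals, and thresholds are entirely hypothetical, and the real system combines far more models and signals than this.

```python
# Layered verdict sketch: cheap checks run first, and a file is escalated to
# heavier analysis only when earlier layers are unsure. Everything here is a
# hypothetical stand-in for the real models.
from typing import Callable, Optional

Layer = Callable[[bytes], Optional[str]]  # returns "malware"/"clean", or None to defer

def signature_heuristics(data: bytes) -> Optional[str]:
    # Placeholder rule standing in for generics/heuristics.
    return "malware" if b"known-bad-marker" in data else None

def behavior_model(data: bytes) -> Optional[str]:
    score = (len(data) % 100) / 100       # stand-in for a real ML model score
    if score > 0.9:
        return "malware"
    if score < 0.1:
        return "clean"
    return None                           # uncertain: defer to the next layer

def cloud_analysis(data: bytes) -> Optional[str]:
    return "clean"                        # stand-in for deeper cloud-delivered analysis

def verdict(data: bytes, layers: list[Layer]) -> str:
    for layer in layers:
        result = layer(data)
        if result is not None:
            return result
    return "clean"                        # default when every layer defers

print(verdict(b"sample bytes", [signature_heuristics, behavior_model, cloud_analysis]))
```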

This multilayered approach allows us to proactively protect customers in real time, whether by stopping massive malware outbreaks or by detecting limited, sophisticated cyberattacks. The quality of these antivirus capabilities is reflected in the consistently high scores Microsoft Defender ATP earns in independent tests and in the fact that our antivirus solution is the most widely deployed in the enterprise.

The tradeoff of an intelligent, scalable approach is that some of our more aggressive classifiers occasionally misclassify normal files as malicious (false positives). While false positives are rare compared to the large volume of malware we correctly identify (true positives) and protect customers from, we are aware of the impact that misclassified files can have. Keeping false positives to a minimum is an equally important quality metric that we continually work to improve.

Avoiding false positives is a two-way street between security vendors and developers. Publishing apps to the Microsoft Store is the best way for vendors and developers to ensure their programs are not misclassified. For customers, apps from the Microsoft Store are trusted and Microsoft-verified.

Here are other ways developers can build trust with both security vendors and customers and help make sure their programs and files are not inadvertently detected as malware.

Digitally sign files

Digital signatures are an important way to ensure the integrity of software. By verifying the identity of the software publisher, a signature assures customers that they know who provided the software they’re installing or running. Digital signatures also assure customers that the software they received is in the same condition as when the publisher signed it and has not been tampered with.

Code signing does not necessarily guarantee the quality or functionality of software. Digitally signed software can still contain flaws or security vulnerabilities. However, because software vendors’ reputations are based on the quality of their code, there is an incentive to fix these issues.

We use the reputation of digital certificates to help determine the reputation of files signed by them. The reverse is also true: we use the reputation of digitally signed files to determine the reputation of the digital certificates they are signed with. One of the most effective ways for developers to reduce the chances of their software being detected as malware is to digitally sign files with a reputable certificate.

The second part of reducing the risk of unintended detection is to build a good reputation on that certificate. Microsoft uses many factors to determine the reputation of a certificate, but the most important are the files signed by it. If all the files signed with a certificate have good reputation and the certificate is valid, the certificate keeps its good reputation.
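
To make the two-way relationship concrete, here is a minimal sketch of reputation flowing between a certificate and the files it signs. The scoring rule (the certificate takes the minimum of its files’ scores) is our own illustrative assumption; Microsoft’s actual reputation model is not public.

```python
# Toy model of reputation sharing between a certificate and its signed files.
# The min() rule is an illustrative assumption, not Microsoft's algorithm.
from dataclasses import dataclass, field

@dataclass
class Certificate:
    thumbprint: str
    file_scores: list = field(default_factory=list)  # 0.0 = malicious, 1.0 = clean

    def reputation(self) -> float:
        # One bad signed file drags the whole certificate down.
        return min(self.file_scores, default=0.5)    # unknown certs start neutral

cert = Certificate("ab:cd:ef")
cert.file_scores += [0.90, 0.95, 0.85]  # well-regarded releases build reputation
print(cert.reputation())                # 0.85 -> good reputation
cert.file_scores.append(0.05)           # one file detected as malware...
print(cert.reputation())                # 0.05 -> reputation collapses for all files
```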

Extended validation (EV) code signing certificates require a more rigorous vetting and authentication process, including more comprehensive identity verification for each developer. EV code signing certificates also require the use of hardware to sign applications, an additional protection against theft or unintended use of the certificates. At the time this post was written, programs signed by an EV code signing certificate could immediately establish reputation with Microsoft Defender SmartScreen even if no prior reputation existed for that file or publisher. Starting in 2020, however, EV certificates are no longer treated specially and develop reputation the same way as other Authenticode certificates.

Keep good reputation

To gain positive reputation across multiple programs and files, developers sign their files with a digital certificate that has positive reputation. However, if one of those files gains poor reputation (for example, it is detected as malware), or if the certificate is stolen and used to sign malware, then all of the files signed with that certificate inherit the poor reputation, which can lead to unintended detection. The framework works this way deliberately, to prevent the misuse of reputation sharing.

We thus advise developers not to share certificates between programs or with other developers. This advice holds particularly true for programs that incorporate bundling or use advertising or freemium models of monetization. Reputation accrues: if a software bundler includes components that have poor reputation, the certificate the bundler is signed with acquires that poor reputation.

Be transparent and respect users’ ability to choose

Malware threats use a variety of techniques to hide. These techniques include file obfuscation, installation in nontraditional locations, and the use of names that don’t reflect the purpose of the software.

Customers should have choice and control over what happens on their devices. Using nontraditional install locations or misleading software names reduces that choice and control.

Obfuscation has legitimate uses, and some forms of obfuscation are not considered malicious. However, many techniques are only employed to evade antivirus detection. Developers should refrain from using non-commercial packers and obfuscation software.

When programs employ malware-like techniques, they trigger flags in our detection algorithms and greatly increase the chances of false positives.

Keep good company

Another indicator that can influence the reputation of a file is the set of other programs the file is associated with. This association can come from what the program installs, what is installed at the same time as the program, or what is seen on the same machines as the file. Not all of these associations directly lead to detections; however, if a program installs other programs or files that have poor reputation, that program gains poor reputation by association.
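
Here is a minimal sketch of that association effect, assuming we already have per-file reputation labels and a record of what each installer drops; both are hypothetical stand-ins for the real telemetry.

```python
# "Keep good company" sketch: an installer inherits poor reputation from the
# payloads it drops. The data and the inheritance rule are hypothetical.
poor_reputation = {"adware_helper.exe"}
installs = {
    "bundler_setup.exe": ["useful_app.exe", "adware_helper.exe"],
    "clean_setup.exe": ["useful_app.exe"],
}

def inherits_poor_reputation(installer: str) -> bool:
    # An installer that drops a poorly regarded payload inherits its reputation.
    return any(payload in poor_reputation for payload in installs.get(installer, []))

print(inherits_poor_reputation("bundler_setup.exe"))  # True
print(inherits_poor_reputation("clean_setup.exe"))    # False
```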

Understand the detection criteria

Microsoft’s policy aims to protect customers against malicious software while minimizing the restrictions on developers. The list below summarizes the high-level evaluation criteria Microsoft uses for classifying files; a simplified code rendering follows the list:

  • Malicious software: Performs malicious actions on a computer
  • Unwanted software: Exhibits the behavior of adware, browser modifier, misleading, monitoring tool, or software bundler
  • Potentially unwanted application (PUA): Exhibits behaviors that degrade the Windows experience
  • Clean: We trust the file is not malicious, is not inappropriate for an enterprise environment, and does not degrade the Windows experience
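
As the simplified rendering promised above, here is a toy classifier over the four published categories, assuming a program’s observed behaviors have already been reduced to simple string flags. The flag names and the precedence ordering are our own illustrative assumptions, not Microsoft’s actual logic.

```python
# Toy classifier over the four published categories. Behavior flags and
# precedence are illustrative assumptions, not Microsoft's actual logic.
UNWANTED = {"adware", "browser_modifier", "misleading",
            "monitoring_tool", "software_bundler"}

def classify(behaviors: set[str]) -> str:
    if "performs_malicious_actions" in behaviors:
        return "Malicious software"
    if behaviors & UNWANTED:
        return "Unwanted software"
    if "degrades_windows_experience" in behaviors:
        return "Potentially unwanted application (PUA)"
    return "Clean"

print(classify({"adware", "software_bundler"}))  # Unwanted software
print(classify(set()))                           # Clean
```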

These evaluation criteria describe the characteristics and behavior of malware and potentially unwanted applications and guide the proper identification of threats. Developers should make sure their programs and files don’t exhibit these undesirable characteristics or behaviors, to minimize the chances of being misclassified.

Challenging a detection decision

If you follow this advice and we still unintentionally detect your file, you can help us fix the issue by reporting it through the Microsoft Defender Security Intelligence portal.

Customer protection is our top priority. We deliver this through Microsoft Defender ATP’s unified endpoint security platform. Helping Microsoft maintain high-quality protection benefits customers and developers alike, allowing for an overall productive and secure computing experience.

Michael Johnson

Microsoft Defender ATP Research Team

The post Partnering with the industry to minimize false positives appeared first on Microsoft Security Blog.

]]>
http://approjects.co.za/?big=en-us/security/blog/2018/08/16/partnering-with-the-industry-to-minimize-false-positives/feed/ 0
March-April 2018 test results: More insights into industry AV tests http://approjects.co.za/?big=en-us/security/blog/2018/07/20/march-april-2018-test-results-more-insights-into-industry-av-tests/ http://approjects.co.za/?big=en-us/security/blog/2018/07/20/march-april-2018-test-results-more-insights-into-industry-av-tests/#respond Fri, 20 Jul 2018 19:30:38 +0000 In a previous post, in the spirit of our commitment to delivering industry-leading protection, customer choice, and transparency on the quality of our solutions, we shared insights and context into the results of AV-TEST’s January-February 2018 test cycle. We released a transparency report to help our customers and the broader security community to stay informed […]

The post March-April 2018 test results: More insights into industry AV tests appeared first on Microsoft Security Blog.

]]>
In a previous post, in the spirit of our commitment to delivering industry-leading protection, customer choice, and transparency on the quality of our solutions, we shared insights and context into the results of AV-TEST’s January-February 2018 test cycle. We released a transparency report to help our customers and the broader security community to stay informed and understand independent test results better.

In the continued spirit of these principles, we’d like to share Windows Defender AV’s scores in the March-April 2018 test. In this new iteration of the transparency report, we continue to investigate the relationship of independent test results and the real-world protection of antivirus solutions. We hope that you find the report insightful.

Download the complete transparency report on March-April 2018 AV-TEST results

Below is a summary of the transparency report:

  • Protection: Windows Defender AV achieved an overall Protection score of 5.5/6.0, missing 2 out of 5,680 malware samples (a 0.035% miss rate; see the quick check after this list). With the latest results, Windows Defender AV has achieved 100% on 9 of the 12 most recent tests (combined “Real World” and “Prevalent malware”).
  • Usability (false positives): Windows Defender AV maintained its previous score of 5.5/6.0. Based on telemetry, most samples that Windows Defender AV incorrectly classified as malware (false positives) had very low prevalence and are not commonly used in a business context, so these false positives are unlikely to affect enterprise customers.
  • Performance: Windows Defender AV maintained its previous score of 5.5/6.0 and continued to outperform the industry in most areas. These results reflect the investments we made in optimizing Windows Defender AV performance for high-frequency actions.
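
The headline Protection miss rate is easy to verify:

```python
# Quick check of the Protection numbers: 2 misses out of 5,680 samples.
misses, samples = 2, 5680
print(f"miss rate: {misses / samples:.3%}")  # -> 0.035%
```
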
The report aims to help customers evaluate the extent to which test results are reflective of the quality of protection in the real world. At the same time, insights from the report continue to drive further improvements in the intelligent security services that Microsoft provides for customers.

Windows Defender AV and the rest of the built-in security technologies in Windows Defender Advanced Threat Protection (Windows Defender ATP) work together to create a unified endpoint security platform. In real customer environments, this unified security platform provides intelligent protection, detection, investigation, and response capabilities that are not currently reflected in independent tests. We tested the two malware samples that Windows Defender AV missed in the March-April 2018 test and confirmed that, for both samples, at least three other components of Windows Defender ATP would detect or block the malware in a real attack scenario. You can find these details and more in the transparency report.

Download the complete transparency report on March-April 2018 AV-TEST results

The Windows Defender ATP security platform incorporates attack surface reduction, next-generation protection, endpoint detection and response, and advanced hunting capabilities. To see these capabilities for yourself, sign up for a 90-day trial of Windows Defender ATP, or enable Preview features on existing tenants.

Zaid Arafeh

Senior Program Manager, Windows Defender Research team

Related blog posts:

Why Windows Defender Antivirus is the most deployed in the enterprise

Adding transparency and context into industry AV test results

Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft community and Windows Defender Security Intelligence.

Follow us on Twitter @WDSecurity and Facebook Windows Defender Security Intelligence.

The post March-April 2018 test results: More insights into industry AV tests appeared first on Microsoft Security Blog.

]]>
http://approjects.co.za/?big=en-us/security/blog/2018/07/20/march-april-2018-test-results-more-insights-into-industry-av-tests/feed/ 0