Success in security: reining in entropy
http://approjects.co.za/?big=en-us/security/blog/2020/05/20/success-security-reining-entropy/
Wed, 20 May 2020 18:00:12 +0000

Your network is unique. It’s a living, breathing system evolving over time. Data is created. Data is processed. Data is accessed. Data is manipulated. Data can be forgotten. The applications and users performing these actions are all unique parts of the system, adding degrees of disorder and entropy to your operating environment. No two networks on the planet are exactly the same, even if they operate within the same industry, use the exact same applications, and even hire workers from one another. In fact, the only attribute your network is guaranteed to share with another network is how unique each one is.

If we follow the analogy of an organization or network as a living being, it’s logical to drill down deeper, into the individual computers, applications, and users that function as cells within our organism. Each cell is unique in how it’s configured, how it operates, the knowledge or data it brings to the network, and even the vulnerabilities it carries with it. It’s important to note that cancer begins at the cellular level and can ultimately bring down the entire system. And where incident response and recovery are concerned, the greater the level of entropy and chaos across a system, the more difficult it becomes to locate potentially harmful entities. Incident response is about locating the source of the cancer in a system in an effort to remove it and make the system healthy once more.

Take the human body, for example. A body that remains at rest 8-10 hours a day, working from a chair in front of a computer, with very little physical activity, will start to develop health issues. The longer the body remains in this state, the further it drifts from an ideal state, and small problems begin to manifest. Perhaps it’s diabetes. Maybe it’s high blood pressure. Or it could be weight gain creating fatigue in the joints and muscles. Your network is similar. The longer we leave the network unattended, the more it will drift from an ideal state to one where small problems begin to manifest, putting the entire system at risk.

Why is this important? Let’s consider an incident response process where a network has been compromised. As responders and investigators, we want to discover what happened, what the cause was, what the damage is, and how best to fix the issue and get back on the road to a healthy state. This entails looking for clues or anomalies: things that stand out from the normal background noise of an operating network. In essence, we identify what’s truly unique in the system and drill down on those items. Are we able to identify cancerous cells because they look and act so differently from the vast majority of the other, healthy cells?

Consider a medium-size organization with 5,000 computer systems. Last week, the organization was notified by a law enforcement agency that customer data, dated from two weeks ago, had been discovered on the dark web. We start our investigation on the date we know the data likely left the network. Which computer systems hold that data? Which users have access to those systems? What windows of time are normal for those users to interact with the system? What processes or services are running on those systems? Forensically, we want to know what system was impacted, who was logging in to the system around the timeframe in question, what actions were performed, where those logins came from, and whether there are any unique indicators: unusual users, system interaction times, protocols, binary files, data files, services, and configurations (such as rogue registry keys) that stand out from the normal operating environment.
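To make “unique indicators” concrete, here is a minimal sketch of the prevalence analysis (sometimes called stacking) this kind of hunt relies on: gather the same artifact type from every host, count how often each value occurs, and review the rarest values first. The inventory data and threshold below are illustrative assumptions, not output from any particular tool.

```python
from collections import Counter

# Hypothetical inventory: one set of autostart entries per host, as might
# be gathered by an EDR agent or a scheduled collection script.
autostarts_by_host = {
    "HOST-0001": {"OneDrive.exe", "SecurityHealthSystray.exe"},
    "HOST-0002": {"OneDrive.exe", "SecurityHealthSystray.exe"},
    "HOST-0003": {"OneDrive.exe", "SecurityHealthSystray.exe",
                  r"c:\windows\perflogs\updater.exe"},  # the outlier
}

def rare_indicators(indicators_by_host, max_hosts=2):
    """Return indicators present on at most max_hosts machines.

    In a well-controlled environment most autostart entries appear on
    nearly every host; entries found on only a handful of machines are
    the unique indicators worth a closer look.
    """
    counts = Counter()
    for indicators in indicators_by_host.values():
        counts.update(indicators)
    return {indicator: n for indicator, n in counts.items() if n <= max_hosts}

print(rare_indicators(autostarts_by_host))
# {'c:\\windows\\perflogs\\updater.exe': 1}
```

The same pattern applies to services, scheduled tasks, logon times, or outbound destinations: the baseline is what makes the outlier visible.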

Our investigation reveals a unique service running on a member server with SQL Server. In fact, analysis shows the service has an autostart entry in the registry, so every time the system is rebooted it launches from a file in the c:\windows\perflogs directory, an unusual location for an autostart. We haven’t seen this service before, so we sweep all the systems on the network to locate other instances of the registry startup key or the binary files we’ve identified. Out of 5,000 systems, we locate these pieces of evidence on only three, one of which is a Domain Controller.
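As an illustration of how such a sweep might gather its raw data, here is a minimal sketch, assuming it runs locally on each Windows host (at scale you would push the collection out through your management or EDR tooling). It uses Python’s standard winreg module to flag automatically starting services whose binaries live under the suspicious directory; the directory constant mirrors the example above.

```python
import winreg  # Windows-only standard library module

SERVICES_KEY = r"SYSTEM\CurrentControlSet\Services"
SUSPICIOUS_DIR = r"c:\windows\perflogs"

def autostart_services_from(path_fragment):
    """Yield (service_name, image_path) for auto-start services whose
    binary path contains an unexpected directory."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICES_KEY) as services:
        index = 0
        while True:
            try:
                name = winreg.EnumKey(services, index)
            except OSError:  # no more subkeys
                break
            index += 1
            try:
                with winreg.OpenKey(services, name) as svc:
                    image, _ = winreg.QueryValueEx(svc, "ImagePath")
                    start, _ = winreg.QueryValueEx(svc, "Start")
            except OSError:  # this service lacks one of the values
                continue
            # Start == 2 means the service starts automatically at boot;
            # the substring match is deliberately crude for the example.
            if start == 2 and path_fragment in image.lower():
                yield name, image

for name, image in autostart_services_from(SUSPICIOUS_DIR):
    print(f"{name}: {image}")
```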

This process of identifying what is unique allows our investigative team to highlight the systems, users, and data at risk during a compromise. It also helps us potentially identify the source of the attack, what data may have been pilfered, and the external systems calling the shots and maintaining access to the environment. Additionally, any recovery effort will require this information to be successful.

This all sounds like common sense, so why cover it here? Remember we discussed how unique your network is, and how there are no other systems exactly like it elsewhere in the world? That means every investigative process into a network compromise is also unique, even if the same attack vector is being used to attack multiple organizational entities. We want to provide the best foundation for a secure environment and the investigative process, now, while we’re not in the middle of an active investigation.

The unique nature of a system isn’t inherently a bad thing. Your network can be unique from other networks. In many cases, it may even provide a strategic advantage over your competitors. Where we run afoul of security best practice is when we allow too much entropy to build up on the network, losing the ability to differentiate “normal” from “abnormal.” In short, will we be able to easily locate the evidence of a compromise because it stands out from the rest of the network, or are we hunting for the proverbial needle in a haystack? Clues related to a system compromise don’t stand out if everything we look at appears abnormal. This can exacerbate an already tense response situation, extending the timeframe for investigation and dramatically increasing the cost of returning to a trusted operating state.

To tie this back to our human body analogy, when a breathing problem appears, we need to be able to understand whether this is new, or whether it’s something we already know about, such as asthma. It’s much more difficult to correctly identify and recover from a problem if it blends in with the background noise, such as difficulty breathing because of air quality, lack of exercise, smoking, or allergies. You can’t know what’s unique if you don’t already know what’s normal or healthy.

To counter this problem, we pre-emptively bring the background noise on the network to a manageable level. All systems move towards entropy unless acted upon. We must put energy into the security process to counter the growth of entropy, which would otherwise exponentially complicate our security problem set. Standardization and control are the keys here. If we limit what users can install on their systems, we quickly notice when an untrusted application is being installed. If it’s against policy for a Domain Administrator to log in to Tier 2 workstations, then any attempts to do this will stand out. If it’s unusual for Domain Controllers to create outgoing web traffic, then it stands out when this occurs or is attempted.
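Because each of those policies draws a hard line, checking them can be mechanical. Here is a minimal sketch of the first rule, flagging Domain Admin logons to Tier 2 workstations; the event records, account names, and tier labels are invented for illustration, and in practice the events would come from your SIEM or domain controller security logs.

```python
# Invented, simplified logon records; real ones would be parsed from
# domain controller security logs (e.g., Event ID 4624) or a SIEM.
logons = [
    {"account": "DA-jsmith", "host": "WKS-0142", "tier": 2},
    {"account": "helpdesk-1", "host": "WKS-0142", "tier": 2},
    {"account": "DA-jsmith", "host": "DC01", "tier": 0},
]

DOMAIN_ADMINS = {"DA-jsmith", "DA-akumar"}  # assumed naming convention

def tiering_violations(events):
    """Domain Admin credentials should never touch Tier 2 workstations,
    so any such logon is immediately worth an alert."""
    return [e for e in events
            if e["account"] in DOMAIN_ADMINS and e["tier"] == 2]

for event in tiering_violations(logons):
    print(f"ALERT: {event['account']} logged on to {event['host']}")
```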

Centralize the security process. Enable that process. Standardize security configuration, monitoring, and expectations across the organization. Enforce those standards. Enforce the principle of least privilege across all user levels. Understand your ingress and egress network traffic patterns, and when each is allowed or blocked.

In the end, your success in investigating and responding to inevitable security incidents depends on what your organization does on the network today, not during an active investigation. By reducing entropy on your network and defining what “normal” looks like, you’ll be better prepared to quickly identify questionable activity on your network and respond appropriately. Bear in mind that security is a continuous process and should not stop. The longer we ignore the security problem, the further the state of the network will drift from “standardized and controlled” back into disorder and entropy. And the further we sit from that state of normal, the more difficult and time consuming it will be to bring our network back to a trusted operating environment in the event of an incident or compromise.

Threat hunting in Azure Advanced Threat Protection (ATP)
http://approjects.co.za/?big=en-us/security/blog/2020/01/07/threat-hunting-azure-advanced-threat-protection/
Tue, 07 Jan 2020 17:00:53 +0000

As members of Microsoft’s Detection and Response Team (DART), we’ve seen a significant increase in adversaries “living off the land” and using compromised account credentials for malicious purposes. From an investigation standpoint, tracking adversaries who use this method is quite difficult: you need to sift through the data to determine whether a given activity was performed by the legitimate user or a bad actor. Credentials can be harvested in numerous ways, including phishing campaigns, Mimikatz, and keyloggers.

Recently, DART was called into an engagement where the adversary had a foothold within the on-premises network, which had been gained through compromising cloud credentials. Once the adversary had the credentials, they began their reconnaissance on the network by searching for documents about VPN remote access and other access methods stored on a user’s SharePoint and OneDrive. After the adversary was able to access the network through the company’s VPN, they moved laterally throughout the environment using legitimate user credentials harvested during a phishing campaign.

Once our team determined the initially compromised accounts, we could begin tracking the adversary within the on-premises systems. The initial VPN logs gave us the starting point for our investigation. Typically, in this kind of investigation, your team would need to dive deeper into individual machine event logs, looking for remote access activities and lateral movement, as well as any domain controller logs that could help highlight the credentials used by the attacker(s).
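As a sketch of what that manual deep dive can look like, the snippet below scans an exported Security event log for remote interactive (RDP-style) logons: Event ID 4624 with LogonType 10. It assumes the third-party python-evtx package and an exported Security.evtx file; the file path and the crude XML matching are both illustrative.

```python
import re
from Evtx.Evtx import Evtx  # third-party package: pip install python-evtx

def remote_logons(evtx_path):
    """Yield (account, source_ip) for Event ID 4624 logons with
    LogonType 10 (RemoteInteractive, i.e., RDP)."""
    with Evtx(evtx_path) as log:
        for record in log.records():
            xml = record.xml()
            if not re.search(r"<EventID[^>]*>4624<", xml):
                continue
            if '<Data Name="LogonType">10</Data>' not in xml:
                continue
            user = re.search(r'<Data Name="TargetUserName">([^<]*)</Data>', xml)
            ip = re.search(r'<Data Name="IpAddress">([^<]*)</Data>', xml)
            yield (user.group(1) if user else "?",
                   ip.group(1) if ip else "?")

# Illustrative path; in practice you would collect Security.evtx from
# every machine of interest and repeat the scan per host.
for account, source in remote_logons("Security.evtx"):
    print(f"{account} logged on remotely from {source}")
```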

Luckily for us, this customer had deployed Azure Advanced Threat Protection (ATP) prior to the incident. By having Azure ATP operational prior to an incident, the software had already normalized authentication and identity transactions within the customer network. DART began querying the suspected compromised credentials within Azure ATP, which provided us with a broad swath of authentication-related activities on the network and helped us build an initial timeline of events and activities performed by the adversary, including:

  • Interactive logins (Kerberos and NTLM)
  • Credential validation
  • Resource access
  • SAMR queries
  • DNS queries
  • WMI Remote Code Execution (RCE)
  • Lateral Movement Paths
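To show what “building a timeline” from normalized events like these means in practice, here is a toy sketch; the event records below are invented stand-ins for the kind of normalized authentication data Azure ATP surfaces, not the product’s actual schema or API.

```python
from datetime import datetime

# Invented, simplified events standing in for normalized identity data.
events = [
    {"time": "2019-11-02T03:21:05", "account": "jdoe",
     "activity": "WMI RCE", "source": "FS01", "target": "SQL02"},
    {"time": "2019-11-02T03:14:00", "account": "jdoe",
     "activity": "NTLM logon", "source": "VPN-POOL-17", "target": "FS01"},
    {"time": "2019-11-02T03:16:30", "account": "jdoe",
     "activity": "SAMR query", "source": "FS01", "target": "DC01"},
]

def timeline(events, account):
    """Return one account's activities in chronological order: the
    starting point for mapping an adversary's movements."""
    hits = [e for e in events if e["account"] == account]
    return sorted(hits, key=lambda e: datetime.fromisoformat(e["time"]))

for e in timeline(events, "jdoe"):
    print(f"{e['time']}  {e['activity']:<10}  {e['source']} -> {e['target']}")
```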

 


This data enabled the team to perform more in-depth analysis on both user and machine level logs for the systems the adversary-controlled account touched. Azure ATP’s ability to identify and investigate suspicious user activities and advanced attack techniques throughout the cyber kill chain enabled our team to completely track the adversary’s movements in less than a day. Without Azure ATP, investigating this incident could have taken weeks—or even months—since the data sources don’t often exist to make this type of rapid response and investigation possible.

Once we could track the user throughout the environment, we correlated that data with Microsoft Defender ATP to understand the tools the adversary used throughout their journey. Using the right tools for the job allowed DART to jump-start the investigation; identify the compromised accounts, compromised systems, other systems at risk, and the tools being used by the adversaries; and provide the customer with the information needed to recover from the incident faster and get back to business.

Learn more and keep updated

Learn more about how DART helps customers respond to compromises and become cyber-resilient. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Facing the cold chills
http://approjects.co.za/?big=en-us/security/blog/2019/07/15/facing-cold-chills/
Mon, 15 Jul 2019 16:00:54 +0000

Have you ever felt the cold chill in your spine when the “check engine” light comes on in your car? How about when one of your children turns pale and gets their first fever? It’s a feeling of helplessness and concern about what could be wrong. Then there’s the feeling of relief that comes with understanding, even if it’s only partial understanding. We give the child medicine and the fever fades. We add oil to the engine and the light goes off. The human mind often wants to take the easiest path away from fear and stress. But these solutions only fix the symptoms, leaving the cause of the issue unaddressed. The same is true in security-related situations.

The Microsoft Detection and Response Team (DART) recently worked with a customer who had been subject to a targeted compromise, one in which an adversary was intently and purposefully attempting to get into their systems. The attack came through one of the customer’s child organizations, which was initially compromised and which shares a trust with the parent organization. During the investigation of the child organization, the parent organization was notified that the attackers had migrated their foothold into the parent network. The parent organization was able to take immediate steps to stop the malicious activities, just before things could get very serious.

From a security perspective, the customer had addressed the symptom (a known compromise) but missed the opportunity to address the core issues that allowed the compromise in the first place. It’s not unusual for an organization to then shift to the perspective that everything is now better. But it’s never quite so simple.

For DART, one of our key responsibilities is helping our customers understand what happened, how it happened, how long it’s been happening, the potential impact to the organization, and how the customer can improve their protection, detection, and response mechanisms to be better prepared in the future.

Understanding a compromise

Let’s dissect this story a bit more to better understand what happened. The example customer is a global company with dozens of child organizations around the globe, all connected to the same Active Directory architecture. The IT and security functions are decentralized, with each region retaining autonomous control over the operation of its data resources. This takes the pressure off the parent organization by delegating administrative processes like patching, account management, and configuration management to administrators at each child organization, allowing the parent to focus primarily on critical business operations and its own IT and security.

[Infographic: the child organizations surround the parent organization, which sits in the cloud; the parent is made vulnerable as the child organizations are made vulnerable.]

Each of the child organizations operates its own Active Directory forest for its users and systems, and a majority of these forests have a two-way trust with the Active Directory in the parent organization. Roughly half of these trusts have no security identifier (SID) filtering in place to restrict account movement between the various forests. The parent organization’s incident was possible because a compromised account was allowed to move into its network, unhindered. In fact, a compromise in any of the other child organizations would have had the same result, creating real risk for the parent and all the other connected child organizations.
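To make the SID filtering gap concrete, here is a hedged sketch of a trust audit; the server, base DN, and credentials are placeholders, and it assumes the third-party ldap3 package. It reads the trustedDomain objects from Active Directory and reports whether each trust carries the TRUST_ATTRIBUTE_QUARANTINED_DOMAIN flag (0x4), the bit indicating that SID filtering quarantine is enforced.

```python
from ldap3 import Server, Connection, SUBTREE  # third-party: pip install ldap3

TRUST_ATTRIBUTE_QUARANTINED_DOMAIN = 0x00000004  # SID filtering enforced

# Placeholder connection details for illustration only.
server = Server("dc01.parent.example.com")
conn = Connection(server, user="PARENT\\auditor", password="...", auto_bind=True)

conn.search(
    search_base="CN=System,DC=parent,DC=example,DC=com",
    search_filter="(objectClass=trustedDomain)",
    search_scope=SUBTREE,
    attributes=["trustPartner", "trustDirection", "trustAttributes"],
)

for entry in conn.entries:
    attrs = int(entry.trustAttributes.value)
    filtered = bool(attrs & TRUST_ATTRIBUTE_QUARANTINED_DOMAIN)
    # trustDirection: 1 = inbound, 2 = outbound, 3 = two-way
    print(f"{entry.trustPartner}: direction={entry.trustDirection}, "
          f"SID filtering={'on' if filtered else 'OFF'}")
```

Two-way trusts that print “OFF” are the ones that would let a compromised account in a child forest move into the parent unhindered.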

How DART helps customers address underlying risks

DART spent days piecing together a story that explained the real risk to the customer’s networks, even though this specific attack had been blocked. A number of systemic issues worked together to create that risk. Patching was sporadic, and due to the decentralized nature of both the information technology (IT) and security processes across the various organizations, there were large numbers of systems with known vulnerabilities. The decentralized network also created blind spots in security monitoring across the various forest and network boundaries. The customer could not have detected the lateral movement of bad actors on the network because they weren’t watching those boundaries.

Finally, the lack of configuration management across the company allowed users to have excessive account privileges and to install unsafe software packages. As a result, large numbers of dangerous software packages were installed on user systems with privileged access, simply because users opened email attachments, clicked a link, or installed questionable software downloaded from the internet, such as key generators for commercial software products.

The large number of potentially unwanted applications (PUAs) and malware present on the network was clear evidence of the issues facing the customer. A compromised user in one segment of the customer organization creates risk for the entire company. Faced with the reality of the situation, the customer shifted perspectives to improving the security of their environment.

To start, the customer needed to get a handle on the configuration and security of the various arms of the organization. Centralizing IT and security functions would allow for consistent patching, secure account management, and security monitoring. Two-way trusts putting the organization at risk should be protected with appropriate SID filtering, reduced to one-way trusts, or removed altogether, depending on business need. Standardized security software, such as anti-malware solutions with automatic updates, would detect malware on endpoints much more quickly. Security monitoring at all key network boundaries would create immediate alerts when malicious software or bad actors attempt to move across the environment or create persistence points. A sensible, centralized management plan would enable the customer to protect, detect, and respond to incidents.

It’s easy to forget that security incidents are sometimes symptoms of a bigger problem facing the organization. Leadership would benefit from taking a step back from current events to work with their team and determine where the real security issues exist, and what’s needed to make the organization more secure. In essence, a security aspirin will help lower our fever, but it’s a temporary fix. The fever will return, and it could be worse. It’s more effective in the long run to obtain the needed X-rays or take the appropriate blood tests to determine how sick the network is, and what treatment options will remove the key risks to network health.

Learn more

To learn more about DART, our engagements, and how they are delivered by experienced cybersecurity professionals who devote 100 percent of their time to providing cybersecurity solutions to customers worldwide, please contact your account executive. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
