8 best practices for CISOs conducting risk reviews

The Deputy CISO blog series is where Microsoft Deputy Chief Information Security Officers (CISOs) share their thoughts on what is most important in their respective domains. In this series, you will get practical advice, tactics to start (and stop) deploying, forward-looking commentary on where the industry is going, and more. In this blog, Rico Mariani, Deputy CISO for Microsoft Security Products, Research Infrastructure, and Engineering Systems shares some of his best practices and expertise in conducting risk reviews.

The nature of cyberthreats has never been static, but it’s hard to accurately convey the scale of their recent evolution and proliferation. As we’ve seen in many other arenas, AI has become a very powerful productivity tool for would-be cybercriminals. Between April 2024 and April 2025, Microsoft stopped $4 billion in fraud attempts.1 And as of the writing of the Microsoft Digital Defense Report 2025, we are tracking 100 trillion security signals each day (a 40% increase since 2023).2

This is why I decided to write a blog about risk reviews. By asking the right questions, risk reviews transform security data from primarily reactive remediation and response information into key insights that inform a proactive security stance. And embracing strong proactive security is something we can all do to mitigate our increased exposure to security threats.

Risk reviews are also a topic I’ve lent focus to during my first six months as Deputy CISO for Microsoft Security. It’s a very interesting role for me, as I’ve traditionally described myself as a performance specialist and a systems specialist more than a security specialist. It’s not necessarily a distinction of skill set, but more one of mindset. What I’d like to share with you is a synthesis of my inherent performance- and systems-first way of thinking and the things I’ve brought into that practice after working with many of the other Microsoft Deputy CISOs over the last few months.

There are roughly eight points I want to bring up concerning risk reviews in this blog. Each one can help expose security vulnerabilities when raised with security teams. Together, they represent a structured and approachable way to initiate necessary conversations and drive meaningful results:

  1. Assets
  2. Applications 
  3. Authentication 
  4. Authorization 
  5. Network isolation 
  6. Detections 
  7. Auditing 
  8. Things not to miss 

Now, why did I choose to highlight these areas and not others? Generally, I find that looking at problems through the lens of risk management gives me a fresh perspective. When you consistently ask specific questions in these areas, they effectively start the conversation you want to have.

Just one last thing before we dive in: What I’m about to tell you is only approximately correct. There will be edge cases and exceptions, but generally I think you’ll find this information helpful.

1. Assets

The best place to start a review is identifying the assets that you need to protect. This will largely define the scope of the review. A good place to find those assets is, of course, on your architecture diagrams and in your threat models. The assets we’re talking about could be storage (where perhaps you’re storing sensitive or otherwise important data) or they could be highly privileged applications like command-and-control systems or something similar. This is, in short, the list of things that your cyberattacker wants to get to.
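To make this concrete, here is a minimal sketch (in Python, with entirely illustrative asset names and fields) of what an asset register pulled from your diagrams and threat models might look like, with the high-sensitivity entries defining the review's scope:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One entry in the review's asset register (all names illustrative)."""
    name: str
    kind: str         # e.g. "storage", "privileged-app"
    sensitivity: str  # e.g. "high", "medium", "low"
    owners: list = field(default_factory=list)

# Seed the register from your architecture diagrams and threat models.
register = [
    Asset("customer-db", "storage", "high", ["data-platform-team"]),
    Asset("deploy-controller", "privileged-app", "high", ["sre-team"]),
    Asset("static-assets", "storage", "low", ["web-team"]),
]

# The high-sensitivity entries define the scope of the review.
review_scope = [a.name for a in register if a.sensitivity == "high"]
print(review_scope)
```

Even a list this simple forces the useful questions: what kind of asset is it, how sensitive is it, and who owns it?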

2. Applications

In the next step, you identify your applications. These are, broadly speaking, the active part of your system. They are the outward-facing surfaces that customers will use and the set of microservices that support your interface. These systems could be providing any set of services that you might need—and herein lies the problem. It’s entirely normal for your applications to require access to your most important assets, but that means the applications themselves can become viable targets for a cyberattacker. So how do we make this situation better? At this point, it’s reasonable to start talking about possible controls. 

Read up on Zero Trust for source code access.

3. Good quality authentication 

The next thing you will want to inspect is the form of authentication that your system is using. The best systems use tokens for authentication, and they get those tokens from standard token issuers like Microsoft Entra. It’s sometimes viable to have your own token generation system, but remember that such systems tend to have bugs, and those bugs can be exploitable. Even without bugs, your token issuing system can have gaps or vulnerabilities: perhaps the tokens cannot be properly scoped, tend to be too long-lived, are difficult to make fine-grained enough, or lack the capacity to flow user context from the request to the authorization system. Many such deficiencies are possible.

Even with a good quality token issuing system, you can easily find yourself in a situation where the tokens that you’re creating are too fungible, or too powerful, or both. Thinking back to the assets you’re trying to protect and the applications that you have, you can likely categorize some of the applications as having more “power,” if you will, than others. Sometimes we call these “highly privileged applications” because they have the capability to do something that is especially of interest to cyberattackers, like reading a lot of data, changing configuration, or anything like that. 

To best manage the privileges associated with these applications, the tokens they use need to be as limited as possible. So, a particular token might authorize a capability for a certain customer, on behalf of a certain user, for a certain set of data—and nothing more than that. When privileges are very generic, like “I can do this operation for anyone, anywhere,” things become much more dangerous. The idea is to make sure that the tokens you’re getting are specific to your intent, that only the applications that need those tokens can get them, and that the tokens are as limited as possible. This goes a long way in reducing the possible damage a cyberattacker could do if they found such a token errantly stored somewhere.

A lot of the things we think about when we’re working with tokens and trying to limit them fall into the category of limiting what a cyberattacker can do if they get a foothold somewhere. This is the Zero Trust model, where you assume breach everywhere.  

Additionally, it’s essential to use standard libraries to accurately authenticate with tokens, so that all the aspects and limitations of the token are certain to be honored. 
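As a sketch of what “as limited as possible” looks like in code: the checks below run on a claims dictionary whose signature a standard library has already verified (the claim names `aud`, `exp`, and `scp` follow common OAuth conventions; the audience and scope values are made up for illustration):

```python
import time

def check_claims(claims: dict, *, expected_aud: str, required_scope: str) -> bool:
    """Check an already-signature-verified token's claims.

    Signature verification belongs to a standard library; this sketch only
    illustrates the narrow checks that keep a token tied to one audience,
    one capability, and a short lifetime.
    """
    if claims.get("aud") != expected_aud:    # was the token minted for us?
        return False
    if claims.get("exp", 0) <= time.time():  # is it still within its lifetime?
        return False
    scopes = claims.get("scp", "").split()
    return required_scope in scopes          # does it grant exactly this capability?

claims = {"aud": "api://orders", "exp": time.time() + 300, "scp": "orders.read"}
assert check_claims(claims, expected_aud="api://orders", required_scope="orders.read")
assert not check_claims(claims, expected_aud="api://orders", required_scope="orders.write")
```

The second assertion is the point: a token scoped to `orders.read` buys a cyberattacker nothing against a write API.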

Learn about phishing-resistant multifactor authentication from the Microsoft Secure Future Initiative (SFI). 

4. Good quality authorization  

Good quality tokens are not going to help you if they’re enforced poorly (or not at all). And bugs can creep into code. Ad hoc authorization code can render the good authentication that you’ve done moot. 

Any time you can use declarative style patterns that help you verify tokens against incoming APIs and the data that the client is attempting to access with your API, you’ll find yourself in a better place. Simple, consistent authorization yields fewer bugs and therefore less risk. 
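One sketch of such a declarative pattern in Python: the required scope is stated once, right next to the API handler, so the check cannot drift out of sync with the endpoint (handler names and scopes here are illustrative, not any particular framework's API):

```python
import functools

def requires_scope(scope: str):
    """Declarative guard: the required scope is declared once, at the handler."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(claims: dict, *args, **kwargs):
            # Reject the call before any handler logic runs.
            if scope not in claims.get("scp", "").split():
                raise PermissionError(f"missing scope: {scope}")
            return handler(claims, *args, **kwargs)
        return wrapper
    return decorator

@requires_scope("orders.read")
def get_order(claims, order_id):
    return {"id": order_id}

print(get_order({"scp": "orders.read"}, 42))  # allowed
```

Because the guard is a single shared piece of code, fixing an authorization bug fixes it for every endpoint at once.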

5. Network isolation 

In addition to having good quality tokens, it’s important to isolate the pieces of your environment to the maximum extent possible. Again, this is done because it’s prudent to assume that a cyberattacker has a foothold somewhere in your network. The questions are “where exactly can that foothold be?” and “once they have that foothold, where in my network can they get to?” If a threat actor can reach any part of your system from any other part, that is obviously worse than if your most sensitive systems can be accessed from exactly one or two key places and nowhere else. When properly controlled, most footholds become useless to a cyberattacker—or at least only indirectly useful.

Use service tags to create boundaries around your various assets such that applications are used by exactly those systems that are supposed to be using them, and data is accessed by exactly those applications that are supposed to be accessing it. This goes a long way toward taking many cyberthreats off the table.

Network isolation can happen at several layers of the network stack. Commonly, Layer 7 controls are used at the perimeter; this might manifest as some kind of HTTP proxy or an HTTP routing gateway. However, protection is incomplete without additional work at Layer 3 within your network. You want to limit IP traffic to exactly the places it needs to go. You might use techniques like virtual LANs, or similar constructs like network security groups (NSGs) in Microsoft Azure. The idea is to limit connectivity to exactly what is necessary to do the job and not give the cyberattacker freedom to move around.
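To illustrate the Layer 3 idea, here is a small Python sketch of an allow-list check in the spirit of NSG-style rules (the CIDRs and asset names are made up; real rules live in your network configuration, not application code):

```python
import ipaddress

# Each sensitive destination is reachable from exactly the subnets that
# need it, and nothing else. All addresses below are illustrative.
ALLOWED_SOURCES = {
    "customer-db": [ipaddress.ip_network("10.1.2.0/24")],        # app tier only
    "deploy-controller": [ipaddress.ip_network("10.9.0.0/28")],  # ops jump hosts only
}

def is_allowed(dest: str, src_ip: str) -> bool:
    """True only if src_ip falls inside a subnet explicitly allowed for dest."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in ALLOWED_SOURCES.get(dest, []))

assert is_allowed("customer-db", "10.1.2.40")      # app tier can reach the DB
assert not is_allowed("customer-db", "10.9.0.5")   # ops host cannot
```

The default-deny shape matters: an unlisted destination or source gets nothing, so a foothold on a random host has nowhere to go.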

With good network isolation comes the ability to log any attempts to gain access at the perimeter, and potentially even internally. Depending on the networking technology you’re using, these logs are great for threat hunting. We’ll talk about that in the next section.

Learn more about network isolation and other best practices from SFI.

6. Detections  

It’s normal to think about monitoring for reliability. Systems need to stay within their operating parameters in the face of changes and external conditions. But it’s also important to think about detection from the perspective of your threat model. If you identify five or ten risks in your threat model that need controls, it’s useful to think about how you might detect if any of those things are actually happening in your environment.  

In this context, one place to look is at the perimeter—by examining your incoming HTTP traffic, for instance. But you can also look anywhere in your environment where you predict that attacks might happen. You might look for badly formatted requests, or fuzzing, or evidence of DDoS attack—whatever is appropriate to the risks you have. The idea is that you want to be able to create alerts if you have evidence of a threat actor operating in your estate.  
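A toy sketch of this kind of detection, assuming simple space-delimited request logs (the patterns and threshold are illustrative, and not a substitute for a real detection product):

```python
import re
from collections import Counter

# Illustrative signatures for two of the risks mentioned above.
SUSPICIOUS = [
    re.compile(r"(\.\./)+"),            # path traversal attempts
    re.compile(r"(?i)union\s+select"),  # SQL injection probes
]

def scan(log_lines):
    """Count suspicious requests per source IP; alert at a threshold."""
    hits = Counter()
    for line in log_lines:
        src, _, request = line.partition(" ")
        if any(p.search(request) for p in SUSPICIOUS):
            hits[src] += 1
    return [src for src, n in hits.items() if n >= 3]  # alert threshold

logs = [
    "10.0.0.5 GET /items?id=1",
    "10.0.0.9 GET /download?f=../../etc/passwd",
    "10.0.0.9 GET /items?id=1 UNION SELECT password",
    "10.0.0.9 GET /download?f=../../../secrets",
]
print(scan(logs))
```

The key discipline is the mapping: every risk you listed in the threat model should have at least one signal like this that would fire if the risk materialized.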

And, of course, security products can be very helpful here.  

7. Auditing

We separate the notion of auditing from detection. Specifically, auditing covers the data you would use after a breach to determine the extent of the breach and the customers affected by it. If you find a vulnerability without any evidence of threat actor exploitation, you’d want to check your audit data to verify that claim. That way you can have evidence that whatever problem you found was not in fact exploited. If it was exploited, you’ll know to what extent, who was affected, and who needs to be notified.

Some parts of your endpoint detection and response (EDR) stream will be very useful for auditing. Additional auditing information can come from the logs you create in your applications that record suitable information concerning recent activity. 
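As a sketch of what application-side audit records might capture, here is a minimal structured logger; the field names are illustrative, chosen so that the post-breach questions ("who touched what, when, with what result?") are answerable:

```python
import json
import time

def audit_event(actor: str, action: str, resource: str, outcome: str) -> str:
    """Emit one structured audit record as a JSON line.

    Fields are illustrative: enough to reconstruct who did what to which
    resource, when, and whether it succeeded.
    """
    record = {
        "ts": time.time(),      # when it happened
        "actor": actor,         # who (app identity or on-behalf-of user)
        "action": action,       # what operation
        "resource": resource,   # which asset
        "outcome": outcome,     # allowed / denied / failed
    }
    return json.dumps(record)

line = audit_event("app://orders", "read", "customer-db/rows/42", "allowed")
print(line)
```

Structured, append-only records like this are what let you answer scoping questions with evidence rather than guesswork after an incident.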

8. Things not to miss 

It’s important to think about all the applications and data that you have in your estate. For instance, it’s easy to overlook the backup data that you have stored. A cyberattacker might not be able to get access to your primary systems but might find that your backups are entirely unprotected and they can just read the backup.

Similarly, support systems often go overlooked. There are frequently important customer support scenarios that require access, and it’s easy to fall into the trap of not giving those systems the highest level of scrutiny. 

We should add systems under development and test systems to this problematic set. In both cases, the code running those systems is less trustworthy than normal production code. Development code, for instance, can be presumed to have more bugs than production code. Some of those bugs might be authorization bugs, and buggy authorization code might provide access to important assets. Therefore, your plans should include even greater scrutiny for these kinds of systems.

Explore actionable patterns and practices from SFI.

In summary

If you’ve gotten as far as identifying all of your assets and all your applications, and then thinking about the access patterns and controls that you have between them—including authentication, authorization, network isolation, and the use of bug-resistant patterns—you’re in a pretty good place to write a risk summary that can guide your actions for many months. And we haven’t even touched on basics like vulnerability management, security bug management, and the usual software lifecycle work necessary to keep the system in good health. Combine all of the above and you should have a good-looking risk plan.

To hear more from Microsoft Deputy CISOs, check out the OCISO blog series:

To stay on top of important security industry updates, explore resources specifically designed for CISOs, and learn best practices for improving your organization’s security posture, join the Microsoft CISO Digest distribution list.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 


1. Microsoft Cyber Signals, Issue 9.

2. Microsoft Digital Defense Report 2024.
