Developers News and Insights | Microsoft Security Blog
http://approjects.co.za/?big=en-us/security/blog/tag/developers/
Expert coverage of cybersecurity topics

Recent enhancements for Microsoft Power Platform governance
http://approjects.co.za/?big=en-us/security/blog/2021/02/01/recent-enhancements-for-microsoft-power-platform-governance/
Mon, 01 Feb 2021 17:00:19 +0000

The post Recent enhancements for Microsoft Power Platform governance appeared first on Microsoft Security Blog.

An emerging trend in digital transformation efforts has been the rise of low-code development platforms. These platforms must be grounded in best-of-breed governance capabilities, including security and compliance features; without strong governance, the full benefits of low-code development cannot be realized. It's only natural, then, that any low-code platform an organization chooses must have strong security and compliance capabilities. Microsoft developed the Power Platform, which includes Power Apps, Power Automate, Power Virtual Agents, and Power BI, to serve our customers' need for a robust low-code development platform spanning app development, automation, chatbots, and rich, detailed data analysis and visualization. We previously reported on the fundamental security and compliance capabilities offered with Microsoft Flow, since renamed Power Automate. In this blog, we discuss the integrated security and compliance capabilities across the Power Platform and provide an update on the new capabilities we've launched.

Foundations of governance

As the number of developers grows, governance becomes key to ensuring a successful digital transformation. IT must create stronger guardrails so that the growing number of developers, and the assets they create, remain compliant and secure. The Power Platform's governance approach is multi-step, with a focus on security, monitoring, administrative management, and application lifecycle management (figure 1). Check out our detailed governance and administration capabilities. The Power Platform also offers a Center of Excellence Starter Kit, which organizations can use to evolve their practices and educate employees on governance best practices. The Power Platform comes equipped with features that reduce the complexity of governing your environment and empower admins to unlock the greatest benefits from their Power Platform services. Below, we report on some of our newest capabilities for protecting your organization's data: tenant restrictions and email exfiltration blocking. We also announce new analytics reports for the robotic process automation (RPA) capability recently launched with Power Automate.


Figure 1: The Power Platform multi-step governance strategy.

Cross-tenant inbound and outbound restrictions using Azure Active Directory

The Power Platform offers access to over 400 connectors to today's most popular enterprise applications. Connectors are proxies, or wrappers around an API, that allow the underlying service to 'talk' to Power Automate, Power Apps, and Azure Logic Apps. Controlling access to these connectors, and to the data residing in the applications behind them, is a crucial aspect of a proactive governance and security approach. To this end, we have recently enhanced the cross-tenant inbound and outbound restrictions for Power Platform connectors. The Power Platform leverages Azure Active Directory (Azure AD) to control user authentication and access to data for important connectors such as Microsoft first-party services. While tenant restrictions can be created at the Azure AD level, enabling organizations to control access to software as a service (SaaS) cloud applications and services based on the Azure AD tenant used for single sign-on, those restrictions cannot target specific Microsoft services such as the Power Platform exclusively. Organizations can instead isolate the tenant for Azure AD-based connectors exclusively for the Power Platform, using the Power Platform's tenant isolation capability. Tenant isolation works for connectors using Azure AD-based authentication, such as Office 365 Outlook or SharePoint, and can be one-way or two-way depending on the specific use case. Tenant admins can also allow one or more specific tenants in the inbound or outbound direction for connection establishment while disallowing all other tenants. Learn more about tenant restrictions and tenant isolation. For now, this capability is available through support; it will soon be available for admin self-service in the Power Platform admin center.

In addition to leveraging Power Platform tenant isolation to prevent data exfiltration and infiltration through Azure AD-based connectors, admins can safeguard connectors that use external identity providers, such as Microsoft account, Google, and others, by creating a data loss prevention (DLP) policy that classifies those connectors under the Blocked group.
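As a concrete illustration, a DLP policy of this kind can be modeled as a classification of connectors into groups, where a flow is rejected if it uses a Blocked connector or mixes Business and Non-Business connectors. The sketch below is a simplified model of that evaluation logic, not the actual Power Platform policy engine; the connector names and group assignments are illustrative.

```python
# Simplified model of a Power Platform-style DLP policy evaluation.
# Connector names and groupings are illustrative, not the real policy engine.

DLP_POLICY = {
    "Business": {"SharePoint", "Office 365 Outlook"},
    "Non-Business": {"Twitter"},
    "Blocked": {"Google Drive", "Microsoft account (MSA)"},
}

def evaluate_flow(connectors):
    """Return (allowed, reason). A flow is rejected if it uses any Blocked
    connector, or mixes Business and Non-Business connectors."""
    groups = set()
    for connector in connectors:
        for group, members in DLP_POLICY.items():
            if connector in members:
                groups.add(group)
    if "Blocked" in groups:
        return False, "uses a blocked connector"
    if {"Business", "Non-Business"} <= groups:
        return False, "mixes Business and Non-Business connectors"
    return True, "compliant"

print(evaluate_flow({"SharePoint", "Google Drive"}))       # rejected
print(evaluate_flow({"SharePoint", "Office 365 Outlook"})) # compliant
```

In the real service, this classification is configured per environment in the Power Platform admin center rather than in code.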

Email exfiltration controls

Digital transformation has opened a variety of new communication channels. However, email remains the foundational method of digital communication, and Microsoft Outlook continues to be one of the dominant email services for enterprises. Preventing the exfiltration (unauthorized transfer) of sensitive data via email is crucial to maintaining enterprise data security. To this end, we have added the ability for Power Platform admins to prevent emails sent through the Power Platform from being distributed to external domains. This is done by setting Exchange mail rules based on specific SMTP headers that are inserted into emails sent through Power Automate and Power Apps using the Microsoft 365 Exchange and Outlook connector. These SMTP headers can be used to create appropriate exfiltration rules in Microsoft Exchange for outbound emails. For more details on the headers auto-inserted through the Microsoft 365 Outlook connector, see SMTP headers. With the new controls, admins can easily block the exfiltration of forwarded emails and exempt specific flows (automated workflows created with Power Automate) or apps from exfiltration blocking. To block the exfiltration of forwarded emails, admins can set up Exchange mail flow rules to monitor or block emails sent by Power Automate and/or Power Apps using the Microsoft 365 Outlook connector. Figure 2 is an example SMTP header for an email sent using Power Automate, with the reserved word 'Power Automate' in the application header type.


Figure 2: Power Platform SMTP email header with reserved word ‘Power Automate.’

The SMTP header also includes an operation type that indicates the kind of email; in figure 2, it is a forwarded email. Exchange admins can use these headers to set up exfiltration blocking rules in the Exchange admin center. As figure 2 also shows, the SMTP header includes a workflow identifier in the new 'User-Agent' header, which is equal to the app or flow ID. Where a business scenario requires it, admins can exempt specific flows or apps from exfiltration blocking by matching this workflow ID in the User-Agent header. Learn more about how the Power Platform helps admins prevent email exfiltration with these sophisticated new controls.
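To make the header-based decision concrete, the sketch below uses Python's standard email library to inspect a message of the kind described above. The header names, GUID values, and exemption list are illustrative placeholders patterned on the documented x-ms-* headers; consult the SMTP headers documentation for the exact names your tenant emits, and note that in production this matching is expressed as Exchange mail flow rules rather than code.

```python
from email import message_from_string

# Illustrative raw message with Power Platform-style SMTP headers.
# Header names and GUIDs are placeholders patterned on the documented headers.
RAW_EMAIL = """\
x-ms-mail-application: Microsoft Power Automate
x-ms-mail-operation-type: Forward
User-Agent: 00000000-aaaa-bbbb-cccc-000000000000
Subject: Quarterly report

Body text.
"""

# Hypothetical flow IDs exempted from exfiltration blocking.
EXEMPT_WORKFLOW_IDS = {"11111111-aaaa-bbbb-cccc-111111111111"}

def should_block(raw):
    """Block forwarded mail sent via Power Automate/Power Apps,
    unless the flow/app ID is on the exemption list."""
    msg = message_from_string(raw)
    app = msg.get("x-ms-mail-application", "")
    operation = msg.get("x-ms-mail-operation-type", "")
    workflow_id = msg.get("User-Agent", "")
    if "Power Automate" in app and operation == "Forward":
        return workflow_id not in EXEMPT_WORKFLOW_IDS
    return False

print(should_block(RAW_EMAIL))  # True: forwarded via Power Automate, not exempt
```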

Powerful analytics for monitoring robotic process automation processes

One of the most exciting new capabilities offered with the Power Platform is Desktop flows (previously known as UI flows), which provides robotic process automation (RPA) through Power Automate. Alongside this powerful new feature, we have launched new analytics dashboards to give admins full visibility into new RPA processes. Admins can view the overall status of automation running in the organization and monitor analytics for automation built with RPA from the Power Platform admin center. These analytics reports are accessible to users granted the environment admin privilege. Admins can reach the Power Platform admin center by selecting Admin Center from the Power Automate portal settings menu. From the admin center, admins can access either Cloud flows (non-RPA automation) or Desktop flows. The Desktop flows page offers three types of reports:

  • Runs: An overview of daily, weekly, and monthly desktop flow run statistics.
  • Usage: Usage statistics for the different RPA processes.
  • Created: Analytics for recently created RPA processes.

Figure 3 shows an example of the new Runs report available in the admin center for Desktop flows. You can get more details on these powerful new analytics capabilities from our Microsoft docs page and our announcement blog. Check them both out.


Figure 3: New analytics ‘Run’ report for Desktop flows in Power Platform admin center.

Join our community and get started today

Join the growing Power Platform community so you can get the latest updates, join discussions, and get ideas on how the Power Platform can help your organization. You can also learn how the products work through the learning modules available on Microsoft Learn. These assets will help ensure your organization benefits from low-code development with the Power Platform while adhering to some of the industry's best compliance and security standards.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Microsoft announces new Project OneFuzz framework, an open source developer tool to find and fix bugs at scale
http://approjects.co.za/?big=en-us/security/blog/2020/09/15/microsoft-onefuzz-framework-open-source-developer-tool-fix-bugs/
Tue, 15 Sep 2020 16:00:22 +0000

We're excited to release a new tool called OneFuzz, an extensible fuzz testing framework for Azure.

The post Microsoft announces new Project OneFuzz framework, an open source developer tool to find and fix bugs at scale appeared first on Microsoft Security Blog.

Microsoft is dedicated to working with the community and our customers to continuously improve and tune our platform and products to help defend against the dynamic and sophisticated threat landscape. Earlier this year, we announced that we would replace the existing software testing experience known as Microsoft Security and Risk Detection with an automated, open-source tool as the industry moved toward this model. Today, we’re excited to release this new tool called Project OneFuzz, an extensible fuzz testing framework for Azure. Available through GitHub as an open-source tool, the testing framework used by Microsoft Edge, Windows, and teams across Microsoft is now available to developers around the world.

Fuzz testing is a highly effective method for increasing the security and reliability of native code; it is the gold standard for finding and removing costly, exploitable security flaws. Traditionally, fuzz testing has been a double-edged sword for developers: mandated by the software development lifecycle and highly effective in finding actionable flaws, yet very complicated to harness, execute, and extract information from. That complexity required dedicated security engineering teams to build and operate fuzz testing capabilities, making it effective but expensive. Enabling developers to perform fuzz testing shifts the discovery of vulnerabilities earlier in the development lifecycle and simultaneously frees security engineering teams to pursue proactive work.

Microsoft’s goal of enabling developers to easily and continuously fuzz test their code prior to release is core to our mission of empowerment. The global release of Project OneFuzz is intended to help harden the platforms and tools that power our daily work and personal lives to make an attacker’s job more difficult.

Recent advancements in the compiler world, open-sourced in LLVM and pioneered by Google, have transformed the security engineering tasks involved in fuzz testing native code. What was once attached—at great expense—can now be baked into continuous build systems through:

  • Crash detection, once attached via tools such as Electric Fence, can now be baked in with AddressSanitizer (asan).
  • Coverage tracking, once attached via tools such as iDNA, DynamoRIO, and Pin, can now be baked in with SanitizerCoverage (sancov).
  • Input harnessing, once accomplished via custom I/O harnesses, can now be baked in with libFuzzer's LLVMFuzzerTestOneInput function prototype.

These advances allow developers to create unit test binaries with a modern fuzzing lab compiled in: highly reliable test invocation, input generation, coverage, and error detection in a single executable. Experimental support for these features is growing in Microsoft’s Visual Studio. Once these test binaries can be built by a compiler, today’s developers are left with the challenge of building them into a CI/CD pipeline and scaling fuzzing workloads in the cloud.
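The feedback loop these compiler features enable can be illustrated in miniature. The toy fuzzer below is a deliberately simplified sketch, not OneFuzz or libFuzzer: it mutates inputs one byte at a time and keeps any mutant that reaches new lines of the target, using line coverage gathered with sys.settrace. Real fuzzers do the same thing with sancov-style edge coverage at far higher speed; the target, magic value, and iteration budget here are invented for illustration.

```python
import random
import sys

def target(data: bytes):
    # Toy bug: crashes only on the 4-byte magic value b"FUZZ".
    # Each nested check is a separate line, so coverage guides the search.
    if len(data) >= 1 and data[0] == ord("F"):
        if len(data) >= 2 and data[1] == ord("U"):
            if len(data) >= 3 and data[2] == ord("Z"):
                if len(data) >= 4 and data[3] == ord("Z"):
                    raise RuntimeError("crash!")

def run_with_coverage(data):
    """Run target(data); return (crashed, set of executed line numbers)."""
    lines = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code.co_name == "target":
            lines.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    crashed = False
    try:
        target(data)
    except RuntimeError:
        crashed = True
    finally:
        sys.settrace(None)
    return crashed, lines

def fuzz(seed=b"AAAA", rounds=100000, rng_seed=0):
    rng = random.Random(rng_seed)
    corpus = [seed]
    seen = run_with_coverage(seed)[1]
    for _ in range(rounds):
        data = bytearray(rng.choice(corpus))
        data[rng.randrange(len(data))] = rng.randrange(256)  # one-byte mutation
        crashed, lines = run_with_coverage(bytes(data))
        if crashed:
            return bytes(data)        # found a crashing input
        if not lines <= seen:         # new coverage: keep the mutant
            seen |= lines
            corpus.append(bytes(data))
    return None

print(fuzz())  # coverage feedback makes finding b"FUZZ" tractable
```

Without the coverage check, a purely random search for a 4-byte magic value would be hopeless; keeping each partial match in the corpus turns an exponential search into a sequence of small ones, which is exactly the advantage sancov gives real fuzzers.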

Project OneFuzz has already enabled continuous developer-driven fuzzing of Windows that has allowed Microsoft to proactively harden the Windows platform prior to shipment of the latest OS builds. With a single command line (baked into the build system!) developers can launch fuzz jobs ranging in size from a few virtual machines to thousands of cores. Project OneFuzz enables:

  • Composable fuzzing workflows: Open source allows users to onboard their own fuzzers, swap instrumentation, and manage seed inputs.
  • Built-in ensemble fuzzing: By default, fuzzers work as a team to share strengths, swapping inputs of interest between fuzzing technologies.
  • Programmatic triage and result deduplication: It provides unique flaw cases that always reproduce.
  • On-demand live-debugging of found crashes: It lets you summon a live debugging session on-demand or from your build system.
  • Observable and debuggable: Transparent design allows introspection into every stage.
  • Fuzz on Windows and Linux OSes: Multi-platform by design. Fuzz using your own OS build, kernel, or nested hypervisor.
  • Crash reporting notification callbacks: Currently supporting Azure DevOps Work Items and Microsoft Teams messages.

Project OneFuzz is available now on GitHub under an MIT license. It is updated by contributions from Microsoft Research and security groups across Windows, with more teams joining as we grow our partnership and expand fuzzing coverage across the company to continuously improve the security of all Microsoft platforms and products. Microsoft will continue to maintain and expand Project OneFuzz, releasing updates to the open-source community as they occur. Contributions from the community are welcome. Share questions, comments, and feedback with us: fuzzing@microsoft.com.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Best security, compliance, and privacy practices for the rapid deployment of publicly facing Microsoft Power Apps intake forms
http://approjects.co.za/?big=en-us/security/blog/2020/06/29/best-security-compliance-and-privarapid-deployment-publicly-microsoft-power-apps-intake-forms/
Mon, 29 Jun 2020 19:00:20 +0000

Security is a major concern of not only major governments but of other entities using Microsoft Power Apps intake forms.

The post Best security, compliance, and privacy practices for the rapid deployment of publicly facing Microsoft Power Apps intake forms appeared first on Microsoft Security Blog.

With the dawn of the COVID-19 pandemic, state and federal agencies around the globe were looking at ways to modernize data intake for social services recipients. The government of a country of about 40 million citizens reached out to Microsoft and asked us to assist in this endeavor. Going paperless eliminates waiting in line at an agency office and lowers the chance of COVID-19 transmission. The ability to request or apply for federal or local assistance online makes the process safer and more efficient, since once data is collected, citizens can start receiving funds more quickly and accurately.

Security is a major concern not only of major governments but of other entities using Microsoft Power Apps intake forms. Organizations and agencies needed to be certain that Power Apps intake forms could not be used to harvest data from large, sensitive databases containing personal information such as names, addresses, Social Security or national identification numbers, telephone numbers, or bank account information for direct deposit. If internet-facing forms collect personal information and are not securely implemented, bad actors can use those forms to gain access to millions, if not billions, of personal records.

We authored this white paper specifically for those agencies and organizations who are transforming data intake to partially or fully paperless processes. Microsoft wants to ensure that customers implement our technologies with the most secure approach possible while complying with all data privacy laws. The white paper also makes recommendations on the best way to implement the NIST Cybersecurity Framework functions: Identify, Protect, Detect, Respond, and Recover.

For more information on Microsoft Security Solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Success in security: reining in entropy
http://approjects.co.za/?big=en-us/security/blog/2020/05/20/success-security-reining-entropy/
Wed, 20 May 2020 18:00:12 +0000

Your network is unique. It's a living, breathing system evolving over time. The applications and users performing these actions are all unique parts of the system, adding degrees of disorder and entropy to your operating environment.

The post Success in security: reining in entropy appeared first on Microsoft Security Blog.

Your network is unique. It’s a living, breathing system evolving over time. Data is created. Data is processed. Data is accessed. Data is manipulated. Data can be forgotten. The applications and users performing these actions are all unique parts of the system, adding degrees of disorder and entropy to your operating environment. No two networks on the planet are exactly the same, even if they operate within the same industry, utilize the exact same applications, and even hire workers from one another. In fact, the only attribute your network may share with another network is simply how unique they are from one another.

If we follow the analogy of an organization or network as a living being, it's logical to drill down deeper, into the individual computers, applications, and users that function as cells within our organism. Each cell is unique in how it's configured, how it operates, the knowledge or data it brings to the network, and even the vulnerabilities it carries. It's worth noting that cancer begins at the cellular level and can ultimately bring down the entire system. Where incident response and recovery are concerned, the greater the level of entropy and chaos across a system, the more difficult it becomes to locate potentially harmful entities. Incident response is about locating the source of the cancer in a system in an effort to remove it and make the system healthy once more.

Let’s take the human body for example. A body that remains at rest 8-10 hours a day, working from a chair in front of a computer, and with very little physical activity, will start to develop health issues. The longer the body remains in this state, the further it drifts from an ideal state, and small problems begin to manifest. Perhaps it’s diabetes. Maybe it’s high blood pressure. Or it could be weight gain creating fatigue within the joints and muscles of the body. Your network is similar to the body. The longer we leave the network unattended, the more it will drift from an ideal state to a state where small problems begin to manifest, putting the entire system at risk.

Why is this important? Let's consider an incident response process where a network has been compromised. As responders and investigators, we want to discover what has happened, what the cause was, what the damage is, and how best we can fix the issue and get back on the road to a healthy state. This entails looking for clues or anomalies: things that stand out from the normal background noise of an operating network. In essence, we identify what's truly unique in the system and drill down on those items. Are we able to identify cancerous cells because they look and act so differently from the vast majority of the other healthy cells?

Consider a medium-size organization with 5,000 computer systems. Last week, the organization was notified by a law enforcement agency that customer data, dated from two weeks ago, was discovered on the dark web. We start our investigation on the date we know the data likely left the network. What computer systems hold that data? What users have access to those systems? What windows of time are normal for those users to interact with the system? What processes or services are running on those systems? Forensically, we want to know what system was impacted, who was logging in to the system around the timeframe in question, what actions were performed, where those logins came from, and whether there are any unique indicators. Unique indicators are items that stand out from the normal operating environment: unique users, system interaction times, protocols, binary files, data files, services, and configurations (such as rogue registry keys).

Our investigation reveals a unique service running on a member server with SQL Server. Analysis shows that the service has an autostart entry in the registry that launches it from a file in the c:\windows\perflogs directory every time the system is rebooted, an unusual location for an autostart. We haven't seen this service before, so we search all the systems on the network for other instances of the registry startup key or the binary files we've identified. Out of 5,000 systems, we locate these pieces of evidence on only three, one of which is a Domain Controller.
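At scale, that sweep is essentially a set intersection between known indicators and a per-host artifact inventory. The sketch below models it with in-memory data; in practice the inventories would come from EDR telemetry or collected registry and file-system snapshots, and the host names, service name, and paths here are invented for illustration.

```python
# Hypothetical indicators taken from the compromised member server.
INDICATORS = {
    r"HKLM\SYSTEM\CurrentControlSet\Services\updsvc",   # rogue autostart key
    r"c:\windows\perflogs\updsvc.exe",                  # unusual binary path
}

# Simulated per-host artifact inventories (registry keys and file paths).
HOST_INVENTORY = {
    "SQL-01":  {r"HKLM\SYSTEM\CurrentControlSet\Services\updsvc",
                r"c:\windows\perflogs\updsvc.exe"},
    "DC-01":   {r"c:\windows\perflogs\updsvc.exe"},
    "WEB-02":  {r"HKLM\SYSTEM\CurrentControlSet\Services\updsvc"},
    "WKS-100": {r"c:\windows\system32\notepad.exe"},
}

def hunt(indicators, inventory):
    """Return {host: matched indicators} for every host with at least one hit."""
    hits = {}
    for host, artifacts in inventory.items():
        matched = indicators & artifacts
        if matched:
            hits[host] = matched
    return hits

compromised = hunt(INDICATORS, HOST_INVENTORY)
print(sorted(compromised))  # ['DC-01', 'SQL-01', 'WEB-02']
```

The same intersection logic scales from four hosts to 5,000; the hard part in a real investigation is collecting trustworthy inventories, not the matching.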

This process of identifying what is unique allows our investigative team to highlight the systems, users, and data at risk during a compromise. It also helps us potentially identify the source of the attack, what data may have been pilfered, and the external computers directing the intrusion and enabling access to the environment. Additionally, any recovery effort will require this information to be successful.

This all sounds like common sense, so why cover it here? Remember we discussed how unique your network is, and how there are no other systems exactly like it elsewhere in the world? That means every investigative process into a network compromise is also unique, even if the same attack vector is being used to attack multiple organizational entities. We want to provide the best foundation for a secure environment and the investigative process, now, while we’re not in the middle of an active investigation.

The unique nature of a system isn't inherently a bad thing. Your network can be unique from other networks; in many cases, that uniqueness may even provide a strategic advantage over your competitors. Where we run afoul of security best practice is when we allow too much entropy to build up on the network, losing the ability to differentiate "normal" from "abnormal." In short, will we be able to easily locate the evidence of a compromise because it stands out from the rest of the network, or are we hunting for the proverbial needle in a haystack? Clues related to a system compromise don't stand out if everything we look at appears abnormal. This can exacerbate an already tense response situation, extending the timeframe for investigation and dramatically increasing the cost required to return to a trusted operating state.

To tie this back to our human body analogy, when a breathing problem appears, we need to be able to understand whether this is new, or whether it’s something we already know about, such as asthma. It’s much more difficult to correctly identify and recover from a problem if it blends in with the background noise, such as difficulty breathing because of air quality, lack of exercise, smoking, or allergies. You can’t know what’s unique if you don’t already know what’s normal or healthy.

To counter this problem, we pre-emptively bring the background noise on the network to a manageable level. All systems move towards entropy unless acted upon. We must put energy into the security process to counter the growth of entropy, which would otherwise exponentially complicate our security problem set. Standardization and control are the keys here. If we limit what users can install on their systems, we quickly notice when an untrusted application is being installed. If it’s against policy for a Domain Administrator to log in to Tier 2 workstations, then any attempts to do this will stand out. If it’s unusual for Domain Controllers to create outgoing web traffic, then it stands out when this occurs or is attempted.
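Each of those guardrails turns a policy into a simple predicate over events, so violations surface mechanically instead of requiring hunting. The sketch below checks a stream of login events against a tiering rule like the one described; the account names, role labels, and events are invented for illustration.

```python
# Illustrative tiering policy: which account roles may access which asset
# tiers. Tier 0 admins must never touch tier 2 workstations.
ALLOWED_TIERS = {
    "tier0-admin": {0},   # domain controllers only
    "tier1-admin": {1},   # servers
    "user":        {2},   # workstations
}

def violations(events):
    """Yield (account, asset) for events where the account's role may not
    access the asset's tier. Each event is (account, role, asset, asset_tier)."""
    for account, role, asset, asset_tier in events:
        if asset_tier not in ALLOWED_TIERS.get(role, set()):
            yield (account, asset)

EVENTS = [
    ("DA-jsmith", "tier0-admin", "DC-01", 0),    # normal
    ("DA-jsmith", "tier0-admin", "WKS-100", 2),  # stands out immediately
    ("alice",     "user",        "WKS-100", 2),  # normal
]

flagged = list(violations(EVENTS))
print(flagged)  # [('DA-jsmith', 'WKS-100')]
```

The point is that once the policy is explicit and enforced, the abnormal event is a one-line comparison rather than a forensic project.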

Centralize the security process. Enable that process. Standardize security configuration, monitoring, and expectations across the organization. Enforce those standards. Enforce the tenet of least privilege across all user levels. Understand your ingress and egress network traffic patterns, and when those are allowed or blocked.

In the end, your success in investigating and responding to inevitable security incidents depends on what your organization does on the network today, not during an active investigation. By reducing entropy on your network and defining what "normal" looks like, you'll be better prepared to quickly identify questionable activity on your network and respond appropriately. Bear in mind that security is a continuous process and should not stop. The longer we ignore the security problem, the further the state of the network will drift from "standardized and controlled" back into disorder and entropy. And the further we sit from that state of normal, the more difficult and time-consuming it will be to bring our network back to a trusted operating environment in the event of an incident or compromise.

Afternoon Cyber Tea: Building operational resilience in a digital world
http://approjects.co.za/?big=en-us/security/blog/2020/04/13/afternoon-cyber-tea-building-operational-resilience-digital-world/
Mon, 13 Apr 2020 16:00:33 +0000

On Afternoon Cyber Tea with Ann Johnson, Ann and Ian Coldwater talk about how CISOs can prepare for a cyberattack, master the magic and complexity of containers, and encourage collaboration between engineering and security.

The post Afternoon Cyber Tea: Building operational resilience in a digital world appeared first on Microsoft Security Blog.

Operational resiliency is a topic of rising importance in the security community. Unplanned events, much like the one we are facing today, are reminders of why organizations must be prepared to respond to a cyberattack. Ian Coldwater and I explored this topic from a variety of angles in this episode of Afternoon Cyber Tea with Ann Johnson.

Ian Coldwater is a specialist in Kubernetes, containers, and cloud infrastructure, with a background in penetration testing and DevOps. In their role as a consultant, Ian has helped companies bridge the gaps between security and DevOps. It was a real pleasure to discuss what Ian has learned in these roles, and I think you'll find our discussion valuable.

During our conversation, Ian and I talked about threat modeling and how best to protect your crown jewels. We also explored what it means to bring security into DevOps. Hint: it's about more than just new tooling. And we demystified Kubernetes. Do you wonder which projects are a good fit for Kubernetes, and which are not? Are you concerned about how to keep Kubernetes containers secure? Take a listen to Building operational resilience in a digital world on Afternoon Cyber Tea with Ann Johnson for actionable advice that you can apply to your own SecDevOps organization.

What’s next

In this important cyber series, I talk with cybersecurity influencers about trends shaping the threat landscape in these unprecedented times, and explore the risk and promise of systems powered by artificial intelligence (AI), Internet of Things (IoT), and other emerging tech.

You can listen to Afternoon Cyber Tea with Ann Johnson on:

  • Apple Podcasts—You can also download the episode by clicking the Episode Website link.
  • Podcast One—Includes option to subscribe, so you’re notified as soon as new episodes are available.
  • CISO Spotlight page—Listen alongside our CISO Spotlight episodes, where customers and security experts discuss similar topics such as Zero Trust, compliance, going passwordless, and more.

In the meantime, bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity. Or reach out to me on LinkedIn or Twitter if you have guest or topic suggestions.

Introducing Microsoft Application Inspector
http://approjects.co.za/?big=en-us/security/blog/2020/01/16/introducing-microsoft-application-inspector/
Thu, 16 Jan 2020 15:00:37 +0000

Microsoft Application Inspector is a new source code analyzer that helps you understand what a program does by identifying interesting features and characteristics.

The post Introducing Microsoft Application Inspector appeared first on Microsoft Security Blog.

Modern software development practices often involve building applications from hundreds of existing components, whether they’re written by another team in your organization, an external vendor, or someone in the open source community. Reuse has great benefits, including time-to-market, quality, and interoperability, but sometimes brings the cost of hidden complexity and risk.

You trust your engineering team, but the code they write often accounts for only a tiny fraction of the entire application. How well do you understand what all those external software components actually do? You may find that you’re placing as much trust in each of the thousands of contributors to those components as you have in your in-house engineering team.

At Microsoft, our software engineers use open source software to provide our customers high-quality software and services. Recognizing the inherent risks in trusting open source software, we created a source code analyzer called Microsoft Application Inspector to identify “interesting” features and metadata, like the use of cryptography, connecting to a remote entity, and the platforms it runs on.

Application Inspector differs from more typical static analysis tools in that it isn’t limited to detecting poor programming practices; rather, it surfaces interesting characteristics in the code that would otherwise be time-consuming or difficult to identify through manual introspection. It then simply reports what’s there, without judgment.

For example, consider this snippet of Python source code:

Image of Python source code.
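The original screenshot isn’t reproduced here, so the following is a hypothetical reconstruction of the kind of snippet described: the function name and URL handling are illustrative, and each line is annotated with the feature tag it would plausibly trigger.

```python
import subprocess
import urllib.request

def fetch_and_inspect(url, dest):
    # Network.Connection.Http: download content from a remote URL
    data = urllib.request.urlopen(url).read()
    # FileOperation.Write: write it to the local file system
    with open(dest, "wb") as f:
        f.write(data)
    # Process.DynamicExecution: shell out to list details of that file
    return subprocess.run(["ls", "-l", dest], capture_output=True, text=True).stdout
```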

Here we can see a program that downloads content from a URL, writes it to the file system, and then executes a shell command to list details of that file. If we run this code through Application Inspector, we’ll see the following features identified, which tell us a lot about what the program can do:

  • FileOperation.Write
  • Network.Connection.Http
  • Process.DynamicExecution

In this small example, it would be trivial to examine the snippet manually to identify those same features, but many components contain tens of thousands of lines of code, and modern web applications often use hundreds of such components. Application Inspector is designed to be used individually or at scale and can analyze millions of lines of source code from components built using many different programming languages. It’s simply infeasible to attempt to do this manually.

Application Inspector is positioned to help in key scenarios

We use Application Inspector to identify key changes to a component’s feature set over time (version to version), which can indicate anything from an increased attack surface to a malicious backdoor. We also use the tool to identify high-risk components and those with unexpected features that require additional scrutiny, under the theory that a vulnerability in a component that is involved in cryptography, authentication, or deserialization would likely have higher impact than others.
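That version-to-version comparison boils down to a set difference over the reported feature tags. A minimal sketch, using hypothetical tag sets rather than real scan output:

```python
def feature_diff(old_tags, new_tags):
    # Features that appeared or disappeared between two scanned versions
    return {"added": new_tags - old_tags, "removed": old_tags - new_tags}

# Hypothetical tag sets from scans of two versions of a component
v1 = {"FileOperation.Write", "Network.Connection.Http"}
v2 = {"FileOperation.Write", "Network.Connection.Http", "Process.DynamicExecution"}
```

A newly appearing Process.DynamicExecution tag is exactly the kind of change that would warrant additional scrutiny.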

Using Application Inspector

Application Inspector is a cross-platform, command-line tool that can produce output in multiple formats, including JSON and interactive HTML. Here is an example of an HTML report:

Image of Feature Groups in Microsoft Application Inspector.

Each icon in the report above represents a feature that was identified in the source code. That feature is expanded on the right-hand side of the report, and by clicking any of the links, you can view the source code snippets that contributed to that identification.

Each feature is also broken down into more specific categories and an associated confidence, which can be accessed by expanding the row.

Image of general features and Application Inspector's confidence rating for each.
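Because the JSON output is machine-readable, reports can also be post-processed in scripts or CI pipelines. Here is a sketch of tallying matched tags under a simplified, assumed schema; the tool’s actual JSON format may differ, so treat the field names as placeholders.

```python
import json
from collections import Counter

def tally_tags(report_json):
    # Count how often each feature tag was matched in a scan.
    # The schema here is an assumption, not the tool's documented format.
    return Counter(tag for match in json.loads(report_json) for tag in match["tags"])

# Hypothetical excerpt of a JSON report
sample = json.dumps([
    {"ruleName": "HTTP download", "tags": ["Network.Connection.Http"]},
    {"ruleName": "File write", "tags": ["FileOperation.Write"]},
    {"ruleName": "Config file write", "tags": ["FileOperation.Write"]},
    {"ruleName": "Shell execution", "tags": ["Process.DynamicExecution"]},
])
```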

Application Inspector comes with hundreds of feature detection patterns covering many popular programming languages, with good support for the following types of characteristics:

  • Application frameworks (development, testing)
  • Cloud / Service APIs (Microsoft Azure, Amazon AWS, and Google Cloud Platform)
  • Cryptography (symmetric, asymmetric, hashing, and TLS)
  • Data types (sensitive, personally identifiable information)
  • Operating system functions (platform identification, file system, registry, and user accounts)
  • Security features (authentication and authorization)

Get started with Application Inspector

Application Inspector can identify interesting features in source code, enabling you to better understand the software components that your applications use. Application Inspector is open source, cross-platform (.NET Core), and can be downloaded at github.com/Microsoft/ApplicationInspector. We welcome all contributions and feedback.

The post Introducing Microsoft Application Inspector appeared first on Microsoft Security Blog.

]]>
How to secure your IoT deployment during the security talent shortage http://approjects.co.za/?big=en-us/security/blog/2019/12/17/how-to-secure-iot-when-resources-are-limited/ Tue, 17 Dec 2019 17:00:41 +0000 It’s complex work to define a security strategy for IoT—especially with a 3-million-person shortage of cybersecurity pros. But there is a way to augment existing security teams and resources.

The post How to secure your IoT deployment during the security talent shortage appeared first on Microsoft Security Blog.

]]>
Businesses across industries are placing bigger and bigger bets on the Internet of Things (IoT) as they look to unlock valuable business opportunities. But time and time again, as I meet with device manufacturers and businesses considering IoT deployments, there are concerns over the complexity of IoT security and its associated risks—to the company, its brands, and its customers. With the growing number and increased severity of IoT attacks, these organizations have good reason to be cautious. With certainty, we can predict that the security vulnerabilities and requirements of IoT environments will continue to evolve, making them difficult to frame and address. It’s complex work to clearly define a security strategy for emerging technologies like IoT. To compound the challenge, there’s a record-setting 3-million-person shortage of cybersecurity pros globally. This massive talent shortage is causing the overextension of security teams, leaving organizations without coverage for new IoT deployments.

Despite the risks that come with IoT and the strain on security teams during the talent shortage, the potential of IoT is too valuable to ignore or postpone. Decision makers evaluating how to pursue both IoT innovation and security don’t need to steal from one to feed the other. It isn’t a binary choice. There is a way to augment existing security teams and resources, even amidst the talent shortage. Trustworthy solutions can help organizations meet the ongoing security needs of IoT without diminishing opportunity for innovation.

As organizations reach the limit of their available resources, the key to success becomes differentiating between the core activities that require specific organizational knowledge and the functional practices that are common across all organizations.

Utilize your security teams to focus on core activities, such as defining secure product experiences and building strategies for reducing risk at the app level. This kind of critical thinking and creative problem solving is where your security teams deliver the greatest value to the business—this is where their focus should be.

Establishing reliable functional practices is critical to ensure that your IoT deployment can meet the challenges of today’s threat landscape. You can outsource functional practices to qualified partners or vendors to gain access to security expertise that will multiply your team’s effectiveness and quickly ramp up your IoT operations with far less risk.

When considering partners and vendors, find solutions that deliver these essential capabilities:

Holistic security design—IoT device security is difficult. To do it properly requires the expertise to stitch hardware, software, and services into gap-free security systems. A pre-integrated, off-the-shelf solution is likely more cost-effective and more secure than a proprietary solution, and it allows you to leverage the expertise of functional security experts that work across organizations and have a bird’s-eye view of security needs and threats.

Threat mitigation—To maintain device security over time, ongoing security expertise is needed to identify threats and develop device updates to mitigate new threats as they emerge. This isn’t a part-time job. It requires dedicated resources who are immersed in the threat landscape and can rapidly implement mitigation strategies. Attackers are creative and determined; the effort to stop them needs to be appropriately matched.

Update deployment—Without the right infrastructure and dedicated operational hygiene, organizations commonly postpone or deprioritize security updates. Look for providers that streamline or automate the delivery and deployment of updates. Because zero-day attacks require quick action, the ability to update a global fleet of devices in hours is a must.

When you build your IoT deployment on a secure platform, you can transform the way you do business: reduce costs, streamline operations, light up new business models, and deliver more value to your customers. We believe security is the foundation for lasting innovation that will continue to deliver value to your business and customers long into the future. With this in mind, we designed Microsoft Azure Sphere as a secured platform on which you can confidently build and deploy your IoT environment.

Azure Sphere is an end-to-end solution for securely connecting existing equipment and creating new IoT devices with built-in security. Azure Sphere’s integrated security spans hardware, software, and cloud, and delivers active security by default with ongoing OS and security updates that put the power of Microsoft’s expertise to work for you every day.

With Azure Sphere, you can design and create innately secured IoT devices, as well as securely connect your existing mission-critical equipment. Connecting equipment for the first time can introduce incredible value to the business—as long as security is in place.

Through a partnership with Azure Sphere, Starbucks is connecting essential coffee equipment in stores around the globe for the first time. The secured IoT implementation is helping Starbucks improve their customer experience, realize operational efficiency, and drive cost savings. To see how they accomplished this, watch the session I held with Jeff Wile, Starbucks CIO of Digital Customer and Retail Technology, at Microsoft Ignite 2019.

Learn more

With a secured platform for IoT devices, imagination is the only limit to what innovation can achieve. I encourage you to read Secure your IoT deployment during the security talent shortage to learn more about how you can build comprehensive, defense-in-depth security for your IoT initiatives, so you can focus on what you’re in business to do.

Also, bookmark the Security blog to keep up with our expert coverage on security matters and follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Azure Sphere

A comprehensive IoT security solution—including hardware, OS, and cloud components—to help you innovate with confidence.

The post How to secure your IoT deployment during the security talent shortage appeared first on Microsoft Security Blog.

]]>
Foundations of Microsoft Flow—secure and compliant automation, part 1 http://approjects.co.za/?big=en-us/security/blog/2019/09/05/foundations-of-microsoft-flow-secure-compliant-automation-part-1/ Thu, 05 Sep 2019 16:00:59 +0000 In part 1 of our two-part series, we introduce a security minded audience to Microsoft Flow—an automation service with a strong security and compliance foundation.

The post Foundations of Microsoft Flow—secure and compliant automation, part 1 appeared first on Microsoft Security Blog.

]]>
Automation services are steadily becoming significant drivers of modern IT, helping improve efficiency and cost effectiveness for organizations. A recent McKinsey survey discovered that “the majority of all respondents (57 percent) say their organizations are at least piloting the automation of processes in one or more business units or functions. Another 38 percent say their organizations have not begun to automate business processes, but nearly half of them say their organizations plan to do so within the next year.”

Automation is no longer a theme of the future, but a necessity of the present, playing a key role in a growing number of IT and user scenarios. As security professionals, you’ll need to recommend an automation service that enables your organization to reap its benefits without sacrificing strict security and compliance standards.

In our two-part series, we share how Microsoft delivers on the promise of empowering a secure, compliant, and automated organization. In part 1, we provide a quick introduction to Microsoft Flow and an overview of its best-in-class, secure infrastructure. In part 2, we go deeper into how Flow secures your users and data, as well as enhances the IT experience. We also cover Flow’s privacy and certifications to give you a glimpse into the rigorous compliance protocols the service supports. Let’s get started by introducing you to Flow.

To support the need for secure and compliant automation, Microsoft launched Flow. With Flow, organizations will experience:

  • Seamlessly integrated automation at scale.
  • Accelerated productivity.
  • Secure and compliant automation.

Secure and compliant automation is perhaps the most interesting value of Flow for this audience, but let’s discuss the first two benefits before diving into the third.

Integrated automation at scale

Flow is a Software as a Service (SaaS) automation service used by customers ranging from large enterprises, such as Virgin Atlantic, to smaller organizations, such as G&J Pepsi. Importantly, Flow serves as a foundational pillar for the Microsoft Power Platform, a seamlessly integrated, low-code development platform enabling easier and quicker application development. With Power Platform, organizations analyze data with Power BI, act on data through Microsoft PowerApps, and automate processes using Flow (Figure 1).

Diagram showing app automation driving business processes with Flow. The diagram shows Flow, PowerApps, and Power BI circling CDS, AI Builder, and Data Connectors.

Figure 1. Power Platform offers a seamless loop to deliver your business goals.

Low-code platforms can help scale IT capabilities by creating a broader range of application developers—from citizen developers to pro developers (Figure 2). With growing burdens on IT, scaling through citizen developers who design their own business applications is a tremendous advantage. Flow is also differentiated from other automation services by its native integration with Microsoft 365, Dynamics 365, and Azure.

Image showing Citizen Developers, IT/Admins, and Pro Developers.

Figure 2. Low-code development platforms empower everyone to become a developer, from the citizen developer to the pro developer.

Accelerated productivity

Flow accelerates your organization’s productivity. The productivity benefits from Flow were recently quantified in a Total Economic Impact (TEI) study conducted by Forrester Research and commissioned by Microsoft (The Total Economic Impact™ Of PowerApps And Microsoft Flow, June 2018). Forrester determined that over a three-year period Flow helped organizations reduce application development and application management costs while saving thousands of employee hours (Figure 3).

Image showing 70% for Application development costs, 38% for Application management costs, and +122K for Worker Hours Saved.

Figure 3. Forrester TEI study results on the reduced application development and management costs and total worker hours saved.

Built with security and compliance

Automation will be the backbone for efficiency across much of your IT environment, so choosing the right service can have enormous impact on delivering the best business outcomes. As a security professional, you must ultimately select the service which best balances the benefits from automation with the rigorous security and compliance requirements of your organization. Let’s now dive into how Flow is built on a foundation of security and compliance, so that selecting Flow as your automation service is an easy decision.

A secure infrastructure

Comprehensive security accounts for a wide variety of attack vectors, and since Flow is a SaaS offering, infrastructure security is an important component and where we’ll start. Flow is a global service deployed in datacenters across the world (Figure 4). Security begins with the physical datacenter, which includes perimeter fencing, video cameras, security personnel, secure entrances, and real-time communications networks—continuing from every area of the facility to each server unit.

The physical security is complemented with threat management across our cloud ecosystem. Microsoft security teams leverage sophisticated data analytics and machine learning, and continuously test our defenses against distributed denial-of-service (DDoS) attacks and other intrusions.

Flow also has the luxury of being the only automation service natively built on Azure which has an architecture designed to secure and protect data. Each datacenter deployment of Flow consists of two clusters:

  • Web Front End (WFE) cluster—A user connects to the WFE before accessing any information in Flow. Servers in the WFE cluster authenticate users with Azure Active Directory (Azure AD), which stores user identities and authorizes access to data. Azure Traffic Manager finds the nearest Flow deployment, and that WFE cluster manages sign-in and authentication.
  • Backend cluster—All subsequent activity and access to data is handled through the back-end cluster. It manages dashboards, visualizations, datasets, reports, data storage, data connections, and data refresh activities. The backend cluster hosts many roles, including Azure API Management, Gateway, Presentation, Data, Background Job Processing, and Data Movement.

Users directly interact only with the Gateway role and Azure API Management, which are accessible through the internet. These roles perform authentication, authorization, distributed denial-of-service (DDoS) protection, bandwidth throttling, load balancing, routing, and other security, performance, and availability functions. There is a distinction between roles users can access and roles only accessible by the system.

Stay tuned for part 2 of our series where we’ll go deeper into how Flow further secures authentication of your users and data, and enhances the IT experience, all while aligning to several regulatory frameworks.

Image showing Microsoft’s global datacenter locations.

Figure 4. Microsoft’s global datacenter locations.

Let Flow enhance your digital transformation

Let your organization start benefiting from one of the most powerful and secure automation services available on the market. Watch the video and follow the instructions to get started with Flow. Be sure to join the growing Flow community and participate in discussions, provide insights, and even influence product roadmap. Also, follow the Flow blog to get news on the latest Flow updates and read our white paper on best practices for deploying Flow in your organization. Be sure to check out part 2 where we dive deeper into how Flow offers the best and broadest security and compliance foundation for any automation service available in the market.

Additional resources

The post Foundations of Microsoft Flow—secure and compliant automation, part 1 appeared first on Microsoft Security Blog.

]]>
4 best practices to help you integrate security into DevOps http://approjects.co.za/?big=en-us/security/blog/2019/06/11/4-best-practices-help-you-integrate-security-into-devops/ Tue, 11 Jun 2019 16:00:45 +0000 Learn how Microsoft evolved its culture, teams, and practices to integrate security into an agile development process that supports a cloud-based world.

The post 4 best practices to help you integrate security into DevOps appeared first on Microsoft Security Blog.

]]>
Microsoft’s transition of its corporate resources to the cloud required us to rethink how we integrate security into the agile development environment. In the old process, we often worked on 6- to 12-month development cycles for internal products. The security operations team was separate from the application development team and was responsible for ensuring that applications met security requirements. There was time to troubleshoot security between the two teams. Once we shifted to a shorter development cycle, we had to compress the new process to bake security into DevOps.

Our experience has led us to adopt four best practices that guide our thinking about integrating security with DevOps:

  1. Inventory your cloud resources.
  2. Establish a governance structure for cloud services.
  3. Give DevOps accountability for security.
  4. Redefine centralized security.

This post walks you through these tenets with some advice we hope you can apply to your own organization.

Inventory your cloud resources

Cloud subscriptions are so easy to spin up that many organizations don’t have a comprehensive understanding of which teams are using which services. This makes it challenging to manage your costs and enforce security policies. If you are uncertain which services you are currently paying for, billing is a good place to start.
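As a starting point, even a simple aggregation over a billing or usage export can reveal which teams are consuming which services. A sketch under assumed column names (adjust them to your provider’s actual export format):

```python
import csv
import io
from collections import defaultdict

def inventory_by_team(billing_csv):
    # Sum spend per (team, service) pair from a usage export;
    # the column names here are assumptions, not a real export schema
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(billing_csv)):
        totals[(row["team"], row["service"])] += float(row["cost"])
    return dict(totals)

# Hypothetical export data
export = """team,service,cost
payments,storage,120.50
payments,functions,30.00
web,storage,80.25
"""
```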

Establish a governance structure for cloud services

Once you understand your cloud inventory, you can begin the work of making sure your investments align with your business strategies. This may mean limiting which services your organization uses to maximize the ones that will help you meet your business goals. Then, align your organization to your cloud strategy by defining a governing structure:

  • Develop business scenarios that define acceptable use and configuration of cloud resources.
  • Define architecture and patterns for the cloud services you plan to use.
  • Limit who can create new subscriptions.

Give DevOps accountability for security

The only way to effectively enforce security policies in a short development cycle is to integrate security into the application development process. Early in our evolution, we dropped security team members into application development teams to create a single team with shared goals. This revealed cultural challenges and unexamined assumptions. Initially, both the application developers and the security team expected to conduct their jobs as they had in the past. Application developers wrote code and then security operations queued up issues to address. This proved unworkable for two reasons: security analysts were queuing up too many security tasks to fit within the cycle, and the application developers were often confused because security operations misjudged how well they understood the nuances of security.

The only way to meet our goals was to shift accountability for security to the DevOps teams. We wanted application developers to try to solve security issues as part of their process. This required education, but we also implemented some practices that encouraged the team to take on that responsibility:

  • Secure DevOps Kit for Azure—The Secure DevOps Kit for Azure provides scripts that can be configured for each resource. During development and before production, DevOps can easily validate that security controls are at the right level.
  • Security scorecard—The scorecard highlights which members of the team are skilled at addressing security and encourages people to improve and collaborate with each other.
  • Penetration testing—When a red team conducts a penetration test of an application, the results typically inspire the team to take security more seriously.

Redefine centralized security

We experimented with eliminating a central security team entirely, but ultimately, we realized that we needed a centralized team to monitor the big picture and set baselines. They establish our risk tolerance and measure security controls across subscriptions. They also automate as much of the security controls as they can, including configuring the Secure DevOps Kit for Azure. This team also needed training to better understand the vulnerabilities of the cloud. Tabletop exercises talking through possible attacks with red teams were one way they got up to speed.

As our evolving process suggests, our biggest challenge was shifting culture and mindset. We recommend that you take time to define roles and start with a small team. You can expect to continuously discover better ways to improve teamwork and the security of your process and your applications.

Get started

For more details on how we evolved our security process for the cloud, watch the Speaking of security: Cloud migration webinar and get the Secure DevOps Kit for Azure.

The post 4 best practices to help you integrate security into DevOps appeared first on Microsoft Security Blog.

]]>
Step 9. Protect your OS: top 10 actions to secure your environment http://approjects.co.za/?big=en-us/security/blog/2019/05/21/step-9-protect-your-os-top-10-actions-secure-your-environment/ Tue, 21 May 2019 16:00:43 +0000 The “Top 10 actions to secure your environment” series outlines fundamental steps you can take with your investment in Microsoft 365 security solutions. In “Step 9. Protect your OS,” you’ll learn how to configure Microsoft Defender Advanced Threat Protection to prevent, detect, investigate, and respond to advanced threats.

The post Step 9. Protect your OS: top 10 actions to secure your environment appeared first on Microsoft Security Blog.

]]>
In “Step 9. Protect your OS” of the Top 10 actions to secure your environment blog series, we provide resources to help you configure Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP) to defend your Windows, macOS, Linux, iOS, and Android devices from advanced threats.

In an advanced threat, hackers and cybercriminals infiltrate your network through compromised users or vulnerable endpoints and can stay undetected for weeks—or even months—while they attempt to exfiltrate data and move laterally to gain more privileges. Microsoft Defender ATP helps you detect these threats early and take action immediately.

Enabling Microsoft Defender ATP and related products will help you:

  • Mitigate vulnerabilities.
  • Reduce your attack surface.
  • Enable next generation protection from the most advanced attacks.
  • Detect endpoint attacks in real-time and respond immediately.
  • Automate investigation and remediation.

Threat & Vulnerability Management

Threat & Vulnerability Management is a new component of Microsoft Defender ATP that provides:

  • Real-time endpoint detection and response (EDR) insights correlated with endpoint vulnerabilities.
  • Linked machine vulnerability and security configuration assessment data in the context of exposure discovery.
  • Built-in remediation processes through Microsoft Intune and Microsoft System Center Configuration Manager.

To use Threat & Vulnerability Management, you’ll need to turn on the Microsoft Defender ATP preview features.

Attack surface reduction

Attack surface reduction limits the number of attack vectors that a malicious actor can use to gain entry. You can configure attack surface reduction through the following:

  • Microsoft Intune
  • System Center Configuration Manager
  • Group Policy
  • PowerShell cmdlets

Enable these capabilities to reduce your attack surface:

  • Hardware-based isolation—Configure Microsoft Defender Application Guard to protect your company while your employees browse the internet. You define which websites, cloud resources, and internal networks are trusted. Everything not on your list is considered untrusted.
  • Application control—Restrict the applications that your users can run and require that applications earn trust in order to run.
  • Device control—Configure Windows 10 hardware and software to “lock down” Windows systems so they operate with properties of mobile devices. Use configurable code to restrict devices to only run authorized apps.
  • Exploit protection—Configure Microsoft Defender Exploit Guard to manage and reduce the attack surface of apps used by your employees.
  • Network protection—Use network protection to prevent employees from using an application to access dangerous domains that may host phishing scams, exploits, and other malicious content.
  • Controlled folder access—Prevent apps that Microsoft Defender Antivirus determines are malicious or suspicious from making changes to files in protected folders.
  • Network firewall—Block unauthorized network traffic from flowing into or out of the local device.
  • Attack surface reduction controls—Prevent actions and apps that are typically used by exploit-seeking malware to infect machines.

Next generation protection

The Intelligent Security Graph powers the antivirus capabilities of Microsoft Defender Antivirus, which works with Microsoft Defender ATP to protect desktops, laptops, and servers from the most advanced ransomware, fileless malware, and other types of attacks.

Configure Microsoft Defender Antivirus capabilities to:

  • Enable cloud-delivered protection—Leverage artificial intelligence (AI) and machine learning algorithms to analyze the billions of signals on the Intelligent Security Graph and identify and block attacks within seconds.
  • Specify the cloud-delivered protection level—Define the amount of information to be shared with the cloud and how aggressively new files are blocked.
  • Configure and validate network connections for Microsoft Defender Antivirus—Configure firewall or network filtering rules to allow required URLs.
  • Configure the block at first sight feature—Block new malware within seconds.

Endpoint detection and response

Microsoft Defender ATP endpoint detection and response capabilities detect advanced attacks in real-time and give you the power to respond immediately. Microsoft Defender ATP correlates alerts and aggregates them into an incident, so you can understand cross-entity attacks (Figure 1).

Alerts are grouped into an incident based on these criteria:

  • Automated investigation triggered the linked alert while investigating the original alert.
  • File characteristics associated with the alert are similar.
  • Manual association by a user to link the alerts.
  • Proximate time of alerts triggered on the same machine falls within a certain timeframe.
  • Same file is associated with different alerts.
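To make the time-proximity criterion concrete, here is a simplified sketch of grouping same-machine alerts that fall within a fixed window. This is illustrative only; the real correlation logic combines all of the criteria above and is internal to the service.

```python
from datetime import datetime, timedelta

def group_by_proximity(alerts, window=timedelta(hours=1)):
    # Group alerts on the same machine whose timestamps fall within the window;
    # a sketch of just one of the correlation criteria listed above
    incidents = []
    for alert in sorted(alerts, key=lambda a: (a["machine"], a["time"])):
        current = incidents[-1] if incidents else None
        if (current and current[-1]["machine"] == alert["machine"]
                and alert["time"] - current[-1]["time"] <= window):
            current.append(alert)
        else:
            incidents.append([alert])
    return incidents
```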

Image of the Windows Defender Security Center.

Figure 1. Microsoft Defender ATP correlates alerts and aggregates them into incidents.

Review your alerts and incidents on the security operations dashboard. You can customize and filter the incident queue to help you focus on what matters most to your organization (Figure 2). You can also customize the alert queue view and the machine alerts view to make it easier for you to manage.

Image of a list of incidents in the Windows Defender Security Center.

Figure 2. Default incident queue displays incidents seen in the last 30 days, with the most recent incident showing at the top of the list.

Once you detect an attack that requires remediation, you can take the following actions:

Auto investigation and remediation

Microsoft Defender ATP can be configured to automatically investigate and remediate alerts (Figure 3), which will reduce the number of alerts your Security Operations team will need to investigate manually.

Image showing automated investigations in Microsoft Defender ATP.

Figure 3. You can view the details of an automated investigation to see information such as the investigation graph, alerts associated with the investigation, the machine that was investigated, and other information.

Create and manage machine groups in Microsoft Defender ATP to define automation levels:

  • Not protected—Machines will not get any automated investigations run on them.
  • Semi (require approval for any remediation)—This is the default automation level. An approval is needed for any remediation action.
  • Semi (require approval for non-temp folders remediation)—An approval is required on files or executables that are not in temporary folders. Files or executables in temporary folders, such as the user’s download folder or the user’s temp folder, will automatically be remediated if needed.
  • Semi (require approval for core folders remediation)—An approval is required on files or executables that are in operating system directories, such as the Windows folder and the Program Files folder. Files or executables in all other folders will automatically be remediated if needed.
  • Full (remediate threats automatically)—All remediation actions will be performed automatically.

Microsoft Threat Experts

Microsoft Threat Experts is a new, managed threat hunting service that provides proactive hunting, prioritization, and additional context and insights that further empower security operations centers (SOCs) to identify and respond to threats quickly and accurately with two capabilities:

  1. Targeted attack notifications—Alerts that are tailored to organizations provide as much information as can be quickly delivered to bring attention to critical network threats, including the timeline, scope of breach, and the methods of intrusion.
  2. Experts on demand—When a threat exceeds your SOC’s capability to investigate, or when more actionable information is needed, security experts provide technical consultation on relevant detections and adversaries. In cases where a full incident response becomes necessary, seamless transition to Microsoft incident response services is available.

Microsoft Defender ATP customers can register for Microsoft Threat Experts and we will reach out to notify you via email when you’ve been selected.

Learn more

Check back in a few weeks for our final blog post in the series, “Step 10. Detect and investigate security threats,” which will give you tips to deploy Azure Advanced Threat Protection to detect suspicious activity in real-time.

Resources

The post Step 9. Protect your OS: top 10 actions to secure your environment appeared first on Microsoft Security Blog.

]]>