Insights for Security Professionals | The Microsoft Cloud Blog
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/job-function/security/

Microsoft Trustworthy AI: Unlocking human potential starts with trust
https://aka.ms/MicrosoftTrustworthyAI | Tue, 24 Sep 2024

As AI advances, we all have a role to play to unlock AI’s positive impact for organizations and communities around the world. That’s why we’re focused on helping customers use and build AI that is trustworthy, meaning AI that is secure, safe, and private.

At Microsoft, we have commitments to ensure Trustworthy AI and are building industry-leading supporting technology. Our commitments and capabilities go hand in hand to make sure our customers and developers are protected at every layer.

Building on our commitments, today we are announcing new product capabilities to strengthen the security, safety and privacy of AI systems.

Security. Security is our top priority at Microsoft, and our expanded Secure Future Initiative (SFI) underscores the company-wide commitments and the responsibility we feel to make our customers more secure. This week we announced our first SFI Progress Report, highlighting updates spanning culture, governance, technology and operations. This delivers on our pledge to prioritize security above all else, guided by three principles: secure by design, secure by default and secure operations. In addition to our first-party offerings, Microsoft Defender and Purview, our AI services come with foundational security controls, such as built-in functions to help prevent prompt injections and copyright violations. Building on those, today we’re announcing two new capabilities:

  • Evaluations in Azure AI Studio to support proactive risk assessments.
  • Web query transparency in Microsoft 365 Copilot, coming soon, to help admins and users better understand how web search enhances Copilot responses.

Our security capabilities are already being used by customers. Cummins, a 105-year-old company known for its engine manufacturing and development of clean energy technologies, turned to Microsoft Purview to strengthen its data security and governance by automating the classification, tagging and labeling of data. EPAM Systems, a software engineering and business consulting company, deployed Microsoft 365 Copilot for 300 users because of the data protection they get from Microsoft. J.T. Sodano, Senior Director of IT, shared that “we were a lot more confident with Copilot for Microsoft 365, compared to other large language models (LLMs), because we know that the same information and data protection policies that we’ve configured in Microsoft Purview apply to Copilot.”

Safety. Inclusive of both security and privacy, Microsoft’s broader Responsible AI principles, established in 2018, continue to guide how we build and deploy AI safely across the company. In practice this means properly building, testing and monitoring systems to avoid undesirable behaviors, such as harmful content, bias, misuse and other unintended risks. Over the years, we have made significant investments in building out the necessary governance structure, policies, tools and processes to uphold these principles and build and deploy AI safely. At Microsoft, we are committed to sharing our learnings on this journey of upholding our Responsible AI principles with our customers. We use our own best practices and learnings to provide people and organizations with capabilities and tools to build AI applications that share the same high standards we strive for.

Today, we are sharing new capabilities to help customers pursue the benefits of AI while mitigating the risks:

  • Correction capability in Microsoft Azure AI Content Safety’s Groundedness detection feature that helps fix hallucination issues in real time, before users see them (a usage sketch follows this list).
  • Embedded Content Safety, which allows customers to embed Azure AI Content Safety on devices. This is important for on-device scenarios where cloud connectivity might be intermittent or unavailable.
  • New evaluations in Azure AI Studio to help customers assess the quality and relevancy of outputs and how often their AI application outputs protected material.
  • Protected Material Detection for Code is now in preview in Azure AI Content Safety to help detect pre-existing content and code. This feature helps developers explore public source code in GitHub repositories, fostering collaboration and transparency, while enabling more informed coding decisions.
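
These capabilities extend the Azure AI Content Safety service. As a point of reference, here is a minimal sketch of calling the service’s text-analysis API through its Python SDK, with the endpoint and key as placeholders; the newer groundedness correction and protected material detection features follow the same client pattern, though at the time of writing they may require preview API versions or the REST interface (our assumption).

```python
# Minimal sketch: analyzing text with Azure AI Content Safety (Python SDK).
# Endpoint and key below are placeholders, not real values.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-content-safety-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

result = client.analyze_text(AnalyzeTextOptions(text="Text returned by your AI application"))

# Each harm category (hate, self-harm, sexual, violence) comes back with a severity score.
for item in result.categories_analysis:
    print(item.category, item.severity)
```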

It’s amazing to see how customers across industries are already using Microsoft solutions to build more secure and trustworthy AI applications. For example, Unity, a platform for 3D games, used Microsoft Azure OpenAI Service to build Muse Chat, an AI assistant that makes game development easier. Muse Chat uses content-filtering models in Azure AI Content Safety to ensure responsible use of the software. Additionally, ASOS, a UK-based fashion retailer with nearly 900 brand partners, used the same built-in content filters in Azure AI Content Safety to support top-quality interactions through an AI app that helps customers find new looks.

We’re seeing the impact in the education space too. New York City Public Schools partnered with Microsoft to develop a chat system that is safe and appropriate for the education context, which they are now piloting in schools. The South Australia Department for Education similarly brought generative AI into the classroom with EdChat, relying on the same infrastructure to ensure safe use for students and teachers.

Privacy. Data is at the foundation of AI, and Microsoft’s priority is to help ensure customer data is protected and compliant through our long-standing privacy principles, which include user control, transparency and legal and regulatory protections. To build on this, today we’re announcing:

  • Confidential inferencing in preview in our Azure OpenAI Service Whisper model, so customers can develop generative AI applications that support verifiable end-to-end privacy. Confidential inferencing ensures that sensitive customer data remains secure and private during the inferencing process, which is when a trained AI model makes predictions or decisions based on new data. This is especially important for highly regulated industries, such as health care, financial services, retail, manufacturing and energy. (A client-side sketch follows this list.)
  • The general availability of Azure Confidential VMs with NVIDIA H100 Tensor Core GPUs, which allow customers to secure data directly on the GPU. This builds on our confidential computing solutions, which ensure customer data stays encrypted and protected in a secure environment so that no one gains access to the information or system without permission.
  • Azure OpenAI Data Zones for the EU and U.S. are coming soon and build on the existing data residency provided by Azure OpenAI Service by making it easier to manage the data processing and storage of generative AI applications. This new functionality offers customers the flexibility of scaling generative AI applications across all Azure regions within a geography, while giving them control over data processing and storage within the EU or U.S.
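
Confidential inferencing is a service-side property, so it should not require changes to application code (our assumption). For reference, a standard transcription request to a Whisper deployment in Azure OpenAI Service looks roughly like the following sketch; the endpoint, key, API version, and deployment name are illustrative placeholders.

```python
# Minimal sketch: calling a Whisper deployment in Azure OpenAI Service.
# Endpoint, API version, key, and deployment name are illustrative placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-aoai-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-06-01",
)

with open("meeting_audio.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="<your-whisper-deployment-name>",  # deployment name, not the base model id
        file=audio_file,
    )

print(transcript.text)
```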

We’ve seen increasing customer interest in confidential computing and excitement for confidential GPUs, including from application security provider F5, which is using Azure Confidential VMs with NVIDIA H100 Tensor Core GPUs to build advanced AI-powered security solutions, while ensuring confidentiality of the data its models are analyzing. And multinational banking corporation Royal Bank of Canada (RBC) has integrated Azure confidential computing into their own platform to analyze encrypted data while preserving customer privacy. With the general availability of Azure Confidential VMs with NVIDIA H100 Tensor Core GPUs, RBC can now use these advanced AI tools to work more efficiently and develop more powerful AI models.

Achieve more with Trustworthy AI 

We all need and expect AI we can trust. We’ve seen what’s possible when people are empowered to use AI in a trusted way, from enriching employee experiences and reshaping business processes to reinventing customer engagement and reimagining our everyday lives. With new capabilities that improve security, safety and privacy, we continue to enable customers to use and build trustworthy AI solutions that help every person and organization on the planet achieve more. Ultimately, Trustworthy AI encompasses all that we do at Microsoft, and it’s essential to our mission as we work to expand opportunity, earn trust, protect fundamental rights and advance sustainability across everything we do.


Red teams think like hackers to help keep AI safe
https://news.microsoft.com/source/features/ai/red-teams-think-like-hackers-to-help-keep-ai-safe/ | Thu, 01 Aug 2024

Just as AI tools such as ChatGPT and Copilot have transformed the way people work in all sorts of roles around the globe, they’ve also reshaped so-called red teams — groups of cybersecurity experts whose job is to think like hackers to help keep technology safe and secure.  

Generative AI’s abilities to communicate conversationally in multiple languages, write stories and even create photorealistic images come with new potential hazards, from providing biased or inaccurate results to giving people with ill intent new ways to stir up discord. These risks spurred a novel and broad approach to how Microsoft’s AI Red Team works to identify and reduce potential harm.

“We think security, responsible AI and the broader notion of AI safety are different facets of the same coin,” says Ram Shankar Siva Kumar, who leads Microsoft’s AI Red Team. “It’s important to get a universal, one-stop-shop look at all the risks of an AI system before it reaches the hands of a customer. Because this is an area that is going to have massive sociotechnical implications.” 

This post is part of Microsoft’s Building AI Responsibly series, which explores top concerns with deploying AI and how the company is addressing them with its responsible AI practices and tools. 

The term “red teaming” was coined during the Cold War, when the U.S. Defense Department conducted simulation exercises with red teams acting as the Soviets and blue teams acting as the U.S. and its allies. The cybersecurity community adopted the language a few decades ago, creating red teams to act as adversaries trying to break, corrupt or misuse technology — with the goal of finding and fixing potential harms before any problems emerged. 

When Siva Kumar formed Microsoft’s AI Red Team in 2018, he followed the traditional model of pulling together cybersecurity experts to proactively probe for weaknesses, just as the company does with all its products and services.  

At the same time, Forough Poursabzi was leading researchers from around the company in studies that took a different angle, through a responsible AI lens, looking at whether the generative technology could be harmful — either intentionally or due to systemic issues in models that were overlooked during training and evaluation. That’s not an element red teams have had to contend with before.

The different groups quickly realized they’d be stronger together and joined forces to create a broader red team that assesses both security and societal-harm risks alongside each other, adding a neuroscientist, a linguist, a national security specialist and numerous other experts with diverse backgrounds.  

“We need a wide range of perspectives to get responsible AI red teaming done right,” says Poursabzi, a senior program manager on Microsoft’s AI Ethics and Effects in Engineering and Research (Aether) team, which taps into a whole ecosystem of responsible AI at Microsoft and looks into emergent risks and longer-term considerations with generative AI technologies.  

The dedicated AI Red Team is separate from those who build the technology, and its expanded scope includes adversaries who may try to compel a system to generate hallucinations, as well as harmful, offensive or biased outputs due to inadequate or inaccurate data.  

Team members assume various personas, from a creative teenager pulling a prank to a known adversary trying to steal data, to reveal blind spots and uncover risks. Team members live around the world and collectively speak 17 languages, from Flemish to Mongolian to Telugu, to help with nuanced cultural contexts and region-specific threats.  

And they don’t only probe systems by hand; they also use large language models (LLMs) to run automated attacks on other LLMs.
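
To make the idea of LLMs attacking other LLMs concrete, here is a deliberately simplified sketch of an automated probing loop. It is not Microsoft’s internal red-teaming tooling; the endpoint, deployment names, and the attack goal are illustrative assumptions, and real harnesses add human review, scoring, and many more prompt variations.

```python
# Illustrative sketch of one LLM generating adversarial prompts for another LLM.
# This is a toy harness, not Microsoft's red-teaming tooling; names are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-aoai-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-06-01",
)

ATTACK_GOAL = "Try to get the assistant to reveal its hidden system instructions."

def generate_attack_prompt() -> str:
    # The "attacker" model proposes a probing prompt for the stated goal.
    response = client.chat.completions.create(
        model="<attacker-deployment>",
        messages=[
            {"role": "system", "content": "You write test prompts for AI safety evaluation."},
            {"role": "user", "content": f"Write one prompt that attempts the following: {ATTACK_GOAL}"},
        ],
    )
    return response.choices[0].message.content

def probe_target(attack_prompt: str) -> str:
    # The "target" model under test receives the generated prompt.
    response = client.chat.completions.create(
        model="<target-deployment>",
        messages=[{"role": "user", "content": attack_prompt}],
    )
    return response.choices[0].message.content

findings = []
for _ in range(5):  # a real harness would run far more variations
    attack = generate_attack_prompt()
    answer = probe_target(attack)
    findings.append({"attack": attack, "answer": answer})  # reviewed by humans afterwards

print(f"Collected {len(findings)} attack/response pairs for review.")
```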

Security above all else—expanding Microsoft’s Secure Future Initiative
http://approjects.co.za/?big=en-us/security/blog/2024/05/03/security-above-all-else-expanding-microsofts-secure-future-initiative/ | Fri, 03 May 2024

Last November, we launched the Secure Future Initiative (SFI) to prepare for the increasing scale and high stakes of cyberattacks. SFI brings together every part of Microsoft to advance cybersecurity protection across our company and products.

Since then, the threat landscape has continued to rapidly evolve, and we have learned a lot. The recent findings by the Department of Homeland Security’s Cyber Safety Review Board (CSRB) regarding the Storm-0558 cyberattack from last July, and the Midnight Blizzard attack we reported in January, underscore the severity of the threats facing our company and our customers.

Microsoft plays a central role in the world’s digital ecosystem, and this comes with a critical responsibility to earn and maintain trust. We must and will do more.

We are making security our top priority at Microsoft, above all else—over all other features. We’re expanding the scope of SFI, integrating the recent recommendations from the CSRB as well as our learnings from Midnight Blizzard to ensure that our cybersecurity approach remains robust and adaptive to the evolving threat landscape.

We will mobilize the expanded SFI pillars and goals across Microsoft, and this will be a dimension in our hiring decisions. In addition, we will instill accountability by basing part of the compensation of the company’s Senior Leadership Team on our progress in meeting our security plans and milestones.

Below are details to demonstrate the seriousness of our work and commitment.

[Diagram illustrating the six pillars of the Microsoft Secure Future Initiative.]
Expansion of SFI approach and scope
We have evolved our security approach, and going forward our work will be guided by the following three security principles:

  • Secure by design: Security comes first when designing any product or service.
  • Secure by default: Security protections are enabled and enforced by default, require no extra effort, and are not optional.
  • Secure operations: Security controls and monitoring will continuously be improved to meet current and future threats.

We are further expanding our goals and actions aligned to six prioritized security pillars and providing visibility into the details of our execution:

  1. Protect identities and secrets
    Reduce the risk of unauthorized access by implementing and enforcing best-in-class standards across all identity and secrets infrastructure, and user and application authentication and authorization. As part of this, we are taking the following actions:

  • Protect identity infrastructure signing and platform keys with rapid and automatic rotation with hardware storage and protection (for example, hardware security module (HSM) and confidential compute).
  • Strengthen identity standards and drive their adoption through use of standard SDKs across 100% of applications.
  • Ensure 100% of user accounts are protected with securely managed, phishing-resistant multifactor authentication.
  • Ensure 100% of applications are protected with system-managed credentials (for example, Managed Identity and Managed Certificates); see the sketch after this list.
  • Ensure 100% of identity tokens are protected with stateful and durable validation.
  • Adopt more fine-grained partitioning of identity signing keys and platform keys.
  • Ensure identity and public key infrastructure (PKI) systems are ready for a post-quantum cryptography world.
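
To illustrate what system-managed credentials mean in practice, here is a minimal sketch of an application authenticating to Azure Storage through a managed identity (via DefaultAzureCredential) instead of an embedded secret; the storage account name is a placeholder, and the specific service client is just an example.

```python
# Minimal sketch: using a managed identity instead of a stored secret.
# DefaultAzureCredential picks up the managed identity when running on Azure
# (and falls back to developer credentials locally). Account name is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

credential = DefaultAzureCredential()  # no connection string or key in code or config

blob_service = BlobServiceClient(
    account_url="https://<your-storage-account>.blob.core.windows.net",
    credential=credential,
)

for container in blob_service.list_containers():
    print(container.name)
```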

  2. Protect tenants and isolate production systems
    Protect all Microsoft tenants and production environments using consistent, best-in-class security practices and strict isolation to minimize breadth of impact. As part of this, we are taking the following actions:

  • Maintain the security posture and commercial relationships of tenants by removing all unused, aged, or legacy systems.
  • Protect 100% of Microsoft, acquired, and employee-created tenants, commerce accounts, and tenant resources to the security best practice baselines.
  • Manage 100% of Microsoft Entra ID applications to a high, consistent security bar.
  • Eliminate 100% of identity lateral movement pivots between tenants, environments, and clouds.
  • 100% of applications and users have continuous least-privilege access enforcement.
  • Ensure only secure, managed, healthy devices will be granted access to Microsoft tenants.

  3. Protect networks
    Protect Microsoft production networks and implement network isolation of Microsoft and customer resources. As part of this, we are taking the following actions:

  • Secure 100% of Microsoft production networks and systems connected to the networks by improving isolation, monitoring, inventory, and secure operations.
  • Apply network isolation and microsegmentation to 100% of the Microsoft production environments, creating additional layers of defense against attackers.
  • Enable customers to easily secure their networks and network isolate resources in the cloud.

  4. Protect engineering systems
    Protect software assets and continuously improve code security through governance of the software supply chain and engineering systems infrastructure. As part of this, we are taking the following actions:

  • Build and maintain inventory for 100% of the software assets used to deploy and operate Microsoft products and services.
  • 100% of access to source code and engineering systems infrastructure is secured through Zero Trust and least-privilege access policies.
  • 100% of source code that deploys to Microsoft production environments is protected through security best practices.
  • Secure development, build, test, and release environments with 100% standardized, governed pipelines and infrastructure isolation.
  • Secure the software supply chain to protect Microsoft production environments.

  5. Monitor and detect threats
    Comprehensive coverage and automatic detection of threats to Microsoft production infrastructure and services. As part of this, we are taking the following actions:

  • Maintain a current inventory across 100% of Microsoft production infrastructure and services.
  • Retain 100% of security logs for at least two years and make six months of appropriate logs available to customers.
  • 100% of security logs are accessible from a central data lake to enable efficient and effective security investigation and threat hunting.
  • Automatically detect and respond rapidly to anomalous access, behaviors, and configurations across 100% of Microsoft production infrastructure and services.

  6. Accelerate response and remediation
    Prevent exploitation of vulnerabilities discovered by external and internal entities, through comprehensive and timely remediation. As part of this, we are taking the following actions:

  • Reduce the Time to Mitigate for high-severity cloud security vulnerabilities with accelerated response.
  • Increase transparency of mitigated cloud vulnerabilities through the adoption and release of the Common Weakness Enumeration™ (CWE™) and Common Platform Enumeration™ (CPE™) industry standards for released high-severity Common Vulnerabilities and Exposures (CVE) affecting the cloud.
  • Improve the accuracy, effectiveness, transparency, and velocity of public messaging and customer engagement.

These goals directly align with our learnings from the Midnight Blizzard incident as well as all four CSRB recommendations to Microsoft and all 12 recommendations to cloud service providers (CSPs), across the areas of security culture, cybersecurity best practices, auditing logging norms, digital identity standards and guidance, and transparency.

We are delivering on these goals through a new level of coordination with a new operating model that aligns leaders and teams to the six SFI pillars, in order to drive security holistically and break down traditional silos. The pillar leaders are working across engineering Executive Vice Presidents (EVPs) to drive integrated, cross-company engineering execution, doing this work in waves. These engineering waves involve teams across Microsoft Azure, Windows, Microsoft 365, and Security, with additional product teams integrating into the process weekly.

While there is much more to do, we’ve made progress in executing against SFI priorities. For example, we’ve implemented automatic enforcement of multifactor authentication by default across more than one million Microsoft Entra ID tenants within Microsoft, including tenants for development, testing, demos, and production. We have eliminated or reduced application targets by removing 730,000 apps to date across production and corporate tenants that were out-of-lifecycle or not meeting current SFI standards. We have expanded our logging to give customers deeper visibility. And we recently announced a significant shift on our response process: We are now publishing root cause data for Microsoft CVEs using the CWE™ industry standard.

Adhering to standards with paved path systems
Paved paths are best practices drawn from our experience, such as how to optimize the productivity of our software development and operations, how to achieve compliance (such as Software Bill of Materials, Sarbanes-Oxley Act, General Data Protection Regulation, and others), and how to eliminate entire categories of vulnerabilities and mitigate related risks. A paved path becomes a standard when adoption significantly improves the developer or operations experience or security, quality, or compliance.

With SFI, we are explicitly defining standards for each of the six security pillars, and adherence to these standards will be measured as objectives and key results (OKRs).

Driving continuous improvement
The Secure Future Initiative empowers all of Microsoft to implement the needed changes to deliver security first. Our company culture is based on a growth mindset that fosters an ethos of continuous improvement. We continually seek feedback and new perspectives to tune our approach and progress. We will take our learnings from security incidents, feed them back into our security standards, and operationalize these learnings as paved paths that can enable secure design and operations at scale.

Instituting new governance
We are also taking major steps to elevate security governance, including several organizational changes and additional oversight, controls, and reporting.

Microsoft is implementing a new security governance framework spearheaded by the Chief Information Security Officer (CISO). This framework introduces a partnership between engineering teams and newly formed Deputy CISOs, collectively responsible for overseeing SFI, managing risks, and reporting progress directly to the Senior Leadership Team. Progress will be reviewed weekly with this executive forum and quarterly with our Board of Directors.

Finally, given the importance of threat intelligence, we are bringing the full breadth of nation-state actor and threat hunting capabilities into the CISO organization.

Instilling a security-first culture
Culture can only be reinforced through our daily behaviors. Security is a team sport and is best realized when organizational boundaries are overcome. The engineering EVPs, in close coordination with SFI pillar leaders, are holding broadscale weekly and monthly operational meetings that include all levels of management and senior individual contributors. These meetings work on detailed execution and continuous improvement of security in context with what we collectively deliver to customers. Through this process of bottom-to-top and end-to-end problem solving, security thinking is ingrained in our daily behaviors.

Ultimately, Microsoft runs on trust and this trust must be earned and maintained. As a global provider of software, infrastructure, and cloud services, we feel a deep responsibility to do our part to keep the world safe and secure. Our promise is to continually improve and adapt to the evolving needs of cybersecurity. This is job number one for us.

Get started with Microsoft Security

Groundbreaking AI innovation is transforming industries across France
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/04/30/groundbreaking-ai-innovation-is-transforming-industries-across-france/ | Tue, 30 Apr 2024

This blog is part of the AI worldwide series, which highlights customer stories from around the globe. Read more stories from India, Australia and New Zealand, Brazil, and Japan.

AI is currently at the forefront of global technological advancement, permeating various sectors from insurance to energy, driving efficiency, innovation, and transformative changes in society. With ongoing developments in machine learning and natural language processing, AI continues to reshape industries, offering a glimpse into a future where technology and human ingenuity intersect in exciting new ways. The expanding footprint of AI promises both unprecedented opportunities and considerations for responsible implementation.  

For me personally, one of the most exciting aspects is seeing the revolution of industries set into motion as sector-specific use cases begin to emerge. The number of Azure AI customers continues to grow, with more than 65% of Fortune 500 companies now using Microsoft Azure OpenAI Service, which underscores the critical role of partnerships and industry innovation in scaling AI solutions to their full potential across sectors.

This willingness of industry leaders to be pioneers of AI was on bold display during the Microsoft AI Tour stop in Paris, part of the global event series designed to help decision makers and developers discover new opportunities with AI and advance their knowledge. Organizations such as Schneider Electric, The Groupama Group, Amadeus, Onepoint, AXA, and TotalEnergies are not just adopting AI; they’re redefining its potential. These groundbreaking use cases are shedding light on a future where AI is not just a tool, but a catalyst for a richer, more efficient, and more sustainable world.

Groupama’s virtual assistant optimizes policyholder service management

The Groupama Group, a premier mutual insurance group in France, has introduced a cutting-edge virtual assistant within its Employee Savings unit, harnessing the power of Azure OpenAI Service, Azure AI Search, and the Microsoft Bot Framework, to streamline customer managers’ interactions with policyholders. First ideated during an AI hackathon, the assistant has been embraced by the unit’s entire staff and boasts an impressive 80% success rate in providing accurate, dependable, and verifiable information.
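
The assistant combines retrieval and generation. A minimal sketch of that pattern, grounding an Azure OpenAI chat completion on documents retrieved from Azure AI Search, might look like the following; the resource names, index name, field name, and deployment are placeholders, and Groupama’s actual implementation (including the Bot Framework layer) is not public.

```python
# Minimal retrieval-augmented generation sketch: Azure AI Search + Azure OpenAI.
# Resource names, index name, field name, and deployment name are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

search = SearchClient(
    endpoint="https://<your-search-resource>.search.windows.net",
    index_name="policy-documents",
    credential=AzureKeyCredential("<search-key>"),
)

aoai = AzureOpenAI(
    azure_endpoint="https://<your-aoai-resource>.openai.azure.com",
    api_key="<aoai-key>",
    api_version="2024-06-01",
)

question = "What are the payout rules for the employee savings plan?"

# 1) Retrieve the most relevant passages for the question.
hits = search.search(search_text=question, top=3)
context = "\n\n".join(doc["content"] for doc in hits)  # assumes a 'content' field in the index

# 2) Ask the model to answer using only the retrieved context.
response = aoai.chat.completions.create(
    model="<chat-deployment>",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)

print(response.choices[0].message.content)
```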

“Our managers save a considerable amount of time and are able to carry out their work in much better conditions,” shared François-Xavier Enderlé, Head of Digital Transformation, “and this clearly enhances the quality of the relationship we maintain with our customers.”

Groupama has also stood up an interdepartmental AI-centered think tank, AI Factory, which is currently exploring more than 25 AI use cases aimed at revolutionizing claims processing, enhancing customer service through tools like a new FAQ chatbot, and streamlining the underwriting process for efficiency. Further, Groupama aims to democratize AI technology through training on AI prompting, empowering employees to innovate and improve operational efficiency and customer engagement.

Amadeus enhances employee efficiency with Copilot for Microsoft 365

Amadeus, a global technology provider for the travel industry, has deployed Copilot for Microsoft 365 to streamline work and free employees to focus on value-added tasks. Seamlessly integrated into Microsoft Teams, Word, PowerPoint, and Outlook, Amadeus’ Copilot solution has significantly improved operational efficiency.

“One of the challenges for large-scale, global organizations is the collaboration and the data management,” explains Marco Ruiz González, Product Manager and Solution Architect who supervised the deployment. “We are generating a large amount of data from different countries, and it’s very useful to have quick access to all this data.”

Early results are promising: pilot users reported substantial time savings in communication drafting, enhanced efficiency in email and meeting management, and improved information gathering and content translation capabilities.

With a 90% adoption rate among the initial 300 pilot users, half of whom engage with Copilot weekly, Amadeus plans to extend Copilot to 3,000 employees over the next six months, prioritizing adoption training tailored to diverse user profiles.

Schneider Electric leads the charge in sustainable energy management with AI

Schneider Electric is tackling the complex issue of optimizing energy use and performance with its EcoStruxure platform. By combining Azure OpenAI Service and the Internet of Things (IoT), EcoStruxure merges Schneider Electric’s industry knowledge with Microsoft AI technology, enabling sustainable energy solutions and efficient energy management on a global scale. This includes dynamic control of energy performance, decision-making on the use of renewable energy sources, and overall energy optimization.

“People are using sustainable energy solutions to both produce and consume energy, and they can optimize how to produce or store that energy on the grid as it makes sense,” says Yoann Bersihand, Vice President of AI Technology at Schneider Electric. “Without AI, there is no way that we could address a problem as complex as this.”

The platform is designed with a layered architecture, planning future enhancements to integrate AI directly into hardware for efficient and sustainable energy management, especially beneficial for customers with limitations on using cloud services.

Schneider Electric has expanded its partnership with Microsoft, integrating Azure OpenAI Service into its operations to enhance efficiency and innovation across various processes. The integration enables the creation of solutions such as the Resource Advisor Copilot, leveraging large language model technology for data analysis and decision support, and Jo-Chat GPT, an internal tool enhancing employee productivity through generative AI. Further innovations include a programmable logic controller (PLC) code generation assistant that helps engineers quickly create high-quality, tested, and validated code, the Finance Advisor and Knowledge Bot, aimed at improving financial decision-making and customer service, respectively, as well as plans to incorporate GitHub Copilot to boost offer creation and Microsoft Copilot for Sales to support frontline staff. These advancements signify Schneider Electric’s commitment to leveraging generative AI for operational excellence and innovation.

“We didn’t want AI just to be an extra layer on top of the data teams. We decided to really go all-in on AI and not simply create proofs of concepts,” explained Yoann Bersihand, Schneider Electric’s Vice President of AI Technology, capturing Schneider Electric’s commitment to fully integrating AI into its operations to lead innovation in the energy sector.

Onepoint unlocks productivity company-wide with generative AI

An early adopter of AI, technology and consulting firm Onepoint is infusing AI at every level of the company with Microsoft’s turnkey generative AI solutions: Azure OpenAI Service, GitHub Copilot, and Copilot for Microsoft 365.

Neo, Onepoint’s secure conversational agent built on Azure OpenAI Service, leverages a library of prompts and business-oriented solutions to quickly generate reports and analyses, increasing productivity for 3,300 employees across the company. Onepoint has also piloted GitHub Copilot, resulting in higher-quality code, better documentation, and productivity gains of around 40% in code production.
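
Onepoint has not published Neo’s internals; as an illustration of the prompt-library idea, a conversational agent can pair a catalog of reusable, business-oriented system prompts with an Azure OpenAI chat deployment, roughly as sketched below. The prompt texts, deployment name, and endpoint are invented for the example.

```python
# Illustrative sketch of a prompt library feeding an Azure OpenAI chat deployment.
# Prompt texts, deployment name, and endpoint are invented placeholders.
from openai import AzureOpenAI

PROMPT_LIBRARY = {
    "meeting_report": "Summarize the following notes into a one-page report with decisions and action items.",
    "market_analysis": "Analyze the following notes and list key risks, opportunities, and open questions.",
}

client = AzureOpenAI(
    azure_endpoint="https://<your-aoai-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-06-01",
)

def run_prompt(template_name: str, source_text: str) -> str:
    response = client.chat.completions.create(
        model="<chat-deployment>",
        messages=[
            {"role": "system", "content": PROMPT_LIBRARY[template_name]},
            {"role": "user", "content": source_text},
        ],
    )
    return response.choices[0].message.content

print(run_prompt("meeting_report", "Notes from the Q3 steering committee..."))
```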

“The pilot showed us that if developers were properly acculturated to the product, it was really possible to make a quantum leap in productivity,” asserts François Binder, Partner Data & AI for Onepoint.

Addressing the challenge of acculturating employees to AI technology and practices, Onepoint has instituted an “AI Office” to ensure both technical and non-technical staff understand and adopt AI effectively. By providing structured training, fostering an AI community, and overseeing the deployment of AI solutions, the unit seeks to bridge knowledge gaps and biases related to AI among both technical and non-technical employees.

“We’re doing everything we can to fully embark our team in the generative AI adventure and give each of our consultants the means to become augmented consultants,” insists Binder.

What’s more, Onepoint is strategically integrating generative AI solutions into its offerings, along with personalized training to ensure customers are well-versed in AI’s capabilities and best practices, extending the benefits of AI to its customers as well.

AXA’s secure generative AI platform boosts the productivity of its global employees

AXA, a global leader in insurance and asset management, is embracing the digital future through generative AI with the launch of AXA Secure GPT. Developed in collaboration with Microsoft and powered by Azure OpenAI Service, AXA Secure GPT is designed to equip AXA’s 140,000 employees with cutting-edge AI tools in a secure and efficient manner.

Addressing the challenge of safely integrating public AI advancements within the corporate environment, AXA Secure GPT ensures the utmost privacy and control over data by employing robust filtering and classification, alongside secure cloud tenancy to keep all data and interactions within a controlled environment. Stringent authentication protocols and comprehensive security controls monitor and protect against potential threats. By leveraging Microsoft’s content filtering and adding an extra layer of security, AXA Secure GPT exceeds the current standards for data privacy and security, ensuring a reliable and secure tool for its employees.
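
AXA’s exact architecture is not public; the “extra layer of security” pattern described here, screening prompts before they ever reach the model, can be sketched roughly as follows. The severity threshold, endpoints, keys, and deployment names are assumptions for the example.

```python
# Illustrative sketch of screening a prompt before forwarding it to the model.
# Threshold, endpoints, keys, and deployment names are assumptions for the example.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential
from openai import AzureOpenAI

safety = ContentSafetyClient(
    endpoint="https://<content-safety-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<safety-key>"),
)
aoai = AzureOpenAI(
    azure_endpoint="https://<aoai-resource>.openai.azure.com",
    api_key="<aoai-key>",
    api_version="2024-06-01",
)

SEVERITY_LIMIT = 2  # example threshold; real policies are tuned per category

def safe_chat(prompt: str) -> str:
    # Extra screening layer in front of the model's own built-in content filtering.
    analysis = safety.analyze_text(AnalyzeTextOptions(text=prompt))
    if any(item.severity and item.severity > SEVERITY_LIMIT for item in analysis.categories_analysis):
        return "This request was blocked by the organization's content policy."
    response = aoai.chat.completions.create(
        model="<chat-deployment>",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(safe_chat("Draft a customer email explaining a claims decision."))
```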

With the goal to scale from 1,000 current users to all 140,000 global employees by mid-2024, AXA provides comprehensive AI support and training which includes leveraging Microsoft consulting services for optimal technological use and architecture design, alongside a dedicated change management program in each country to ensure smooth integration.

“As an employer, it is our responsibility to provide our employees with the best tools to enhance their comfort and enable them to focus on high-value activities,” said Vincent De Ponthaud, Head of Software & AI Engineering at AXA. Tailored training sessions and a specially curated prompt library, aimed at enhancing productivity across various departments, empower AXA employees to focus on high-value activities.

TotalEnergies supports operational transformation with AI and low-code solutions

Multi-energy company TotalEnergies has implemented Copilot for Microsoft 365 to support operational transformation. In the initial testing phase involving 300 employees, the company observed enhanced operational efficiency and improved user experience. Concurrently, TotalEnergies is empowering its workforce with Microsoft Power Platform, enabling them to develop low-code/no-code solutions integrated with other company applications and databases, thereby streamlining the resolution of various day-to-day challenges.

“In line with our pioneering spirit, TotalEnergies is committed to digital transformation and supports its employees so that they can make the most of it,” said Patrick Pouyanné, CEO of TotalEnergies. “The new technologies of generative artificial intelligence and of ‘low code no code’ will provide them with the simplification and autonomy they need to put their skills and creativity even further at the service of our company’s transition strategy.” In pursuit of this objective, TotalEnergies employees will receive training dedicated to the understanding and utilization of the new AI tools effectively.

AI for everyone

The Microsoft AI Tour provided a compelling opportunity to witness firsthand the pinnacle of regional innovation and to glimpse the far-reaching global impact poised to shape our future. Reflecting on the transformative journey of AI within France alone, it becomes evident that we’re venturing into an unprecedented era of technological advancement. The success stories of companies like Schneider Electric, Groupama, Amadeus, Onepoint, and TotalEnergies illustrate how the synergy between AI and human ingenuity propels progress and transformation across diverse sectors, transformation that will doubtlessly reach beyond borders to the benefit of organizations across the globe.

[Image: The Microsoft AI Tour in Paris, France.]

It’s also important to recognize that our exploration with AI is in its infancy and the horizon for transformative impact is limitless. It’s critical that business leaders scaffold AI innovation within an architecture of responsible AI, which includes developing ethical guidelines that address transparency, equity, accountability, and privacy; building diverse teams; investing in employee education around responsible AI use; and collaborating with industry bodies and policymakers to establish regulatory frameworks that can guide responsible deployment. When innovation and responsibility are aligned, we draw closer to ensuring that the potential for transformational impact of AI will be harnessed for the benefit of society as a whole.

Take the next step in your AI journey by exploring Microsoft AI solutions, diving into The AI Strategy Roadmap, and getting skilled up with Microsoft Learn’s AI learning hub to ensure you’re ready to leverage Microsoft AI to its fullest potential.

Building a foundation for AI success: Governance
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/03/28/building-a-foundation-for-ai-success-governance/ | Thu, 28 Mar 2024

This is the last post in our six-part blog series. See part one, part two, part three, part four, part five, and download the white paper.

To date, this series has explored four of the five drivers of AI readiness: business strategy, technology and data strategy, AI strategy and experience, and organization and culture. Each is critical to an organization’s ability to use AI to deliver value to the business, whether it’s related to productivity enhancements, customer experience, revenue generation, or net-new innovation. But nothing is ultimately more important than AI governance, which includes the processes, controls, and accountability structures needed to govern data privacy, data governance, security, and responsible development and use of AI in an organization.   

“We recognize that trust is not a given but earned through action,” said Microsoft Vice Chair and President Brad Smith. “That’s precisely why we are so focused on implementing our Microsoft responsible AI principles and practices—not just for ourselves, but also to equip our customers and partners to do the same.” 

In that spirit, we have collected a set of resources that encompass best practices for AI governance, focusing on security, privacy and data governance, and responsible AI. 

Security

Just as AI enables new opportunities, it also introduces new imperatives to manage risk, whether related specifically to AI usage, app and data protection, compliance with organizational and legal policies, or threat detection. The Microsoft Security Blog includes a set of resources to help you modernize security operations, empower security professionals, and learn best practices to mitigate and manage risk more effectively.  

One of the first steps you can take is to understand how AI is being used in the organization so you can make informed decisions and implement the appropriate controls. This post lays out the primary concerns leaders have about implementing AI, as well as a set of recommendations on how to discover, protect, and govern AI usage. 

For example, you may have heard of (or already be implementing) red teaming. Red teaming, according to this post by the Microsoft AI Red Team, “broadly refers to the practice of emulating real-world adversaries and their tools, tactics, and procedures to identify risks, uncover blind spots, validate assumptions, and improve the overall security posture of systems.” The post shares additional education, guidance, and resources to help your organization apply this best practice to your AI systems. 

Microsoft’s holistic approach to generative AI security considers the technology, its users, and society at large across four areas of protection: data privacy and ownership, transparency and accountability, user guidance and policy, and secure by design. For more on how Microsoft secures generative AI, download Securing AI guidance.  

Privacy and data governance

Building trust in AI requires a strong privacy and data governance foundation. As our Chief Privacy Officer Julie Brill has said, “At Microsoft we want to empower our customers to harness the full potential of new technologies like artificial intelligence, while meeting their privacy needs and expectations.” Enhancing trust and protecting privacy in the AI era, originally posted on the Microsoft on the Issues Blog, describes our approach to data privacy, focusing on topics such as data security, transparency, and data protection user controls. It also includes a set of resources to help you dig deeper into our approaches to privacy issues and share what we are learning. 
 

Data governance refers to the processes, policies, roles, metrics, and standards that enable secure, private, accurate, and usable data throughout its life cycle. It’s vital to your organization’s ability to manage risk, build trust, and promote successful business outcomes. It is also the foundation for data management practices that reduce the risk of data leakage or misuse of confidential or sensitive information such as business plans, financial records, trade secrets, and other business-critical assets. This post shares Microsoft’s approach to data security and compliance so you can learn more about how to safely and confidently adopt AI technologies and keep your most important asset—your data—safe. 

Responsible AI

“Don’t ask what computers can do, ask what they should do.” That is the title of the chapter on AI and ethics in a book Brad Smith coauthored in 2019, and they are also the first words in Governing AI: A Blueprint for the Future, which details Microsoft’s five-point approach to help governance advance more quickly, as well as our “Responsible by Design” approach to building AI systems that benefit society. 

The Microsoft on the Issues Blog includes a wealth of perspectives on responsible AI topics, including the Microsoft AI Access Principles, which detail our commitments to promote innovation and competition in the new AI economy, and approaches to combating deepfakes in elections as part of the Tech Accord announced in February in Munich.

The Responsible AI Standard is the product of a multi-year effort to define product development requirements for responsible AI. It captures the essence of the work Microsoft has done to operationalize its responsible AI principles and offers valuable guidance to leaders and practitioners looking to apply similar approaches in their own organizations.

You may also have heard about our AI customer commitments, which include:  

  • Sharing what we are learning about developing and deploying AI responsibly and assisting you in learning how to do the same.
  • Creating an AI assurance program.
  • Supporting you as you implement your own AI systems responsibly. 

The Empowering responsible AI practices website brings together a range of policy, research, and engineering resources relevant to a spectrum of roles within your organization. Here you can find out more about our commitments to advance safe, secure, and trustworthy AI, learn about the most recent research advancements and collaborations, and explore responsible AI tools to help your organization define and implement best practices for human-AI interaction, fairness, transparency and accountability, and other critical objectives. 

Next steps

As Brad Smith concluded in Governing AI: A Blueprint for the Future, “We’re on a collective journey to forge a responsible future for artificial intelligence. We can all learn from each other. And no matter how good we may think something is today, we will all need to keep getting better.” 

Download our e-book, “The AI Strategy Roadmap: Navigating the Stages of AI Value Creation,” in which we share the emerging best practices that global leaders are using to accelerate time to value with AI. It is based on a research study including more than 1,300 business and technology decision makers across multiple regions and industries.

Protecting the data of our commercial and public sector customers in the AI era
https://blogs.microsoft.com/on-the-issues/2024/03/28/data-protection-responsible-ai-azure-copilot/ | Thu, 28 Mar 2024

Organizations across industries are leveraging Microsoft Azure OpenAI Service and Copilot services and capabilities to drive growth, increase productivity, and create value-added experiences. From advancing medical breakthroughs to streamlining manufacturing operations, our customers trust that their data is protected by robust privacy protections and data governance practices. As our customers continue to expand their use of our AI solutions, they can be confident that their valuable data is safeguarded by industry-leading data governance and privacy practices in the most trusted cloud on the market today. 

At Microsoft, we have a long-standing practice of protecting our customers’ information. Our approach to Responsible AI is built on a foundation of privacy, and we remain dedicated to upholding core values of privacy, security, and safety in all our generative AI products and solutions.  

Microsoft’s existing privacy commitments extend to our AI commercial products 

 Commercial and public sector customers can rest assured that the privacy commitments they have long relied on for our enterprise cloud products also apply to our enterprise generative AI solutions, including Azure OpenAI Service and our Copilots.  

  • We will keep your organization’s data private. Your data remains private when using Azure OpenAI Service and Copilots and is governed by our applicable privacy and contractual commitments, including the commitments we make in the Microsoft Data Protection Addendum, Microsoft’s Product Terms, and the Microsoft Privacy Statement.
  • You are in control of your organization’s data. Your data is not used in undisclosed ways or without your permission. You may choose to customize your use of Azure OpenAI Service by opting to use your data to fine-tune models for your organization’s own use (a fine-tuning sketch follows this list). If you do use your organization’s data to fine-tune, any fine-tuned AI solutions created with your data will be available only to you.
  • Your access control and enterprise policies are maintained. To protect privacy within your organization when using enterprise products with generative AI capabilities, your existing permissions and access controls will continue to apply to ensure that your organization’s data is displayed only to those users to whom you have given appropriate permissions.   
  • Your organization’s data is not shared. Microsoft does not share your data with third parties without your permission. Your data, including the data generated through your organization’s use of Azure OpenAI Service or Copilots – such as prompts and responses – is kept private and is not disclosed to third parties.
  • Your organization’s data privacy and security are protected by design. Security and privacy are incorporated through all phases of design and implementation of Azure OpenAI Service and Copilots. As with all our products, we provide a strong privacy and security baseline and make available additional protections that you can choose to enable. As external threats evolve, we will continue to advance our solutions and offerings to ensure world-class privacy and security in Azure OpenAI Service and Copilots, and we will continue to be transparent about our approach. 
  • Your organization’s data is not used to train foundation models. Microsoft’s generative AI solutions, including Azure OpenAI Service and Copilot services and capabilities, do not use your organization’s data to train foundation models without your permission. Your data is not available to OpenAI or used to train OpenAI models.
  • Our products and solutions continue to comply with global data protection regulations. The Microsoft AI products and solutions you deploy continue to be compliant with today’s global data protection and privacy regulations. As we continue to navigate the future of AI together, including the implementation of the EU AI Act and other laws globally, organizations can be certain that Microsoft will be transparent about our privacy, safety, and security practices. We will comply with laws globally that govern AI, and back up our promises with clear contractual commitments.  
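
For the fine-tuning option mentioned above, a minimal sketch of starting a fine-tuning job against Azure OpenAI Service with the Python SDK follows; the training file, base model name, endpoint, key, and API version are illustrative, and model availability varies by region.

```python
# Minimal sketch: fine-tuning a model in Azure OpenAI Service with your own data.
# File name, base model, endpoint, key, and API version are illustrative placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-aoai-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-06-01",
)

# Upload a JSONL file of chat-formatted training examples.
with open("training_examples.jsonl", "rb") as f:
    training_file = client.files.create(file=f, purpose="fine-tune")

# Start the fine-tuning job; the resulting model stays private to your resource.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-35-turbo-0613",  # example base model; availability varies by region
)

print("Fine-tuning job id:", job.id)
```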

You can find additional details about how Microsoft’s privacy commitments apply to Azure OpenAI and Copilots here.

We provide programs, transparency documentation, and tools to assist your AI deployment  

To support our customers and empower their use of AI, Microsoft offers a range of solutions, tooling, and resources to assist in their AI deployment, from comprehensive transparency documentation to a suite of tools for data governance, risk, and compliance. Dedicated programs such as our industry-leading AI Assurance program and Customer Copyright Commitment further broaden the support we offer commercial customers in addressing their needs.  

Microsoft’s AI Assurance Program helps customers ensure that the AI applications they deploy on our platforms meet the legal and regulatory requirements for responsible AI. The program includes support for regulatory engagement and advocacy, risk framework implementation and the creation of a customer council. 

For decades we’ve defended our customers against intellectual property claims relating to our products. Building on our previous AI customer commitments, Microsoft announced our Customer Copyright Commitment, which extends our intellectual property indemnity support to both our commercial Copilot services and our Azure OpenAI Service. Now, if a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or Azure OpenAI Service, or for the output they generate, we will defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit, as long as the customer has used the guardrails and content filters we have built into our products. 

Our comprehensive transparency documentation about Azure OpenAI Service and Copilot and the customer tools we provide help organizations understand how our AI products work and provide choices our customers can use to influence system performance and behavior.  

Azure’s enterprise-grade protections provide a strong foundation upon which customers can build their data privacy, security, and compliance systems to confidently scale AI while managing risk and ensuring compliance. With a range of solutions in the Microsoft Purview family of products, organizations can further discover, protect, and govern their data when using Copilot for Microsoft 365 within their organizations.  

With Microsoft Purview, customers can discover risks associated with data and users, such as which prompts include sensitive data. They can protect that sensitive data with sensitivity labels and classifications, which means Copilot will only summarize content for users who have the right permissions to that content. And when sensitive data is included in a Copilot prompt, the Copilot-generated output automatically inherits the label from the reference file. Similarly, if a user asks Copilot to create new content based on a labeled document, the Copilot-generated output automatically inherits the sensitivity label along with all of its protections, such as data loss prevention policies.  

 Copilot conversation inherits sensitivity label 
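To make the label-inheritance behavior concrete, here is a minimal Python sketch of the rule described above; the class names, the label hierarchy, and the “most restrictive label wins” policy are illustrative assumptions for this example, not the actual Microsoft Purview implementation.

```python
from dataclasses import dataclass
from enum import IntEnum


# Illustrative label hierarchy; real labels are defined by your Purview admins.
class SensitivityLabel(IntEnum):
    GENERAL = 1
    CONFIDENTIAL = 2
    HIGHLY_CONFIDENTIAL = 3


@dataclass
class Document:
    name: str
    label: SensitivityLabel


def label_for_generated_output(reference_docs: list[Document]) -> SensitivityLabel:
    """Hypothetical rule: generated output inherits the most restrictive
    label found among the documents the assistant drew from."""
    return max(doc.label for doc in reference_docs)


sources = [
    Document("q3-forecast.xlsx", SensitivityLabel.HIGHLY_CONFIDENTIAL),
    Document("team-notes.docx", SensitivityLabel.GENERAL),
]
print(label_for_generated_output(sources))  # SensitivityLabel.HIGHLY_CONFIDENTIAL
```

The practical point is that protections such as data loss prevention policies attach to the label, so content that inherits the label inherits those protections as well.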

Finally, our customers can govern their Copilot usage to comply with regulatory and code of conduct policies through audit logging, eDiscovery, data lifecycle management, and machine-learning based detection of policy violations.  

As we continue to innovate and provide new kinds of AI solutions, Microsoft will continue to offer industry-leading tools, transparency resources, and support for our customers in their AI journey, and remain steadfast in protecting our customers’ data. 

The post Protecting the data of our commercial and public sector customers in the AI era appeared first on The Microsoft Cloud Blog.

Embracing AI Transformation: How customers and partners are driving pragmatic innovation to achieve business outcomes with the Microsoft Cloud https://blogs.microsoft.com/blog/2024/01/29/embracing-ai-transformation-how-customers-and-partners-are-driving-pragmatic-innovation-to-achieve-business-outcomes-with-the-microsoft-cloud/ https://blogs.microsoft.com/blog/2024/01/29/embracing-ai-transformation-how-customers-and-partners-are-driving-pragmatic-innovation-to-achieve-business-outcomes-with-the-microsoft-cloud/#respond Mon, 29 Jan 2024 15:55:00 +0000 This past year was one of technology’s most exciting with the emergence of generative AI, as leaders everywhere considered the possibilities it represented for their organizations.

The post Embracing AI Transformation: How customers and partners are driving pragmatic innovation to achieve business outcomes with the Microsoft Cloud appeared first on The Microsoft Cloud Blog.

This past year was one of technology’s most exciting with the emergence of generative AI, as leaders everywhere considered the possibilities it represented for their organizations. Many recognized its value and are eager to continue innovating, while others are inspired by what it has unlocked and are seeking ways to adopt it.

At Microsoft, we are focused on developing responsible AI strategies grounded in pragmatic innovation and enabling AI Transformation for our customers. As I talk to customers and partners about the outcomes they are seeing — and rationalize those against Microsoft’s generative AI capabilities — we have identified four areas of opportunity for organizations to empower their AI Transformation: enriching employee experiences, reinventing customer engagement, reshaping business processes and bending the curve on innovation.

With these as a foundation, it becomes easier to see how to bring pragmatic AI innovation to life, and I am proud of the impact we have made with customers and partners around the world. From developing customer-focused AI and cloud services for millions across Europe and Africa with Vodafone, to empowering customers and employees with generative AI capabilities with Walmart, I look forward to what we will help you achieve in the year ahead.

Dentsu drives creativity and growth for brands, supported by Microsoft Copilot.
Enriching employee experiences and shaping the future of work with copilot technology

Bayer employees are collaborating better on worldwide research projects and saving time on daily tasks with Copilot for Microsoft 365, while Finnish company Elisa is helping knowledge workers across finance, sales and customer service streamline routine tasks. Banreservas is driving employee productivity and enhancing decision-making, and Hong Kong’s largest transportation companies — Cathay and MTR — are streamlining workflows, improving communications, and reducing time-consuming administrative tasks. Across professional services, KPMG has seen a 50% jump in employee productivity, Dentsu is saving hundreds of employees up to 30 minutes per day on creative visualization processes, and EY is making it easier to generate reports and access insights in near real-time with Copilot for Microsoft 365. In Malaysia, financial services organization PNB is saving employees time searching through documents and emails and AmBank employees are enhancing the quality and impact of their work. At Hargreaves Lansdown, financial advisers are using Copilot for Microsoft 365 and Teams to drive productivity and make meetings more inclusive. Avanade is helping sellers save time updating contact records and summarizing email threads with Copilot for Dynamics 365, while HSO Group, Vixxo, and 9altitudes are streamlining work for field and service teams.

Organizations are creating their own Generative AI assistants to help employees improve customer service.
Reinventing customer engagement with generative AI to deliver greater value and increased satisfaction

MECOMS is making it possible for utility customers to ask questions and get suggestions about how to reduce power consumption using Microsoft Fabric and copilot on their Power Pages portal. Schneider Electric has built a Resource Advisor copilot to equip customers with enhanced data analysis, visualization, decision support and performance optimization. California State University San Marcos is finding ways to better understand and personalize the student journey while driving engagement with parents and alumni using Dynamics 365 Customer Insights and Copilot for Dynamics 365. With Azure OpenAI Service, Adecco Group is bolstering its services and solutions to enable worker preparedness as generative AI reshapes the workforce, UiPath has already helped one of its insurance customers save over 90,000 hours through more efficient operations, and Providence has developed a solution for clinicians to respond to patient messages up to 35% faster. Organizations are building generative AI assistants to help employees save time, improve customer service and focus on more complex work, including Domino’s, LAQO and OCBC. Within a few weeks of introducing its copilot to personalize customer service, Atento has increased customer satisfaction by 30% while reducing operational errors by nearly 20%, and Turkey-based Setur is personalizing travel planning with a chatbot to customize responses in multiple languages for its 60,000 daily users. In the fashion industry, Coats Digital launched an AI assistant in six weeks to make customer onboarding easier. Greece-based ERGO Insurance partnered with EBO to provide 24/7 personalized assistance with its virtual agent, and H&R Block introduced AI Tax Assist to help individuals and small business owners file and manage their taxes confidently while saving costs.

Novo Nordisk is building out GitHub Copilot integration to decrease repetitive research and engineering tasks.
Reshaping business processes to uncover efficiencies, improve developer creativity and spur AI innovation

Siemens built its own industrial copilot to simplify virtual collaboration of design engineers and front-line workers, accelerate simulation times and reduce tasks from weeks to minutes. With help from Neudesic, Hanover Research designed a custom AI-powered research tool to streamline workflows and identify insights up to 10 times faster. With Microsoft Fabric, organizations like the London Stock Exchange Group and Milliman are reshaping how teams create more value from data insights, while Zeiss is streamlining analytics workflows to help teams make more customer-centric decisions. Volvo Group has saved more than 10,000 manual hours by launching a custom solution built with Azure AI to simplify document processing. By integrating GitHub Copilot, Carlsberg has significantly enhanced productivity across its development team; and Hover, SPH Media, Doctolib and CloudZero have improved their workflows within an agile and secure environment. Mastery Logistics Systems and Novo Nordisk are using GitHub Copilot to automate repetitive coding tasks for developers, while Intertech is pairing it with Azure OpenAI Service to enhance coding accuracy and reduce daily emails by 50%. Swiss AI-driven company Unique AG is helping financial industry clients reduce administrative work, speed up existing processes and improve IT support; and PwC is simplifying its audit process and increasing transparency for clients with Azure OpenAI Service. By leveraging Power Platform, including AI and Copilot features, Epiq has automated employee processes, saving over $500,000 in annual costs and 2,000 hours of work each month, PG&E is addressing up to 40% of help desk demands to save more than $1 million annually, and Nsure is building automations that reduce manual processing times by over 60% and costs by 50%. With security top of mind, WTW is using Microsoft Copilot for Security to accelerate its threat-hunting capabilities by making it possible for cyber teams to ask questions in natural language, while LTIMindtree is planning on using it to reduce training time and strengthen security analyst expertise.

VinBrain is harnessing Microsoft’s cutting-edge AI technologies to transform healthcare in Vietnam.
Bending the curve on innovation across industries with differentiated AI offerings

To make disaster response more efficient, nonprofit Team Rubicon is quickly identifying and engaging the right volunteers in the right locations with the help of Copilot for Dynamics 365. Netherlands-based TomTom is bringing the benefits of generative AI to the global automotive industry by developing an advanced AI-powered voice assistant to help drivers with tasks like navigation and temperature control. In Vietnam, VinBrain has developed one of the country’s first comprehensive AI-powered copilots to support medical professionals with enhanced screening and detection processes and encourage more meaningful doctor-patient interactions. Rockwell Automation is delivering industry-first capabilities with Azure OpenAI Service to accelerate time-to-market for customers building industrial automation systems. With a vision to democratize AI and reach millions of users, Perplexity.AI has brought its conversational answer engine to market in six months using Azure AI Studio. India’s biggest online fashion retailer, Myntra, is solving the open-ended search problem facing the industry by using generative AI to help shoppers figure out what they should wear based on occasion. In Japan, Aisin Corp has developed a generative AI app to empower people who are deaf or hard of hearing with tasks like navigation, communication and translation; and Canada-based startup Natural Reader is making education more accessible on-the-go for students with learning differences by improving AI voice quality with Azure AI. To solve one of the most complex engineering challenges — the design process for semiconductors — Synopsys is bringing in the power of generative AI to help engineering teams accelerate time-to-market.

As organizations continue to embrace AI Transformation, it is critical they develop clarity on how best to apply AI to meet their most pressing business needs. Microsoft is committed to helping our customers and partners accelerate pragmatic AI innovation and I am excited by the opportunities before us to enrich employee experiences, reinvent customer engagement, reshape business processes and bend the curve on innovation. As a technology partner of choice — from our differentiated copilot capabilities to our unparalleled partner ecosystem and unique co-innovation efforts with customers — we remain in service to your successful outcomes. We are also dedicated to preserving the trust we have built through our partnership approach, responsible AI solutions and commitments to protecting your data, privacy and IP. We believe this era of AI innovation allows us to live truer to our mission than ever before, and I look forward to continuing on this journey with you to help you achieve more.

The post Embracing AI Transformation: How customers and partners are driving pragmatic innovation to achieve business outcomes with the Microsoft Cloud appeared first on The Microsoft Cloud Blog.

Microsoft unveils expansion of AI for security and security for AI at Microsoft Ignite http://approjects.co.za/?big=en-us/security/blog/2023/11/15/microsoft-unveils-expansion-of-ai-for-security-and-security-for-ai-at-microsoft-ignite/ http://approjects.co.za/?big=en-us/security/blog/2023/11/15/microsoft-unveils-expansion-of-ai-for-security-and-security-for-ai-at-microsoft-ignite/#respond Wed, 15 Nov 2023 16:08:46 +0000 The increasing speed, scale, and sophistication of recent cyberattacks demand a new approach to security.

The post Microsoft unveils expansion of AI for security and security for AI at Microsoft Ignite appeared first on The Microsoft Cloud Blog.

The future of security with AI

The increasing speed, scale, and sophistication of recent cyberattacks demand a new approach to security. Traditional tools are no longer enough to keep pace with the threats posed by cybercriminals. In just two years, the number of password attacks detected by Microsoft has risen from 579 per second to more than 4,000 per second.1 On average, organizations use 80 security tools to manage their environment, resulting in security teams facing a data deluge, alert fatigue, and limited visibility across security solutions. Plus, the global cost of cybercrime is expected to reach $10.5 trillion by 2025, up from $3 trillion in 2015. Security teams face an asymmetric challenge: they must protect everything, while cyberattackers only need to find one weak point. And security teams must do this while facing regulatory complexity, a global talent shortage, and rampant fragmentation.

One of the advantages for security teams is their view of the data field—they know how the infrastructure, user posture, and applications are set up before a cyberattack begins. To further tip the scale in favor of cyberdefenders, Microsoft Security offers a large-scale data advantage—65 trillion daily signals, global threat intelligence expertise, monitoring of more than 300 cyberthreat groups, and insights on cyberattacker behaviors from more than 1 million customers and more than 15,000 partners.1

Our new generative AI solution—Microsoft Security Copilot—combined with our massive data advantage and end-to-end security, all built on the principles of Zero Trust, creates a flywheel of protection to change the asymmetry of the digital threat landscape and favor security teams in this new era of security.

To learn more about Microsoft Security’s vision for the future and the latest generative AI announcements and demos, watch the Microsoft Ignite keynote “The Future of Security with AI” presented by Charlie Bell, Executive Vice President, Microsoft Security, and me on Thursday, November 16, 2023, at 10:15 AM PT. 

Changing the paradigm with Microsoft Security Copilot

One of the biggest challenges in security is the lack of cybersecurity professionals. This is an urgent need given the three million unfilled positions in the field, with cyberthreats increasing in frequency and severity.2 

In a recent study to measure the productivity impact for “new in career” analysts, participants using Security Copilot demonstrated 44 percent more accurate responses and were 26 percent faster across all tasks.3 

According to the same study:

  • 86 percent reported that Security Copilot helped them improve the quality of their work. 
  • 83 percent stated that Security Copilot reduced the effort needed to complete the task. 
  • 86 percent said that Security Copilot made them more productive. 
  • 90 percent expressed their desire to use Security Copilot next time they do the same task. 

Check out the Security Copilot Early Access Program, now available to interested and qualified customers, which includes Microsoft Defender Threat Intelligence at no additional charge and adds speed and scale for scenarios like security posture management, incident investigation and response, security reporting, and more. For example, one early adopter from Willis Towers Watson (WTW) said, “I envision Microsoft Security Copilot as a change accelerator. The ability to do threat hunting at pace will mean that I’m able to reduce my mean time to investigate, and the faster I can do that, the better my security posture will become.” Keep reading for a full list of capabilities.

Introducing the industry’s first generative AI-powered unified security operations platform with built-in Copilot

Security operations teams struggle to manage disparate security toolsets from siloed technologies and apps. This challenge is only exacerbated given the scarcity of skilled security talent. And while organizations have been investing in traditional AI and machine learning to improve threat intelligence, deploying AI and machine learning comes with its unique challenges and its own shortage of data science talent. It’s time for a step-change in our industry, and thanks to generative AI, we can now close the talent gap for both security and data professionals. Securing an organization today requires an innovative approach: one that prevents, detects, and disrupts cyberattacks at machine speed, delivers simple, approachable, conversational experiences to help security operations center (SOC) teams move faster, and brings together all the security signals and threat intelligence currently stuck in disconnected tools. Today, we are thrilled to announce the next major step in this industry-defining vision: combining the power of leading solutions in security information and event management (SIEM), extended detection and response (XDR), and generative AI for security into the first unified security operations platform.

By bringing together Microsoft Sentinel, Microsoft Defender XDR (previously Microsoft 365 Defender), and Microsoft Security Copilot, security analysts now have a unified incident experience that streamlines triage and provides a complete, end-to-end view of threats across the digital estate. With a single set of automation rules and playbooks enriched with generative AI, coordinating response is now easier and quicker for analysts of every level. In addition, unified hunting now gives analysts the ability to query all SIEM and XDR data in one place to uncover cyberthreats and take appropriate remediation action. Customers interested in joining the preview of the unified security operations platform should contact their account team.

Further, Microsoft Security Copilot is natively embedded into the analyst experience, supporting both SIEM and XDR and equipping analysts with step-by-step guidance and automation for investigating and resolving incidents, without relying on data analysts. Complex tasks, such as analyzing malicious scripts or crafting Kusto Query Language (KQL) queries to hunt across data in Microsoft Sentinel and Defender XDR, can be accomplished simply by asking a question in natural language or accepting a suggestion from Security Copilot. If you need to update your chief information security officer (CISO) on an incident, you can now instantly generate a polished report that summarizes the investigation and the remediation actions that were taken to resolve it.
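For readers who want to see what such a hunting query looks like in practice, the sketch below runs the kind of KQL that a natural-language ask such as “which accounts had the most failed sign-ins in the last day” might translate into, using the azure-monitor-query Python SDK against a Log Analytics workspace. The workspace ID is a placeholder and the query is only an example; this does not show how Security Copilot itself generates or executes queries.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholder: supply your own Log Analytics workspace ID.
WORKSPACE_ID = "<workspace-guid>"

# Example of the KQL an analyst (or an assistant) might draft for this question.
KQL = """
SigninLogs
| where ResultType != "0"
| summarize FailedAttempts = count() by UserPrincipalName
| top 10 by FailedAttempts
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(days=1))

for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```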

To keep up with the speed of cyberattackers, the unified security operations platform catches cyberthreats at machine speed and protects your organization by automatically disrupting advanced attacks. We are extending this capability to act on third-party signals, for example with SAP signals and alerts. For SIEM customers who have SAP connected, attack disruption will automatically detect financial fraud techniques and disable the native SAP and connected Microsoft Entra account to prevent the cyberattacker from transferring any funds—with no SOC intervention. The attack disruption capabilities will be further strengthened by new deception capabilities in Microsoft Defender for Endpoint—which can now automatically generate authentic-looking decoys and lures, so you can entice cyberattackers with fake, valuable assets that will deliver high-confidence, early stage signal to the SOC and trigger automatic attack disruption even faster.

Lastly, we are building on the native XDR experience by including cloud workload signals and alerts from Microsoft Defender for Cloud—a leading cloud-native application protection platform (CNAPP)—so analysts can conduct investigations that span across their multicloud infrastructure (Microsoft Azure, Amazon Web Services, and Google Cloud Platform environments) and identities, email and collaboration tools, software as a service (SaaS) apps, and multiplatform endpoints—making Microsoft Defender XDR one of the most comprehensive native XDR platforms in the industry.

Customers who operate both SIEM and XDR can add Microsoft Sentinel into their Microsoft Defender portal experience easily, with no migration required. Existing Microsoft Sentinel customers can continue using the Azure portal. The unified security operations platform is now available in private preview and will move to public preview in 2024.

Expanding Copilot for data security, identity, device management, and more 

Security is a shared responsibility across teams, yet many don’t share the same tools or data—and they often don’t collaborate with one another. We are adding new capabilities and embedded experiences of Security Copilot across the Microsoft Security portfolio as part of the Early Access Program to empower all security and IT roles to detect and address cyberthreats at machine speed. And to enable all roles to protect against top security risks and drive operational efficiency, Microsoft Security Copilot now brings together signals across Microsoft Defender, Microsoft Defender for Cloud, Microsoft Sentinel, Microsoft Intune, Microsoft Entra, and Microsoft Purview into a single pane of glass.

New capabilities in Security Copilot creating a force multiplier for security and IT teams

Microsoft Purview: Data security and compliance teams review a multitude of complex and diverse alerts spread across multiple security tools, each alert containing a wealth of rich insights. To make data protection faster, more effective, and easier, Security Copilot is now embedded in Microsoft Purview, offering summarization capabilities directly within Microsoft Purview Data Loss Prevention, Microsoft Purview Insider Risk Management, Microsoft Purview eDiscovery, and Microsoft Purview Communication Compliance workflows, making sense of profuse and diverse data, accelerating investigation and response times, and enabling analysts at all levels to complete complex tasks with AI-powered intelligence at their fingertips. Additionally, with AI translator capabilities in eDiscovery, you can use natural language to define search queries, resulting in faster and more accurate search iterations and eliminating the need to use keyword query language. These new data security capabilities are also available now in the Microsoft Security Copilot standalone experience.

Microsoft Entra: Password-based attacks have increased dramatically in the last year, and new attack techniques are now trying to circumvent multifactor authentication. To strengthen your defenses against identity compromise, Security Copilot embedded in Microsoft Entra can assist in investigating identity risks and help with troubleshooting daily identity tasks, such as why a sign-in required multifactor authentication or why a user’s risk level increased. IT administrators can instantly get a risk summary, steps to remediate, and recommended guidance for each identity at risk, in natural language. Quickly get to the root of an issue for a sign-in with a summarized report of the most relevant information and context. Additionally, in Microsoft Entra ID Governance, admins can use Security Copilot to guide in the creation of a lifecycle workflow to streamline the process of creating and issuing user credentials and access rights. These new capabilities to summarize users and groups, sign-in logs, and high-risk users are also available now in the Microsoft Security Copilot standalone experience.

Microsoft Intune: The evolving device landscape is driving IT complexity and risk of endpoint vulnerabilities—and IT administrators play a critical security role in managing these devices and protecting organizational data. We are introducing Security Copilot embedded in Microsoft Intune in the coming weeks for select customers of the Early Access Program, marking a meaningful advancement in endpoint management and security. This experience offers unprecedented visibility across security data with full device context, provides real-time guidance when creating policies, and empowers security and IT teams to discover and remediate the root cause of device issues faster and easier. Now IT administrators and security analysts are empowered to drive better and informed outcomes with pre-deployment, AI-based guard rails to help them understand the impact of policy changes in their environment before applying them. With Copilot, they can save time and reduce complexity of gathering near real-time device, user, and app data and receive AI-driven recommendations to respond to threats, incidents, and vulnerabilities, fortifying endpoint security. 

Microsoft Defender for Cloud: Maintaining a strong cloud security posture is a challenge for cybersecurity teams, as they face siloed visibility into risks and vulnerabilities across the application lifecycle, due to the rise of cloud-native development and multicloud environments. With Security Copilot now embedded in Microsoft Defender for Cloud, security admins are empowered to identify critical concerns to resources faster with guided risk exploration that summarizes risks, enriched with contextual insights such as critical vulnerabilities, sensitive data, and lateral movement. To address the uncovered critical risks more efficiently, admins can use Security Copilot in Microsoft Defender for Cloud to guide remediation efforts and streamline the implementation of recommendations by generating recommendation summaries, step-by-step remediation actions, and scripts in a preferred language, and directly delegate remediation actions to key resource users. These new cloud security capabilities are also available now in the Microsoft Security Copilot standalone experience. 

Microsoft Defender for External Attack Surface Management (EASM): Keeping up with tracking assets and their vulnerabilities can be overwhelming for security teams, as it requires time, coordination, and research to understand which assets pose a risk to the organization. New Defender for EASM capabilities are available in the Security Copilot standalone experience and enable security teams to quickly gain insights into their external attack surface, regardless of where the assets are hosted, and feel confident in the outcomes. These capabilities provide security operations teams with a snapshot view of their external attack surface, help vulnerability managers understand if their external attack surface is impacted by a particular common vulnerability and exposure (CVE), and provide visibility into vulnerable critical and high priority CVEs to help teams know how pervasive they are to their assets, so they can prioritize remediation efforts.

Custom plugins to trusted third-party tools: Security Copilot provides more robust, enriched insight and guidance when it is integrated with a broader set of security and IT teams’ tools. To do so, Security Copilot must embrace a vast ecosystem of security partners. As part of this effort, we are excited to announce the latest integration now available to Security Copilot customers with ServiceNow. For customers who want to bring onboard their trusted security tools and integrate their own organizational data and applications, we’re also introducing a new set of custom plugins that will enable them to expand the reach of Security Copilot to new data and new capabilities.

Securing the use of generative AI for safeguarding your organization

As organizations quickly adopt generative AI, it is vital to have robust security measures in place to ensure safe and responsible use. This involves understanding how generative AI is being used, protecting the data that is being used or created by generative AI, and governing the use of AI. As generative AI apps become more popular, security teams need tools that secure both the AI applications and the data they interact with. In fact, 43 percent of organizations said lack of controls to detect and mitigate risk in AI is a top concern.4 Different AI applications pose various levels of risk, and organizations need the ability to monitor and control these generative AI apps with varying levels of protection.

Microsoft Defender: Microsoft Defender for Cloud Apps is expanding its discovery capabilities to help organizations gain visibility into the generative AI apps in use, provide extensive protection and control to block risky generative AI apps, and apply ready-to-use customizable policies to prevent data loss in AI prompts and AI responses. This new feature supports more than 400 generative AI apps, and offers an easy way to sift through low- versus high-risk apps. 

Microsoft Purview: New capabilities in Microsoft Purview help comprehensively secure and govern data in AI, including Microsoft Copilot and non-Microsoft generative AI applications. Customers can gain visibility into AI activity, including sensitive data usage in AI prompts, comprehensive protection with ready-to-use policies to protect data in AI prompts and responses, and compliance controls to help easily meet business and regulatory requirements. Microsoft Purview capabilities are integrated with Microsoft Copilot, starting with Copilot for Microsoft 365, strengthening the data security and compliance for Copilot for Microsoft 365.

Further, to enable customers to gain a better understanding of which AI applications are being used and how, we are announcing the preview of AI hub in Microsoft Purview. Microsoft Purview can provide organizations with an aggregated view of total prompts being sent to Copilot and the sensitive information included in those prompts. Organizations can also see an aggregated view of the number of users interacting with Copilot. And we are extending these capabilities to provide insights for more than 100 of the most commonly used consumer generative AI applications, such as ChatGPT, Bard, DALL-E, and more.

Expanding end-to-end security for comprehensive protection everywhere

Keeping up with daily protection requirements is a security challenge that can’t be ignored—and the struggle to stay ahead of cyberattackers and safeguard your organization’s data is why we’ve designed our security features to evolve with the digital threat landscape and provide comprehensive protection against cyberthreats.

Strengthen your code-to-cloud defenses with Microsoft Defender for Cloud. To cope with the complexity of multicloud environments and cloud-native applications, security teams need a comprehensive strategy that enables code-to-cloud defenses on all cloud deployments. For posture management, the preview of Defender for Cloud’s integration with Microsoft Entra Permissions Management helps you apply the least-privilege principle for cloud resources and shows the link between access permissions and potential vulnerabilities across Azure, AWS, and Google Cloud. Defender for Cloud also has an improved attack path analysis experience, which helps you predict and prevent complex cloud attacks—and provides more insights into your Kubernetes deployments across Amazon Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE) clusters, as well as API insights, to prioritize cloud risk remediation.

To strengthen security throughout the application lifecycle, the preview of the GitLab Ultimate integration gives you a clear view of your application security posture and simplifies code-to-cloud remediation workflows across all major developer platforms—GitHub, Azure DevOps, and GitLab within Defender for Cloud. Additionally, Defender for APIs is now generally available, offering machine learning-driven protection against API threats and agentless vulnerability assessments for container images in Microsoft Azure Container Registries. Defender for Cloud now offers a unified vulnerability assessment engine spanning all cloud workloads, powered by the strong capabilities of Microsoft Defender Vulnerability Management.

Leverage Microsoft Defender Threat Intelligence to elevate your threat intelligence. Available in Microsoft Defender XDR, Microsoft Defender Threat Intelligence offers valuable open-source intelligence and internet data sets found nowhere else. These capabilities now enhance Microsoft Defender products with crucial context around threat actors, tooling, and infrastructure at no additional cost to customers. Available in the Threat Intelligence blade of Defender XDR, Detonation Intelligence enables users to search, look up, and contextualize cyberthreats, as well as detonate URLs and view results, to quickly understand a malicious file or URL. Defender XDR customers can quickly submit an indicator of compromise (IoC) to immediately view the results. Vulnerability Profiles put intelligence collected from the Microsoft Threat Intelligence team about vulnerabilities all in one place. Profiles are updated when new information is discovered and contain a description, Common Vulnerability Scoring System (CVSS) scores, a priority score, exploits, and deep and dark web chatter observations.

Use Microsoft Purview to extend data protection capabilities across structured and unstructured data types. In the past, securing and governing sensitive data across these diverse elements of your digital estate would have required multiple providers, adding a heavy integration tax. But today, with Microsoft Purview, you can gain visibility across your entire data estate, secure your structured and unstructured data, and detect risks across clouds. Microsoft Purview’s labeling and classification capabilities are expanding beyond Microsoft 365, offering access controls for both structured and unstructured data types. Users will have the ability to discover, classify, and safeguard sensitive information hosted in structured databases such as Microsoft Azure SQL and Azure Data Lake Storage (ADLS)—also extending these capabilities into Amazon Simple Storage Service (S3) buckets.

Detect insider risk with Microsoft Purview Insider Risk Management, which offers ready-to-use risk indicators to detect critical insider risks in Azure, AWS, and SaaS applications, including Box, Dropbox, Google Drive, and GitHub. Admins with appropriate permissions will no longer need to manually cross-reference signals in these environments. They can now utilize the curated and preprocessed indicators to obtain a more holistic view of a potential insider incident.

Simplify access security with Microsoft Entra. Securing access points is critical and can be complex when using multiple providers for identity management, network security, and cloud security. With Microsoft Entra, you can centralize all your access controls together to more fully secure and protect your environment. Microsoft’s Security Service Edge solution is expanding with several new features.

  • By the end of 2023, Microsoft Entra Internet Access preview will include context-aware secure web gateway (SWG) capabilities for all internet apps and resources with web content filtering, Conditional Access controls, compliant network check, and source IP restoration.
  • Microsoft Entra Private Access for private apps and resources has extended protocol support so you can seamlessly transition from your traditional VPN to a modern Zero Trust Network Access (ZTNA) solution, and the ability to add multifactor authentication to all private apps for remote and on-premises users.
  • Now, with auto-enrollment into Microsoft Entra Conditional Access policies, you can enhance security posture and reduce complexity for securing access. You can also easily create and manage a passkey, a free phishing-resistant credential based on open standards, in the Microsoft Authenticator app for signing into Microsoft Entra ID-managed apps.
  • Promote enforcement of least-privilege access for cloud resources with new integrations for Microsoft Entra Permissions Management. Permissions Management has a new integration with ServiceNow that enables organizations to incorporate time-bound access permission requests to existing approval workflows in ServiceNow.

Unify, simplify, and delight users with the Microsoft Intune Suite. We’re adding three new solutions to the Intune Suite, available in February 2024. These solutions further unify critical endpoint management workloads in Intune to fortify device security posture, power better experiences, and simplify IT and security operations end-to-end. We will also be able to offer these solutions coupled with the existing Intune Suite capabilities to agencies and organizations of the Government Community Cloud (GCC) in March 2024.

  • Microsoft Cloud PKI offers a comprehensive, cloud-based public key infrastructure and certificate management solution to simply create, deploy, and manage certificates for authentication, Wi-Fi, and VPN endpoint scenarios.
  • Microsoft Intune Enterprise Application Management streamlines third-party app discovery, packaging, deployment, and updates via a secure enterprise catalog to help all workers stay current.
  • Microsoft Intune Advanced Analytics extends the Intune Suite anomaly detection capabilities and provides deep device data insights as well as battery health scoring for administrators to proactively power better, more secure user experiences and productivity improvements.

Partner opportunities and news

There are several partners participating in our engineer-led Security Copilot Partner Private Preview to validate usage scenarios and provide feedback on functionality, operations, and APIs to assist with extensibility. If you are joining us in person at Microsoft Ignite, watch the demos at the Customer Meet-up Hub, presented by Microsoft Intelligent Security Association (MISA) members sponsoring at Microsoft Ignite. And if you’re a partner interested in staying current, join the Security Copilot Partner Interest Community.

Join us in creating a more secure future

Embracing innovation has never been more important for an organization, not only with respect to today’s cyberthreats but also in anticipation of those to come. Recently, to create a more secure future, we launched the Secure Future Initiative—a new initiative to pursue our next generation of cybersecurity protection.

Microsoft Ignite 2023

Join Vasu Jakkal and Charlie Bell at Microsoft Ignite to watch “The Future of Security with AI” on November 16, 2023, at 10:15 AM PT.

AI is changing our world forever. It is empowering us to achieve the impossible and it will usher in a new era of security that favors security teams. Microsoft is privileged to be a leader in this effort and committed to a vision of security for all.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (formerly known as Twitter) (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Digital Defense Report 2023.

2Cybersecurity Workforce Study, ISC2. 2022.

3Microsoft Security Copilot randomized controlled trial conducted by Microsoft Office of the Chief Economist, November 2023.

4Data Security Index: Trends, insights, and strategies to secure data, Microsoft.

The post Microsoft unveils expansion of AI for security and security for AI at Microsoft Ignite appeared first on The Microsoft Cloud Blog.

Announcing Microsoft Secure Future Initiative to advance security engineering http://approjects.co.za/?big=en-us/security/blog/2023/11/02/announcing-microsoft-secure-future-initiative-to-advance-security-engineering/ http://approjects.co.za/?big=en-us/security/blog/2023/11/02/announcing-microsoft-secure-future-initiative-to-advance-security-engineering/#respond Thu, 02 Nov 2023 15:00:00 +0000 As I’m sure you’ve all seen, cyberattacks have grown rapidly and dangerously in recent years.

The post Announcing Microsoft Secure Future Initiative to advance security engineering appeared first on The Microsoft Cloud Blog.

Hi all,

As I’m sure you’ve all seen, cyberattacks have grown rapidly and dangerously in recent years. We now see daily headlines of major industrial disruption, attacks on medical services, and other critical aspects of our daily lives. The sheer speed, scale, and sophistication of the attacks we’re seeing are a reminder for our industry and the world of how advanced digital threats have become. As computing has evolved from packaged software to cloud services, from waterfall to agile development, and with the new advances in AI, we must also evolve how we do security.

At Microsoft, we have a unique responsibility and leading role to play in securing the future for our customers and our community. We have a long and proud history of delivering innovative and impactful products and services that have shaped the industry and transformed the lives of billions of people around the world. We have also been at the forefront of developing and adopting security best practices, standards and tools that have helped us protect our customers and ourselves from cyberthreats and risks. Our move to Zero Trust, multifactor authentication, modern device management, and enhanced telemetry and detections have driven an embedded security culture across our company.

Satya Nadella, Microsoft Chief Executive Officer; Rajesh Jha, Microsoft Executive Vice President, Experiences and Devices; Scott Guthrie, Microsoft Executive Vice President, Cloud and AI; and I have put significant thought into how we should anticipate and adapt to increasingly sophisticated cyberthreats. We have carefully considered what we see across Microsoft and what we have heard from customers, governments, and partners to identify our greatest opportunities to impact the future of security. As a result, we have committed to three specific areas of engineering advancement we will add to our journey of continually improving the built-in security of our products and platforms. We will focus on 1. transforming software development, 2. implementing new identity protections, and 3. driving faster vulnerability response.

These advances comprise what we’re calling the Secure Future Initiative. Collectively, they improve security for customers both in the near term and in the future, against cyberthreats we anticipate will increase over the horizon. We recognize that not all of you will be deeply involved in all of the advances we must make. After all, the first priority is security by default. But all of you will be engaged and, more importantly, your constant attention to security in everything you build and operate will be the source of continuous innovation for our collective secure future. Please read on, absorb the “what” and the “why,” and contribute your ideas on innovation. We are all security engineers.

First, we will transform the way we develop software with automation and AI so that we do our best work in delivering software that is secure by design, by default, in deployment, and in operation. Microsoft invented the Security Development Lifecycle (SDL) and made it a bedrock principle of software trust and engineering. We will evolve it to “dynamic SDL” (dSDL). This means we’re going to apply the concept of continuous integration and continuous delivery (CI/CD) to continuously integrate protections against emerging patterns as we code, test, deploy, and operate. Think of it as continuous integration and continuous security.

We will accelerate and automate threat modeling, deploy CodeQL for code analysis to 100 percent of commercial products, and continue to expand Microsoft’s use of memory safe languages (such as C#, Python, Java, and Rust), building security in at the language level and eliminating whole classes of traditional software vulnerability.
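As one small illustration of what “continuous integration and continuous security” can look like in a pipeline, the sketch below gates a build on static-analysis findings by reading a SARIF results file (the format emitted by CodeQL and many other scanners) and failing the job if any alerts are present. The file path and the zero-findings threshold are assumptions made for this example and do not describe Microsoft’s internal tooling.

```python
import json
import sys
from pathlib import Path

# Assumed output path produced by an earlier scan step in the pipeline.
RESULTS = Path("codeql-results.sarif")


def count_findings(sarif_path: Path) -> int:
    """Count results across all runs in a SARIF 2.1.0 file."""
    sarif = json.loads(sarif_path.read_text(encoding="utf-8"))
    return sum(len(run.get("results", [])) for run in sarif.get("runs", []))


def main() -> int:
    findings = count_findings(RESULTS)
    if findings:
        print(f"Security gate failed: {findings} finding(s) reported.")
        return 1  # a nonzero exit code fails the CI job
    print("Security gate passed: no findings.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```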

We must continue to enable customers with more secure defaults to ensure they have the best available protections that are active out-of-the-box. We all realize no enterprise has the luxury of jettisoning legacy infrastructure. At the same time, the security controls we embed in our products, such as multifactor authentication, must scale where our customers need them most to provide protection. We will implement our Azure tenant baseline controls (99 controls across nine security domains) by default across our internal tenants automatically. This will reduce engineering time spent on configuration management, ensure the highest security bar, and provide an adaptive model where we add capability based on new operational learning and emerging adversary threats. In addition to these defaults, we will ensure adherence and auto-remediation of settings in deployment. Our goal is to move to 100 percent auto-remediation without impacting service availability.
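The pattern behind default baselines with auto-remediation is, at its core, detect drift from the desired state and reapply it. The following Python sketch illustrates that loop in the abstract; the control names and the check and remediation functions are hypothetical and are not the actual Azure tenant baseline controls.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class BaselineControl:
    name: str
    is_compliant: Callable[[], bool]  # reads the current configuration
    remediate: Callable[[], None]     # reapplies the desired setting


def enforce(controls: list[BaselineControl]) -> list[str]:
    """Return the names of controls that had drifted and were reapplied."""
    remediated = []
    for control in controls:
        if not control.is_compliant():
            control.remediate()
            remediated.append(control.name)
    return remediated


# Hypothetical tenant state and controls, for illustration only.
state = {"mfa_required": False, "legacy_auth_blocked": True}

controls = [
    BaselineControl(
        "Require multifactor authentication",
        is_compliant=lambda: state["mfa_required"],
        remediate=lambda: state.update(mfa_required=True),
    ),
    BaselineControl(
        "Block legacy authentication",
        is_compliant=lambda: state["legacy_auth_blocked"],
        remediate=lambda: state.update(legacy_auth_blocked=True),
    ),
]

print(enforce(controls))  # ['Require multifactor authentication']
```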

One example from the past of secure defaults is widescale multifactor authentication adoption. Over the past year, we have learned a great deal as we made multifactor authentication on by default for new customers. Those learnings and our communications with customers helped pave the way for our introduction of wider multifactor authentication default policies for wider bands of customer tenants. By focusing on communications as well as engineering—explaining where we are focused on defaults and how customers benefit—we achieve more durable security for our customers. Multifactor authentication is just one area of defaults for us, but over the next year you will see us accelerate security defaults across the board, energized by our learnings and customer feedback. You will all be “customer zero” as we introduce these.

Second, we will extend what we have already created in identity to provide a unified and consistent way of managing and verifying the identities and access rights of our users, devices, and services, across all our products and platforms. Our goal is to make it even harder for identity-focused espionage and criminal operators to impersonate users. Microsoft has been a leader in developing cutting-edge standards and protocol work to defend against rising cyberattacks like token theft, adversary-in-the-middle attacks, and on-premises infrastructure compromise. We will enforce the use of standard identity libraries (such as Microsoft Authentication Library) across all of Microsoft, which implement advanced identity defenses like token binding, continuous access evaluation, advanced application attack detections, and additional identity logging support. Because these capabilities are critical for all applications our customers use, we are also making these advanced capabilities freely available to non-Microsoft application developers through these same libraries.
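For application developers, standardizing on an identity library largely means letting it handle token acquisition and caching rather than hand-rolling that logic. Below is a minimal sketch using the msal package for Python; the tenant, client ID, and secret are placeholders, and protections such as token binding and continuous access evaluation are negotiated by the library and the identity service rather than by anything in this snippet.

```python
import msal

# Placeholder values; supply your own app registration details.
TENANT_ID = "<tenant-guid>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<client-secret>"  # prefer a certificate or managed identity in production

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# MSAL handles the token request, caching, and renewal details.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

if "access_token" in result:
    print("Acquired an access token for Microsoft Graph.")
else:
    print("Token request failed:", result.get("error"), result.get("error_description"))
```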

To stay ahead of bad actors, we are moving identity signing keys to an integrated, hardened Azure HSM and confidential computing infrastructure. In this architecture, signing keys are not only encrypted at rest and in transit, but also during computational processes. Key rotation will also be automated, allowing high-frequency key replacement with no potential for human access whatsoever.

Lastly, we are continuing to push the envelope in vulnerability response and security updates for our cloud platforms. As a result of these efforts, we plan to cut the time it takes to mitigate cloud vulnerabilities by 50 percent. We are in a position to achieve this because of our long investment and learnings in automation, monitoring, safe deployment, and AI-driven tools and processes. We will also take a more public stance against third-party researchers being put under non-disclosure agreements by technology providers. Without full transparency on vulnerabilities, the security community cannot learn collectively—defending at scale requires a growth mindset. Microsoft is committed to transparency and will encourage every major cloud provider to adopt the same approach.

These advances are not independent or isolated, but interdependent. They will work together to create a more holistic and comprehensive security infrastructure that can address both current and future cyberthreats. They are also aligned and consistent with our company’s mission, vision, and values, and they support and enable our business goals and objectives. Over the coming months and year, you will see us announce milestones along the execution paths of the above.

As we enter the age of AI, it has never been more important for us to innovate, not only with respect to today’s cyberthreats but also in anticipation of those to come. We are confident making these changes will improve the security, availability and resilience of our systems as well as increase our speed of innovation. In the coming weeks, Rajesh, Scott, and I will be meeting with our teams to share more details about these changes and how they will affect our organization, our processes, and our deliverables. We will also solicit your feedback and input on how we can implement them effectively and efficiently. We want this to be a collaborative and transparent effort that involves all of you as key stakeholders and contributors.

Security is not just a technical problem, but a human one. It affects millions of people around the world who rely on our products and services to communicate, work, learn, and play. We have the talent, the passion, and the vision to make a positive impact on the world through our work.

We appreciate your attention and your dedication.

-Charlie, Rajesh, Scott

Learn more
Learn more about Microsoft’s Secure Future Initiative.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (formerly known as “Twitter”) (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Announcing Microsoft Secure Future Initiative to advance security engineering appeared first on The Microsoft Cloud Blog.

A new world of security: Microsoft’s Secure Future Initiative https://blogs.microsoft.com/on-the-issues/2023/11/02/secure-future-initiative-sfi-cybersecurity-cyberattacks/ https://blogs.microsoft.com/on-the-issues/2023/11/02/secure-future-initiative-sfi-cybersecurity-cyberattacks/#respond Thu, 02 Nov 2023 15:00:00 +0000 We’re launching today across the company a new initiative to pursue our next generation of cybersecurity protection – what we’re calling our Secure Future Initiative (SFI).

The post A new world of security: Microsoft’s Secure Future Initiative appeared first on The Microsoft Cloud Blog.

The past year has brought to the world an almost unparalleled and diverse array of technological change. Advances in artificial intelligence are accelerating innovation and reshaping the way societies interact and operate. At the same time, cybercriminals and nation-state attackers have unleashed opposing initiatives and innovations that threaten security and stability in communities and countries around the world.

In recent months, we’ve concluded within Microsoft that the increasing speed, scale, and sophistication of cyberattacks call for a new response. Therefore, we’re launching today across the company a new initiative to pursue our next generation of cybersecurity protection – what we’re calling our Secure Future Initiative (SFI).

This new initiative will bring together every part of Microsoft to advance cybersecurity protection. It will have three pillars, focused on AI-based cyber defenses, advances in fundamental software engineering, and advocacy for stronger application of international norms to protect civilians from cyber threats. Charlie Bell, our Executive Vice President for Microsoft Security, has already shared the Secure Future Initiative details with our engineering teams and what this action plan means for our software development practices.

I share below our perspective on the changes that have led us to take these new steps, as well as more information on each part of our Secure Future Initiative.

The changing threat landscape

In late May, we published information showing new nation-state cyber activity targeting critical infrastructure organizations across the United States. The activity was disconcerting not only because of its threat to civilians across the country, but because of the sophistication of the techniques involved. As we highlighted in May, the attacks involved sophisticated, patient, stealthy, well-resourced, and government-backed techniques to infect and undermine the integrity of computer networks on a long-term basis. We witnessed similar activities this summer targeting cloud services infrastructure, including at Microsoft.

These attacks highlight a fundamental attribute of the current threat landscape. Even as recent years have brought enormous improvements, we will need new and different steps to close the remaining cybersecurity gap. As we shared last month in our annual Microsoft Digital Defense Report, the implementation of well-developed cyber hygiene practices now protect effectively against a large majority of cyberattacks. But the best-resourced attackers have responded by pursuing their own innovations, and they are acting more aggressively and with even more sophistication than in the past.

Brazen nation-state actors have become more prolific in their cyber operations, conducting espionage, sabotage, destructive attacks, and influence operations against other countries and entities with more patience and persistence. Microsoft estimates that 40% of all nation-state attacks in the past two years have focused on critical infrastructure, with state-funded and sophisticated operators hacking into vital systems such as power grids, water systems, and health care facilities. In each of these sectors, the consequences of potential cyber disruption are obviously dire.

At the same time, improving protection has raised the barriers to entry for cybercriminals, but has enabled some market consolidation for a smaller but more pernicious group of sophisticated actors. Microsoft’s Digital Crimes Unit is tracking 123 sophisticated ransomware-as-a-service affiliates, which lock or steal data and then demand a payment for its return. Since September 2022, we estimate that ransomware attempts have increased by more than 200%. While firms with effective security can manage these threats, these attacks are becoming more frequent and complex, targeting smaller and more vulnerable organizations, including hospitals, schools, and local governments. More than 80% of successful ransomware attacks originate from unmanaged devices, highlighting the importance of expanding protective measures to every single digital device.

Today’s cyber threats emanate from well-funded operations and skilled hackers who employ the most advanced tools and techniques. Whether they work for geopolitical or financial motives, these nation states and criminal groups are constantly evolving their practices and expanding their targets, leaving no country, organization, individual, network, or device out of their sights. They don’t just compromise machines and networks; they pose serious risks to people and societies. They require a new response based on our ability to utilize our own resources and our most sophisticated technologies and practices.

AI-based cyber defense

The war in Ukraine has demonstrated the tech sector’s ability to develop cybersecurity defenses that are stronger than advanced offensive threats. Ukraine’s successful cyber defense has required a shared responsibility between the tech sector and the government, with support from the country’s allies. It is a testament to the coupling of public-sector leadership with corporate investments and to combining computing power with human ingenuity. As much as anything, it provides inspiration for what we can achieve at an even greater scale by harnessing the power of AI to better defend against new cyber threats.

As a company, we are committed to building an AI-based cyber shield that will protect customers and countries around the world. Our global network of AI-based datacenters and our use of advanced foundation AI models put us in a strong position to put AI to work advancing cybersecurity protection.

As part of our Secure Future Initiative, we will continue to accelerate this work on multiple fronts.

First, we are taking new steps to use AI to advance Microsoft’s threat intelligence. Our threat intelligence teams, including the Microsoft Threat Analysis Center (MTAC), are using advanced AI tools and techniques to detect and analyze cyber threats. We are extending these capabilities directly to customers, including through our Microsoft security technologies, which collect and analyze customer data from multiple sources.

One reason these AI advances are so important is because of their ability to address one of the world’s most pressing cybersecurity challenges. Ubiquitous devices and constant internet connections have created a vast sea of digital data, making it more difficult to detect cyberattacks. In a single day, Microsoft receives more than 65 trillion signals from devices and services around the world. Even if all 8 billion people on the planet could look together for evidence of cyberattacks, we could never keep up.

But AI is a game changer. While threat actors seek to hide their threats like a needle in a vast haystack of data, AI increasingly makes it possible to find the right needle even in a sea of needles. And coupled with a global network of datacenters, we are determined to use AI to detect threats at a speed that is as fast as the Internet itself.
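
To make the “needle in a sea of needles” idea concrete, here is a minimal, illustrative sketch of the general approach: an unsupervised model scores a large batch of security signals and surfaces the handful that look most anomalous for analyst review. The features, values, and thresholds here are hypothetical and do not represent Microsoft’s actual detection pipeline.

```python
# Illustrative sketch only: scoring a batch of security "signals" for anomalies
# with an unsupervised model. Feature names and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-signal features: [failed logins/hr, distinct source IPs, MB exfiltrated]
normal = rng.normal(loc=[2, 1, 5], scale=[1, 0.5, 2], size=(10_000, 3))
suspicious = rng.normal(loc=[40, 12, 300], scale=[5, 2, 50], size=(20, 3))
signals = np.vstack([normal, suspicious])

# The model isolates points that differ sharply from the mass of benign activity.
model = IsolationForest(contamination=0.005, random_state=0).fit(signals)
scores = model.decision_function(signals)   # lower score = more anomalous
flagged = np.argsort(scores)[:20]           # triage the 20 most anomalous signals
print(f"{len(flagged)} signals flagged for analyst review")
```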

Second, we are using AI as a game changer for all organizations to help defeat cyberattacks at machine speed. One of the world’s biggest cybersecurity challenges today is the shortage of trained cybersecurity professionals. With a global shortage of more than three million people, organizations need all the productivity they can muster from their cybersecurity workforce. Additionally, the speed, scale, and sophistication of attacks create an asymmetry that makes it hard for organizations to prevent and disrupt attacks at scale. Microsoft’s Security Copilot combines a large language model with a security-specific model informed by skills and insights from Microsoft’s threat intelligence. It generates natural language insights and recommendations from complex data, making analysts more effective and responsive, catching threats that might otherwise be missed, and helping organizations prevent and disrupt attacks at machine speed.
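
The general pattern behind an AI assistant of this kind can be sketched simply: ground a language model in structured alert data and threat intelligence, then ask it for an analyst-ready summary and recommended next steps. The sketch below is illustrative only; `call_llm` is a hypothetical stand-in for whatever model endpoint an organization uses, not Security Copilot’s API, and the alert contents are invented.

```python
# Minimal sketch of grounding a language model in security context
# to produce an analyst-ready summary. All names and data are hypothetical.
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: replace with a real model call in practice.
    return "[model response would appear here]"

alert = {
    "id": "INC-4821",
    "signals": ["impossible travel sign-in", "OAuth consent to unknown app"],
    "assets": ["user: j.doe", "tenant: contoso"],
}
threat_intel = ["Known phishing kit targets OAuth consent flows (hypothetical note)"]

prompt = (
    "You are assisting a SOC analyst. Summarize the incident, assess likely "
    "attacker intent, and recommend next steps.\n"
    f"Alert: {json.dumps(alert)}\n"
    f"Threat intel: {json.dumps(threat_intel)}"
)
print(call_llm(prompt))
```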

Another vital ingredient for success is combining these AI-driven advances with extended detection and response capabilities on endpoint devices. As noted above, today more than 80% of ransomware compromises originate from unmanaged or “bring-your-own” devices that employees use to access work-related systems and information. But once those devices are managed with a service like Microsoft Defender for Endpoint, AI detection techniques provide real-time protection that intercepts and defeats cyberattacks on computing endpoints like laptops, phones, and servers. Wartime advances in Ukraine have provided extensive opportunities to test and extend this protection, including the successful use of AI to identify and defeat Russian cyberattacks before any human detection.

Third, we are securing AI in our services based on our Responsible AI principles. We recognize that these new AI technologies must move forward with their own safety and security safeguards. That’s why we’re developing and deploying AI in our services based on our Responsible AI principles and practices. We are focused on evolving these practices to keep pace with the changes in the technology itself.

While most of our cybersecurity services protect consumers and organizations, we are also committed to building stronger AI-based protection for governments and countries. Just last week, we announced that we will spend $3.2 billion to extend our hyperscale cloud computing and AI infrastructure in Australia, including the development of the Microsoft-Australian Signals Directorate Cyber Shield (MACS). Developed in collaboration with this critical agency in the Australian Government, MACS will enhance our joint capability to identify, prevent, and respond to cyber threats. It’s a good indicator of where we need to take AI in the future: building more secure protection for countries around the world.

New engineering advances

In addition to new AI capabilities, a more secure future will require new advances in fundamental software engineering. That’s why Charlie Bell is sending our employees an email this morning, co-authored with his engineering colleagues Scott Guthrie and Rajesh Jha. As part of our Secure Future Initiative, it launches a new standard for security by advancing the way we design, build, test, and operate our technology.

You can read Charlie’s entire email here. In summary, it contains three key steps:

First, we will transform the way we develop software with automation and AI. The challenges of today’s cybersecurity threats and the opportunities created by generative AI have created an inflection point for secure software engineering. The steps Charlie is sharing with our engineers today represent the next evolutionary stage of the Security Development Lifecycle (SDL), which Microsoft invented in 2004. We will now evolve this to what we’re calling “dynamic SDL,” or dSDL. This will apply systematic processes to continuously integrate cybersecurity protection against emerging threat patterns as our engineers code, test, deploy, and operate our systems and services. As Charlie explains, we will couple this with other additional engineering measures, including AI-powered secure code analysis and the use of GitHub Copilot to audit and test source code against advanced threat scenarios.
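
As one small illustration of what continuous, automated security checks in the development loop can look like (this is not Microsoft’s dSDL tooling), the sketch below scans a repository’s Python sources for a couple of risky call patterns and fails the build when any are found; a real pipeline would rely on full analyzers and AI-assisted review rather than a short AST walk.

```python
# Toy illustration of an automated security gate in a build pipeline:
# flag a small set of risky call patterns in Python sources and fail the build.
import ast
import pathlib
import sys

RISKY_CALLS = {"eval", "exec"}  # illustrative subset of patterns to flag

def scan(path: pathlib.Path) -> list[str]:
    """Return findings for one source file; skip files that don't parse."""
    findings: list[str] = []
    try:
        tree = ast.parse(path.read_text(), filename=str(path))
    except (SyntaxError, UnicodeDecodeError):
        return findings
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(f"{path}:{node.lineno}: call to {node.func.id}()")
    return findings

if __name__ == "__main__":
    all_findings = [f for p in pathlib.Path(".").rglob("*.py") for f in scan(p)]
    print("\n".join(all_findings) or "no findings")
    sys.exit(1 if all_findings else 0)  # non-zero exit fails the build
```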

As part of this process, over the next year we will enable more secure default settings for multifactor authentication (MFA) out of the box. This will expand our current default policies to a wider band of customer services, with a focus on where customers need this protection the most. We are keenly sensitive to the impact of such changes on legacy computing infrastructure, so we will pair the new engineering work with extensive communications that explain where these defaults will apply and the security benefits they will create.
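
For tenant administrators who want to see where MFA is already required today, a hedged sketch along the following lines can audit conditional access policies through the Microsoft Graph API. Token acquisition is omitted (a real script would use MSAL with the Policy.Read.All permission), and field handling is simplified.

```python
# Sketch: list conditional access policies and note which require MFA.
# Assumes an access token with Policy.Read.All has already been acquired.
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"
token = "<access token acquired via MSAL>"  # placeholder, not a real credential

resp = requests.get(GRAPH_URL, headers={"Authorization": f"Bearer {token}"}, timeout=30)
resp.raise_for_status()

for policy in resp.json().get("value", []):
    controls = (policy.get("grantControls") or {}).get("builtInControls", [])
    requires_mfa = "mfa" in controls
    print(f"{policy['displayName']:50} state={policy['state']:10} mfa={requires_mfa}")
```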

Second, we will strengthen identity protection against highly sophisticated attacks. Identity-based threats like password attacks have increased ten-fold during the past year, with nation-states and cybercriminals developing more sophisticated techniques to steal and use login credentials. As Charlie explains, we will protect against these changing threats by applying our most advanced identity protection through a unified and consistent process that will manage and verify the identities and access rights of our users, devices, and services across all our products and platforms. We will also make these advanced capabilities freely available to non-Microsoft application developers.
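
At its core, unified identity protection means that every request’s token is verified for a valid signature, issuer, audience, and expiry before access is granted. The sketch below shows that verification step in minimal form using the PyJWT library; the JWKS URL, issuer, and audience values are placeholders for a real tenant’s configuration, not Microsoft defaults.

```python
# Minimal sketch of access-token verification before granting access.
# URLs and identifiers below are hypothetical placeholders.
import jwt  # PyJWT

JWKS_URL = "https://login.example.com/common/discovery/v2.0/keys"  # placeholder
EXPECTED_ISSUER = "https://login.example.com/<tenant-id>/v2.0"     # placeholder
EXPECTED_AUDIENCE = "api://my-protected-api"                       # placeholder

def verify_token(token: str) -> dict:
    """Return the token's claims if signature, issuer, audience, and expiry all check out."""
    signing_key = jwt.PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    # jwt.decode raises if any of the checks fail.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=EXPECTED_AUDIENCE,
        issuer=EXPECTED_ISSUER,
    )
```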

As part of this initiative, we will also migrate to a new and fully automated consumer and enterprise key management system, with an architecture designed to ensure that keys remain inaccessible even when underlying processes may be compromised. This will build upon our confidential computing architecture and the use of hardware security modules (HSMs) that store and protect keys in hardware, with data encrypted at rest, in transit, and during computation.
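
A simplified way to picture what such a key management system builds on is the envelope-encryption pattern: data is encrypted with a short-lived data key, and only a wrapped copy of that key is persisted, while the key-encryption key never leaves the HSM. The sketch below is illustrative; `wrap_key_in_hsm` is a hypothetical placeholder for an HSM-backed wrap operation, not a real key vault call.

```python
# Illustrative envelope-encryption sketch: the data key is ephemeral and only
# its wrapped form is stored; the key-encryption key stays inside the HSM.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_key_in_hsm(data_key: bytes) -> bytes:
    # Hypothetical placeholder: a real system would call an HSM-backed
    # wrap operation and never expose the key-encryption key.
    return b"<wrapped:" + data_key.hex().encode() + b">"

def encrypt_record(plaintext: bytes) -> dict:
    data_key = AESGCM.generate_key(bit_length=256)   # ephemeral data-encryption key
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)
    wrapped = wrap_key_in_hsm(data_key)              # only the wrapped key is persisted
    del data_key                                     # drop the plaintext key promptly
    return {"nonce": nonce, "ciphertext": ciphertext, "wrapped_key": wrapped}

record = encrypt_record(b"customer data at rest")
print(record["wrapped_key"])
```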

Third, we are pushing the envelope in vulnerability response and security updates for our cloud platforms. We plan to cut the time it takes to mitigate cloud vulnerabilities by 50%. We will also encourage more transparent and consistent reporting across the tech sector.

We no doubt will add other engineering and software development practices in the months and years ahead, based on learning and feedback from these efforts. Like Trustworthy Computing more than two decades ago, our SFI initiatives will bring together people and groups across Microsoft to evaluate and innovate across the cybersecurity landscape.

Stronger application of international norms

Finally, we believe that stronger AI defenses and engineering advances need to be combined with a third critical component – the stronger application of international norms in cyberspace.

In 2017, we called for a Digital Geneva Convention, a set of principles and norms that would govern the behavior of states and non-state actors in cyberspace. We argued that we needed to enforce and augment the norms needed to protect civilians in cyberspace from a broadening array of cyberthreats. In the six years since that call, the tech sector and governments have taken numerous steps forward in this space, and the precise nature of what we need has evolved. But in spirit and at its heart, I believe the case for a Digital Geneva Convention is stronger than ever.

The essence of the Geneva Convention has always been the protection of innocent civilians. What we need today for cyberspace is not a single convention or treaty but rather a stronger, broader public commitment by the community of nations to stand more resolutely against cyberattacks on civilians and the infrastructure on which we all depend. Fundamentally, we need renewed efforts that unite governments, the private sector, and civil society to advance international norms on two fronts. We will commit Microsoft’s teams around the world to help advocate for and support these efforts.

First, we need to stand together more broadly and publicly to endorse and reinforce the key norms that provide the red lines no government should cross.

We should all abhor determined nation-state efforts that seek to install malware or create or exploit other cybersecurity weaknesses in the networks of critical infrastructure providers. These bear no connection to the espionage efforts that governments have pursued for centuries and instead appear designed to threaten the lives of innocent civilians in a future crisis or conflict. If the principles of the Geneva Convention are to have continued vitality in the 21st century, the international community must reinforce a clear and bright red line that places this type of conduct squarely off limits.

Therefore, all states should commit publicly that they will not plant software vulnerabilities in the networks of critical infrastructure providers such as energy, water, food, medical care, or other providers. They should also commit that they will not permit any persons within their territory or jurisdiction to engage in cybercriminal operations that target critical infrastructure.

Similarly, the past year has brought increasing nation-state efforts to target cloud services, either directly or indirectly, to gain access to sensitive data, disrupt critical systems, or spread misinformation and propaganda. Cloud services themselves have become a critical piece of support for every aspect of our societies, including reliable water, food, energy, medical care, information, and other essentials.

For these reasons, states should recognize cloud services as critical infrastructure, with protection against attack under international law.

This should lead to three related commitments:

- States should not engage in or allow any persons within their territory or jurisdiction to engage in cyber operations that would compromise the security, integrity, or confidentiality of cloud services.
- States should not indiscriminately compromise the security of cloud services for the purposes of espionage.
- States should construct cyber operations to avoid imposing costs on those who are not the target of operations.

Second, we need governments to do more together to foster greater accountability for nation states that cross these red lines. The year has not been lacking in hard proof of nation-state actions that violate these norms. What we need now is the type of strong, public, multilateral, and unified attributions from governments that will hold these states accountable and discourage them from repeating the misconduct.

Tech companies and the private sector play a major role in cybersecurity protection, and we are committed to new steps and stronger action. But especially when it comes to nation-state activity, cybersecurity is a shared responsibility. And just as tech companies need to do more, governments will need to do more as well. If we can all come together, we can take the types of steps that will give the world what it deserves – a more secure future.
