Thomas Hansen, Author at The Microsoft Cloud Blog
http://approjects.co.za/?big=en-us/microsoft-cloud/blog

Why cloud migration is key to realizing AI value in financial services
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/financial-services/2026/03/30/why-cloud-migration-is-key-to-realizing-ai-value-in-financial-services/
Mon, 30 Mar 2026 16:00:00 +0000

Financial services leaders modernize with Microsoft Cloud to build AI-first, secure, compliant foundations for Frontier Firms.

The post Why cloud migration is key to realizing AI value in financial services appeared first on The Microsoft Cloud Blog.

For years, the merits of digital transformation have been debated in financial services. The benefits of migrating to modern cloud platforms have always been clear, but many firms have been slow to give up the legacy systems that long served as their operational backbones, often with good reason. However, with the advent of game-changing new AI capabilities, the choice to stick with older architectures becomes riskier by the day.

Across banking, capital markets, and insurance, some of the fastest-moving institutions are not simply “adopting AI.” They are becoming Frontier Firms: AI-powered organizations built around human-agent collaboration. In a sector where the cost of error is high, financial services is emerging as an early proving ground for the Frontier Firm model.

The Microsoft 2025 Work Trend Index highlights a widening AI divide. While many organizations remain stuck in pilot mode, Frontier Firms are scaling agentic AI across their operations.

Our work with financial services leaders worldwide shows a clear pattern. The winners in the next generation of innovation will be those that combine human judgment with AI and agents, without compromising security, compliance, or customer trust. Critically, these advantages are best enabled through migration to a modern cloud foundation that can scale AI responsibly and reliably.

The crossroad: Modernize or let legacy debt grow?

Legacy systems have powered financial services for decades. Yet the very qualities that once made them indispensable—custom integrations, tightly coupled architectures, and deeply embedded processes—now create friction and fragility. Increasingly, they can be expensive to maintain, slow to change, and difficult to secure end-to-end. Worse, they can inherently constrain data access across the business, which limits advanced analytics and AI from delivering full value in key areas like customer engagement, fraud prevention, credit decisions, underwriting, and financial crime.

In many institutions, this accumulated technical debt is, in effect, an understated balance-sheet liability. It can increase operational overhead, complicate resilience planning, and broaden the cyber-attack surface. At the same time, regulators are demanding that firms prove stronger controls while, competitively, digital-native challengers are showing what’s possible when technology is designed for continuous change.

Modernization can help answer many of these challenges by helping position firms to gain competitive advantages that go well beyond cost efficiency. As workloads become increasingly cloud-native (in other words, designed to be built, updated, and scaled continuously in the cloud rather than tied to legacy infrastructure), organizations can launch new services faster, respond with agility, and use AI as part of everyday operations.

Waiting to migrate can increase risk and cost

A variety of factors are converging to increase the urgency of modernizing.

  • Regulatory pressure is growing. Requirements for operational resilience, third-party risk oversight, data governance, and AI accountability are becoming more explicit and more enforceable. In Europe, the Digital Operational Resilience Act (DORA) raises the bar on stress testing, incident reporting, and information and communication technology (ICT) governance. In parallel, the European Union AI Act introduces demanding expectations for high-risk AI, including transparency, explainability, and bias mitigation. Globally, frameworks shaped by Basel guidance and securities regulators continue to push for stronger risk management, auditability, and controls across financial operations.
  • Customer expectations are becoming non-negotiable. “Digital-first” now means more than building a polished mobile app. It means enabling instant transactions, proactive service, and personalized guidance—delivered consistently across channels. Doing all this at scale means that data must move securely and quickly, products should evolve continuously, and controls must be embedded rather than bolted on.
  • The threat landscape is intensifying. Threat actors are using automation and AI to increase both the scale and sophistication of attacks. In a legacy environment, security improvements often arrive as point solutions, unevenly applied and hard to validate. Cloud architectures, implemented with the right governance, help enable consistent identity controls, continuous monitoring, and policy-based protection that can be audited and improved over time.

Migration as a lever for innovation

Migration is too often framed as a technology initiative. For business and risk leaders, the more useful long-term view is to regard it as a control and value strategy: a way to embed governance into the operating fabric of the firm.

This is why many transformation leaders manage cloud adoption as a sequence rather than a single initiative, with a pathway from rehosting (“lift-and-shift”) through optimization and ultimately to AI acceleration. In this framing, modernization is not the finish line; it is the first step of compounding advantage.

Cloud migration, when managed well, can support a compliance‑by‑design approach, by which policy, identity, and data protections are consistently enforced. It can strengthen operational resilience through architectures that are built for redundancy, automated recovery, and continuous validation. And it can create an innovation pathway by making agentic AI practical to deploy and manage.

The AI-first divide: Cloud as operating model

As we see with Frontier Firms in financial services, innovation leaders tend to treat cloud architecture as more than an infrastructure choice. They use it as an operating model to standardize controls, build reusable platforms, and design processes that are increasingly AI-operated but human-led. The payoff can show up in faster deployment cycles, a lower cost per transaction, and predictive insights that make customer experiences more personal and operations more resilient.

Reaching that maturity typically requires progress across four transformation engines:

  • Infrastructure modernization
  • Legacy systems migration
  • Systems modernization (including new business systems)
  • Data modernization with AI integration

Financial services firms face stricter scrutiny than most industries, so the differentiator is not speed alone; it’s the ability to sustain speed while continuously demonstrating security, compliance, and control effectiveness.

We see this in practice across the industry. For example, UBS, following its acquisition of Credit Suisse, migrated a mission‑critical records platform from mainframe to a cloud‑native service on Microsoft Azure, reducing total cost of ownership by nearly 60% and improving its ability to meet regulatory demands. After LSEG migrated its high-volume, mission-critical Autex Trade Route (ATR) trading network from on-premises to Azure, the gains in scalability and resilience helped it absorb a sudden 400% surge in trading volumes with zero incidents. And the National Bank of Greece modernized document processing to improve accuracy and enable faster, more digital customer journeys. The common thread is not a single tool or model; it’s a cloud foundation that supports governed data, resilient operations, and repeatable innovation.

Turning migration into long-term value

For many firms, the hardest part of migration is not the technology; it’s making the journey auditable, repeatable, and aligned to risk appetite. That’s why a structured approach matters.

The Microsoft Cloud Adoption Framework, tailored for financial services, is designed to help institutions align cloud modernization to business outcomes while addressing the governance realities of the industry: data sovereignty expectations, operational resilience, and security-by-design. Importantly, cloud migration need not undermine data sovereignty; done right, migration strengthens locality, control, and compliance through governed architectures.
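To make the idea of locality enforced through governed architecture concrete, regions can be restricted with policy-as-code. The fragment below is a sketch modeled on Azure Policy's well-known "Allowed locations" pattern; the display name and parameter name are illustrative, not taken from any specific Microsoft template:

```json
{
  "properties": {
    "displayName": "Restrict resource locations for data residency",
    "policyType": "Custom",
    "mode": "All",
    "parameters": {
      "allowedLocations": {
        "type": "Array",
        "metadata": {
          "description": "Regions permitted by the firm's residency policy"
        }
      }
    },
    "policyRule": {
      "if": {
        "not": {
          "field": "location",
          "in": "[parameters('allowedLocations')]"
        }
      },
      "then": { "effect": "deny" }
    }
  }
}
```

Assigned at a management-group scope, a definition like this turns a residency requirement into an automatically enforced, auditable control rather than a manual review step.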

In practice, migration means helping businesses to build a compliant foundation, innovate responsibly, and maintain continuous control visibility as they scale. Microsoft supports this with financial-services-ready architectures, built-in governance and security capabilities, and a broad set of certifications and controls. Just as importantly, we work closely with customers and regulators globally to help ensure that cloud adoption can be evidenced properly in terms of risk reduction, resilience, and measurable operating improvement.

Trustworthy AI starts with the cloud foundation

Boards and regulators are right to focus on AI governance. Generative AI, agentic systems, and intelligent automation can improve productivity and customer outcomes, but only when they operate on governed data, with strong identity controls, clear lineage, and auditable policies. Those prerequisites are difficult to achieve in fragmented legacy environments.

Cloud migration creates the conditions for AI to be adopted responsibly, with modern data platforms and pipelines, elastic compute for experimentation and scale, consistent policy enforcement, and continuous monitoring.

To help institutions navigate migration with confidence, Microsoft combines a financial-services-tailored methodology with practical tooling and built-in governance. The Cloud Adoption Framework for financial services provides a proven, risk-aligned approach to planning and executing secure migrations. Azure Migrate and the Azure cloud migration and modernization programs help accelerate discovery, modernization, and execution with guidance and incentives. And capabilities like Microsoft Purview and Microsoft Defender for Cloud help establish compliance guardrails and security posture management from day one.

Lead the next generation with cloud

Migration is not the end state of digital transformation. It is the foundation for Frontier transformation, one which can enable firms to innovate faster, demonstrate stronger controls, and adapt quickly to new demands and opportunities.

The firms that lead the next generation of financial services will not be those that move the fastest in a single quarter. They will be the ones that modernize with technology that is durable, designed for operational resilience and evidence-based governance, and that makes innovation repeatable. Cloud migration is the inflection point where these powerful advantages become possible.

Learn more


AI for nuclear energy: Powering an intelligent, resilient future
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/energy-and-resources/2026/03/24/ai-for-nuclear-energy-powering-an-intelligent-resilient-future/
Tue, 24 Mar 2026 15:00:00 +0000

AI and digital twins are helping nuclear developers accelerate permitting, design, and operations. Discover how Microsoft and NVIDIA are enabling faster, safer delivery of carbon-free power with an AI-driven digital ecosystem on Azure.

The post AI for nuclear energy: Powering an intelligent, resilient future appeared first on The Microsoft Cloud Blog.

The world is racing to meet a historic surge in power demand with an infrastructure pipeline built for the analog age. Driven by the exponential expansion of digital technologies and the reindustrialization of supply chains, the mandate for always-on, carbon-free power is urgent and absolute. Nuclear energy is the essential backbone for this future, but the industry remains trapped in a delivery bottleneck. Before a shovel even hits the dirt, critical projects are slowed by highly customized engineering, fragmented data, and mountains of manual regulatory review.

That is where AI comes in. To break the infrastructure bottleneck and shift the industry from ambition to delivery, Microsoft is announcing an AI for nuclear collaboration with NVIDIA, to provide end-to-end tools that streamline permitting, accelerate design, and optimize operations across the industry.

This set of technologies brings disciplined engineering to the entire lifecycle of a nuclear plant—spanning site permitting, design, construction, and continuous operations. By enabling these capabilities within a connected, AI-powered foundation, we are empowering energy developers to make highly complex work repeatable, traceable, secure, and predictable—slashing development timelines and eliminating rework without sacrificing safety.

The digital foundation for nuclear at scale

The only thing that may be more complex than building a nuclear plant is designing and permitting one. Permitting alone can take years, cost hundreds of millions of dollars, and involve an immense amount of data processing and reporting. It’s not a lack of need, knowledge, or even willingness that’s holding development back, but rather the inability to progress efficiently and consistently through rigorous permitting and development processes.

Engineers can spend thousands of hours drafting, cross-referencing, formatting, searching, reviewing, and reworking materials. They have to identify and fix inconsistencies across tens of thousands of pages. It is little wonder that plants have been notorious for construction delays and cost overruns.
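To make that cross-referencing burden concrete, here is a minimal sketch of the kind of consistency check an AI documentation pipeline automates: extracting parameter values from free text and flagging documents that disagree. The document names, parameter, and values are hypothetical, and a real system would use language models rather than a single regular expression:

```python
import re
from collections import defaultdict

def extract_specs(doc_name, text):
    """Pull 'parameter = value unit' statements out of free text.
    The pattern is deliberately simple; real submissions need NLP."""
    pattern = re.compile(r"(\w[\w ]*?)\s*[:=]\s*([\d.]+)\s*([\w/]+)")
    return [(doc_name, m.group(1).strip().lower(), float(m.group(2)), m.group(3))
            for m in pattern.finditer(text)]

def find_conflicts(docs):
    """Group extracted values by (parameter, unit) and flag disagreements."""
    values = defaultdict(set)
    sources = defaultdict(list)
    for name, text in docs.items():
        for doc, param, value, unit in extract_specs(name, text):
            values[(param, unit)].add(value)
            sources[(param, unit)].append((doc, value))
    # Keep only parameters whose value differs between documents.
    return {k: v for k, v in sources.items() if len(values[k]) > 1}

# Hypothetical excerpts from two submission documents:
docs = {
    "safety_report.txt": "coolant flow rate = 410.5 kg/s at nominal power",
    "design_basis.txt":  "coolant flow rate = 412.0 kg/s per loop",
}
conflicts = find_conflicts(docs)
```

Scaled across tens of thousands of pages, this is exactly the class of inconsistency hunting that consumes engineering hours and that AI assistance compresses.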

To break this infrastructure bottleneck, we need to move away from highly customized engineering towards repeatable, reference-based delivery—while maintaining regulatory standards and engineering accountability.

With AI, we can identify tiny documentation inconsistencies and resolve them quickly. By unifying data and simulation across the lifecycle, we ensure complex work remains:

  • Traceable: Every engineering decision is digitally linked to the evidence and regulations that back it up.
  • Audit-Ready: The system keeps a perfect “paper trail,” ensuring that regulators can verify safety instantly.
  • Secure: High-level intelligence is applied within a governed, protected environment.
  • Predictable: High-fidelity simulations map time and cost, catching delays before they happen in the real world.

This isn’t just about speed; it’s about trust. Engineers and regulators are freed to focus on what matters most: building a safe, secure, high-capacity, carbon-free power source that’s on-time and on-budget.

Here is how AI and Digital Twins can carry a project from the initial phases to efficient operations:

  • Design and engineering: Digital Twins and high-fidelity simulations enable faster iteration. Engineers can reuse proven patterns and instantly see how a tiny design change impacts the entire model, creating a validated plan before breaking ground.
  • Licensing and permitting: Generative AI handles the heavy lifting of document drafting and gap analysis. It unifies all project information, ensuring comprehensive applications aligned with historical permits. This allows expert regulators to focus their time on safety judgments rather than reconciling thousands of pages of text.
  • Construction and delivery: While traditional 3D models only map physical space, 4D (time scheduling) and 5D (cost tracking) simulations can virtually construct the plant before shovels hit the dirt. AI and Digital Twins allow developers to track physical progress against the digital plan in real-time, catching potential delays and preventing the schedule collisions that lead to expensive rework.
  • Operations and maintenance: AI-powered sensors and operational digital twins detect anomalies early, ensuring higher uptime and predictive maintenance that keeps the grid stable with human operators firmly in control.

By unifying data, traceability, and simulation across phases, AI accelerates design validation with high-fidelity 3D models and Digital Twins, improves licensing consistency through AI-assisted document workflows, and connects design assumptions to operational performance—giving operators, regulators, and stakeholders clearer, continuous visibility.
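The 4D/5D tracking described above typically reduces to earned-value arithmetic: comparing planned value, earned value, and actual cost to catch schedule and budget drift early. A minimal sketch with hypothetical work-package numbers:

```python
def earned_value(planned_cost, pct_planned, pct_complete, actual_cost):
    """Standard earned-value metrics from 4D/5D tracking data.
    planned_cost: budget at completion for the work package
    pct_planned:  share of work scheduled to be done by now (0-1)
    pct_complete: share of work actually done by now (0-1)
    actual_cost:  money spent so far
    """
    pv = planned_cost * pct_planned   # planned value
    ev = planned_cost * pct_complete  # earned value
    return {
        "PV": pv,
        "EV": ev,
        "SV": ev - pv,               # schedule variance (<0 = behind)
        "CV": ev - actual_cost,      # cost variance (<0 = over budget)
        "SPI": ev / pv,              # schedule performance index
        "CPI": ev / actual_cost,     # cost performance index
    }

# Hypothetical containment-structure work package:
metrics = earned_value(planned_cost=50_000_000, pct_planned=0.5,
                       pct_complete=0.375, actual_cost=20_000_000)
```

An SPI or CPI below 1.0 is the early-warning signal that lets planners intervene in the digital model before delays and overruns materialize on site.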

Accelerating delivery: How Aalo Atomics, Idaho National Labs, and Southern Nuclear are deploying AI for nuclear

The proof is in the progress. Our collaboration is already changing the pace of nuclear delivery.

Aalo Atomics

Aalo Atomics has reduced the time-intensive permitting process by 92% using the Microsoft Generative AI for Permitting solution, saving an estimated $80 million a year. For Aalo, the value of the Microsoft and NVIDIA collaboration isn’t just speed—it’s confidence.

“Two things matter most: enterprise-scale complexity and mission-critical reliability. We’re deploying something complex at a scale only a company like Microsoft really understands. There’s no room for anything less than proven reliability.”

—Yasir Arafat, Chief Technology Officer, Aalo Atomics

Southern Nuclear

Southern Nuclear has developed and deployed agents using Microsoft Copilot across its fleet, including engineering and licensing, to improve consistency, reuse knowledge faster, and support better decision-making in key workstreams.

Idaho National Laboratory

In the public sector, and specifically the United States federal government, Idaho National Laboratory (INL) has become an early adopter of AI for nuclear technology. By using the AI capabilities to automate the assembly of complex engineering and safety analysis reports, INL is streamlining the review process and creating standard methodologies for regulators to adopt these tools safely, further speeding deployment.

Expanding the ecosystem: How Everstar and Atomic Canyon are operationalizing AI for nuclear on Microsoft Azure

Microsoft is actively expanding this secure ecosystem. Everstar—an NVIDIA Inception startup—brings domain-specific AI for nuclear to Azure to modernize how the industry manages project workflows and governed data pipelines.

“The nuclear industry has been bottlenecked by documentation burden and regulatory complexity for decades. This partnership means our customers get the secure, scalable cloud deployments they demand. It’s a significant step toward making nuclear power fast, safe, and unstoppable.”

—Kevin Kong, Chief Executive Officer, Everstar

We are also excited to highlight Atomic Canyon, whose Neutron platform is now available in the Microsoft Marketplace, allowing nuclear developers to deploy these capabilities with consistency and control through trusted procurement pathways.

Progress at the pace this moment requires

AI is enabling the energy industry to deliver more power, faster, and safely. This Microsoft and NVIDIA collaboration provides the path to do exactly that for advanced developers, owners, and operators. By turning fragmented, high-variance workflows into governed, auditable systems, we can compress timelines without compromising rigor. By unifying data, simulation, and evidence across design, permitting, construction, and operations, we are accelerating the deployment of firm, carbon-free power while strengthening regulatory confidence and operational resilience.

The AI for nuclear operations collaboration brings together NVIDIA Omniverse, NVIDIA Earth 2, NVIDIA CUDA-X, NVIDIA AI Enterprise, PhysicsNeMo, Isaac Sim, and Metropolis with Microsoft Generative AI for Permitting Solution Accelerator and Microsoft Planetary Computer to create a comprehensive, AI-powered digital ecosystem for nuclear energy on Azure.

Microsoft, NVIDIA, and Aalo Atomics will be presenting this AI-led industry perspective at CERAWeek 2026 in a session entitled “A Digital Age for Nuclear: Aalo Atomics, NVIDIA, and Microsoft.”

Discover more

Ready to move from ambition to delivery? See how the Microsoft and NVIDIA AI for nuclear collaboration can drive change within your organization.

Contact us to learn more.


Manufacturing at the 2026 inflection point: How Frontier companies are entering the agentic era
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/manufacturing/2026/03/16/manufacturing-at-the-2026-inflection-point-how-frontier-companies-are-entering-the-agentic-era/
Mon, 16 Mar 2026 15:00:00 +0000

Microsoft is powering manufacturing’s 2026 inflection point, turning AI from pilots into orchestrated, end‑to‑end intelligence.

The post Manufacturing at the 2026 inflection point: How Frontier companies are entering the agentic era appeared first on The Microsoft Cloud Blog.

With 2026 underway, manufacturing is reaching a clearer inflection point in how intelligence is defined and applied. Not long ago, the focus was on sensors, automation, and raw computing power. Today, the real story is orchestration—how companies connect fragmented data, processes, and people into an intelligent system that can sense, decide, and act across the research and development (R&D) lab, the shop floor, and the supply chain.

Manufacturing is moving beyond local optimization toward a closed loop of end-to-end intelligent orchestration. Looking back at CES 2026, we can see that the industry narrative is quietly but fundamentally shifting.

Across what we’re seeing with customers globally, three shifts stand out. First, the system shift. The operational foundation is evolving from digital to intelligent: more resilient, more real-time, and critically, more governable. Second, the data shift. The digital thread is no longer a static archive. It is becoming a living system—continuously updated and directly powering decisions as conditions change. Third, the work shift. We’re moving from copilots that assist individuals to agents that can collaborate and take on tasks—so the workflows themselves become more self-driving.

Together, these forces are raising the bar. Companies now need an end-to-end intelligent chain that turns AI from isolated point solutions into an organizational capability—reusable, scalable, and auditable. Drawing on Microsoft’s long-term work with manufacturers worldwide, and on how technology is evolving, I’d like to offer a practical framework for building that intelligent chain—so leaders can convert insight into action, and pilots into capabilities that scale.

AI use-case map for manufacturing: End-to-end intelligence from design to service

Scene One: Digital Engineering: Turning R&D into a profit engine

The role of the digital thread is evolving. Traditionally, it served primarily as a system of record—aggregating and archiving data. With AI and a unified data platform, it is becoming a real-time decision backbone spanning design, manufacturing, and service. Knowledge generated at one stage can now be applied immediately to improve outcomes in another. Generative and agentic AI are accelerating the core engineering loop—design, simulation, manufacturability analysis, and engineering change management—shortening iteration cycles and pushing risk discovery earlier in the process. Engineering data is no longer an R&D-only asset; it increasingly informs factory scheduling, quality strategies, maintenance policies, and service feedback loops.

This shift is already visible in practice. HARTING, a leader in industrial connectors, has deployed an AI assistant powered by Azure OpenAI and Microsoft Cloud for Manufacturing, making connector design faster, simpler, and more intuitive than ever before. Customers can describe their requirements in natural language, and the AI translates these inputs into technical specifications, guiding them to the right product within a minute. Customers can also visualize their configurations in 3D, enhancing confidence in their decisions.

Siemens DI provides comprehensive, cutting-edge software, hardware, and product lifecycle management solutions for industries including automotive and aerospace. Using Microsoft Azure AI, Siemens DI developed a Microsoft Teams application for its industry-leading product lifecycle management (PLM) solution, Teamcenter. This solution analyzes unstructured voice content in multiple languages, automatically generates summary reports, and delivers information precisely to the relevant design, engineering, or manufacturing experts within Teamcenter. Through this intelligent collaboration mechanism, field issues are resolved faster, and knowledge transfer efficiency is significantly enhanced.

Scene Two: Intelligent Factory: AI is rewriting scheduling, quality, and maintenance

Production, maintenance, quality, and inventory remain the four core modules of factory operations—and that does not change in a smart‑factory context. What is changing is how these modules run. AI is systematically reshaping their operating logic: inventory management is moving from static rules to dynamic optimization based on real-time signals; quality management is shifting toward earlier, more precise judgments through computer vision, time‑series forecasting, and anomaly detection; and maintenance is evolving from after‑the‑fact repairs to predictive maintenance—progressing further toward adaptive process control.
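The anomaly-detection piece of that shift can start as simply as a rolling statistical test over a sensor stream. The sketch below flags readings that deviate sharply from their trailing window; the vibration trace is hypothetical, and production systems use learned models over many correlated signals:

```python
import statistics

def detect_anomalies(readings, window=10, threshold=3.0):
    """Flag indices where a reading deviates from the trailing-window
    mean by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu = statistics.fmean(history)
        sigma = statistics.pstdev(history)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Hypothetical vibration-sensor trace with one spike at index 12:
trace = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0,
         1.02, 0.98, 5.0, 1.0]
anomalies = detect_anomalies(trace, window=10)
```

Catching the spike in real time, at the edge, is what turns after-the-fact repair into predictive maintenance: the flagged index becomes a work order before the bearing fails.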

As OT and IT capabilities mature, factories are gaining the ability to reason and respond directly at the point of value creation—on the shop floor, in real time. Frontline teams, empowered by multimodal Microsoft Copilot, can push the boundaries of what they can diagnose, decide, and execute. Over time, this human‑machine collaboration forms operational “agents” that can be deployed into production lines and day‑to‑day routines—turning intelligence into repeatable execution.

Global candy maker Mars operates manufacturing facilities across 124 locations worldwide. To safeguard its global equipment network, Mars partnered with Microsoft to deploy the Microsoft Defender for IoT solution. This enables visual management and threat detection for industrial equipment within stringent air-gapped production environments. Simultaneously, the solution transmits critical security data to a centralized system, enhancing data visibility while ensuring production continuity.

International technology group Körber has transformed its market-leading PAS-X MES product into a cloud-based software as a service (SaaS) solution to address the stringent and multifaceted production management demands of the pharmaceutical sector. Using the robust stability of Microsoft Azure, Microsoft Cloud for Manufacturing, and Microsoft Azure Kubernetes Service, this solution enables customers to achieve greater flexibility and scalability. By integrating data from IT and OT systems such as enterprise resource planning (ERP), supply chain management (SCM), and manufacturing execution system (MES), it delivers near real-time, actionable insights from diverse systems to employees. This significantly enhances equipment uptime, employee productivity, product quality, and overall output.

Scene Three: Resilient supply chain: From insight to execution with agentic AI

Early AI in supply chains mostly provided forecasts and dashboards. Valuable as they were, humans still needed to translate insights into action. The next step is agentic AI that executes—coordinating with suppliers, triggering replenishment or re-planning, optimizing inventory, and managing exceptions in logistics. When this happens, the traditional plan–execute–feedback loop transforms into a continuous intelligent system. The result is more than improved service levels—it enhances structural resilience and sustainability, as the system senses disruptions earlier, acts faster, and learns continuously.
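The replenishment step an agent automates is often a classic reorder-point rule: when the inventory position falls below expected lead-time demand plus safety stock, order up to a target level. A minimal sketch with hypothetical component numbers:

```python
def reorder_decision(on_hand, on_order, daily_demand, lead_time_days,
                     safety_stock, order_up_to):
    """Reorder-point check: replenish when the inventory position
    drops to lead-time demand plus safety stock."""
    position = on_hand + on_order
    reorder_point = daily_demand * lead_time_days + safety_stock
    if position <= reorder_point:
        return {"action": "replenish", "qty": order_up_to - position}
    return {"action": "hold", "qty": 0}

# Hypothetical component with a 14-day supplier lead time:
decision = reorder_decision(on_hand=320, on_order=0, daily_demand=25,
                            lead_time_days=14, safety_stock=75,
                            order_up_to=900)
```

An agentic system wraps this decision in execution: placing the purchase order, notifying the supplier, and escalating to a human only when the exception falls outside policy.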

China-based electronics manufacturer Xiaomi has built a unified after-sales supply chain management platform based on Microsoft Dynamics 365 and Microsoft Power Platform, using Azure for system integration and multilingual support. Utilizing Dynamics 365 Customer Service, Xiaomi has created a work platform that integrates financial processes, data integration, and security authentication across multiple communication channels. This platform also visualizes current inventory and proactively monitors and manages inventory levels in real time, enabling collaborative management between frontline services and backend supply chains.

As a global leader in the smart terminal and home electronics industry, TCL is reshaping the industry landscape with its “Hardware + AI + Ecosystem” strategy, building a full-scenario ecosystem spanning multiple devices. Beyond driving innovative applications of Azure cloud and AI technologies in manufacturing, supply chains, and user experiences, TCL has pioneered the integration of Azure OpenAI, multimodal interaction, Microsoft Copilot, and the Artificial Intelligence Generated Content (AIGC) ecosystem into smart TVs, smartphones, tablets, air conditioners, and other home appliances. This enables seamless cross-device connectivity and immersive experiences.

Scene Four: Connected customer: The product doesn’t end at delivery

In an AI-native model, product delivery is no longer the finish line. Customer experience continues through Over-the-Air (OTA) updates, AI-guided diagnostics, predictive service, and personalized recommendations. AI enables a true closed loop—from customer feedback to engineering, factory, service, and back—turning experience into a growth driver rather than a cost center.

Epiroc, a Swedish mining and infrastructure equipment manufacturer, uses Microsoft Azure Machine Learning to build predictive maintenance and equipment performance models, transforming machine data into actionable customer insights. By identifying potential failures in advance and optimizing maintenance planning, Epiroc delivers a more proactive and transparent service experience, deepening customer relationships while opening new service-driven growth opportunities.

Lenovo partnered with Microsoft to deploy the Microsoft Dynamics 365 Sales platform, transforming its global customer relationship management (CRM) system. By consolidating fragmented customer data and standardizing sales processes onto a unified digital platform, Lenovo achieved end-to-end visibility from lead management to opportunity tracking. The transformation improved collaboration efficiency, strengthened data-driven decision-making, and reinforced a more customer-centric operating model.

In the “Hyper-Competition in High Dimensions” of the smart electric vehicle industry, NIO significantly boosts R&D efficiency by generating 610,000 lines of code daily with GitHub Copilot, achieving an acceptance rate of up to 33%. The in-vehicle assistant NOMI, built on Azure OpenAI, enhances driving safety and user experience through precise contextual interaction. Simultaneously, Microsoft security solutions safeguard NIO’s complex IT environment and hybrid AI platform, automating daily threat detection and enabling cross-device security coordination.

Scene Five: Trust, safety, and OT security: The non-negotiable foundation

None of these AI use cases can scale without trust. Once AI moves from a recommendation system to an execution system, governance becomes essential. Manufacturing organizations need four core trust capabilities: model governance (ModelOps and Responsible AI), data and access control (Zero Trust architecture and industrial data protection), OT and endpoint security, and explainability with controllability and rollback, so decisions can be understood, constrained, and safely reversed when needed. This is not a separate chapter; it forms the operating layer beneath all use cases, ensuring AI operates safely and reliably across the organization.

As companies worldwide face escalating cybersecurity threats, Ford, a longstanding automotive manufacturer synonymous with innovation, has deployed Microsoft solutions—including Microsoft Defender, Microsoft Sentinel, and Microsoft Purview—across its global operations. The initiative enhances visibility, automates responses, and strengthens data governance within its hybrid environment. AI models learn from every interaction to improve detection capabilities and reduce false positives. With a unified security platform, Ford can focus on business strategy while reducing complexity and boosting operational efficiency.

Smart pet device leader PETKIT is upgrading its systems on the Azure platform to achieve standardized device connectivity, telemetry data aggregation, and global compliance and security for users worldwide. Microsoft’s products and services not only enhance the company’s technological depth but also provide a cloud-plus-AI platform for global market replication.

2026: The inflection point when AI shifts from “more” to “different”

Once an end-to-end intelligent chain is in place, AI’s role inevitably shifts from offering advice to executing processes—and manufacturing moves from isolated efficiency gains toward full system redesign. In this sense, 2026 will be the year this transformation is proven at scale. It will be a demanding moment for industry, but also a rare opportunity for leaders to make a true step change. This shift is becoming visible across several dimensions.

First, in 2026, AI in manufacturing will no longer exist as a collection of pilots. Instead, it will function as an enterprise nervous system—continuously sensing, learning, and coordinating decisions across functions. Organizations will move from experimenting with AI to running with AI, shifting from exploratory adoption to responsible, repeatable execution at scale.

Second, the ability to scale AI will become a key competitive differentiator. AI should not be confined to isolated applications but integrated into cross-departmental and cross-business collaboration to unlock its full potential. In other words, the gap between enterprises no longer lies in whether they deploy AI, but in their ability to achieve scalable implementation across the entire end-to-end value chain. Research from MIT and McKinsey suggests that leading enterprises can achieve up to four times the impact in half the time by building unified data and governance foundations.1

Third, technical readiness will help define 2026. Edge inference, OT and IT integration, industrial networking, and model governance have matured to the point where AI can operate directly where value is created—on the plant floor, in real time, and within the flow of work. AI is moving beyond general content generation toward deep operational integration, spanning equipment, processes, quality, and logistics, and becoming an integral part of closed-loop industrial control.

Beyond technology, people, governance, and culture will emerge as true differentiators. In 2026, the primary constraint for many manufacturers will be organizational readiness—the ability to share data responsibly, collaborate across silos, and build AI literacy and operating rhythms that sustain change. Research on scaling AI highlights the “10–20–70 rule”: roughly 10% of success comes from algorithms, 20% from technology and data foundations, and 70% from people and processes.1 Scaling AI effectively therefore requires building skills, accountability, and safety-and-governance capabilities in parallel with the technology itself.

Finally, the maturation of industry standards and ecosystems will accelerate broader AI adoption. Manufacturers face converging pressures—from geopolitics and cost to compliance and supply chain resilience. According to industry research, 81% of manufacturers cite fear of falling behind as a primary driver of adoption.2 The implication is clear: the question is no longer “Do we need AI?” but “Can we afford not to evolve?” As industrial data semantics, standardized APIs, reference architectures, and increasingly packaged solutions mature, time-to-value will shorten and complexity will fall—making AI feasible for a much broader set of manufacturers.

From insight to action: A 2026 checklist for manufacturing leaders

At this point, the question is no longer abstract: can your organization turn AI capabilities into sustainable, day-to-day operations—rather than pilots and demos? In conversations with manufacturers around the world, this question consistently separates leaders from laggards:

  • Strategic clarity: Have you defined the core business problems AI must solve, beyond simply “adopting AI”?
  • Data foundation: Can your data platform support real deployment, not just proof-of-concept results?
  • Operational readiness: Are your factories and supply chains prepared for AI-powered routines in daily execution?
  • Workforce capability: Does your workforce have the baseline skills to work effectively with AI systems?
  • Ecosystem usage: Do your partners and platforms support continuous upgrades and rapid scaling?
  • Governance and security: Is governance strong enough for AI to move from recommendation to execution?
  • Resilience impact: Is AI measurably strengthening operational resilience?

We can already see the direction of travel. But trends alone do not create leaders. Execution does. The real differentiator will be who can turn AI from concept into action, from tool into capability, and ultimately from capability into resilience.

Advancing intelligent manufacturing with Microsoft

Manufacturing is entering a new phase—powered by actionable data, increasingly autonomous systems, and a more empowered workforce. Companies that unify their data, drive autonomy across planning and execution, and integrate the value chain through digital threads and digital twins will be best positioned to convert operational excellence and innovation into sustained growth.

Against this backdrop, Microsoft continues to work closely with manufacturers to expand what is possible across design, production, supply chain, and service. By combining cloud, data, and AI platforms that are advanced yet practical to deploy, we aim to help organizations build end-to-end intelligent operations—accelerating innovation while maintaining security, responsibility, and scale.


1 KPMG, Intelligent Manufacturing: A blueprint for creating value through AI-driven transformation.

2 Business Wire, Ninety-Five Percent of Manufacturers Are Investing in AI to Navigate Uncertainty and Accelerate Smart Manufacturing, June 2023.

The post Manufacturing at the 2026 inflection point: How Frontier companies are entering the agentic era appeared first on The Microsoft Cloud Blog.

]]>
A new study explores how AI shapes what you can trust online https://news.microsoft.com/signal/articles/a-new-study-explores-how-ai-shapes-what-you-can-trust-online/ Thu, 12 Mar 2026 15:00:00 +0000 http://approjects.co.za/?big=en-us/innovation/blog/2026/03/12/a-new-study-explores-how-ai-shapes-what-you-can-trust-online/ Microsoft examines how media authentication, provenance, and watermarking can strengthen trust as AI‑generated content accelerates.

The post A new study explores how AI shapes what you can trust online appeared first on The Microsoft Cloud Blog.

]]>
You see it all over your social feeds: videos of adorable babies saying oddly grown-up things, public figures making wildly uncharacteristic statements, nature photos too far-fetched to be true. In the era of AI, seeing isn’t always believing.

Deepfakes threaten trust in news, elections, brands and everyday interactions, leading us to question what’s real. Determining what’s authentic or manipulated is the subject of Microsoft’s “Media Integrity and Authentication: Status, Directions, and Futures” report, published today. The study evaluates today’s authentication methods to better understand their limitations, explore potential ways to strengthen them and help people make informed decisions about the online content they consume.

The authors conclude that no single solution can prevent digital deception on its own. Methods such as provenance, watermarking and digital fingerprinting can offer useful information like who created the content, what tools were used and whether it has been altered.

Jessica Young, director of science and technology policy in the Office of the Chief Scientific Officer at Microsoft.

People can be deceived by media if they lack information like its origin and history, or if its information is low-quality or misleading. The goal of the report is to provide a roadmap to deliver more high-assurance provenance information the public can rely on, according to Jessica Young, director of science and technology policy in the Office of the Chief Scientific Officer at Microsoft.

Helping people recognize higher-quality content indicators is increasingly important as deepfakes become more disruptive and as provenance legislation in various countries, including the U.S., introduces even more ways to help people authenticate content later this year.

Media provenance has been evolving for years, with Microsoft pioneering the technology in 2019 and cofounding the Coalition for Content Provenance and Authenticity (C2PA) in 2021 to standardize media authenticity.

Young, co-chair of the study, explains more about what it all means:

What prompted the study?

“The motivation was two-fold,” Young says. “The first is the recognition of the moment we’re in right now. We know generative AI capabilities are becoming increasingly powerful. It’s becoming more challenging to distinguish between authentic content — like content that was captured by a camera versus sophisticated deepfakes — and as a result, there’s a huge uptick right now in interests and requirements to use those technologies that exist to disclose and verify if content was generated or manipulated by AI.

“The moment has been building, and we have a desire to help ensure that these technologies ultimately drive more benefit than harm, based on how they’re used and understood.”

Young adds that the paper is meant to inform the greater media integrity and authentication ecosystem, including creators, technologists, policymakers and others to understand what is and isn’t possible currently and how we can build on it for the future.

What did the study accomplish, and what did you learn?

The report outlines a path to increase confidence in the authenticity of media. The authors propose a direction they refer to as “high-confidence authentication” to mitigate the weaknesses of various media integrity methods.

Linking C2PA provenance to an imperceptible watermark can bring relatively high confidence about media’s provenance, she says.
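The general shape of that pairing can be sketched in a few lines. This is not the actual C2PA implementation (the manifest store, field names, and functions below are invented for illustration), but it shows why the two signals reinforce each other: the watermark survives in the pixels and points to a manifest, and the manifest’s hash confirms the content is unaltered.

```python
import hashlib

# Illustrative in-memory "manifest store", keyed by watermark ID.
# In a real system this would be a signed C2PA manifest embedded in
# the file or resolved from a trusted service; all names here are
# invented for the sketch.
MANIFEST_STORE = {}

def sign_media(content, creator, tool, watermark_id):
    """Record an illustrative provenance manifest bound to the content hash."""
    MANIFEST_STORE[watermark_id] = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "tool": tool,
    }

def verify_media(content, watermark_id):
    """Check that the watermark resolves to a manifest matching this content."""
    manifest = MANIFEST_STORE.get(watermark_id)
    if manifest is None:
        return False  # no provenance available for this watermark
    return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()

photo = b"...raw image bytes..."
sign_media(photo, creator="Newsroom Camera 7", tool="camera-firmware-2.1",
           watermark_id="wm-001")

print(verify_media(photo, "wm-001"))            # True: watermark and hash agree
print(verify_media(photo + b"edit", "wm-001"))  # False: alteration breaks the binding
```

Because the watermark is carried in the media itself, it can survive a platform stripping the metadata; the manifest lookup then restores the provenance information the metadata would have carried.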

She notes the report has a lot of caveats too, such as how provenance from traditional offline devices like cameras, which often lack critical security features, can be less trustworthy because it’s easier to alter.

It isn’t possible to prevent every attack or stop certain platforms from stripping provenance signals, so the challenge, Young says, “is figuring out how to surface the most reliable indicators with strong security built in — and, when necessary, reinforce them with additional methods that allow recovery or support manual digital-forensics work.”

How is this study different from others?

Young says their study investigated two “underexplored” lines of thought for the three methods of verification. They define the first as sociotechnical attacks, where provenance information or the media itself could be manipulated to make authentic content appear synthetic or fake content seem real during the validation process.

“Imagine you see an authentic image of a global sporting event with 80% of the crowd cheering for the home team,” she says. “The away team engages in an online argument claiming, ‘Hey, no, that’s all a fake crowd.’ Someone could make one small, insignificant edit to a person in the corner of the picture and current methods would deem it AI generated — even if the crowd size was real. These methods that are supposed to support authenticity are now reinforcing a fake narrative, instead of the real one.

“So, knowing how different validators work, even through really subtle modifications, you could manipulate the results the public would see to try to deceive them about content,” she says.

The second key topic builds on the C2PA’s work to make content credentials more durable, while also addressing reliability. This is where the research is especially novel, Young says. “We looked at how provenance information can be added and maintained across different environments — from high-security systems to less secure, offline devices — and what that means for reliability.”
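One reason such subtle modifications are effective is that the cryptographic hashes used to bind provenance to content change completely after even a one-bit edit, so a strict hash comparison can only report “modified,” with no sense of scale. A minimal sketch (using stand-in bytes, not real image data):

```python
import hashlib

original = b"crowd photo: 80% home-team fans" * 1000  # stand-in for image bytes
tampered = bytearray(original)
tampered[0] ^= 0x01  # flip a single bit, e.g. one pixel in a corner

h_original = hashlib.sha256(original).hexdigest()
h_tampered = hashlib.sha256(bytes(tampered)).hexdigest()

print(h_original == h_tampered)  # False: the two digests are entirely unrelated
# A validator comparing digests alone can only say "altered"; it cannot
# distinguish a one-pixel touch-up from a fabricated crowd, which is the
# ambiguity a sociotechnical attack exploits to discredit authentic media.
```

This fragility is one reason the report treats provenance, watermarking, and fingerprinting as complementary signals rather than relying on any single pass/fail check.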

Why is verifying digital media so difficult?

Authenticating media is complex because there’s not a one-size-fits-all solution, Young says.

“You have different formats that have different limitations or trade-offs for the signals they can contain,” she explains. “Whether it’s images, audio, video — not to mention text, which has a whole different array of challenges — and how strong the solutions can be applied there.”

Young says there are different requirements and opinions about what level of transparency is appropriate as well. In some cases, users might not want any of their personal information included in the digital provenance of a piece of media, while in others, creators or artists might want attribution and to opt-in for having their information included.

“So, you have different requirements or even considerations about what goes into that provenance information,” she says. “And then, similar to the field of security, no solution is foolproof. So, all the methods are complementary, but each has inherent limitations.”

Where do we go from here?

Young says that as AI-made or edited content becomes more commonplace, the use of secure provenance of authentic content is becoming increasingly important. Publishers, public figures, governments and businesses have good reason to certify the authenticity of the content they share. If a news outlet shoots photos of an event, for example, tying secure provenance information to those images can help show their audience the content is reliable.

“Government bodies also have an interest in the public knowing that their formal documents or media are reliable information about public interest matters,” Young says.

She adds that as AI modifications to media become “increasingly common” for legitimate purposes, secure provenance can provide important context to help prevent an average reader or viewer from simply dismissing that content as fake or deceptive.

“For the industry and for regulators, we note how important continued user research in this area is to drive towards more consistent and helpful display of this information to the public — to make sure it’s actually meaningful and useful in practice,” Young says.

“We have a limited set of technologies that can assist us, and we don’t want them to backfire from being misunderstood or improperly used.”

Learn more on the Microsoft Research Blog.

The post A new study explores how AI shapes what you can trust online appeared first on The Microsoft Cloud Blog.

]]>
What Frontier healthcare leaders are doing differently with AI http://approjects.co.za/?big=en-us/microsoft-cloud/blog/healthcare/2026/03/10/what-frontier-healthcare-leaders-are-doing-differently-with-ai/ Tue, 10 Mar 2026 15:00:00 +0000 Frontier Transformation in healthcare means moving beyond AI pilots to redesign workflows with governance, trust, and scalable impact.

The post What Frontier healthcare leaders are doing differently with AI appeared first on The Microsoft Cloud Blog.

]]>
AI is no longer a side experiment in healthcare. It’s showing up in exam rooms, call centers, revenue cycles, and security operations. But what’s becoming clear is this: some organizations are redesigning how work gets done, and others are still running pilots.

Research we conducted with senior healthcare executives in the United States, published in the New England Journal of Medicine, revealed a growing readiness divide. As some systems build governance, security, and workforce models to scale AI safely, others are still in proof-of-concept mode. The result? Diverging outcomes in productivity, workforce strain, cost-to-serve, and resilience.

The question is no longer whether AI belongs in healthcare. It’s how quickly organizations can operationalize it—safely, responsibly, and at scale.

Microsoft works with more than 170,000 healthcare customers globally to move from pilot to production with enterprise-grade security, privacy, and compliance.

So what does Frontier Transformation actually look like? The following examples show how healthcare organizations are embedding AI into core workflows—moving beyond pilots to deliver real, scalable impact with the governance and trust required in clinical environments.

Accelerating discovery and clinical development with AI

Frontier organizations are reinventing discovery by treating AI as an always-on research partner. It compresses the time it takes to find, synthesize, and act on evidence across functions. The result isn’t just faster tasks; it’s faster decisions and a more scalable path from insight to impact. As these capabilities become table stakes, organizations that can’t industrialize knowledge work will fall behind in speed-to-trial, speed-to-market, and ultimately speed-to-patient.

UCB: Scaling agent-based AI with a secure internal platform

UCB built SKAI, a secure internal platform on Microsoft Azure for generative and agent-based AI, helping teams apply knowledge faster and operationalize AI with governance built in.

Syneos Health: Streamlining complex data to bring therapies to patients faster

Syneos Health is using AI to help teams analyze large, complex data sets across the clinical development lifecycle. With faster, more consistent synthesis of study inputs and operational signals, biopharma customers can make decisions with greater speed and confidence. Syneos Health reported reducing time for clinical trial site activation by about 10%, helping remove friction from a critical step in getting lifesaving therapies to patients. Enhanced predictive modeling and forecasting tools also allow teams to identify risks earlier, model scenarios, and engage customers and clinical partners more effectively.

Advancing care delivery with AI in the flow of clinical work

In care delivery, transformation happens when AI shows up in the flow of work. It reduces cognitive and documentation load and gives time back to clinicians. Frontier organizations use AI to shift capacity toward patients, not screens, while improving consistency and quality. As patient expectations rise and workforce shortages persist, the ability to deliver more care with the same (or fewer) resources is quickly becoming a differentiator.

Intermountain Health: Rehumanizing care by reducing documentation burden

Intermountain Health adopted Microsoft Dragon Copilot to reduce the administrative load that can pull clinicians away from patients. By supporting clinical documentation and automating routine tasks, clinicians at Intermountain Health reported experiencing a 27% reduction in time spent on notes per appointment, reducing cognitive burden and enabling more meaningful patient engagement by incorporating AI as a core part of their clinical workflow.

Cooper University Health Care: Giving clinicians time back in the flow of care 

Cooper University Health Care is using AI-powered clinical documentation to reduce the administrative burden that pulls clinicians away from patients. By embedding AI directly into clinical workflows, clinicians at Cooper reported saving more than four minutes per patient visit on documentation, experiencing less burnout, and engaging more meaningfully with patients—demonstrating how AI-optimized workflows can rehumanize care at scale.

Mercy: Bringing ambient AI to nursing workflows

Nurses are at the center of care delivery and often at the center of documentation burden. Mercy has been using AI capabilities to transform nursing care. By capturing and structuring information in the flow of work, Mercy reported 8 to 24 minutes saved per shift for high-use nurses, a 21% reduction in documentation latency and a 4.5% increase in patient satisfaction from their initial rollout.

Streamlining operations and experiences across the healthcare organization

Frontier Transformation requires more than point solutions. It takes an AI-ready operating foundation that connects people, processes, and data across the organization. Frontier organizations use copilots and agents to standardize work, automate routine interactions, and deliver more consistent experiences at scale. Those that treat AI as isolated experiments often find themselves outpaced by peers who can improve service levels while bending the cost curve.

Bupa APAC: Building an AI-ready foundation to improve customer experiences

Bupa APAC is streamlining operations, automating routine processes, and making customer experiences more seamless thanks to AI. With an emphasis on AI readiness—skills, governance, and secure access to information—Bupa APAC upskilled its workforce with Microsoft 365 Copilot and GitHub Copilot, generating more than 410,000 lines of AI-assisted code, initiating more than 30,000 Copilot chats, and accelerating more than 100 AI use cases to improve care.

CareSource: Scaling compassionate service with cloud and AI

CareSource is applying AI to support operational scale while keeping a human touch. By modernizing platforms and automating processes that can slow service delivery, CareSource reduced documentation time by 75%, saved over USD 125,000 through automation, and boosted developer productivity by up to 30%, helping their teams focus on the needs of members, providers, and communities.

Strengthening cyber resilience with AI

Cyber resilience is a transformation prerequisite. As care becomes more digital, AI must help defenders move at machine speed while maintaining trust and compliance. Frontier organizations use AI to triage, investigate, and report faster—reducing risk and freeing experts for the threats that matter most. In a sector where disruption can compromise patient safety, lagging security maturity can erase hard-won gains in digital transformation.

St. Luke’s University Health Network: Saving nearly 200 hours per month with AI-powered security agents

As healthcare expands its digital footprint, cyber defense becomes inseparable from patient safety and trust. St. Luke’s University Health Network is using Microsoft Security Copilot agents to accelerate phishing alert triage and to generate incident reports in minutes instead of hours. The organization reported saving nearly 200 hours per month, freeing security teams to focus on higher-value investigations and improving speed to response across its environment.

Act now to lead the future

If you’re looking at these examples and wondering where to start, focus on a few moves that help you learn quickly and scale safely.

  • Start with workflows, not technology: Identify the highest-friction moments (such as documentation, imaging backlogs, complex data synthesis, member service, and security triage) and design AI interventions that measurably reduce time, effort, and risk.
  • Get your foundation right, early: Prioritize secure access, identity, and data governance so copilots and agents have the right context, without compromising privacy or compliance.
  • Make it real, and make it stick: Operationalize responsible AI (like oversight, evaluation, and human-in-the-loop), measure quality and safety, and invest in change management so adoption scales beyond early enthusiasts.

Start your Frontier Transformation today


These organizations show what Frontier Transformation looks like in practice—embedding intelligence across clinical, operational, and administrative work to deliver faster insights, reduce burden, strengthen security, and create better experiences at scale. The competitive bar is moving quickly. Waiting to act can mean higher costs, slower throughput, and greater strain on already-stretched teams. With deep healthcare experience and a global customer base, Microsoft can help organizations scale AI responsibly, from the first workflow redesign to enterprise-wide adoption.

The post What Frontier healthcare leaders are doing differently with AI appeared first on The Microsoft Cloud Blog.

]]>
Introducing the First Frontier Suite built on Intelligence + Trust https://blogs.microsoft.com/blog/2026/03/09/introducing-the-first-frontier-suite-built-on-intelligence-trust/ Mon, 09 Mar 2026 13:00:00 +0000 http://approjects.co.za/?big=en-us/innovation/blog/2026/03/09/introducing-the-first-frontier-suite-built-on-intelligence-trust/ Frontier Transformation is a holistic reimagining of business, aligning AI with human ambition to achieve an organization’s highest aspirations.

The post Introducing the First Frontier Suite built on Intelligence + Trust appeared first on The Microsoft Cloud Blog.

]]>
Today Microsoft is announcing:

  • Wave 3 of Microsoft 365 Copilot
  • Expanded model diversity with Claude and next-gen OpenAI models available today
  • General availability of Agent 365 on May 1 for $15 per user
  • General availability of the new Microsoft 365 E7: The Frontier Suite on May 1 for $99 per user

Frontier Transformation is a holistic reimagining of business, aligning AI with human ambition to achieve an organization’s highest aspirations. It is the next evolution of AI Transformation — not only do we need to deliver efficiency and productivity, but we need to democratize intelligence and do more for humanity. Companies do not want or need more AI experimentation. They need AI that delivers real business outcomes and growth.

In my daily conversations with customers and partners, they often ask what the most important components of an AI solution are. Is it the model? Is it silicon? At Microsoft, we believe the two most essential elements of Frontier Transformation are Intelligence + Trust. Organizations need to harness their own unique work intelligence as they build agents and solutions, and all AI artifacts across their technology stack must be observed, managed and secured to ensure they are delivering value responsibly.

Intelligence that shows up in real work 

I often say that zero-shot artifact creation is nothing more than a parlor trick. Models can reason over data, produce draft documents, presentations and spreadsheets, but they do not understand work. Real differentiation comes from intelligence — deep work context, embedded in the tools people already use. AI should amplify your intelligence but do so in a manner that protects your differentiation and unique value.

Work IQ amplifies an individual’s IQ by tapping into your organization’s IQ. It is the intelligence layer that enables Microsoft 365 Copilot and agents to know how you work, with whom you work, and the content upon which you collaborate. That is why Copilot is faster, more accurate and more trusted than solutions built on models and connectors alone.

This month, we are unleashing Work IQ with our next generation of agentic experiences in Wave 3 of Microsoft 365 Copilot in Word, Excel, PowerPoint and Outlook. Employees will have an enhanced chat experience in Copilot with the ability to create and augment artifacts, and the power to build their own agents within the canvas they work in every day.

Microsoft 365 Copilot is model diverse by design. Rather than betting on a single model, we built a system that makes every model useful at work. Customers get choice, performance and flexibility in an open, heterogeneous environment. Copilot leverages leading models from OpenAI and Anthropic, operating openly across clouds and data services without locking customers in. Claude is now available in mainline chat in Copilot via the Frontier program, alongside the latest generation of OpenAI models.

Microsoft 365 Copilot Wave 3 is not a singular release of new capabilities but a commitment to continuous innovation. We will bring frontier capabilities with enterprise promises for our customers in an open and model-diverse manner. Another great example of this is Copilot Cowork, which is in research preview. Built in close collaboration with Anthropic, we are bringing the technology that powers Claude Cowork into Microsoft 365 Copilot to enable long-running, multi-step work that unfolds over time. Click here to learn about our Wave 3 news in more detail.

These announcements come as our customers across industries are already seeing the value of Microsoft 365 Copilot. Microsoft recently delivered its strongest quarter yet with Copilot, with paid seats growing more than 160% year over year and daily active usage up ten times, as customers increasingly make Copilot a core part of everyday work. Expansion is also accelerating as the number of customers deploying Copilot at significant scale — more than 35,000 seats — tripled year over year. Just last week, Mercedes-Benz announced a global rollout of Microsoft 365 Copilot, following recent investments from NASA, Fiserv, ING, the University of Kentucky, the University of Manchester, the U.S. Department of the Interior and Westpac. This is in addition to the 90 percent of the Fortune 500 who now use Copilot.

Trust: from agent experimentation and sprawl to enterprise control 

The speed of agent development and proliferation tells us customers see value, but without guardrails the pace of adoption turns into blind spots, diminished ROI and real security risk. As AI agents become more capable and autonomous, trust is nonnegotiable. IDC predicts 1.3B agents in circulation by 2028, and 80% of the Fortune 500 are already using Microsoft agents, led by operationally complex industries like manufacturing, financial services and retail.

That is why I am excited to announce the May 1 general availability of Microsoft Agent 365, the control-plane for AI agents. Priced at $15 per user, Agent 365 gives IT and security leaders a single place to observe, govern, manage and secure agents across the organization — using the same infrastructure, applications and protections they rely on to manage people today.

We are seeing tremendous momentum with our preview customers. In just two months, tens of millions of agents have appeared in the Agent 365 Registry. We have tens of thousands of customers that are already adopting Agent 365 to securely govern and scale AI agents across enterprise workflows.

At Microsoft, we are also using Agent 365 as Customer Zero and the early signals are clear. We now have visibility into more than 500,000 agents across the company with the most widely used focused on research, coding, sales intelligence, customer triage and HR self-service. That adoption is translating into real work. Over the past 28 days alone, agents have been generating more than 65,000 responses every day for employees. This is evidence that we are not simply experimenting, we are embedding agents in the flow of everyday work and empowering human ambition.

Introducing the Frontier Suite

To meet this demand, I am thrilled to announce we are bringing Intelligence + Trust together with Microsoft 365 E7: The Frontier Suite. Microsoft 365 E7 unifies Microsoft 365 E5, Microsoft 365 Copilot, and Agent 365 into a single solution powered by Work IQ and integrated with the apps and security stack customers already rely on. It includes Microsoft Entra Suite and advanced Defender, Intune, and Purview security capabilities, delivering comprehensive protection across agents and employees.

Customers have told us E5 alone is no longer enough: they do not want multiple tools stitched together; they want one trusted solution. At $99 per user, E7 is priced below purchasing these capabilities à la carte, giving customers a simpler, more cost-effective way to deploy enterprise AI at scale.

With the general availability of Agent 365 and the latest agentic experiences in Microsoft 365 Copilot offered as one Frontier Suite, AI moves from experimentation to durable, enterprise-wide value, built on a foundation of Intelligence + Trust. This is how we make Frontier Transformation real. Microsoft is not just imagining the future of AI; we are empowering organizations across industries and around the world to build it.

The post Introducing the First Frontier Suite built on Intelligence + Trust appeared first on The Microsoft Cloud Blog.

]]>
Unify. Simplify. Scale: Microsoft Dragon Copilot meets the moment at HIMSS 2026 http://approjects.co.za/?big=en-us/microsoft-cloud/blog/healthcare/2026/03/05/unify-simplify-scale-microsoft-dragon-copilot-meets-the-moment-at-himss-2026/ Thu, 05 Mar 2026 15:00:00 +0000 At HIMSS 2026, Microsoft Dragon Copilot advances unified AI workflows to help clinicians reduce complexity and stay focused on patients.

The post Unify. Simplify. Scale: Microsoft Dragon Copilot meets the moment at HIMSS 2026 appeared first on The Microsoft Cloud Blog.

]]>
Healthcare has never moved faster—or asked more of the people delivering care. Clinicians are navigating rising complexity, fragmented systems, and relentless administrative demands, all while trying to stay present for their patients. At HIMSS 2026, Microsoft is introducing meaningful new advancements in Microsoft Dragon Copilot, strengthening its role as a unified AI clinical assistant that brings clinical intelligence, work context, and partner innovation together inside everyday workflows.

New capabilities include the ability to surface relevant work-related information alongside patient data for customers using Microsoft 365 Copilot; partner-built AI apps and agents available through Microsoft Marketplace that extend intelligence across revenue cycle, clinical insights, and decision support; and expanded role-based experiences for physicians, nurses, and radiologists designed to scale securely across settings and geographies.

Today, more than 100,000 clinicians rely on Dragon Copilot as part of their daily practice—supporting care for millions of patients every month. That kind of adoption doesn’t happen by accident; it happens when technology earns trust, fits naturally into clinical workflows, and proves its value day after day. As healthcare continues to accelerate, the question facing organizations is no longer if AI will be part of care delivery, but how quickly they can equip their teams with tools that scale safely, work across roles, and keep clinicians focused on patients. The new Dragon Copilot capabilities we’re introducing at HIMSS 2026 build on this proven foundation—extending trusted clinical support beyond documentation to meet the growing demands of modern care.

Clinicians need more than access to data—they need an AI assistant that works alongside them, understands context, and supports action across systems and settings. Built on Microsoft Azure, Dragon Copilot delivers this capability with enterprise‑grade security, responsible AI, and cloud scale—giving organizations the confidence to deploy broadly and grow with care teams wherever they work.

We ultimately went with Microsoft because of the security, the compliance, the scalability, and the fact that they’ve delivered reliable solutions for years.”

—Snehal Gandhi, MD, Vice President and Chief Medical Information Officer, Cooper University Health Care

See what Dragon Copilot has to offer:

Unifying the disparate—so care teams can move faster, with confidence

By unifying information from across systems and sources, Dragon Copilot reduces fragmentation and unnecessary searching—bringing patient data, trusted clinical content, and partner-powered AI insights into a single, contextual experience within the clinical workflow.

What makes this approach different is not just access to information, but how intelligence is delivered and applied. Clinicians can naturally query, summarize, create, and act using voice or text—without toggling between tools. Insights are surfaced instantly in one place, enabling care teams to move fluidly from understanding to action while spending less time navigating systems and more time with patients.

That intelligence is grounded in a broad set of trusted sources, including:

  • Prebuilt trusted clinical content with citations
  • Patient data like diagnoses, labs, medications, and allergies
  • Organizational content such as policies, procedures, schedules, and communications

When needed, reliable web information can also be accessed through a safety‑first pathway—ensuring responses remain appropriate for clinical use.

Care delivery depends on more than clinical facts—it also depends on fast access to the work context around care. With Microsoft 365 Copilot, powered by Work IQ and accessible inside Dragon Copilot, clinicians can pull in relevant work-related information from connected apps and enterprise data, right where they’re already working. Work IQ is the intelligence layer that helps Copilot understand how people collaborate across emails, files, meetings, and chats—so responses are grounded in the right context. The result is a more unified experience that reduces time spent searching across tools and keeps momentum inside the clinical workflow.

Dragon Copilot extends clinical intelligence beyond any single system or screen. Instead of being locked into one interface, clinicians can invoke powerful AI capabilities wherever they’re already working—across applications, EHRs, and web pages. By simply clicking or highlighting text, Dragon Copilot can read, understand, and apply its intelligence directly in context, without forcing clinicians to switch tools or reenter information.

For example, a clinician reviewing a note can place their cursor over a sentence and say, “Add more detail about what the patient shared regarding their cardiac history.” Dragon Copilot immediately expands the documentation using the surrounding clinical context—no copying, no pasting, and no workflow disruption—helping clinicians move faster while keeping their focus on the patient, not the screen.

Building on this foundation, Dragon Copilot further unifies innovation through AI apps and agents available in Microsoft Marketplace. Developed by partners such as Canary Speech, Humata Health, Optum, and Regard, these solutions deliver capabilities across clinical insights, revenue cycle management, prior authorization, and clinical decision support. Organizations can easily purchase, deploy, and scale partner innovation—while clinicians experience those insights directly within their existing workflows.

Sentara Health is integrating Regard’s diagnosis and documentation technology within Dragon Copilot to save time, improve revenue integrity, and most importantly improve care.

By combining Dragon’s ambient conversation capture with Regard’s ability to surface key insights from data, we expect to help our clinicians identify comorbidities and relevant diagnoses in real time without adding steps to their workflow. Our goal is straightforward: strengthen the clinical picture, reduce documentation burden, and support more informed decision-making at the point of care.”

—Dr. Joseph Evans, Vice President, Chief Health Information Officer at Sentara Health

Simplifying the complex—so care teams can be present with patients

Dragon Copilot streamlines clinical documentation and routine tasks, so clinicians spend less time navigating systems and more time focused on patient care. By simplifying physician and nursing charting, notes, flowsheets, and radiology reporting, it reduces rework and cognitive burden—helping care teams work more efficiently and confidently across the day.

This simplification is powered by healthcare-grade AI models built for clinical accuracy, with clinical note quality evaluated using the Provider Documentation Summarization Quality Instrument (PDSQI-9)—an industry standard developed with leading academic and healthcare institutions to ensure clear, consistent, and clinically appropriate outputs.

Beyond documentation, Dragon Copilot automates high-friction tasks across the workflow. Persona-specific note types, automated referral letters and after‑visit summaries, summaries of prior radiology reports, and proactive coding guidance reduce manual effort and unnecessary toggling—allowing care teams to focus on decisions, not data entry.

New and expanded capabilities include:

  • Proactive ICD‑10 specificity suggestions, delivered during note review to support timely, accurate reimbursement.
  • Reusable custom clinical documents, created from prompts or examples and managed as templates, allowing clinicians to automatically generate additional unique content, such as custom letters.
  • Pull-forward workflow support to jump-start new documentation from prior notes.
  • Multilingual conversation capture, connecting with patients in their language. Dragon Copilot captures the conversation in any of 58 languages and automatically converts the encounter into a note written in the primary language used in each country.
  • Seamless migration from Dragon Medical One, preserving existing commands, vocabularies, profiles, templates, and AutoTexts.

Scaling across roles, geographies, and devices

Dragon Copilot is designed with role-based experiences that deliver the right capabilities to each clinician, when and where they’re needed. Physicians, nurses, radiologists, and other care team members benefit from workflows tailored to their unique responsibilities—from documentation and care coordination to image interpretation—while organizations maintain consistency, security, and compliance at scale. With a single solution spanning multiple roles, including the only experience built for radiologists and demonstrated outcomes for nurses, healthcare organizations can simplify their technology footprint and drive greater return on investment.

Physicians

Dragon Copilot supports physicians across care settings through EHR‑integrated workflows and a dedicated app available on mobile (iOS and Android), web, and desktop. Physicians can document more efficiently, access timely clinical information, and reduce cognitive load—whether at the point of care or on the go.

Together with partners, Dragon Copilot continues to scale globally and is now available in the United States, Canada, the United Kingdom, Ireland, France, Germany, Austria, Belgium, and the Netherlands.

Nurses

Dragon Copilot enhances nursing workflows by ambiently capturing documentation at the point of care and transforming conversations into structured flowsheet entries. With expanded support for all med-surg flowsheet templates and for additions and removals of lines, drains, and airways (LDAs), nurses can document more completely without disrupting care.

Through a dedicated app available on mobile (iOS and Android), web, and desktop, nurses can also access information from trusted medical sources, query transcripts to surface key patient details, and create concise summaries—without leaving their workflow—reducing clicks and keeping focus on patient care.

Dragon Copilot gives power back to nurses to spend time at the bedside with face-to-face interactions.”

—Stephanie Whitaker, MSN, Registered Nurse, Chief Nursing Officer, Mercy

Nurses using Dragon Copilot have reported reduced cognitive load, faster documentation, and improved patient experience, reinforcing the value of role‑specific AI designed for frontline care. The Dragon Copilot nursing experience is available in the United States.

“I can say that without a doubt, using Dragon Copilot has significantly reduced the time that I’m focused and worrying about sitting down and getting my charting done behind the computer.”

—Christine Dupire, Registered Nurse, Mercy

Radiologists

Paired with PowerScribe One, Dragon Copilot helps minimize repetitive tasks such as reviewing prior reports and automates routine steps in report creation. It surfaces relevant clinical context, integrates customizable AI experiences, and provides intelligent access to credible information—helping radiologists stay focused and deliver high‑quality reports with confidence. The Dragon Copilot radiology experience is currently in preview in the United States.

As we embrace the next frontier of AI, we know that having cloud-based solutions that work seamlessly with our existing products and systems is paramount. Having Dragon Copilot as a companion for PowerScribe One gives me confidence that I can test and benefit from the latest AI advancements with minimal disruptions and distractions.”

—Sean Cleary, MD, Vice Chair of Informatics for Imaging Sciences, University of Rochester Medical Center

Restoring humanity to healthcare through AI

AI will only transform healthcare if it truly serves the people delivering care. Dragon Copilot is built for that purpose—bringing role‑based experiences, hands‑free workflows, and proactive clinical intelligence together in a way that fits naturally into how clinicians work. By unifying information, reducing friction, and extending trusted intelligence across the workflow, Dragon Copilot helps clinicians spend less time managing tasks and more time connecting with patients—restoring focus, confidence, and humanity to the practice of medicine.

Join the more than 100,000 clinicians already using Dragon Copilot

The post Unify. Simplify. Scale: Microsoft Dragon Copilot meets the moment at HIMSS 2026 appeared first on The Microsoft Cloud Blog.

]]>
How to bring human expertise and AI together: 3 impactful initiatives http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2026/02/25/how-to-bring-human-expertise-and-ai-together-3-impactful-initiatives/ Wed, 25 Feb 2026 16:00:00 +0000 http://approjects.co.za/?big=en-us/innovation/blog/2026/02/25/how-to-bring-human-expertise-and-ai-together-3-impactful-initiatives/ See how Microsoft teams combine human expertise and AI to modernize workflows, scale learning, and drive measurable business impact.

The post How to bring human expertise and AI together: 3 impactful initiatives appeared first on The Microsoft Cloud Blog.

]]>
AI is redefining research, content maintenance, and the global learner experience at Microsoft Global Skilling

Microsoft Global Skilling helps people and organizations build the skills they need to thrive in an AI‑powered world. Within Global Skilling, the Learning Lab is the innovation engine—a team focused on designing, testing, and evolving modern learning experiences to continuously improve how skills are developed, validated, and applied in the flow of work. 


AI is reshaping how organizations work. Teams aren’t just adopting new tools—they’re also figuring out how those tools fit into existing workflows, roles, and expectations, all while trying to keep pace with business demands in a rapidly changing landscape. It’s a heavy lift. As the leader of the Learning Lab team, I’m navigating these same pressures, along with my team members, as we balance day-to-day delivery with the need to evolve our processes in real time. That’s why we’re embedding AI assistants and agentic workflows into internal processes—using them not only to work differently but also to learn differently. Through experimentation, we’re uncovering new ways to streamline operations and improve the learner experience for our global audience.  

This blog highlights three of our team’s most impactful AI initiatives that could also benefit your organization. Inspired by these projects, we developed A Practical Guide for Bringing AI into Your Business Processes, featuring real-world examples and actionable ideas for integrating AI and human expertise across your organization. 

A Practical Guide for Bringing AI into Your Business Processes

3 impactful AI initiatives leading the way

1. Reducing time-intensive coordination to optimize research 

The challenge of coordinating teams for research  

Before any learning materials can be built, our team conducts extensive research to understand new technologies, identify required skills, and validate what learners need. This early-stage analysis requires input from multiple stakeholders and a deep review of internal documentation, product roadmaps, and existing training materials.  

How AI is helping accelerate our research tasks and optimize cross-team input 

One of the biggest bottlenecks for our research workflows has been the time it takes to synthesize information and align teams around what a course should achieve. To improve this, we began experimenting with Researcher in Microsoft 365 Copilot and persona-based agents to support our research and planning stages. Our new process looks like this: 

  • Researcher synthesizes internal documentation, product roadmaps, and existing training materials to surface emerging themes and identify knowledge gaps. With the ability to process thousands of pages in minutes, it flags potential course objectives the team might have missed.
  • In parallel, persona-based agents simulate the perspectives of stakeholders from varying teams to help validate ideas before bringing them to the key decision-makers.
  • Throughout this process, our team members guide these AI tools through every step—providing the business context, analyzing AI outputs to identify gaps or inconsistencies, refining direction, and ensuring consideration of broader business objectives.  

With AI handling synthesis and early-stage validation, we’ve reduced the time required for our core research process from two weeks to just one day. These savings compound with every course developed this way, enabling us to redirect focus toward shaping stronger strategies, aligning content with business impact, and accelerating decision-making across teams.

Applying this approach in your organization 

AI-supported research and planning can help you make sense of complex information faster and build alignment earlier in your decision cycles. By using AI to synthesize documents, surface patterns, and validate assumptions, you can reduce the effort required to get teams on the same page. Your team members can then focus on refining strategy, confirming business priorities, and shaping higher-impact decisions. This combination improves speed and clarity throughout cross-functional work.  

Explore A Practical Guide for Bringing AI into Your Business Processes to learn more about how you can apply this in processes like: 

  • Drafting onboarding plans that human resources (HR) leaders can tailor to company culture.
  • Developing quarterly sales plays informed by shifting buyer behavior and competitor activity.
  • Creating campaign briefs rooted in audience insights, market trends, and performance data.
  • Developing forecasting assumptions by synthesizing inputs from sales, operations, and historical data. 

2. Transitioning from manual maintenance to continuous quality improvements 

The challenge of shorter content lifecycles  

We maintain thousands of courses and lab environments as part of our skilling initiatives for Microsoft technologies. With the fast pace of product evolution, it can be challenging to keep learning content accurate and functional.  


How GitHub Copilot became the maintenance partner for the team 

We recognized that the demands for maintaining learning content were increasing beyond our capacity to manage effectively. So we integrated GitHub Copilot into the content maintenance workflow like this: 

  • GitHub Copilot tools analyze content repositories—flagging inconsistencies, identifying outdated examples, and recommending updates based on current documentation.
  • Throughout this process, our team reviews and refines the AI-generated recommendations. When GitHub Copilot flags an issue, we evaluate how those changes might apply to other training courses. We also ensure that all revisions align with learning objectives and verify that security and accessibility standards are met.
  • Then GitHub Copilot helps implement some of the suggested updates, like generating new code samples or suggesting environmental configurations that align with the latest product releases. 

As a result, our team has reduced the time we spend on routine content maintenance by up to 25%. And with these time savings, team members can shift from reactive updates to proactive innovation—evaluating emerging skills, shaping next-generation modules, and exploring how agents, simulations, and personalized learning could improve outcomes. 

Applying this approach in your organization 

AI-assisted maintenance can help you keep large, fast-changing content ecosystems accurate and up to date without overwhelming your teams. By using AI to surface inconsistencies, flag outdated material, and recommend updates, you can dramatically reduce time spent on routine fixes. Your experts can then focus on reviewing changes for accuracy, regulatory needs, and strategic intent. This balance enables you to maintain quality at scale while freeing your teams to invest in higher-value innovation.  

Explore A Practical Guide for Bringing AI into Your Business Processes to learn more about how you can apply this in processes like: 

  • Maintaining and updating sales enablement content as product and service offerings evolve.
  • Keeping product messaging frameworks and campaign assets consistent and up to date.
  • Updating help center articles and support workflows after feature releases.
  • Updating contract templates and clause libraries to align with new regulatory guidance.

3. Delivering inclusive learning at scale through diverse content formats 

The challenge of content relevance and engagement  

Our learners span every continent, speak dozens of languages, and have their own preferred learning methods. Creating multimodal, accessible, and inclusive learning experiences while managing constant content updates was stretching the team thin.  

How AI helps scale and translate content for global learners  

To support different learning styles and languages, we’re piloting how to create immersive, inclusive learning through two experiments with AI: 

  1. We’re using AI tools to turn a single source of training content, like a session transcript or recording, into multiple formats, such as videos, podcasts, and recap summaries. This multimodal output lets us update learning materials at the pace required by our global audience and helps ensure that we’re reaching learners in their preferred formats.
  2. We’re piloting an AI-powered tool that not only translates content but also generates avatars that deliver multilingual voiceovers with more natural lip-sync, eliminating one of the most distracting elements of dubbed content. 

Early results show that we can now recover up to 15 hours per course we develop—time our team can spend on more nuanced work that AI can’t do, like adapting cultural references, verifying that tone and pacing match learning objectives, and maintaining brand voice. 

Applying this approach in your organization 

AI-powered localization can help you deliver content that feels native to every audience you serve, no matter the language or market. By pairing AI’s speed in translation, voiceover, and prompt generation with your team’s expertise in cultural nuance and brand standards, you can scale global engagement without diluting quality. This combination lets you reach more learners, customers, and employees while keeping your message consistent and relevant across regions.

Explore A Practical Guide for Bringing AI into Your Business Processes to learn more about how you can apply this in processes like: 

  • Localizing campaign assets for regional markets across languages and cultural norms.
  • Tailoring pitch decks and demos for industry-specific or region-specific buyers.
  • Creating multilingual chatbot responses and support scripts for global customers.
  • Adapting standard operating procedure and process documentation for different facilities or regional regulations. 

Building skills and strengthening our AI strategy

As AI becomes an extension of the Learning Lab, we’ve discovered that adoption is much more than implementing new tools—it’s also a journey of building technical and human skills across the team. Our experiments require every team member to stretch into new capabilities, from process optimization and innovation to strengthening collaboration and creative problem-solving. As a result, we’ve been able to spend less time on repetitive tasks and dedicate more energy to the kind of creative, relationship-driven work that leads to exceptional learning experiences.


Looking to build skills for you and your teams? Explore AI Skills Navigator, the agentic learning space that brings together AI-powered skilling experiences and credentials that help individuals build career skills and organizations worldwide accelerate their business.

The post How to bring human expertise and AI together: 3 impactful initiatives appeared first on The Microsoft Cloud Blog.

]]>
Microsoft accelerates telecom return on intelligence with a unified, trusted AI platform http://approjects.co.za/?big=en-us/microsoft-cloud/blog/telecommunications/2026/02/24/microsoft-accelerates-telecom-return-on-intelligence-with-a-unified-trusted-ai-platform/ Tue, 24 Feb 2026 17:00:00 +0000 http://approjects.co.za/?big=en-us/innovation/blog/ms-industry/microsoft-accelerates-telecom-return-on-intelligence-with-a-unified-trusted-ai-platform/ AI is driving measurable ROI for telecoms, with Microsoft showcasing new capabilities and unified intelligence at MWC 2026.

The post Microsoft accelerates telecom return on intelligence with a unified, trusted AI platform appeared first on The Microsoft Cloud Blog.

]]>

AI is delivering real, measurable returns for telecom

AI is already delivering measurable business impact across industries, and telecom is among the leaders. A recent IDC study shows operators are achieving 2.8 times return on generative and agentic AI investments, with many leading companies reaching up to 5 times return. Frontier telecoms are realizing even greater returns from AI by making it foundational to how their business operates—from employees and core workflows to the end‑to‑end value chain. These leaders are moving beyond incremental efficiency by using an end-to-end AI platform and unified data approach that embeds AI into everyday operations, enabling faster decisions, tighter execution, and continuous performance improvements across their organization.

With more than 80% of the Fortune 500 building active AI agents, Microsoft Copilot is rapidly becoming essential to how employees think, collaborate, and deliver results. As AI proves its value, telecoms are moving beyond pilots to connected intelligence that elevates customer experiences, replaces manual workflows with autonomous operations, hardens and self‑heals networks, and drives new revenue opportunities. Connected intelligence will differentiate fast-moving telecoms across every area of business operation.

Read more about Frontier telecoms here.

Return on intelligence and trust

For telecoms, achieving value from scalable AI depends on two factors: intelligence and trust. Built from three complementary IQ elements—Work IQ, Fabric IQ, and Foundry IQ—Microsoft IQ is the intelligence layer that connects AI, data, and context across the business. It gives AI agents deep awareness of how people work, how the business operates, and how decisions are made. This intelligence layer accelerates decisions, improves customer experiences, automates operations and networks, and unlocks new ways to monetize AI‑based services. Trust is built through Microsoft’s carrier‑grade control plane, which provides built‑in monitoring and governance across the entire AI platform, including AI agents from our partner ecosystem, to allow telecoms to innovate responsibly, support regulatory compliance, and scale AI with confidence.

At MWC 2026, Microsoft is announcing new technologies that will help telecoms move forward with AI and use intelligence to drive the business. Microsoft delivers this through a single platform that brings AI, unified data, trust, and governance together, enabling telecoms with connected, actionable insights to accelerate innovation and growth.

Building the sovereign, AI-ready edge for telecom

For telecoms, thriving in the era of agentic intelligence begins with a resilient foundation. Today, we’re advancing Microsoft Sovereign Cloud with fully disconnected operations, extending cloud capabilities and AI-ready infrastructure deeper into operator networks than ever before. As demand accelerates for low-latency services, real-time processing, and stronger assurances around data sovereignty and regulatory compliance, the edge has become a critical extension of telecom networks and foundational layer of modern digital infrastructure. This is especially true for regulated industries and mission-critical scenarios where operational resilience and control over data are paramount.

To support these confidential environments, Azure Local offers full-stack capabilities that support customers across connected, intermittently connected, and fully disconnected modes. This is essential for sovereign environments where uninterrupted access to operational, network, or customer-facing systems is non-negotiable. Azure Local disconnected operations keeps critical services running securely without connectivity to the cloud. At the same time, Foundry Local will be able to offer modern infrastructure and support for large AI models. Using the latest graphics processing unit (GPU) infrastructure from partners like NVIDIA, customers with sovereign needs will be able to run models locally on their own hardware, inside strict sovereign boundaries, enabling powerful, local AI inferencing in fully disconnected environments.

Customers can deploy and govern workloads inside their own datacenters, using familiar Microsoft Azure experiences and consistent policies, without depending on continuous connection to public cloud services.

AT&T uses Azure to support its cloud and edge strategy, enabling consistent operations across a distributed network footprint. By applying Azure’s management and governance capabilities across environments, AT&T can bring compute and data processing closer to where services are delivered while maintaining strong security and operational oversight.

As we expand our network edge capabilities, Azure plays a key role in helping us apply cloud-native principles across our distributed infrastructure. The scalability and flexibility of Azure’s adaptive cloud approach allow us to deploy services closer to our customers, maintaining control while providing the reliability and performance they expect from AT&T. This long-standing partnership enables us to innovate and deliver next-generation experiences at the edge.”

—Sherry McCaughan, Vice President, Mobility Core and Services

Azure’s cloud-native management capabilities and global platform enable organizations like AT&T to modernize and scale edge environments, supporting next-generation services while maintaining consistent governance, security, and operational control.

We’re also investing in multi-rack deployment capabilities for Azure Local, extending scale points to support large-scale infrastructure for the most demanding, mission-critical workloads. Customers will be able to expand from single-node and cluster deployments to multi-rack environments designed for high availability, fault isolation, and operational simplicity at scale. Multi-rack deployment on Azure Local is currently in preview and will be available in the coming months.

Microsoft is collaborating with telecom operators to deliver sovereign cloud platforms and managed services that combine hyperscale innovation with local control, enabling enterprises to meet data residency, regulatory, and security requirements while accelerating trusted digital and AI transformation.

Agentic customer experiences that drive growth

With this foundation in place, telecoms can move beyond isolated use cases to scale intelligence across experiences and growth models. The same agentic capabilities that transform customer engagement also unlock new ways to monetize services, reduce cost to serve, and create differentiated value.

Today, telecom customer journeys are fragmented. Customers often switch channels to complete tasks, driving abandonment and cost. AI agents turn customer intent into end‑to‑end action across systems. Microsoft is now offering a telecom agentic store reference framework to replace click‑based journeys with natural‑language interaction. Coordinated AI agents handle discovery, sales, service, billing, and partner offers in the background—customers state their goal and agents deliver the outcome. The result is higher digital completion, faster resolution, and better experiences. This framework also creates a new monetization platform, enabling federated AI marketplaces with built‑in identity, billing, and sovereign deployment for trusted ecosystem commerce at scale. Telecoms are already working with Microsoft and system integrators to adopt this architecture—unifying sales and service, reducing cost‑to‑serve, and creating a scalable foundation for partner‑led innovation. 

FiberCop modernizes edge cloud and contact center

FiberCop runs Italy’s most advanced, far-reaching and pervasive digital network infrastructure. FiberCop recently announced that it has integrated Azure Local into its network, transforming the access infrastructure into an edge cloud platform capable of delivering cloud-native services, virtualized network functions, and advanced workloads while meeting sovereignty and compliance requirements. Today, FiberCop announces that it is accelerating its agentic transformation, moving to an AI‑first contact center model where autonomous AI agents, Copilot, and human expertise work together. By adopting Dynamics 365 Contact Center, FiberCop has begun modernizing customer engagement with unified data, intelligent routing, and AI‑powered self‑service and assisted service that delivers more efficient operations and better customer experiences at scale.

Introducing Ericsson Enterprise 5G Connect to reimagine customer experience

Ericsson announces an ongoing collaboration with Microsoft, introducing the Ericsson Enterprise 5G Connect solution—validated on Microsoft Surface 5G Copilot+ PCs and built on top of Windows 11’s Enterprise Cellular Managed Connectivity (ECMC) capabilities. This new offering enables enterprises to centrally manage secure, seamless 5G connectivity for mobile and hybrid employees, using automatic network switching and robust policy enforcement to enhance productivity and security. IT teams gain scalable management and control, while end users benefit from uninterrupted, AI-powered experiences across private and public 5G networks. The solution is currently being piloted by Ericsson and is in private preview. To learn more, visit our Windows IT Pro blog.

Intelligent business operations, built for telecom

Delivering connected customer experiences depends on what happens behind the scenes. Telecom operations require trusted, governed access to network and customer data. That’s why operators are moving from legacy data warehouses to a modern lakehouse that unifies business and network data.

Microsoft Fabric provides a single, policy‑governed data foundation for real‑time, operational, and analytical data to speed AI insights at scale. Building on this foundation, today we’re announcing that Azure Databricks Lakebase will be available in March 2026, giving telecom operators a managed PostgreSQL environment with next-generation separation of storage and compute for transactional data, providing instant availability, instant clones, and scale-to-zero. This brings online transaction processing (OLTP) capabilities to the Databricks Data Intelligence Platform on Azure, designed for developer performance with low total cost of ownership (TCO) and eliminating the traditional gap between operational systems and the lakehouse.

Partners are building on this data foundation as well. For example, Nokia integrates its data suite with Fabric to securely unify network telemetry and reduce AI use case development time by up to 80%.

MTN transforms fraud prevention with AI

In today’s rapidly evolving digital landscape, identity theft and first-party fraud are escalating at alarming rates, posing significant risks to individuals and businesses across South Africa. MTN has made a bold move to transform its fraud management approach by harnessing advanced Microsoft technologies. Shifting from traditional, reactive methods to a proactive, AI-powered ecosystem, MTN is not only protecting its customers and strengthening revenue defenses but also reinforcing national digital resilience and contributing to a safer, more secure digital economy for all.

Amdocs powers intelligent business operations

Amdocs is making several announcements with Microsoft that deepen integration to deliver next-generation solutions. The first is AI-powered application modernization through the Amdocs Agentic Services platform, embedding Microsoft AI solutions such as Azure OpenAI and Microsoft Foundry into end-to-end modernization and migration to Azure. Second, Amdocs Cognitive Core platform built on amAIz, offering prebuilt agent libraries, cross-domain insights, and telecom-specific AI that integrates with any business or operating system stack and runs securely on Azure. Colt Technology is working with Amdocs and Microsoft to streamline operations and accelerate service delivery with real-time insight.

To learn more about transforming the OSS/BSS with agentic AI, read this blog.

Power autonomous networks with built-in trust and control

As intelligence is embedded across data and operations, the next frontier is the network itself. Agent-driven operations enable networks to move from reactive management to autonomous execution that responds faster, reduces risk, and improves resilience at scale.

Learn more about NOA

Read the blog ↗

To help operators move from pilots to production at scale, Microsoft is evolving its network operations agent (NOA) reference architecture—a proven framework shaped by real-world deployments, industry collaboration, and learnings from Microsoft’s NetAI program.

NOA is built for today’s telecom realities: exploding event volumes, rising complexity, and persistent skills gaps. The latest evolution deepens integration with Microsoft AI and collaboration platforms, strengthens alignment with open standards, and delivers a modular, production-ready path to autonomy—without compromising telco-grade safety, governance, or human oversight. Operators engage AI directly through Microsoft 365 Copilot and Microsoft Teams, while Microsoft Foundry and the Microsoft Agent Framework provide a governed, observable runtime for multi-agent orchestration at scale. Expanded support for TM Forum Open APIs helps ensure interoperability across existing business and operations support systems, making NOA an open, secure foundation for autonomous networks. Read more about NOA.

Leading operators such as Far EasTone Telecom and Vodafone are already applying this blueprint to modernize network operations, reduce human error, accelerate recovery times, and enable engineers to focus on higher-value work.

Far EasTone Telecom (FET) is turning agentic AI into real operations impact 

FET exemplifies how leading operators are turning this architecture into real operational impact. FET is using the NOA framework to redefine cloud-native network operations by embedding agentic AI across its NOC and change management workflows. Today, nearly 60% of its NOC operations are AI-assisted, with about 10,500 operational tasks executed per month, including incident summaries, automated ticket closure, network checks, and proactive voice notifications. AI agents now handle large-scale alarm correlation and root cause analysis in seconds, supporting nearly 7,000 monthly operational queries with an average response time of 16 seconds, and enabling most maintenance actions to complete within one minute. This shift has significantly reduced human error, accelerated recovery times, and allowed engineers to focus on higher-value work.

Vodafone’s journey toward intelligent network operations

Vodafone is working with Microsoft to apply this proven AI‑powered blueprint for autonomous network operations across transport infrastructure and field‑force management. The collaboration combines Vodafone’s deep network expertise with Microsoft Foundry and the NOA framework to modernize how large‑scale telecom networks are operated.

This blueprint is built on Microsoft’s own experience running autonomous agents across its global Azure transport network, where AI continuously monitors performance, identifies root causes, and autonomously manages more than 65% of fiber‑break field dispatches—improving time to repair by up to 25% and accelerating root‑cause analysis by 80%. By applying these proven capabilities to Vodafone’s transport network, the two companies are accelerating the shift toward intelligent, automated transport network operations across the telecom industry.

“By working with Microsoft, we’re combining deep network expertise with proven AI‑powered operations to create something greater than either could achieve alone. Together, we’re building intelligent, automated transport network operations that empower our teams and deliver faster, more resilient connectivity networks for our customers.”

—Alberto Ripepi, Chief Network Officer, Vodafone

Other operators, including AT&T, T-Mobile, Telefónica, and MEO, are adopting Microsoft Foundry as a blueprint for scaling agentic AI across complex, multi-vendor networks.

Today, Kenmei announces it is collaborating with Microsoft to help operators accelerate their path toward autonomous networks by combining Kenmei’s telecom intelligence offer with Azure and Microsoft Fabric to enable scalable analytics and agentic AI–powered operations. Already in use at leading operators like Telefónica and Etisalat (e&), this collaboration brings proven deployments into a broader cloud and AI ecosystem designed to reduce manual effort, speed decision‑making, and unlock new levels of network automation.

As telecoms scale intelligence across networks, operations, and experiences, connectivity remains the starting point. Because AI only delivers impact where access exists, expanding internet access is foundational to an intelligent and inclusive telecom future.

Today, 2.2 billion people around the world remain unconnected.1 To help overcome this barrier, Microsoft pledged to bring access to 250 million people by 2025. We are pleased to share that we’ve expanded internet access to 299 million people through the power of technology and partnerships in communities around the globe. But we know there is more work to do to support unconnected communities and enable global participation in the AI economy.

In support of this ongoing effort, we are unveiling a new collaboration with Starlink that brings together Microsoft’s experience working with governments, local operators, and community partners. With more than 9,000 satellites in low-Earth orbit, Starlink will extend digital infrastructure to rural, agricultural, and hard-to-reach communities. You can read more about how we met this milestone and are continuing to extend AI-enabled connectivity aligned with community needs.

Join us at MWC 2026 to learn more

Frontier telecoms are already proving what’s possible when AI, data, trust, and governance come together on a single platform to power faster operations, autonomous networks, intent-driven engagement, and real return on intelligence.

Join Microsoft at MWC 2026 to see how operators and partners are moving from AI promise to production through real deployments, live demos, and customer stories.


1Facts and Figures 2025, ITU.

The post Microsoft accelerates telecom return on intelligence with a unified, trusted AI platform appeared first on The Microsoft Cloud Blog.

Agentic AI in revenue growth management: From hype to decision intelligence http://approjects.co.za/?big=en-us/microsoft-cloud/blog/retail-and-consumer-goods/2026/02/18/agentic-ai-in-revenue-growth-management-from-hype-to-decision-intelligence/ Wed, 18 Feb 2026 16:00:00 +0000 http://approjects.co.za/?big=en-us/innovation/blog/ms-industry/agentic-ai-in-revenue-growth-management-from-hype-to-decision-intelligence/ Revenue growth management is becoming the connective tissue between growth strategy and execution. Learn how agentic AI can accelerate decision intelligence—grounded in financial truth, governance, and human judgment.

This post is co-authored by Soudip Roy Chowdhury, Chief Product and AI Officer at Asper.AI, and Vibhor Mishra, RGM Business Unit Lead at Asper.AI.

Revenue growth management (RGM) has never been more essential—or more difficult to execute well.

For years, many consumer goods companies could rely on a relatively stable set of playbooks: predictable shopper behavior, consistent channel economics, and promotional mechanics that reliably delivered results. That stability is gone. Consumers are increasingly price-aware and deal-oriented, digital platforms make comparison shopping effortless, and agentic commerce accelerates the journey from intent to purchase, while the margin-volume equation continues to shift. In short: what used to be good enough in pricing, promotions, assortment, and trade investment is now a structural risk.¹

At the same time, the broader fast-moving consumer goods (FMCG) model is under pressure. Industry incumbents are navigating slower demand, a reshaping of channels, erosion of traditional scale advantages, and the relentless rise of digitally enabled business models.² The stakes are clear: RGM is no longer a specialized capability sitting inside sales or finance. It is becoming the connective tissue between growth strategy and execution.

However, many RGM organizations continue to operate with fragmented systems, inconsistent definitions, and analytics that struggle to keep pace with change. In a recent discussion I had with leaders from Asper.AI, Chief Product Officer Soudip Roy Chowdhury and RGM Business Unit Lead Vibhor Mishra, we went straight at this reality—and what it will actually take for Agentic AI to deliver outcomes in RGM, rather than headlines.

Why RGM has to change (and why the timing is urgent)

Boston Consulting Group (BCG) recently argued that, amid economic uncertainty, consumer companies must shift their RGM bias from higher profits and productivity to higher volume and market share—and that winners will master three challenges: winning shopper missions, cross-functional orchestration, and rebuilding infrastructure on AI-enabled tools.³

That framing resonates because it reflects what I see in the field: shoppers are changing faster than conventional processes can interpret, and traditional analytic cycles are too slow for today’s volatility. The opportunity is real—but only if we confront the operational reality inside many organizations:

  • Trade and spend decisions managed through disconnected tools (sometimes Excel and email).
  • Siloed dashboards built by business-unit fringes because there is no shared platform.
  • Inconsistent key performance indicator (KPI) definitions across teams, markets, and retail customers.
  • A shortage of scalable decision support for complex trade-offs (price versus volume, promo ROI versus brand equity, distribution versus mix).

If we don’t fix these foundational issues, agentic narratives risk turning into overly optimistic technology narratives—where the technology story races ahead of the business systems required to benefit from it.

One of the most helpful parts of my conversation with Soudip Roy Chowdhury was his crisp distinction between vanilla retrieval and what truly makes an agentic system useful in RGM.

As he described it, the differentiator is grounding beyond data—combining domain knowledge with organizational knowledge and role-based interpretation. That means capturing not only what the metric is, but how different people in the organization use it to make decisions.

Asper.AI grounds RGM insights in domain, organizational, and role knowledge. The insights retrieved for a CFO are therefore different from those for a Head of RGM, because their KPIs are very different. This approach makes an agentic system far more useful than a vanilla retrieval system.

—Soudip Roy Chowdhury, Chief Product Officer, Asper.AI

This matters enormously in RGM because success is not about producing “an answer.” It’s about navigating trade-offs across levers—pricing, promotions, assortment, trade terms—while reconciling the differing objectives of sales, marketing, finance, and category teams.

In the discussion, Soudip Roy Chowdhury explained how role-specific grounding can live in a knowledge base (for example, a domain ontology in the form of a graph for knowledge organization and reasoning) that maps KPI meaning, data sources, and how business entities relate—enabling agents to respond with nuance rather than generic output.
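To make that idea concrete, here is a minimal sketch of role-based KPI grounding. All KPI names, definitions, sources, and role mappings are invented for illustration; they are not drawn from Asper.AI's actual ontology, which would be a governed knowledge graph rather than a dictionary:

```python
# Minimal sketch of role-based KPI grounding (all names hypothetical).
# A tiny "ontology": each KPI maps to a definition, a data source, and
# the roles that use it; retrieval filters grounded knowledge by role
# before the agent reasons over it.

KPI_ONTOLOGY = {
    "net_revenue": {
        "definition": "Gross sales minus trade spend and returns",
        "source": "finance_ledger",
        "roles": ["CFO", "Head of RGM"],
    },
    "promo_roi": {
        "definition": "Incremental margin divided by promotional spend",
        "source": "trade_planning",
        "roles": ["Head of RGM"],
    },
    "free_cash_flow": {
        "definition": "Operating cash flow minus capital expenditure",
        "source": "finance_ledger",
        "roles": ["CFO"],
    },
}

def ground_query(role: str) -> list[str]:
    """Return the KPIs an agent should reason over for a given role."""
    return sorted(k for k, v in KPI_ONTOLOGY.items() if role in v["roles"])

print(ground_query("CFO"))          # a CFO-specific slice of the ontology
print(ground_query("Head of RGM"))  # a different slice for the RGM lead
```

The retrieval pattern, not the toy data structure, is the point: the same question yields different grounded context depending on who is asking.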

The RGM foundation: From System of Record to System of Intelligence to Agentic AI

Then came a moment I loved—because it turned a complex topic into an executive-ready mental model.

Vibhor Mishra described the prerequisites for an RGM assistant as two foundational layers:

  1. System of Record: The authoritative source of spend decisions, financial data, and account-level profit and loss truth.
  2. System of Intelligence: The ability to bring data together, standardize mappings/assumptions, and operationalize analytics and models (elasticity, forecasting, simulation).
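As one illustration of what "operationalize analytics and models" means in the second layer, the sketch below simulates volume under a price change with a constant-elasticity model. The figures and the elasticity value are hypothetical, and this is a generic textbook formulation, not any vendor's implementation:

```python
# Constant-elasticity price simulation, an illustrative sketch of the
# "System of Intelligence" layer (all figures hypothetical).

def simulate_volume(base_volume: float, base_price: float,
                    new_price: float, elasticity: float) -> float:
    """Project volume under a price change, assuming constant elasticity."""
    return base_volume * (new_price / base_price) ** elasticity

base_volume, base_price = 100_000, 4.00  # units, currency per unit
elasticity = -1.8                        # a price-elastic category

for new_price in (3.60, 4.00, 4.40):
    volume = simulate_volume(base_volume, base_price, new_price, elasticity)
    revenue = volume * new_price
    print(f"price {new_price:.2f} -> volume {volume:,.0f}, revenue {revenue:,.0f}")
```

In practice the elasticity would be fitted from the governed data in the System of Record; the sketch only shows the shape of the simulation an agentic layer could call on demand.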

This is the reality check: Agentic AI cannot compensate for missing financial truth, fragmented trade data, or absent governance. It can accelerate and augment—but it cannot conjure decision-quality inputs out of thin air.

At the same time, Vibhor Mishra offered an important nuance: organizations don’t have to finish the foundation journey before starting agentic work. The two can move in parallel, with agentic value expanding as maturity improves.

From dashboards to orchestration: Why central governance matters

We also discussed the dashboard sprawl many consumer packaged goods (CPG) companies face today. Vibhor Mishra nailed one of the root causes: siloed dashboards often exist because a centralized platform doesn’t—so teams build what they need locally, using their own assumptions and definitions.

And that’s where agentic AI can become a forcing function—not by replacing dashboards overnight, but by creating a new layer above them: an orchestrator that can interpret signals, run scenarios, and recommend actions across levers.

But we were aligned on a key warning: if KPI definitions and return on investment (ROI) logic aren’t governed centrally, then agentic experiences will reproduce the same fragmentation—just faster. Vibhor Mishra emphasized that enterprise design choices (what must be standardized versus configurable) are as important as the technology itself.

The hidden value of agents: Speed to insight (not autonomy)

Perhaps the most provocative point in our discussion was the productivity shift.

Soudip Roy Chowdhury described how a decision request that typically takes a large team of analysts a week or two—to consolidate data, run analysis, iterate, and prepare a leadership-ready view—can become near-instant in an agentic model for information extraction and synthesis (not automated action).

This is where I think many leaders misjudge the adoption path. The near-term breakthrough isn’t “autonomous revenue management.” It’s radically faster cycles of decision intelligence—enabling business users to explore scenarios, pressure-test assumptions, and then bring analysts in to critique and deepen, rather than to assemble.

Human judgment remains central. Agents should recommend, suggest, and collaborate—not override.

Agentic AI is not magic, and it is not meant to replace the hard work of real revenue growth management. What it actually does is cut through the noise so teams can focus on the judgment calls that matter. When you move from scattered dashboards to true decision intelligence, you do not get hype. You get clarity, speed, and better choices.

—Marco Casalaina, Microsoft VP Product Core AI

Where Microsoft innovation fits: Horizontal platforms meet domain depth

A question I care deeply about—and asked directly—is how domain players stay aligned as Microsoft accelerates investments in AI platforms and agents.

Soudip Roy Chowdhury described a co-evolution dynamic: Microsoft provides horizontal capabilities, while domain solutions pressure-test them in real enterprise contexts—sending product feedback, such as benchmarking agent performance, and collaborating with Microsoft teams using tools like Microsoft Foundry and open-source components such as LangChain.

This is how modern enterprise innovation scales: platform, partner, and practitioner. Microsoft’s agentic investments can provide the secure foundation—identity, access, orchestration patterns, and governed data experiences—while domain partners bring the deep RGM decision journeys, ontologies, and workflow embedding required for adoption.

A practical takeaway: A readiness lens leaders can actually use

If you’re a CPG leader evaluating agentic RGM, here’s the simplest way I’d frame it:

  1. Confirm your system of record: Do you have account-level financial truth for trade and spend? Can you allocate funding cleanly across retailers and levers?
  2. Strengthen your system of intelligence: Can you standardize definitions, map data reliably, and operationalize models and simulations?
  3. Deploy agentic experiences where speed creates advantage: Start where faster insight loops deliver measurable wins: scenario exploration, cross-lever interpretation, anomaly detection, and recommendation support—with humans firmly in the loop.
  4. Add a deliberation layer that turns insights into action: Once data-driven hypotheses are formed, the agent convenes the right collaborators to pressure-test assumptions, build consensus, route decisions into the operational workflow, and continuously monitor outcomes—creating a living learning system that blends human and digital labor to execute complex work with end-to-end traceability.

This is how we move agentic AI in RGM from hype to durable value: decision intelligence grounded in business reality.

Explore solutions and more

  • Explore how Microsoft AI for Retail helps consumer goods organizations modernize pricing, promotions, and decision intelligence.
  • Learn how Microsoft embeds governance and accountability into AI systems through its Responsible AI practices.

1 McKinsey: Harnessing revenue growth management for sustainable success

2 BCG: Fast-Moving Consumer Goods (FMCG)

3 BCG: Driving Volume-Led Growth in Consumer Markets

The post Agentic AI in revenue growth management: From hype to decision intelligence appeared first on The Microsoft Cloud Blog.
