AI resources | The Microsoft Cloud Blog
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/topic/ai-resources/
Build the future of your business with AI

AI Decision Brief: How leaders can drive Frontier Transformation
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2026/03/31/ai-decision-brief-how-leaders-can-drive-frontier-transformation/
Tue, 31 Mar 2026 15:00:00 +0000

Microsoft executives answer eight key questions on how to succeed in the new era of AI at work

While adoption of AI technology is now widespread, impact is not. Many organizations are experimenting and running pilot programs, but far fewer have the operating discipline to become what we call Frontier Firms—companies that scale AI in ways that meaningfully reshape work, decisions, and value creation. According to IDC’s Business Opportunity of AI Survey (August 2025), 68% of all respondents use GenAI, but only 22% of organizations worldwide are Frontier Firms.1 These companies are seeing a return on investment in the technology several times greater than that of slower adopters.

This gap is why Microsoft developed a newly revised 2026 edition of the AI Decision Brief, a handbook designed to help leaders and business decision-makers embrace the opportunities of Frontier Transformation. It addresses how AI can become a durable source of advantage: where to focus, how to measure value, how agents change workflows, and how trust, governance, and responsibility enable scale. “This is not simply the next stage of technology adoption,” writes Brad Smith, Microsoft Vice Chair and President. “Frontier Transformation is a leadership moment that asks organizations to fundamentally rethink how people, processes, and decisions work together.”

We believe that this brief answers the questions many executives are asking about how to stay ahead of the curve. The questions below surface what we’re hearing from business leaders across industries as they plan investments, assess readiness, and look ahead. Each reflects a theme explored in depth in the AI Decision Brief and points to how organizations can begin turning AI adoption into lasting impact.

1. How can my company get the biggest impact from AI? 

The biggest impact comes when AI changes how the business operates—not just how fast someone answers an email. “Frontier Transformation is a holistic reimagining of business, aligning AI with human ambition to achieve an organization’s highest aspirations and growth potential,” writes Judson Althoff, CEO of Microsoft commercial business.

3 essentials for building a frontier organization

Get started ›

What does this mean in practice? Frontier Firms are leveraging AI to transform customer engagement, core processes, decision-making, and innovation. For them, AI isn’t confined to one team or one tool. Instead, it’s embedded across the enterprise in an average of seven business functions. That’s when the outcomes compound. These organizations are monetizing AI and outperforming slow adopters with roughly 3x higher returns.1 Agents are accelerating that shift because they don’t just make recommendations; they can take action and complete tasks.

2. How do you graduate beyond early wins with AI adoption?

While AI can boost individual productivity—drafting documents, summarizing meetings, and automating the more tedious aspects of jobs—it can do so much more, according to Jaime Teevan, Chief Scientist and Technical Fellow at Microsoft. “The real opportunity is bigger: not just helping individuals work faster, but enabling teams and organizations to work better, together,” she writes. 

Bring AI into processes

Read the blog ›

Most AI initiatives stall for the same reason most transformations stall: teams prove their value in specific use cases, but leaders don’t change the system around them. The model isn’t the bottleneck—processes, decision rights, and trust are. Frontier leaders, on the other hand, pick a small number of priority workflows and redesign them end to end. That’s how you move from “we got a nice pilot result” to “AI is embedded in how we run the business.”

3. How do I identify the priority workflows where AI can meaningfully change outcomes? 

“AI integration is often framed as a technical problem: which models to use, how to connect systems, how to mitigate risk,” writes Jared Spataro, Microsoft CMO of AI at Work. “But for most organizations, the real constraint on value is not technology, it’s how work is organized and governed. The bigger challenge is centered on management.”

Frontier organizations don’t ask, “Where can we plug in AI to automate a task?” They ask, “Which workflows most directly affect revenue, cost, risk, customer experience, or speed of decision-making?” Frontier leaders focus on embedding AI, agents, and data directly into those areas of high impact. 

4. As AI agents take more action on behalf of employees and teams, how does my role as a leader need to change?

Leadership has become even more important in the agentic era. “When AI systems can plan and execute over many steps, leadership and engineering rigor become the real bottlenecks,” writes Kevin Scott, CTO of Microsoft. “You need teams that are explicit about goals, careful about feedback and evaluation, and thoughtful about where autonomy is earned versus constrained.” 

The greatest risks are unclear intent, ownership, and accountability. Frontier leaders get ahead of this by redefining roles and decision rights early. Humans set outcomes, constraints, and success measures, while agents operate within clearly governed boundaries. That means treating agents like new employees or privileged service accounts—with named owners, least-privilege access, continuous monitoring, and regular review. 
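One way to make that pattern concrete is to model each agent as a named principal with an accountable human owner, an explicit allow-list, and an audit trail. The sketch below is a simplified illustration of our own (the class, agent, and action names are hypothetical, not any Microsoft API), showing the least-privilege idea described above:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Least-privilege policy for one agent, treated like a service account."""
    agent_name: str
    owner: str                  # named, accountable human owner
    allowed_actions: frozenset  # autonomy is earned: start narrow, expand deliberately
    audit_log: list = field(default_factory=list)

    def authorize(self, action: str) -> bool:
        permitted = action in self.allowed_actions
        # Continuous monitoring: every request is logged, allowed or denied
        self.audit_log.append({"action": action, "permitted": permitted})
        return permitted

# Hypothetical invoice-triage agent owned by a finance lead
policy = AgentPolicy(
    agent_name="invoice-triage",
    owner="finance-ops@contoso.example",
    allowed_actions=frozenset({"read_invoice", "draft_reply"}),
)

print(policy.authorize("read_invoice"))    # True: within the allow-list
print(policy.authorize("delete_records"))  # False: outside the agent's earned autonomy
```

The audit log doubles as the input for the regular reviews mentioned above: denied requests show where an agent's scope may need tuning, and unused permissions show where it can be narrowed.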

5. How do you measure the success of AI when it’s embedded across workflows, decisions, and teams—not just individual tasks?

“Early productivity gains from AI are now expected,” writes Alysa Taylor, Microsoft CMO of Commercial Cloud and AI. “But Frontier leaders see beyond those short-term efficiency wins. They understand how AI can also help grow revenue, increase customer acquisitions, reshape processes, and improve operational efficiency.” 

Frontier leaders measure ROI the way they run the business: at the workflow and outcome level, not by counting isolated tasks. Yes, they track early productivity signals, but they don’t stop there—they tie AI to business metrics like faster cycle times, higher quality and consistency, better customer experience, lower risk, and faster decision-making.  

6. We’re under pressure to move fast with AI. Can we tackle security later on?

Great question! The answer is simple: absolutely not. “The AI opportunity is incredible, but speed without security, observability and governance opens the door to significant risk. By embedding these elements from the start, organizations can innovate rapidly while building and fostering trust,” writes Vasu Jakkal, CVP of Microsoft Security Business. 

The moment AI moves beyond pilots and starts touching real data, customers, and decisions, issues with security and accountability can offset gains in efficiency. According to Microsoft’s 2026 Data Security Index, less than half (47%) of companies have fully implemented data security controls for AI. Frontier leaders build observability, Zero Trust security, and clear ownership from day one, so teams can move faster with confidence instead of stopping to clean things up later.  

7. How do you scale AI across an organization without losing control or trust?

“Scaling AI is less about deploying tools and more about preparing people,” writes Nathalie D’Hers, Microsoft CVP of Employee Experience. “A workplace culture grounded in a growth mindset is more important than ever.” Frontier Firms embrace continuous learning and agility. This helps teams fundamentally reimagine processes and think bigger.  

Crucially, Frontier organizations also pair empowerment with guardrails. They give employees access to AI where work actually happens—through copilots, low-code tools, and approved platforms—so innovation isn’t bottlenecked by a small group of specialists. At the same time, they’re very clear about boundaries. That includes shared governance frameworks, approved data sources, identity and access controls, and observability at every layer. That’s what allows creation to scale safely.  

8. How do I balance Frontier Transformation with sustainability? 

“AI and sustainability are often treated as separate agenda items, but they are fundamentally connected,” writes Melanie Nakagawa, Chief Sustainability Officer at Microsoft. “Leaders should understand both sides of that equation: the resource footprint of AI as well as the opportunity it brings to help them operate more efficiently, build smarter, more resilient systems, and lower carbon emissions.”  

As AI grows, it brings real resource and trust questions about environmental impact, supply chains, community impact, and whether the benefits of AI are broadly shared. The Frontier view is that designing for efficiency, responsibility, and equitable diffusion isn’t a nice-to-have; it’s how you unlock durable growth while avoiding backlash, constraints, and extra work later.

At Microsoft, we’re building out AI infrastructure with sustainability in mind while also using AI as a force multiplier for climate progress by optimizing systems, accelerating materials discovery, and improving resource efficiency.     

Next steps to lead in the era of Frontier Transformation

Read the full AI Decision Brief to understand what it takes to lead in the era of Frontier Transformation. The insights, leadership advice, and practical tips in the brief will help your company build and scale an effective AI strategy. Once you have that knowledge base, you’ll need a trusted, reliable set of AI tools to execute it.

Explore Microsoft AI tools and solutions for your Frontier Transformation. 


1 IDC InfoBrief, sponsored by Microsoft, What Every Company Can Learn From Frontier Firms Leading the AI Revolution, IDC #US53838325, November 2025.

The post AI Decision Brief: How leaders can drive Frontier Transformation appeared first on The Microsoft Cloud Blog.

How to introduce agents into your workforce: 5 actions leaders can take
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2026/03/26/how-to-introduce-agents-into-your-workforce-5-actions-leaders-can-take/
Thu, 26 Mar 2026 15:00:00 +0000

How Microsoft helps organizations introduce AI agents responsibly—turning copilots into digital teammates that drive real business impact.

Over the past year, organizations have focused on strengthening the human foundations of AI adoption—helping employees build confidence with copilots, reshaping workflows, and learning how to bring human expertise and machine intelligence together. These shifts have been essential. They created the readiness, skills, and muscle memory needed to move into the next stage of AI-enabled transformation: bringing AI agents into the workforce.

This is where the frontier is forming. While copilots help individuals be more effective, agents act on behalf of people. They carry out tasks, orchestrate multi-step workflows, and operate across systems continuously. And they’re moving quickly from experimentation to mainstream use. An IDC InfoBrief, sponsored by Microsoft, shows that 37% of organizations surveyed use agentic AI, another 25% are experimenting with it, and 24% are planning to use it in the next 24 months.1 Organizations that have already invested in people, skills, and responsible practices may be better prepared to operationalize agents at scale—and convert AI’s promise into real business performance.

Five strategic moves for introducing agents responsibly

The new Agents in the Workforce Handbook builds on those earlier foundations. Where the first blog in this series focused on empowering your people, and the second explored how to pair human judgment with AI systems, this third chapter looks ahead: How do you introduce agents into your workforce responsibly and intentionally? Below are five strategic moves leaders should consider. These are high-level guideposts; the Handbook goes much deeper with templates, examples, and decision frameworks to support implementation.

1. Start with your most persistent pain points

When organizations begin exploring agentic AI, a common challenge is prioritization. Imagining use cases is easy. Choosing where to start is harder. Successful organizations don’t begin with futuristic ideas—they begin with the familiar, recurring friction points that quietly drain time and introduce risk.

These are often the workflows teams have learned to “live with”: manual triage, routine follow-up, coordination across systems, repeated reporting steps, or tasks with high error potential. Leaders should observe how work truly happens—shadowing teams, reviewing process maps, and asking simple but revealing questions:

  • Where do we lose time?
  • What gets done manually that shouldn’t be?
  • What feels broken—but no one owns?

These pain points typically offer the clearest path to early value. Addressing them not only frees capacity but also demonstrates to teams how agents can meaningfully improve the day-to-day. The Agents in the Workforce Handbook includes a readiness assessment and real-world patterns to help leaders identify and sequence the right opportunities.

2. Define your AI goal—and lead the change yourself

Introducing agents isn’t only a technical shift—it’s a leadership shift. Frontier Firms choose to align their early agent initiatives around bold, measurable goals: reducing manual work, accelerating cycle times, improving customer responsiveness, or expanding sales capacity. These goals create alignment and momentum, helping teams understand why agents matter and what success looks like.

But goals alone don’t change culture—leaders do. The organizations that move fastest are those whose executives personally model new ways of working. They use agents in their own workflows, talk openly about learnings, and recognize early adopters who demonstrate impact. They also acknowledge that change requires habit‑building. Experimenting with agents for even 20 to 30 minutes a day can materially improve adoption and confidence.

Skilling plays a central role. As Jeana Jorgensen, Corporate Vice President of Global Skilling, notes:

We’re hearing from many of our customers and partners that they expect employees across different roles to spend about 15 to 20% of their week learning and integrating AI into their daily work.

The Handbook offers guidance for identifying the roles, skills, and operating rhythms needed to support agent adoption.

3. Measure what works—and double down where it does

As with any transformative technology, early wins with agents need to be measurable and repeatable. Leaders should ensure visibility into how agents behave, how frequently they’re used, and the outcomes they produce. This isn’t about policing technology—it’s about giving teams the insights needed to improve and scale what’s working.

Effective organizations treat agent adoption like an operational discipline:

  • They log and monitor agent activity.
  • They measure time saved and business impact generated.
  • They expand agents that demonstrate clear value.
  • They refine or retire agents that don’t.

These data-driven insights help organizations move from experimentation to a consistent, enterprise-wide model for agent development—one where new ideas become shared services rather than isolated automations. The Handbook goes deeper into measurement strategies, including examples of what high-performing organizations track.
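As a minimal illustration of that operational discipline (the agent names, records, and threshold below are hypothetical, not drawn from the Handbook), per-run activity can be logged as simple records and rolled up into expand-or-retire decisions:

```python
# Hypothetical activity log: (agent, minutes_saved, run_succeeded)
activity = [
    ("triage-bot", 12, True),
    ("triage-bot", 15, True),
    ("triage-bot", 14, True),
    ("report-bot", 0, False),
    ("report-bot", 3, True),
]

def summarize(records):
    """Roll per-run records up into per-agent adoption metrics."""
    summary = {}
    for agent, minutes, ok in records:
        s = summary.setdefault(agent, {"runs": 0, "successes": 0, "minutes_saved": 0})
        s["runs"] += 1
        s["successes"] += int(ok)
        s["minutes_saved"] += minutes
    for s in summary.values():
        s["success_rate"] = s["successes"] / s["runs"]
    return summary

def decision(stats, min_success=0.8):
    # Expand agents that demonstrate clear value; refine or retire the rest
    return "expand" if stats["success_rate"] >= min_success else "refine-or-retire"

stats = summarize(activity)
print(decision(stats["triage-bot"]))  # expand
print(decision(stats["report-bot"]))  # refine-or-retire
```

Even a rollup this simple makes the review conversation concrete: which agents earn more scope, and which need tuning before they spread further.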

4. As agents become teammates, optimize continuously

Once an organization begins deploying agents across teams, a new challenge emerges: coordination. Agents that start out as individual productivity tools often become shared digital teammates—relied upon by multiple people, processes, and business functions. With that shift comes the need for thoughtful ownership, governance, and communication.

Successful organizations establish clear roles and responsibilities:

  • Who owns each agent?
  • Who can modify or update it?
  • How are changes communicated to the people who rely on it?
  • What happens when an agent’s behavior needs tuning?

Agents also require continuous improvement. As they’re used, they encounter edge cases, nuanced team preferences, and shifting processes. Over time, agents become more capable, and employees naturally evolve into “AI managers”—guiding digital apprentices the way they onboard and develop human teammates.

The Handbook provides deeper recommendations for governance models, centers of excellence, and cross-team alignment mechanisms that help organizations scale responsibly.

5. Reinvest the time saved—and push into innovation

While early value often shows up as efficiency, the long-term impact of agentic AI is much bigger: it creates renewed capacity for innovation. Frontier Firms understand that the goal isn’t to simply do the same work faster—it’s to free teams to pursue higher-value ideas, explore new business models, and elevate customer experiences.

Across industries, leading organizations are already demonstrating what this reinvestment looks like. Their examples highlight a crucial point: agents are not just workflow optimizers. They’re catalysts for reimagining how organizations deliver value. And the companies that begin investing now are positioning themselves for meaningful advantage.

Treat agents like teammates, not tools

The organizations achieving the strongest results view agents not as automations but as digital collaborators—systems that require feedback, tuning, and iteration. They integrate agents into team rhythms, treat them like growing contributors, and help their people evolve into confident AI managers.

This marks the natural third step in the Frontier journey: after empowering employees and strengthening the partnership between human expertise and AI (as explored in the first two blogs), organizations are now ready to bring digital teammates into the workflow in a structured, scalable way.

If your organization is ready to move from experimentation to scaled impact, the Agents in the Workforce Handbook offers the detailed guidance, examples, and templates to support your next phase of Frontier Transformation.


1 IDC InfoBrief, sponsored by Microsoft, What Every Company Can Learn From Frontier Firms Leading the AI Revolution, IDC #US53838325, November 2025.

The post How to introduce agents into your workforce: 5 actions leaders can take appeared first on The Microsoft Cloud Blog.

A new study explores how AI shapes what you can trust online
https://news.microsoft.com/signal/articles/a-new-study-explores-how-ai-shapes-what-you-can-trust-online/
Thu, 12 Mar 2026 15:00:00 +0000

Microsoft examines how media authentication, provenance, and watermarking can strengthen trust as AI‑generated content accelerates.

You see it all over your social feeds: videos of adorable babies saying oddly grown-up things, public figures making wildly uncharacteristic statements, nature photos too far-fetched to be true. In the era of AI, seeing isn’t always believing.

Deepfakes threaten trust in news, elections, brands and everyday interactions, leading us to question what’s real. Determining what’s authentic or manipulated is the subject of Microsoft’s “Media Integrity and Authentication: Status, Directions, and Futures” report, published today. The study evaluates today’s authentication methods to better understand their limitations, explore potential ways to strengthen them and help people make informed decisions about the online content they consume.

The authors conclude that no single solution can prevent digital deception on its own. Methods such as provenance, watermarking and digital fingerprinting can offer useful information like who created the content, what tools were used and whether it has been altered.

Jessica Young, director of science and technology policy in the Office of the Chief Scientific Officer at Microsoft.

People can be deceived by media if they lack information like its origin and history, or if its information is low-quality or misleading. The goal of the report is to provide a roadmap to deliver more high-assurance provenance information the public can rely on, according to Jessica Young, director of science and technology policy in the Office of the Chief Scientific Officer at Microsoft.

Helping people recognize higher-quality content indicators is increasingly important as deepfakes become more disruptive and as provenance legislation in various countries, including the U.S., introduces even more ways to help people authenticate content later this year.

Media provenance has been evolving for years, with Microsoft pioneering the technology in 2019 and cofounding the Coalition for Content Provenance and Authenticity (C2PA) in 2021 to standardize media authenticity.

Young, co-chair of the study, explains more about what it all means:

What prompted the study?

“The motivation was two-fold,” Young says. “The first is the recognition of the moment we’re in right now. We know generative AI capabilities are becoming increasingly powerful. It’s becoming more challenging to distinguish between authentic content — like content that was captured by a camera versus sophisticated deepfakes — and as a result, there’s a huge uptick right now in interests and requirements to use those technologies that exist to disclose and verify if content was generated or manipulated by AI.

“The moment has been building, and we have a desire to help ensure that these technologies ultimately drive more benefit than harm, based on how they’re used and understood.”

Young adds that the paper is meant to inform the greater media integrity and authentication ecosystem, including creators, technologists, policymakers and others to understand what is and isn’t possible currently and how we can build on it for the future.

What did the study accomplish, and what did you learn?

The report outlines a path to increase confidence in the authenticity of media. The authors propose a direction they refer to as “high-confidence authentication” to mitigate the weaknesses of various media integrity methods.

Linking C2PA provenance to an imperceptible watermark can bring relatively high confidence about media’s provenance, she says.

She notes the report has a lot of caveats too, such as how provenance from traditional offline devices like cameras, which often lack critical security features, can be less trustworthy because it’s easier to alter.

It isn’t possible to prevent every attack or stop certain platforms from stripping provenance signals, so the challenge, Young says, “is figuring out how to surface the most reliable indicators with strong security built in — and, when necessary, reinforce them with additional methods that allow recovery or support manual digital-forensics work.”
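The core idea behind provenance binding can be shown with a toy hash check. This is a deliberate simplification of our own, not the C2PA format — real content credentials use signed manifests, certificate chains, and standardized assertions — but it illustrates why any edit breaks the link between media and its history:

```python
import hashlib

def make_manifest(content: bytes, creator: str) -> dict:
    """Bind creator information to the exact bytes of a piece of media."""
    return {"creator": creator, "content_hash": hashlib.sha256(content).hexdigest()}

def verify(content: bytes, manifest: dict) -> bool:
    """Any change to the content, however small, breaks the binding."""
    return hashlib.sha256(content).hexdigest() == manifest["content_hash"]

photo = b"raw pixels of a newsroom photo"
manifest = make_manifest(photo, "Newsroom Camera 7")

print(verify(photo, manifest))         # unaltered content verifies
print(verify(photo + b"!", manifest))  # a one-byte edit fails verification
```

An imperceptible watermark that survives re-encoding can carry an identifier pointing back to such a manifest — which is how the pairing the report describes can help recover provenance even when platforms strip metadata.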

How is this study different from others?

Young says their study investigated two “underexplored” lines of thought for the three methods of verification. They define the first as sociotechnical attacks, where provenance information or the media itself could be manipulated to make authentic content appear synthetic or fake content seem real during the validation process.

“Imagine you see an authentic image of a global sporting event with 80% of the crowd cheering for the home team,” she says. “The away team engages in an online argument claiming, ‘Hey, no, that’s all a fake crowd.’ Someone could make one small, insignificant edit to a person in the corner of the picture and current methods would deem it AI generated — even if the crowd size was real. These methods that are supposed to support authenticity are now reinforcing a fake narrative, instead of the real one.

“So, knowing how different validators work, even through really subtle modifications, you could manipulate the results the public would see to try to deceive them about content,” she says.

The second key topic builds on the C2PA’s work to make content credentials more durable, while also addressing reliability. This is where the research is especially novel, Young says. “We looked at how provenance information can be added and maintained across different environments — from high-security systems to less secure, offline devices — and what that means for reliability.”

Why is verifying digital media so difficult?

Authenticating media is complex because there’s not a one-size-fits-all solution, Young says.

“You have different formats that have different limitations or trade-offs for the signals they can contain,” she explains. “Whether it’s images, audio, video — not to mention text, which has a whole different array of challenges — and how strong the solutions can be applied there.”

Young says there are different requirements and opinions about what level of transparency is appropriate as well. In some cases, users might not want any of their personal information included in the digital provenance of a piece of media, while in others, creators or artists might want attribution and to opt-in for having their information included.

“So, you have different requirements or even considerations about what goes into that provenance information,” she says. “And then, similar to the field of security, no solution is foolproof. So, all the methods are complementary, but each has inherent limitations.”

Where do we go from here?

Young says that as AI-made or edited content becomes more commonplace, the use of secure provenance of authentic content is becoming increasingly important. Publishers, public figures, governments and businesses have good reason to certify the authenticity of the content they share. If a news outlet shoots photos of an event, for example, tying secure provenance information to those images can help show their audience the content is reliable.

“Government bodies also have an interest in the public knowing that their formal documents or media are reliable information about public interest matters,” Young says.

She adds that as AI modifications to media become “increasingly common” for legitimate purposes, secure provenance can provide important context to help prevent an average reader or viewer from simply dismissing that content as fake or deceptive.

“For the industry and for regulators, we note how important continued user research in this area is to drive towards more consistent and helpful display of this information to the public — to make sure it’s actually meaningful and useful in practice,” Young says.

“We have a limited set of technologies that can assist us, and we don’t want them to backfire from being misunderstood or improperly used.”

Learn more on the Microsoft Research Blog.

The post A new study explores how AI shapes what you can trust online appeared first on The Microsoft Cloud Blog.

How to bring human expertise and AI together: 3 impactful initiatives
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2026/02/25/how-to-bring-human-expertise-and-ai-together-3-impactful-initiatives/
Wed, 25 Feb 2026 16:00:00 +0000

See how Microsoft teams combine human expertise and AI to modernize workflows, scale learning, and drive measurable business impact.

The post How to bring human expertise and AI together: 3 impactful initiatives appeared first on The Microsoft Cloud Blog.

AI is redefining research, content maintenance, and the global learner experience at Microsoft Global Skilling

Microsoft Global Skilling helps people and organizations build the skills they need to thrive in an AI‑powered world. Within Global Skilling, the Learning Lab is the innovation engine—a team focused on designing, testing, and evolving modern learning experiences to continuously improve how skills are developed, validated, and applied in the flow of work. 


AI is reshaping how organizations work. Teams aren’t just adopting new tools—they’re also figuring out how those tools fit into existing workflows, roles, and expectations, all while trying to keep pace with business demands in a rapidly changing landscape. It’s a heavy lift. As the leader of the Learning Lab team, I’m navigating these same pressures, along with my team members, as we balance day-to-day delivery with the need to evolve our processes in real time. That’s why we’re embedding AI assistants and agentic workflows into internal processes—using them not only to work differently but also to learn differently. Through experimentation, we’re uncovering new ways to streamline operations and improve the learner experience for our global audience.  

This blog highlights three of our team’s most impactful AI initiatives that could also benefit your organization. Inspired by these projects, we developed A Practical Guide for Bringing AI into Your Business Processes, featuring real-world examples and actionable ideas for integrating AI and human expertise across your organization. 

A Practical Guide for Bringing AI into Your Business Processes


3 impactful AI initiatives leading the way

1. Reducing time-intensive coordination to optimize research 

The challenge of coordinating teams for research  

Before any learning materials can be built, our team conducts extensive research to understand new technologies, identify required skills, and validate what learners need. This early-stage analysis requires input from multiple stakeholders and a deep review of internal documentation, product roadmaps, and existing training materials.  

How AI is helping accelerate our research tasks and optimize cross-team input 

One of the biggest bottlenecks for our research workflows has been the time it takes to synthesize information and align teams around what a course should achieve. To improve this, we began experimenting with Researcher in Microsoft 365 Copilot and persona-based agents to support our research and planning stages. Our new process looks like this: 

  • Researcher synthesizes internal documentation, product roadmaps, and existing training materials to surface emerging themes and identify knowledge gaps. With the ability to process thousands of pages in minutes, it flags potential course objectives the team might have missed.
  • In parallel, persona-based agents simulate the perspectives of stakeholders from varying teams to help validate ideas before bringing them to the key decision-makers.
  • Throughout this process, our team members guide these AI tools through every step—providing the business context, analyzing AI outputs to identify gaps or inconsistencies, refining direction, and ensuring consideration of broader business objectives.  

With AI handling synthesis and early-stage validation, we’ve reduced the time required for core research processes from two weeks to just one day. These savings extend to every course developed with this method, enabling us to redirect focus toward shaping stronger strategies, aligning content with business impact, and accelerating decision-making across teams.

Applying this approach in your organization 

AI-supported research and planning can help you make sense of complex information faster and build alignment earlier in your decision cycles. By using AI to synthesize documents, surface patterns, and validate assumptions, you can reduce the effort required to get teams on the same page. Your team members can then focus on refining strategy, confirming business priorities, and shaping higher-impact decisions. This combination improves speed and clarity throughout cross-functional work.  

Explore A Practical Guide for Bringing AI into Your Business Processes to learn more about how you can apply this in processes like: 

  • Drafting onboarding plans that human resources (HR) leaders can tailor to company culture.
  • Developing quarterly sales plays informed by shifting buyer behavior and competitor activity.
  • Creating campaign briefs rooted in audience insights, market trends, and performance data.
  • Developing forecasting assumptions by synthesizing inputs from sales, operations, and historical data. 

2. Transitioning from manual maintenance to continuous quality improvements 

The challenge of shorter content lifecycles  

We maintain thousands of courses and lab environments as part of our skilling initiatives for Microsoft technologies. With the fast pace of product evolution, it can be challenging to keep learning content accurate and functional.  

3 skilling insights

Read the blog ›

How GitHub Copilot became the maintenance partner for the team 

We recognized that the demands for maintaining learning content were increasing beyond our capacity to manage effectively. So we integrated GitHub Copilot into the content maintenance workflow like this: 

  • GitHub Copilot tools analyze content repositories—flagging inconsistencies, identifying outdated examples, and recommending updates based on current documentation.
  • Throughout this process, our team reviews and refines the AI-generated recommendations. When GitHub Copilot flags an issue, we evaluate how those changes might apply to other training courses. We also ensure that all revisions align with learning objectives and verify that security and accessibility standards are met.
  • Then GitHub Copilot helps implement some of the suggested updates, like generating new code samples or suggesting environment configurations that align with the latest product releases. 

As a result, our team has reduced the time we spend on routine content maintenance by up to 25%. And with these time savings, team members can shift from reactive updates to proactive innovation—evaluating emerging skills, shaping next-generation modules, and exploring how agents, simulations, and personalized learning could improve outcomes. 

Applying this approach in your organization 

AI-assisted maintenance can help you keep large, fast-changing content ecosystems accurate and up to date without overwhelming your teams. By using AI to surface inconsistencies, flag outdated material, and recommend updates, you can dramatically reduce time spent on routine fixes. Your experts can then focus on reviewing changes for accuracy, regulatory needs, and strategic intent. This balance enables you to maintain quality at scale while freeing your teams to invest in higher-value innovation.  

Explore A Practical Guide for Bringing AI into Your Business Processes to learn more about how you can apply this in processes like: 

  • Maintaining and updating sales enablement content as product and service offerings evolve.
  • Keeping product messaging frameworks and campaign assets consistent and up to date.
  • Updating help center articles and support workflows after feature releases.
  • Updating contract templates and clause libraries to align with new regulatory guidance.

3. Delivering inclusive learning at scale through diverse content formats 

The challenge of content relevance and engagement  

Our learners span every continent, speak dozens of languages, and have their own preferred learning methods. Creating multimodal, accessible, and inclusive learning experiences while managing constant content updates was stretching the team thin.  

How AI helps scale and translate content for global learners  

To support different learning styles and languages, we’re piloting how to create immersive, inclusive learning through two experiments with AI: 

  1. We’re using AI tools to turn a single source of training content, like a session transcript or recording, into multiple formats, such as videos, podcasts, and recap summaries. This multimodal output lets us update learning materials at the pace required by our global audience and helps ensure that we’re reaching learners in their preferred formats.
  2. We’re piloting an AI-powered tool that not only translates content but also generates avatars that deliver multilingual voiceovers with more natural lip-sync, eliminating one of the most distracting elements of dubbed content. 

Early results show that we can now recover up to 15 hours per course we develop—time our team can spend on more nuanced work that AI can’t do, like adapting cultural references, verifying that tone and pacing match learning objectives, and maintaining brand voice. 

Applying this approach in your organization 

AI-powered localization can help you deliver content that feels native to every audience you serve, no matter the language or market. By pairing AI’s speed in translation, voiceover, and prompt generation with your team’s expertise in cultural nuance and brand standards, you can scale global engagement without diluting quality. This combination lets you reach more learners, customers, and employees while keeping your message consistent and relevant across regions.  

Explore A Practical Guide for Bringing AI into Your Business Processes to learn more about how you can apply this in processes like: 

  • Localizing campaign assets for regional markets across languages and cultural norms.
  • Tailoring pitch decks and demos for industry-specific or region-specific buyers.
  • Creating multilingual chatbot responses and support scripts for global customers.
  • Adapting standard operating procedure and process documentation for different facilities or regional regulations. 

Building skills and strengthening our AI strategy

As AI becomes an extension of the Learning Lab, we’ve discovered that it’s much more than just implementing new tools—it’s also a journey of building technical and human skills across the team. Our experiments require every team member to stretch into new capabilities, from process optimization and innovation to strengthening collaboration and creative problem-solving. As a result, we’ve been able to spend less time on repetitive tasks and to dedicate more energy to the kind of creative, relationship-driven work that leads to exceptional learning experiences. 

3 strategies to start your frontier transformation

Read the blog ›

Looking to build skills for you and your teams? Explore AI Skills Navigator, the agentic learning space that brings together AI-powered skilling experiences and credentials, helping individuals build career skills and helping organizations worldwide accelerate their business.

The post How to bring human expertise and AI together: 3 impactful initiatives appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2026/02/25/how-to-bring-human-expertise-and-ai-together-3-impactful-initiatives/feed/ 0
80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier http://approjects.co.za/?big=en-us/security/blog/2026/02/10/80-of-fortune-500-use-active-ai-agents-observability-governance-and-security-shape-the-new-frontier/ http://approjects.co.za/?big=en-us/security/blog/2026/02/10/80-of-fortune-500-use-active-ai-agents-observability-governance-and-security-shape-the-new-frontier/#respond Tue, 17 Feb 2026 15:45:00 +0000 Read Microsoft’s new Cyber Pulse report for straightforward, practical insights and guidance on new cybersecurity risks.

The post 80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier appeared first on The Microsoft Cloud Blog.

]]>
Today, Microsoft is releasing the new Cyber Pulse report to provide leaders with straightforward, practical insights and guidance on new cybersecurity risks. One of today’s most pressing concerns is the governance of AI and autonomous agents. AI agents are scaling faster than some companies can see them—and that visibility gap is a business risk.1 Like people, AI agents require protection through strong observability, governance, and security using Zero Trust principles. As the report highlights, organizations that succeed in the next phase of AI adoption will be those that move with speed and bring business, IT, security, and developer teams together to observe, govern, and secure their AI transformation.

Read the latest Cyber Pulse report

Agent building isn’t limited to technical roles; today, employees in various positions create and use agents in daily work. More than 80% of Fortune 500 companies today use active AI agents built with low-code/no-code tools.2 AI is ubiquitous in many operations, and generative AI-powered agents are embedded in workflows across sales, finance, security, customer service, and product innovation. 

With agent use expanding and transformation opportunities multiplying, now is the time to get foundational controls in place. AI agents should be held to the same standards as employees or service accounts. That means applying long‑standing Zero Trust security principles consistently:

  • Least privilege access: Give every user, AI agent, or system only what they need—no more.
  • Explicit verification: Always confirm who or what is requesting access using signals such as identity, device health, location, and risk level.
  • Assume compromise can occur: Design systems expecting that cyberattackers will get inside.
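As a sketch, the three principles above map onto a simple default-deny authorization check for agent access requests. The class and field names here are illustrative assumptions, not part of any Microsoft SDK:

```python
from dataclasses import dataclass, field

@dataclass
class AccessRequest:
    principal: str      # human user or AI agent identity
    resource: str       # data, system, or workflow being requested
    device_healthy: bool
    risk_level: str     # "low", "medium", or "high"

@dataclass
class Policy:
    # Least privilege: each principal maps to the minimal set of resources it needs
    grants: dict = field(default_factory=dict)

    def authorize(self, req: AccessRequest) -> bool:
        # Explicit verification: confirm the identity-bound grant plus device and risk signals
        allowed = req.resource in self.grants.get(req.principal, set())
        verified = req.device_healthy and req.risk_level != "high"
        # Assume compromise: default deny unless every check passes
        return allowed and verified

policy = Policy(grants={"sales-agent-01": {"crm:read"}})
print(policy.authorize(AccessRequest("sales-agent-01", "crm:read", True, "low")))   # True
print(policy.authorize(AccessRequest("sales-agent-01", "crm:write", True, "low")))  # False
```

The same check applies unchanged whether the principal is an employee, a service account, or an agent—which is the point of treating non-human users under the existing Zero Trust model.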

These principles are not new, and many security teams have implemented Zero Trust principles in their organization. What’s new is their application to non‑human users operating at scale and speed. Organizations that embed these controls within their deployment of AI agents from the beginning will be able to move faster, building trust in AI.

The rise of human-led AI agents

AI agent adoption is growing in regions around the world, from the Americas to Europe, the Middle East, and Africa (EMEA), and Asia.

A graph showing the percentages of the regions around the world using AI agents.

According to Cyber Pulse, leading industries such as software and technology (16%), manufacturing (13%), financial institutions (11%), and retail (9%) are using agents to support increasingly complex tasks—drafting proposals, analyzing financial data, triaging security alerts, automating repetitive processes, and surfacing insights at machine speed.3 These agents can operate in assistive modes, responding to user prompts, or autonomously, executing tasks with minimal human intervention.

A graphic showing the percentage of industries using agents to support complex tasks.
Source: Industry Agent Metrics were created using Microsoft first-party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

And unlike traditional software, agents are dynamic. They act. They decide. They access data. And increasingly, they interact with other agents.

That changes the risk profile fundamentally.

The blind spot: Agent growth without observability, governance, and security

Despite the rapid adoption of AI agents, many organizations struggle to answer some basic questions:

  • How many agents are running across the enterprise?
  • Who owns them?
  • What data do they touch?
  • Which agents are sanctioned—and which are not?

This is not a hypothetical concern. Shadow IT has existed for decades, but shadow AI introduces new dimensions of risk. Agents can inherit permissions, access sensitive information, and generate outputs at scale—sometimes outside the visibility of IT and security teams. Bad actors might exploit agents’ access and privileges, turning them into unintended double agents. Like human employees, an agent with too much access—or the wrong instructions—can become a vulnerability. When leaders lack observability in their AI ecosystem, risk accumulates silently.

According to the Cyber Pulse report, 29% of employees have already turned to unsanctioned AI agents for work tasks.4 This gap is noteworthy: it suggests that many organizations are deploying AI capabilities and agents before establishing appropriate controls for access management, data protection, compliance, and accountability. In regulated sectors such as financial services, healthcare, and the public sector, the consequences can be particularly significant.

Why observability comes first

You can’t protect what you can’t see, and you can’t manage what you don’t understand. Observability means having a control plane across all layers of the organization (IT, security, developers, and AI teams) to understand:  

  • What agents exist 
  • Who owns them 
  • What systems and data they touch 
  • How they behave 

In the Cyber Pulse report, we outline five core capabilities that organizations need to establish for true observability and governance of AI agents:

  • Registry: A centralized registry acts as a single source of truth for all agents across the organization—sanctioned, third‑party, and emerging shadow agents. This inventory helps prevent agent sprawl, enables accountability, and supports discovery while allowing unsanctioned agents to be restricted or quarantined when necessary.
  • Access control: Each agent is governed using the same identity‑ and policy‑driven access controls applied to human users and applications. Least‑privilege permissions, enforced consistently, help ensure agents can access only the data, systems, and workflows required to fulfill their purpose—no more, no less.
  • Visualization: Real‑time dashboards and telemetry provide insight into how agents interact with people, data, and systems. Leaders can see where agents are operating, understand dependencies, and monitor behavior and impact—supporting faster detection of misuse, drift, or emerging risk.
  • Interoperability: Agents operate across Microsoft platforms, open‑source frameworks, and third‑party ecosystems under a consistent governance model. This interoperability allows agents to collaborate with people and other agents across workflows while remaining managed within the same enterprise controls.
  • Security: Built‑in protections safeguard agents from internal misuse and external cyberthreats. Security signals, policy enforcement, and integrated tooling help organizations detect compromised or misaligned agents early and respond quickly—before issues escalate into business, regulatory, or reputational harm.
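To make the first capability concrete, here is a minimal sketch of a centralized agent registry with quarantine support. The data model is hypothetical and far simpler than a production control plane:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    agent_id: str
    owner: str
    status: str          # "sanctioned", "third-party", or "shadow"
    data_scopes: tuple   # systems and data the agent touches

class AgentRegistry:
    """Single source of truth for all agents across the organization."""

    def __init__(self):
        self._agents = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.agent_id] = record

    def quarantine_shadow_agents(self) -> list:
        # Restrict unsanctioned agents discovered outside governance
        flagged = [a for a in self._agents.values() if a.status == "shadow"]
        for agent in flagged:
            agent.status = "quarantined"
        return [a.agent_id for a in flagged]

    def inventory(self) -> dict:
        # Inventory view supporting accountability and discovery
        return {a.agent_id: a.status for a in self._agents.values()}

registry = AgentRegistry()
registry.register(AgentRecord("hr-bot", "HR Ops", "sanctioned", ("hr:policies",)))
registry.register(AgentRecord("mystery-agent", "unknown", "shadow", ("finance:reports",)))
print(registry.quarantine_shadow_agents())  # ['mystery-agent']
```

In practice the registry would feed the other capabilities: access-control policies keyed by `agent_id`, and dashboards rendered from the inventory and telemetry.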

Governance and security are not the same—and both matter

One important clarification emerging from Cyber Pulse is this: governance and security are related, but not interchangeable.

  • Governance defines ownership, accountability, policy, and oversight.
  • Security enforces controls, protects access, and detects cyberthreats.

Both are required. And neither can succeed in isolation.

AI governance cannot live solely within IT, and AI security cannot be delegated only to chief information security officers (CISOs). This is a cross-functional responsibility, spanning legal, compliance, human resources, data science, business leadership, and the board.

When AI risk is treated as a core enterprise risk—alongside financial, operational, and regulatory risk—organizations are better positioned to move quickly and safely.

Strong security and governance do more than reduce risk—they enable transparency. And transparency is fast becoming a competitive advantage.

From risk management to competitive advantage

This is an exciting time for leading Frontier Firms. Many organizations are already using this moment to modernize governance, reduce overshared data, and establish security controls that allow safe use. They are proving that security and innovation are not opposing forces; they are reinforcing ones. Security is a catalyst for innovation.

According to the Cyber Pulse report, the leaders who act now will mitigate risk, unlock faster innovation, protect customer trust, and build resilience into the very fabric of their AI-powered enterprises. The future belongs to organizations that innovate at machine speed and observe, govern, and secure with the same precision. If we get this right, and I know we will, AI becomes more than a breakthrough in technology—it becomes a breakthrough in human ambition.

Get the full Cyber Pulse report

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Data Security Index 2026: Unifying Data Protection and AI Innovation, Microsoft Security, 2026.

2Based on Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

3Industry and Regional Agent Metrics were created using Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

4July 2025 multi-national survey of more than 1,700 data security professionals commissioned by Microsoft from Hypothesis Group.

Methodology:

Industry and Regional Agent Metrics were created using Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the past 28 days of November 2025. 

2026 Data Security Index: 

A 25-minute multinational online survey was conducted from July 16 to August 11, 2025, among 1,725 data security leaders. 

Questions centered around the data security landscape, data security incidents, securing employee use of generative AI, and the use of generative AI in data security programs to highlight comparisons to 2024. 

One-hour in-depth interviews were conducted with 10 data security leaders in the United States and United Kingdom to garner stories about how they are approaching data security in their organizations. 

Definitions: 

Active Agents are 1) deployed to production and 2) have some “real activity” associated with them in the past 28 days.  

“Real activity” is defined as 1+ engagement with a user (assistive agents) OR 1+ autonomous runs (autonomous agents).  
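The two definitions above reduce to a small predicate, sketched here (the function name and parameters are illustrative):

```python
def is_active_agent(deployed_to_production: bool,
                    user_engagements_28d: int,
                    autonomous_runs_28d: int) -> bool:
    """An agent is 'active' if it is deployed to production AND has real
    activity: 1+ user engagement OR 1+ autonomous run in the past 28 days."""
    real_activity = user_engagements_28d >= 1 or autonomous_runs_28d >= 1
    return deployed_to_production and real_activity

print(is_active_agent(True, 0, 3))   # True: autonomous runs count as real activity
print(is_active_agent(True, 0, 0))   # False: deployed but idle
print(is_active_agent(False, 5, 0))  # False: engaged in testing, never deployed
```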

The post 80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=en-us/security/blog/2026/02/10/80-of-fortune-500-use-active-ai-agents-observability-governance-and-security-shape-the-new-frontier/feed/ 0
How to start your Frontier Transformation: 3 strategies to start with people http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2026/02/09/how-to-start-your-frontier-transformation-3-strategies-to-start-with-people/ http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2026/02/09/how-to-start-your-frontier-transformation-3-strategies-to-start-with-people/#respond Mon, 09 Feb 2026 16:00:00 +0000 http://approjects.co.za/?big=en-us/microsoft-cloud/blog/?p=7705 Frontier Firms turn human ambition into ROI, using AI and agents to accelerate growth, margins, and employee confidence.

The post How to start your Frontier Transformation: 3 strategies to start with people appeared first on The Microsoft Cloud Blog.

]]>
AI is no longer experimental—it’s reshaping margins, reducing cycle times, and accelerating revenue growth for companies that move decisively. Frontier Firms are already capturing these gains, leaving slow adopters behind. According to a recent study from IDC, Frontier organizations see three times higher ROI from AI than slow adopters. Another differentiator emerged from our own research: 71% of employees at Frontier Firms say their company is thriving, compared with just 39% globally.

Frontier leaders aren’t simply bolting new technology onto their existing operations. As Microsoft Chief Executive Officer of Commercial Business Judson Althoff has shared in recent articles and keynotes, these leaders are taking a human-centered approach to AI transformation. The people closest to the work understand the real bottlenecks and opportunities. By equipping them with AI, leaders unlock practical solutions that drive measurable performance gains.

3 essentials for building a Frontier organization

Here’s Althoff’s outline for a Frontier approach to using AI and agents that puts capability directly into each employee’s hands.

1. Start with your employees to amplify ambition

At the heart of every Frontier business flow is the notion of democratizing intelligence. Human ambition is at the core, coupled with your AI assistants and agents to get real work done.

—Judson Althoff, Chief Executive Officer of Commercial Business, Microsoft

The idea: The point isn’t to simply deploy more technology, but to deploy it in ways that unlock more potential in people to solve their hardest problems and create more business impact. 

Why it matters: According to IDC,1 AI adoption is accelerating past the initial experimentation phase, with 68% of companies using AI and 37% using agents. However, providing access to the technology is not the same as providing the guidance and skilling needed to unlock its potential. 

The shift: Frontier leaders focus on applying agents where they matter most—the priority workflows that define performance and growth. AI is at its most powerful when employees have the space and the guidance they need to imagine, experiment, and pursue bolder ideas. 

The big picture: Frontier leaders don’t start with AI capabilities. They start with human ambition, then design the systems, workflows, and guardrails that allow that ambition to scale responsibly. This requires treating AI adoption as a management system—not an IT rollout—with executives and business decision-makers actively redesigning workflows end-to-end.

2. Expand across every business function

There’s a maker in every one of us, and the Frontier Firm has a maker in every room of the house.

—Judson Althoff, Chief Executive Officer of Commercial Business, Microsoft

The idea: The people closest to the challenge are often closest to the opportunity. As AI becomes more accessible, creativity moves from the edges of the organization to the center so that everyone is empowered to innovate.

A striking data point: Frontier Firms aren’t leaving AI adoption to the IT department—they are making it a company-wide leadership priority. According to IDC research, Frontier Firms are using the technology across seven business functions on average.

Real-world innovations: Mercedes-Benz scaled AI innovation across its global production network, diagnosing efficiency declines and reducing energy consumption of buildings and machines—including 20% energy savings in one paint shop. And Althoff highlights how Toyota is pioneering AI intelligence in manufacturing with the O-beya system, a multi-agent AI system that simulates expert discussions virtually. O-beya can auto-select AI agents in fields like fuel efficiency, along with drivability, noise and vibration, energy management, and power management to pinpoint causes and suggest solutions. 

The takeaway: Broadening access to agents can unleash innovation. Frontier leaders don’t need to script how employees should use the technology—they just need to ensure that there are proper guardrails around a wide space for experimentation.

3. Trust, governance, and integration determine ROI

The idea: AI can create more value when people trust it enough to use it. Trust is what allows AI-powered innovation to scale beyond isolated pilots. And that requires human oversight with “observability at every layer of the stack,” according to Althoff. 

The challenge: Not every organization has put the right safeguards in place yet. Microsoft’s 2026 Data Security Index reports that only 47% of companies have fully implemented data security controls for AI.   

The solution: Frontier leaders must ensure security and be explicit about human-in-the-loop observability as a cornerstone of transformation. People adopt AI confidently when they understand how decisions are made, how data flows, and how systems behave—and when to intervene as needed. Finally, Frontier organizations don’t implement new technology and then slow down—or backtrack—to implement responsible practices. They design for trust from the start so they can keep moving quickly. 

Actions you can take to drive measurable impact

The idea: The organizations that will win in the Frontier era are those that view AI not as a one-off tool rollout but as a leadership discipline. They start by clarifying ambition, giving people the space and agency to act, and building trust early so transformation can scale across the business. Importantly, they use AI themselves to guide decisions, surface insights, and stress test ideas, all while keeping humans at the center of their business transformation. 

Where to start: Microsoft’s new Prompt Guide for Business Leaders was designed to help leaders get a handle on the changing AI landscape and use the technology itself to stress test their ideas and strategies in response to it. The guide shows how to:  

  1. Assess readiness
  2. Identify value 
  3. Map workflows 
  4. Build a roadmap 
  5. Plan for risk 
  6. Define actionable next steps

Example prompt: “Show me the top three workflows where agents could reduce cycle time by at least 20% based on our current operations.”

From vision to value in the Frontier era

The guide demonstrates how AI can be a thinking partner and helps leaders develop a strategy so their people can harness the technology to achieve goals, innovate, and unlock more value.

Innovation with AI

What every company can learn from Frontier Firms leading the AI revolution


1 IDC InfoBrief: sponsored by Microsoft, What Every Company Can Learn From Frontier Firms Leading the AI Revolution, IDC # US53838325, November 2025.

The post How to start your Frontier Transformation: 3 strategies to start with people appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2026/02/09/how-to-start-your-frontier-transformation-3-strategies-to-start-with-people/feed/ 0
Beyond Davos 2026: 5 practices to align AI transformation and sustainability http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2026/01/28/beyond-davos-2026-5-practices-to-align-ai-transformation-and-sustainability/ http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2026/01/28/beyond-davos-2026-5-practices-to-align-ai-transformation-and-sustainability/#respond Wed, 28 Jan 2026 16:00:00 +0000 At Davos 2026, leaders are aligning AI transformation with sustainability—outlined in the Strategic Guide: Aligning AI Transformation with Sustainability Goals.

The post Beyond Davos 2026: 5 practices to align AI transformation and sustainability appeared first on The Microsoft Cloud Blog.

]]>
The conversations at the World Economic Forum meeting in Davos, Switzerland, are always centered on the pressing issues spanning business, politics, climate, and society. This year’s meeting was no different. AI has been at the center of these conversations over the past few years, although I noticed a shift in the tone this year. Leaders are beginning to view AI not as a standalone technology, but as a catalyst—one that will shape their environmental impact, their operational resilience, and their long-term success. AI is no longer an abstract promise; it is a practical lever redefining how organizations work, scale, and create value while managing trust and responsibility.

At Microsoft, we see this shift clearly in our conversations with customers globally. Leaders are moving quickly to scale AI, while remaining accountable for sustainability commitments to customers, investors, regulators, and employees. Too often, these goals are positioned as tradeoffs. In practice, they are reinforcing. When AI transformation is approached with intent and discipline, it can drive stronger business performance while advancing sustainability outcomes.

That belief is the foundation of our new Strategic Guide: Aligning AI Transformation with Sustainability Goals.

Why AI transformation and sustainability belong together

The most meaningful impact from AI comes not from isolated pilots, but from transformation—when intelligence is embedded across strategy, operating model, and culture. That’s the premise of Microsoft’s Frontier transformation AI vision, where organizations are enriching employee experiences, reinventing customer engagement, reengineering core business processes, and bending the curve on innovation.

2025: the frontier firm is born

Read the blog ↗

What’s often overlooked is that these same shifts deliver sustainability gains. More efficient processes require less energy and fewer resources, better data reduces waste and overproduction, and modern cloud and AI architectures—when designed intentionally—can shrink digital footprints while increasing speed and resilience.

Five practices for sustainable AI transformation

Our new Strategic Guide: Aligning AI Transformation with Sustainability Goals makes this connection explicit and practical, offering five essential practices leaders can apply today to turn AI ambition into measurable business and sustainability outcomes.

  1. Adopt a modern cloud strategy.
    Moving workloads to efficient, hyperscale cloud environments is often the single biggest step organizations can take to reduce energy use while improving performance. Modern cloud platforms enable organizations to scale AI intelligently—optimizing compute, storage, and cooling in ways that are difficult to achieve on‑premises.
  2. Assess your cloud provider’s sustainability and trust goals.
    An organization’s environmental footprint increasingly extends beyond its own walls. Transparency, renewable energy commitments, and responsible datacenter operations matter because your partners’ practices become part of your sustainability equation.
  3. Manage data responsibly for efficient and accurate AI.
    Efficient data pipelines, strong governance, and thoughtful lifecycle management do more than reduce risk. They also reduce unnecessary compute and storage, helping AI systems become more accurate, scalable, and sustainable.
  4. Optimize cloud workloads.
    As AI moves from pilots to production, sustainability outcomes increasingly depend on how workloads are designed and run in the cloud. Right‑sizing compute, reducing idle resources, and streamlining data movement lower energy use while improving performance and cost control.
  5. Fit the model to the mission.
    With efficient cloud foundations in place, leaders can focus on selecting the right AI models for the right jobs. Aligning model choice with business objectives, performance requirements, and sustainability goals enables organizations to scale AI responsibly—maximizing impact without unnecessary complexity or resource use.
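The right‑sizing idea in practice 4 above can be made concrete with a minimal sketch. The instance names, utilization figures, and threshold below are hypothetical, invented for illustration; in a real deployment these numbers would come from your cloud provider’s monitoring metrics:

```python
# Hypothetical utilization data: (instance name, average CPU %, vCPU count).
# In practice, these figures would come from your cloud monitor's metrics.
workloads = [
    ("batch-etl", 12.0, 16),
    ("inference-api", 68.0, 8),
    ("report-gen", 7.5, 4),
]

def rightsizing_candidates(workloads, cpu_threshold=20.0):
    """Flag instances whose average CPU stays below the threshold --
    candidates for a smaller SKU or consolidation, which lowers both
    cost and energy use."""
    return [name for name, avg_cpu, _ in workloads if avg_cpu < cpu_threshold]

print(rightsizing_candidates(workloads))  # ['batch-etl', 'report-gen']
```

Even a simple report like this makes idle capacity visible, which is the first step toward the dual business and sustainability return the guide describes.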

Together, these practices help leaders move beyond aspiration to execution—delivering what the guide describes as a dual return: stronger business performance alongside reduced environmental impact.


What the research shows

AI can deliver better results—faster and more sustainably

In a simple experiment highlighted in the Strategic Guide: Aligning AI Transformation with Sustainability Goals, Microsoft set out to understand how efficiently AI could perform a common knowledge work task.

Five professionals were asked to summarize a 3,000-word technical report into 200 words. Completing the task took a median of 41 minutes and consumed an estimated 13.7 watt-hours of laptop energy.

Using a single prompt, Microsoft Copilot completed the same task in under a minute—using just 0.29 watt-hours of datacenter energy. That’s roughly 55 times faster and 47 times more energy efficient. Independent reviewers also rated the AI-generated summary higher for clarity, accuracy, completeness, and overall quality.
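The reported ratios follow directly from the measured figures. A quick check, assuming "under a minute" means roughly 45 seconds for the Copilot run (an assumption for illustration; the guide gives only the rounded ratios):

```python
# Figures reported in the Strategic Guide experiment.
human_minutes = 41       # median time for the five professionals
human_wh = 13.7          # estimated laptop energy, watt-hours
copilot_minutes = 0.75   # "under a minute"; 45 seconds assumed here
copilot_wh = 0.29        # datacenter energy, watt-hours

speedup = human_minutes / copilot_minutes   # time ratio
efficiency = human_wh / copilot_wh          # energy ratio

print(round(speedup), round(efficiency))    # 55 47
```

The energy ratio (13.7 / 0.29 ≈ 47) does not depend on the assumed runtime, which is why it is the more robust of the two figures.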

The takeaway is clear: when AI is applied thoughtfully, it can reduce time, energy consumption, and friction—while delivering stronger outcomes.


What this looks like in practice

Across industries, organizations are already demonstrating how AI transformation and sustainability reinforce one another.

ABB, a global leader in electrification and automation, is using AI to help energy- and asset-intensive industries operate more efficiently while meeting increasingly ambitious sustainability goals. The Genix Industrial AI Platform helps ABB customers achieve results ranging from 25% efficiency gains in data centers to 18% energy savings in cement production.

In the construction sector, Giatec is tackling one of the world’s most carbon-intensive materials: concrete. Built on Microsoft Azure, Azure IoT Hub, and Azure OpenAI in Foundry Models, Giatec’s intelligent platform optimizes mix designs, has reduced carbon emissions by 2.5 million tons, and has increased profit margins for concrete producers by up to 100%.

Space Intelligence uses AI to turn vast amounts of satellite data into trusted, actionable insights for global climate and conservation efforts. The company moved to Microsoft Foundry and the Planetary Computer ecosystem, reducing the time required to map the world’s forests by 75% and completing coverage of more than 50 countries in just one year—work that would otherwise have taken six years, delaying the ability to drive and verify real-world climate impact.

Becoming a Frontier organization—responsibly

These examples point to a broader trend: the organizations leading in AI are also redefining what responsible innovation looks like. Frontier organizations don’t treat sustainability as a separate initiative or reporting exercise. They design it into their transformation from the start.

Solving systemic challenges like climate change requires collaboration—across value chains, ecosystems, and sectors. It also requires leaders who are willing to ask better questions about how technology is deployed, measured, and governed.

This perspective is demonstrated by Microsoft’s recent announcement on community-first AI infrastructure. As we scale AI, we have a responsibility to consider not only what these systems can do, but how and where they are built. That means investing in infrastructure that supports local communities, prioritizes renewable energy, manages water responsibly, and is designed with transparency and long-term partnership in mind. Building AI responsibly isn’t just about reducing risk—it’s about earning trust and ensuring that the benefits of innovation are shared broadly—from the datacenter outward.

Used thoughtfully, AI can help us make smarter decisions, operate more efficiently, and unlock entirely new ways of creating value—while staying within planetary boundaries. Used carelessly, it risks accelerating the very challenges we’re trying to solve.

That’s why clarity matters. Frameworks matter. And practical guidance matters.

What leaders can do next

If you are responsible for shaping your organization’s AI strategy, sustainability agenda, or both, I encourage you to explore the Strategic Guide: Aligning AI Transformation with Sustainability Goals. It is designed to help you cut through complexity, identify where to start, and move forward with clear, actionable strategies.

At Microsoft, we’re committed to helping our customers become Frontier organizations that lead with innovation, responsibility, and impact.

The challenges we face are complex. But with the right strategy, the right technology, and a shared commitment to progress, AI can help us build a more sustainable and prosperous future—for everyone.

Strategic Guide: Aligning AI Transformation with Sustainability Goals


The post Beyond Davos 2026: 5 practices to align AI transformation and sustainability appeared first on The Microsoft Cloud Blog.

How AI helps neurodivergent professionals showcase their strengths
https://news.microsoft.com/source/features/ai/how-ai-helps-neurodivergent-professionals-showcase-their-strengths/
Tue, 13 Jan 2026 19:26:03 +0000
Explore how a number of business professionals with neurodivergent traits—including autism and attention-deficit/hyperactivity disorder (ADHD)—are finding greater confidence and efficiency through AI.

The post How AI helps neurodivergent professionals showcase their strengths appeared first on The Microsoft Cloud Blog.

Kim Akers settles into a corner table at a coffee shop near her Seattle home. The hum of conversation and clatter of cups fade into the background as the Microsoft executive begins another day leading large teams, managing family life and navigating complex challenges — not just in business, but in the way her mind works.

Akers lives with ADHD, dyslexia and dysgraphia, meaning tasks like reading, writing and organizing information require extra effort and creativity. She recalls having to turn down an invitation to read a passage at her brother’s wedding, and the confusion in one of the first teams she led at work when she referred to everyone by their first names, although several shared the same one, because she couldn’t easily read more complicated last names.

But as technology has evolved, so has Akers’ toolkit. AI-powered aids such as Copilot are helping her manage the cognitive load, shifting the focus from hurdles to strengths so she can communicate and lead in ways that once felt out of reach. She’s part of a growing wave of business professionals with neurodivergent traits — differences in brain function, including autism and attention-deficit/hyperactivity disorder (ADHD) — who are finding greater confidence and efficiency through AI.

There’s so many positive things that come out of having a brain that thinks differently.

Kim Akers
“When I saw the ability to take an input in, like here’s what I’m trying to communicate in an email, and then get it back in seconds and have it be 90% of the way there, that was a game changer,” says Akers, who uses Copilot at work and at home. “When the tech got good enough that you could use prompts, it really effectively cut down a lot of your prep work.”

Now that she can set her own meetings with Copilot’s help in Outlook, she has more control over her calendar and her days. She uses Microsoft 365 Copilot across the apps to do things like summarize documents, write emails and streamline meeting preparation by building lists of questions to ask her team about projects underway.

The tool helps her analyze sales data and draft outlines for presentations. It even helps her support her kids with their homework by generating practice problems or breaking down big assignments into manageable steps.

Dr. Cornelia C. Walther, a researcher and author who focuses on “prosocial AI” — systems designed to amplify human potential and foster equity (photo provided by Walther)
“Neurodivergent leaders who harness the full range of their natural and artificial assets are a beautiful illustration of the potential that the hybrid future offers for all of us,” says researcher and author Dr. Cornelia C. Walther, who focuses on “prosocial AI” — systems designed to amplify human potential and foster equity.

AI can be a bridge to greater inclusion and a connector that helps people participate more fully in society, says Walther, a senior fellow at the Wharton Neuroscience Initiative and Harvard’s Learning and Innovation Lab. The tools can help people with neurodivergence curate a new inner dialogue, moving beyond the self-judgment that can come with feeling different, she says.

“AI can serve as a sort of translator, not of language, but of ability,” Walther says. “It can make sure there is a path that connects your ability and makes it useful in the way in which society is currently normed.”

Recent research from professional services network EY underscores this, finding that generative AI can reduce barriers and support more inclusive ways of working. That’s significant for a workforce where an estimated 15-20% of people — and an even higher share of Gen Z — identify as neurodivergent.

In the EY survey of 300 employees with disabilities or neurodivergence across 17 organizations worldwide, respondents described how tools like Copilot helped with initiating tasks, organizing thoughts, spotting mistakes and improving accuracy. They said Copilot helped them stay on top of emails, focus in meetings instead of taking notes, and draft documents, spreadsheets and presentations — especially useful for those with dyslexia.

The study found Copilot’s impact goes beyond productivity. Participants said the tool’s support in making it easier to communicate, manage information and stay organized in turn boosted their confidence, motivation and impact. Many noted that Copilot helped them play to their strengths and overcome common hurdles, with 68% saying it reduced work anxieties and 71% saying it gave them hope.

Hiren Shukla, who founded the Neuro-Diverse Centers of Excellence at EY Global (photo provided by EY)
Neurodivergent professionals don’t just benefit from AI tools; they’re often the ones who find the most creative and effective ways to use them, says Hiren Shukla, who founded EY’s global neurodiversity program and lives with ADHD and dyslexia.

When EY ran a six-week innovation sprint with neurodivergent team members using Copilot earlier this year, Shukla says, ideas poured in: 60 to 80 process improvement suggestions, many sparked by the inventive approaches employees took to tackle problems.

“It’s not just AI helping neurodivergence,” Shukla says. “It’s the power of neurodivergence maximizing the use of Copilot. When you harness that divergence and partner with AI, you’ll see greater innovation, higher use cases, more ideation and application of AI.”

As organizations increasingly recognize the value of neurodivergent talent, and as AI tools become more inclusive, the ripple effects go beyond individual careers and corporate innovation to benefit everyone, he says.

This dynamic is especially pronounced at the leadership level, he says, where disclosure is often rare and role models are few.

“We hear a lot about frontline workers using AI, but not enough about neurodivergent leaders,” Shukla says. “Having executives like Kim Akers share their stories is crucial. It activates other leaders out there so they see themselves, lean in more and celebrate how they use AI, whether they disclose their neurodivergence or not.”

AI tools are creating opportunities for people who have been historically left out of mainstream companies and institutions, says Maitreya Shah, the American Association of People with Disabilities’ technology policy director.

Maitreya Shah, the American Association of People with Disabilities’ technology policy director (photo provided by Shah)
“AI also gives you a level of independence and privacy for things you might not want to ask for help with from others,” he says, such as being able to communicate more effectively or understanding complicated yet sensitive health or financial documents. “That feeling of agency, of being able to do things independently, with AI helping you without involving family members or caregivers — all of that feels very transformative.”

As technology removes barriers, it also helps make room for the unique qualities neurodivergent professionals bring to their teams. For example, people with neurodivergence sometimes have a little extra empathy for and curiosity about others, Akers says, recognizing that they don’t necessarily know “what everybody’s bringing to the table.”

That curiosity draws Akers to set aside time every night to experiment with new tools and prompts, whether it’s exploring a competitor’s product, trying out a new Copilot feature or reading up on the latest advances in AI.

“I like to get my hands dirty, to actually physically try it and see what happens,” she says. “That’s how I stay up on top of it, just because it’s changing so fast.”

But it’s not only about keeping pace with technology; it’s about staying open to new ways of working and connecting. Akers credits her neurodivergence with making her more willing to lean into trial and error and with helping her appreciate the different perspectives her colleagues bring.

AI can serve as a sort of translator, not of language, but of ability.

Dr. Cornelia C. Walther
“When you’re neurodivergent, you have to always be figuring out little hacks,” she says. “You spend a lot of time learning from other people, like, ‘That worked for you, let me try it out.’ Collaborating, problem-solving, being creative, not being stuck on one way to do something, but being pretty open to trying things, and if they don’t work, just trying again with the next thing.”

It’s a blend of empathy, curiosity and adaptability that Akers sees as a leadership advantage — one that’s increasingly vital as AI tools reshape the workplace. By embracing experimentation and valuing difference, she’s not just finding ways to make her own work easier; she’s helping build a culture where everyone’s strengths have room to shine. It’s a commitment she carries into her role as co-executive sponsor of Microsoft’s Disability and Neurodiversity Inclusion Networks, groups dedicated to supporting and empowering employees across the company.

“There are so many positive things,” she says, “that come out of having a brain that thinks differently.”

Lead photo: Kim Akers, chief operations officer for Microsoft’s commercial business and co-executive sponsor of Microsoft’s Disability and Neurodiversity Inclusion Networks (photo by Scott Eklund)

From awareness to action: Building a security-first culture for the agentic AI era
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/12/10/from-awareness-to-action-building-a-security-first-culture-for-the-agentic-ai-era/
Wed, 10 Dec 2025 16:00:00 +0000
Microsoft helps leaders secure AI adoption with governance, training, and culture—turning cybersecurity into a growth and trust accelerator.

The post From awareness to action: Building a security-first culture for the agentic AI era appeared first on The Microsoft Cloud Blog.

The insights gained from Cybersecurity Awareness Month through Microsoft Ignite 2025 demonstrate that security remains a top priority for business leaders. It serves as a strategic lever for organizational growth, fosters trust, and accelerates AI innovation. The Work Trend Index 2025 indicates that over 80% of leaders are currently using agents or plan to do so within the next 12 to 18 months. While AI introduces risks such as oversharing, data leakage, compliance gaps, and agent sprawl, business and security leaders can address these issues in part by:

  1. Preparing for the integration of AI and agents.
  2. Strengthening training so that everyone has the necessary skills. 
  3. Fostering a culture that prioritizes cybersecurity. 

Preparing for the integration of AI and intelligent agents

Preparing for AI and agent integration calls for careful strategy, thoughtful business planning, and organization-wide adoption under solid governance, security, and management. Microsoft’s AI adoption model offers a step-by-step guide for businesses embarking on this journey, with actionable insights and solutions to manage AI risks.

Strengthening training so that everyone has the necessary skills

Technology alone isn’t enough. People are your strongest defense—and the foundation of trust. That’s why skilling emerged as a central theme throughout these past months and will continue beyond. Frontier Firms—those structured around on-demand intelligence and powered by “hybrid” teams of humans plus agents—lead by fostering a culture of continuous learning. Our blog “Building human-centric security skills for AI” offers insights and guidance you can apply in your organization.  

  • Lean into your unique human strengths: Your team’s judgment, creativity, and experience are irreplaceable. Take time to invest in upskilling and reskilling them, so they can confidently guide and manage AI tools responsibly and securely. Explore Microsoft Learn for Organizations for resources to support your learning journey.
  • Stay curious and agile through continuous learning: Building security resilience is an ongoing process. Regularly refresh your AI and security training, offer time and resources for employees to explore new skills, and create a supportive, engaging environment that motivates continuous growth. Explore AI Skills Navigator, our agentic learning space, for AI and security training tailored to different roles.

Investing in skilling doesn’t just reduce risk—it accelerates innovation by giving teams the confidence to explore new AI capabilities securely. 

Skilling is an ongoing practice that needs to constantly evolve alongside the business and technology landscape. Staying ahead requires an enterprise-wide strategy that aligns ever-changing business priorities with always-on skill-building. 

—Jeana Jorgensen, Corporate Vice President, Microsoft Learning

Fostering a culture that prioritizes security

As AI impacts everyone’s role, make security awareness and responsible AI practices shared priorities. Encourage your team to weave security thinking into their daily routines—creating a safer environment for all. As Vasu Jakkal, Corporate Vice President of Microsoft Security highlighted in her blog “Cybersecurity Awareness Month: Security starts with you,” it is critical that security become part of your organization’s culture and norms. 

Check out our new e-book, Skilling for Secure AI: How Frontier Firms Lead the Way, for practical steps leaders can take to upskill their workforce in identity management, data governance, and responsible AI practices.

From awareness to action

In the agentic AI era, people continue to be our most valuable resource. It’s essential to empower them with AI and equip them with the skills they need to use AI responsibly and securely. Cybersecurity awareness should go beyond designated months or campaigns; true awareness means taking meaningful action.   

Here are three actions you can take today to maximize your AI investments: 

  1. Share the Be Cybersmart Kit with your employees. It includes tips for protecting yourself from fraud and deepfakes, guidance on safe AI usage, and key security best practices.
  2. Invest in people: Focus on upskilling initiatives that support your AI transformation, cloud modernization, and security-first strategies.
  3. Champion a security-first culture: Ensure cybersecurity is integral to every business discussion and woven into your overall strategy. 

Microsoft guide for securing the AI-powered enterprise


Cybersecurity Awareness Month: Security starts with you
http://approjects.co.za/?big=en-us/security/blog/2025/10/01/cybersecurity-awareness-month-security-starts-with-you/
Wed, 01 Oct 2025 16:00:00 +0000
Make the most out of Cybersecurity Awareness Month with resources from Microsoft.

The post Cybersecurity Awareness Month: Security starts with you appeared first on The Microsoft Cloud Blog.

At Microsoft, security is our number one priority, and we believe that cybersecurity is as much about people as it is about technology. As we move into October and kick off Cybersecurity Awareness Month, this time of year really makes me think about how important online safety is—not just at work, but for my family and friends too. I often find myself sharing tips with loved ones on how to stay safe online, because building strong security habits and keeping them top of mind has become a key part of how I protect myself and those around me.

Explore Microsoft Cybersecurity Awareness resources

As part of the Microsoft Secure Future Initiative (SFI), we have committed to embed security into every layer of our technology, culture, and governance—placing security above all else. Since its launch in November 2023, SFI has mobilized the equivalent of more than 34,000 engineers to proactively reduce risk and strengthen security across Microsoft and the products and services we offer our customers. A great example of this is mitigating advanced multifactor authentication attacks, where phishing-resistant multifactor authentication now protects 100% of production system accounts and 92% of employee productivity accounts. In addition, we continue to reduce the risk of compromise during new employee setup by enforcing video-based verification, now at 99%.1

Enabling your security-first approach

This year, we have also developed new resources and tools to support security professionals in keeping their organizations secure, particularly as we enter this next era of AI. Building upon our learnings with SFI, we have created SFI patterns and practices, which is a new library of actionable guidance designed to help organizations implement security at scale.

In addition to best practices for security professionals, we continue to add articles to our Be Cybersmart Kit, a great starting point for security professionals who need to educate their organizations on how to stay safe. The Be Cybersmart Kit contains articles on AI safety, device security, domain impersonation, fraud, secure sign-in, and phishing. The kit is just one of the many resources available on the Microsoft Cybersecurity Awareness site.

Be Cybersmart

Help educate everyone in your organization with cybersecurity awareness resources and training curated by the security experts at Microsoft.

Get the Be Cybersmart Kit.

Those seeking more in-depth resources can access expert-level learning paths, certifications, and technical documentation to continue their cybersecurity education. And for students pursuing the field of cybersecurity, the Microsoft Cybersecurity Scholarship Program and educational opportunities like Microsoft Elevate are here to help. The goal of all these programs is to help foster a culture that puts security and continuous learning first for students and professionals alike.

Security-first in action: Franciscan Alliance

A great example of a security-first culture, especially around education and awareness training, is Franciscan Alliance, a non-profit Catholic health care organization based in Indiana. Franciscan Alliance employs a proactive and interactive strategy for cybersecurity awareness and employee education.

“We believe cybersecurity education should be continuous, engaging, and empowering—because informed employees are our strongest defense.”

—Jay Bhat, Chief Information Security Officer (CISO), Franciscan Alliance

The organization conducts monthly phishing simulations and quarterly assessments to expose staff to realistic scenarios consistently. Employees who do not pass the quarterly assessments are provided with additional training rather than being penalized, which supports a culture centered on learning and development. Training programs incorporate gamification elements to enhance accessibility and retention. Additionally, employees receive a monthly newsletter covering relevant security topics that support safe practices both professionally and personally.

During Cybersecurity Awareness Month, weekly editions are distributed, along with timely updates on emerging threats, including breaches and attacks. Franciscan Alliance also organizes threat briefings with external partners and utilizes resources such as Microsoft’s Cybersecurity Awareness materials to inform its training initiatives.

Developing security competencies in the age of AI

As organizations rapidly embrace AI, making security the first priority is not just a best practice—it’s a necessity. AI systems are powerful tools that can transform business productivity, but without robust governance and security measures, they can also introduce significant risks. To address these challenges and empower security-first leadership, we invite C-level executives to register for Microsoft’s upcoming webinar “Trust in AI: Accelerate Business Growth with Confidence,” which will feature critical discussions on how to build trust in AI for your organization.


Additionally, Microsoft’s Chief Product Officer of Responsible AI Sarah Bird will moderate the panel, “Cyber and AI, Strategic Risk and Competitive Advantage,” at the NASDAQ Summit on October 21, 2025, at the New York Stock Exchange, where industry experts will provide guidance on governance and security for AI. In this session, experts will discuss real-world use cases, regulatory developments, and the strategic implications of integrating AI into enterprise environments. Events such as these are incredible opportunities for executives to deepen their understanding and lead with confidence in the age of AI.

Get the Be Cybersmart Kit

Make the most out of Cybersecurity Awareness Month

We hope that these resources provide you with the learning, training, and confidence to set you and your organizations up for success—both this month and beyond. Now is the time to build a culture with a security-first mindset by making security part of your daily habits at work, home, and everywhere else. A security-first mindset means staying informed, proactively protecting digital assets, and encouraging others to do the same. Security is a team sport. By promoting vigilance and shared responsibility, we can create a safer world for all.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1April 2025 SFI progress report.
