The Microsoft Cloud Blog http://approjects.co.za/?big=en-us/microsoft-cloud/blog/ Fri, 14 Feb 2025 22:51:17 +0000 en-US hourly 1 https://wordpress.org/?v=6.7.1 Maximizing AI’s potential: Insights from Microsoft leaders on how to get the most from generative AI http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/18/maximizing-ais-potential-insights-from-microsoft-leaders-on-how-to-get-the-most-from-generative-ai/ http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/18/maximizing-ais-potential-insights-from-microsoft-leaders-on-how-to-get-the-most-from-generative-ai/#respond Tue, 18 Feb 2025 16:00:00 +0000 Get an overview of the 2025 AI Decision Brief, a Microsoft report on how generative AI is impacting businesses and how to maximize AI at your organization.

The post Maximizing AI’s potential: Insights from Microsoft leaders on how to get the most from generative AI appeared first on The Microsoft Cloud Blog.

]]>
Generative AI has been on a phenomenal growth trajectory over the past few years. We’re seeing businesses across industries using AI to increase productivity, streamline processes, and accelerate innovation. As generative AI applications continue to become more powerful, the question isn’t whether organizations will take advantage of AI, but how they can use it most effectively.

At Microsoft, our mission is to empower every person and every organization on the planet to achieve more. In this age of generative AI, we’re committed to sharing what we’ve learned to help further this mission. That’s why we wrote the 2025 AI Decision Brief: Insights from Microsoft and AI leaders on navigating the generative AI platform shift

This report is packed with perspectives from top Microsoft leaders and insights from AI innovators, along with stories of companies across industries that have transformed their businesses using generative AI. It’s also full of pragmatic tips to help your company with its own AI efforts. 

Here’s a more detailed look at what you’ll find in the report.

The state of generative AI today 

The world has embraced generative AI with unprecedented speed. While it took seven years for the internet to reach 100 million users, ChatGPT reached those numbers in just two months.1 And although generative AI is relatively new to the market, adoption is rapidly expanding. In fact, current and planned usage among enterprises jumped to 75% in 2024 from 55% in 2023, according to an IDC study.2  

Put another way, AI is rapidly evolving into what economists call a general-purpose technology. But getting to the point where everyone on the planet has AI access and takes advantage of that access will require some effort, including: 

  • Committing to responsible, trustworthy AI.
    For all people, organizations, and nations to embrace AI, it must be responsible, ethical, fair, and safe. As Microsoft Vice Chair and President Brad Smith says in this report, “Broad social acceptance for AI will depend on ensuring that AI creates new opportunities for workers, respects enduring values of individuals, and addresses the impact of AI on local resources such as land, energy, and water.” 
  • Overcoming adoption challenges.
    Organizations face several challenges in adopting generative AI, such as skill shortages, security concerns, and regulation and compliance issues. Training employees to use AI and building data privacy, security, and compliance into your AI adoption plan are essential.
  • Understanding the winning formula.
    There’s a striking difference between customers in the AI exploration stage and those who have fully embraced it. The highest-performing organizations gain almost four times the value from their AI investments than those just getting started. Plus, those high performers are implementing generative AI projects in a fraction of the time.2

Where generative AI is headed

AI capabilities are doubling at a rate four times that of historical progress.2 This exponential growth tells us that the effects of AI-powered automation, scientific discovery, and innovation will also accelerate. We expect generative AI to revolutionize operations, enable new and disruptive business models, and reshape the competitive landscape in many ways, including:

  • The future of work.
    As the use of generative AI in companies continues to grow, employees are starting to collaborate with AI rather than just treating it as a tool. This means learning to work with AI iteratively and conversationally. “Effective collaboration involves setting expectations, reviewing work, and providing feedback—similar to managing an employee,” explains Jared Spataro, Microsoft Chief Marketing Officer, AI at Work. 
  • The organizations leading innovation.
    Startups, software development companies, research organizations, and co-innovation labs where startups and software giants collaborate on solutions will all continue to shape AI innovation.  
  • Sustainable AI.
    Generative AI is helping build a more sustainable future thanks to tools that integrate renewable energy into grids, reduce food waste, and support socially and environmentally beneficial actions.

How to advance generative AI in your organization 

As we help companies move from talking about AI to translating it into lasting results, we’ve gained a unique perspective on the generative AI strategies that drive business impact. You’ll find many of them in this report, including:

  • Best practices for using generative AI at scale.
    Get tips for developing a scalable AI strategy that best suits your organization, implementing your AI adoption plan, and managing your AI efforts over time. 
  • Ways to accelerate your AI readiness.
    Get checklists for creating your organization’s AI business strategy, technology and data strategy, implementation strategy, cultural and mindset shift, and governance plan. 
  • Customer success stories.
    See how businesses across industries—including healthcare, energy, transportation, and finance—are demonstrating what’s possible with AI now, and in the future. Plus, explore which Microsoft and AI tools they’re using to succeed.

Maximize generative AI with insights from Microsoft leaders

We couldn’t be more excited about the promise of generative AI. Whether you’ve already begun using AI at your organization or are just getting started, we’re here to help you ease the journey and maximize your results.

Get The 2025 AI Decision Brief now for Microsoft AI leadership perspectives on: 

  • Empowering the future: AI access for us all—Brad Smith, Vice Chair and President.
  • How AI is revolutionizing IT at Microsoft—Nathalie D’Hers, CVP Microsoft Digital (IT).
  • Learnings on the business value of AI from IDC—Alysa Taylor, Chief Marketing Officer, Commercial Cloud and AI.
  • The future of work is AI-powered—Jared Spataro, Chief Marketing Officer, AI at Work.
  • Microsoft’s commitment to supporting customers on their AI transformation journey—Judson Althoff, Executive Vice President and Chief Commercial Officer.
  • How software development companies are paving the way for AI transformation—Jason Graefe, Corporate Vice President, ISV and Digital Natives.
  • How to stay ahead of emerging challenges and cyberthreats—Vasu Jakkal, Corporate Vice President, Microsoft Security Business.
A blurry image of a screen

2025 AI Decision Brief

Empower your organization and learn how AI is reshaping businesses through insights shared by Microsoft leaders


1 Benj Edwards, “ChatGPT sets record for fastest-growing user base in history, report says: Intense demand for AI chatbot breaks records and inspires new $20/mo subscription plan,” Ars Technica, February 1, 2023.

2 IDC InfoBrief, sponsored by Microsoft, 2024 Business Opportunity of AI, IDC# US52699124, November 2024.

The post Maximizing AI’s potential: Insights from Microsoft leaders on how to get the most from generative AI appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/18/maximizing-ais-potential-insights-from-microsoft-leaders-on-how-to-get-the-most-from-generative-ai/feed/ 0
5 key features and benefits of retrieval augmented generation (RAG) http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/13/5-key-features-and-benefits-of-retrieval-augmented-generation-rag/ http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/13/5-key-features-and-benefits-of-retrieval-augmented-generation-rag/#respond Thu, 13 Feb 2025 16:00:49 +0000 Let’s briefly uncover the future of AI-powered language understanding and generation through the lens of retrieval augmented generation.

The post 5 key features and benefits of retrieval augmented generation (RAG) appeared first on The Microsoft Cloud Blog.

]]>
The rapid advancement of AI has ushered in an era of unprecedented capabilities, with large language models (LLMs) at the forefront of this revolution. These powerful AI systems have demonstrated remarkable abilities in natural language processing, generation, and understanding. However, as LLMs continue to grow in size and complexity, new challenges have emerged, including the need for more accurate, relevant, and contextual responses.

Enter retrieval augmented generation (RAG)—an innovative approach that seamlessly integrates information retrieval with text generation. This powerful combination of retrieval and generation has the potential to revolutionize applications from customer service chatbots to intelligent research assistants.

Let’s briefly uncover the future of AI-powered language understanding and generation through the lens of retrieval augmented generation.

Key features and benefits of RAG

An infographic displaying a four-step process showing how retrieval augmented generation works
Figure 1. Four-step process showing how RAG works.

Here are five key features and benefits that will help you understand RAG better.

1. Current and up-to-date knowledge

RAG models rely on external knowledge bases to retrieve real-time and relevant information before generating responses. LLMs were trained at a specific time and on a specific set of data. RAG allows for responses to be grounded on current and additional data rather than solely depending on the model’s training set.

Benefit: RAG-based systems are particularly effective when the data required is constantly changing and being updated. By incorporating real-time data, RAG patterns expand the breadth of what can be accomplished with an application, including live customer support, travel planning, or claims processing.

For example, in a customer support scenario, a RAG-enabled system can quickly retrieve relevant and accurate product specifications, troubleshooting guides, or customer’s purchase history, allowing users to resolve their issues efficiently. This capability is crucial in customer-support applications—where accuracy is paramount—because it not only enhances the user experience and fosters trust but also encourages the continued use of the AI system, helping to increase customer loyalty and retention.

2. Contextual relevance

RAG excels in providing contextually rich responses by retrieving data that is specifically relevant to the user’s query. This is achieved through sophisticated retrieval algorithms that identify the most pertinent documents or data snippets from a vast, disparate data set.1

Benefit: By leveraging contextual information, RAG enables AI systems to generate responses that are tailored to the specific needs and preferences of users. RAG also enables organizations to maintain data privacy, versus retraining a model owned by a separate entity, allowing data to remain where it lives. This is beneficial in scenarios such as legal advice or technical support.

For example, if an employee asks about their company’s policy on remote work, RAG can pull the latest internal documents that outline those policies, ensuring that the response is not only accurate but is also directly applicable to the employee’s context. This level of contextual awareness enhances the user experience, making interactions with AI systems more meaningful and effective.

A close up of a white object

Microsoft AI in action

Explore how Microsoft AI can transform your organization

3. Reduction of hallucinations

What are hallucinations?

Learn more

RAG allows for controlled information flow, finely tuning the balance between retrieved facts and generated content to maintain coherence while minimizing fabrications. Many RAG implementations offer transparent source attribution—citing references for retrieved information and adding accountability—which are both crucial for responsible AI practices. This auditability not only improves user confidence but also aligns with regulatory requirements in many industries, where accountability and traceability are essential.

Benefit: RAG boosts trust levels and significantly improves the accuracy and reliability of AI-generated content, thus helping to reduce risks in high-stakes domains like legal, healthcare, and finance. This leads to increased efficiency in information retrieval and decision-making processes, as users spend less time fact-checking or correcting AI outputs.2

For example, consider a financial advisor research assistant powered by RAG technology. When asked about recent Security and Exchange Commission filings regarding a publicly traded company in the United States from EDGAR, the commission’s online database, the AI system retrieves information from the latest annual reports, proxy statements, foreign investment disclosures, and other relevant documents filed by the corporation. The RAG model then generates a comprehensive summary, citing specific documents and their publication dates. This not only provides the researcher with current, accurate information they can trust, but also offers clear references for further investigation—significantly accelerating the research process while maintaining high standards of accuracy.

4. Cost effectiveness

RAG allows organizations to use existing data and knowledge bases without extensive retraining of LLMs. This is achieved by augmenting the input to the model with relevant retrieved data rather than requiring the model to learn from scratch.

Benefit: This approach significantly reduces the costs associated with developing and maintaining AI systems. Organizations can deploy RAG-enabled applications more quickly and efficiently, as they do not need to invest heavily in training large models on proprietary data.3

For example, consider a small-but-rapidly growing e-commerce company specializing in eco-friendly garden supplies. As they grow, they face the challenge of efficiently managing and utilizing their expanding knowledge base without increasing operational costs. If a customer inquires about the best fertilizer for a specific plant, the RAG system can quickly retrieve and synthesize information from product descriptions, usage guidelines, plant zone specifications, and customer reviews to provide a tailored response.

In this way, RAG technology allows the business to leverage its existing product documentation, customer FAQs, and a scalable internal knowledge base where the RAG system expands with the business, without the cost or need for extensive AI model training or constant updates. By providing accurate and contextually sensitive responses, the RAG system reduces customer frustration and potential returns—indirectly saving costs associated with customer churn and product returns.

5. User productivity

RAG helps boost user productivity by enabling users to access precise, contextually relevant data quickly by effectively combining information retrieval with generative AI.4

Benefit: This streamlined approach reduces the time spent on data gathering and analysis, allowing decision-makers to focus on actionable insights and teams to automate time-consuming tasks.

For example, KPMG built ComplyAI, a compliance checker, wherein employees submit client documents and request that the application review them. The app reviews the documents and flags any legal standards or compliance requirements, then sends the analysis to the user who originally set up the task. The app handles the review and analysis, saving the requestor time and effort. Thus, the app allows the user to ramp up on the topic or issue in question much faster without requiring them to be a legal expert.

As a result, users are more likely to perceive the AI application as a helpful and integral part of their daily tasks, whether in a professional or personal context.

Get started using RAG to enhance LLMs

In summary, by leveraging the vast knowledge stored in external sources, RAG enhances the capabilities of LLMs, including improved accuracy, contextual relevance, reduced hallucinations, cost-effectiveness, and improved auditability. These features collectively contribute to the development of more reliable and efficient AI applications across various sectors. RAG-enhanced systems also help empower smaller-sized businesses to compete effectively with larger competitors while managing their growth in a cost-effective manner, without the need to hire additional staff or for substantial AI model updates and retraining.

To get started, use the following resources to start building RAG applications with Azure AI Foundry and use them with agents built using Microsoft Copilot Studio.

Our commitment to Trustworthy AI

Organizations across industries are leveraging Azure AI and Microsoft Copilot capabilities to drive growth, increase productivity, and create value-added experiences.

We’re committed to helping organizations use and build AI that is trustworthy, meaning it is secure, private, and safe. We bring best practices and learnings from decades of researching and building AI products at scale to provide industry-leading commitments and capabilities that span our three pillars of security, privacy, and safety. Trustworthy AI is only possible when you combine our commitments, such as our Secure Future Initiative and our Responsible AI principles, with our product capabilities to unlock AI transformation with confidence. 


1DataCamp, How to Improve RAG Performance: 5 Key Techniques with Examples, 2024.

2 Lewis, P., Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks, 2020.

3 Castro, P., Announcing cost-effective RAG at scale with Azure AI Search, Microsoft, 2024.

4 Hikov, A. and Murphy, L., Information retrieval from textual data: Harnessing large language models, retrieval augmented generation and prompt engineering, Ingenta Connect, Spring 2024.

The post 5 key features and benefits of retrieval augmented generation (RAG) appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/13/5-key-features-and-benefits-of-retrieval-augmented-generation-rag/feed/ 0
Personalization at scale: How cloud and AI are redefining customer engagement http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/10/personalization-at-scale-how-cloud-and-ai-are-redefining-customer-engagement/ http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/10/personalization-at-scale-how-cloud-and-ai-are-redefining-customer-engagement/#respond Mon, 10 Feb 2025 16:00:00 +0000 For organizations ready to embrace the future, building the right infrastructure is the first step toward achieving personalization at scale.

The post Personalization at scale: How cloud and AI are redefining customer engagement appeared first on The Microsoft Cloud Blog.

]]>
In today’s digital-first world, personalization has become a business imperative. According to a study by McKinsey & Company, 71% of consumers expect companies to deliver personalized interactions, and 76% become frustrated when this doesn’t happen. Businesses that get personalization right, however, see revenue increases of 10% to 15%, with company-specific gains ranging from 5% to 25%—highlighting the clear link between personalization and business growth.1 From curated entertainment recommendations to seamless healthcare solutions, personalization drives loyalty, boosts revenue, and sets industry leaders apart. 

But achieving personalization at scale requires more than AI and data analytics—it demands a powerful, secure, and adaptive infrastructure that enables you to deploy AI. Without a scalable, high-performing cloud foundation, businesses face challenges like latency issues, fragmented data, and high operational costs—all while grappling with the growing importance of data security and compliance. For organizations ready to embrace the future, building the right infrastructure foundation is the first step toward achieving personalization at scale—empowering them to innovate faster, respond in real time, and deliver transformative, trustworthy customer experiences. 

A close up of a colorful wave

Customer Stories

Learn how organizations are achieving more with Microsoft

Overcoming barriers to personalization at scale 

Achieving personalization at scale comes with its share of challenges. Businesses often contend with fragmented data systems, privacy and compliance concerns, and the complexity of acting on data in real time. While these hurdles can seem daunting, understanding them is the first step toward finding solutions. 

Fragmented data is one of the most common obstacles to personalization. Customer information is often scattered across systems, departments, or even physical locations, making it difficult to gain a unified view. For example, PointClickCare found that siloed healthcare data across providers delayed critical care decisions, highlighting the importance of breaking down these barriers to enable better insights.

It’s common for people to work with multiple healthcare professionals for different treatments and prescriptions, for the best care, everyone needs to access, use, and trust the most current, accurate information.

Andrew Datars, Senior Vice President of Engineering at PointClickCare

Real-time data processing adds another layer of complexity. Personalization requires immediate insights and responses, but many businesses struggle with legacy systems that can’t handle fluctuating demands. MediaKind, for example, encountered difficulties delivering real-time media experiences during peak events—putting customer satisfaction at risk. With the increase in competition in the industry and the pace of innovation around engaging with customers through video, they needed to find a way to match their current demands and innovation needs. “People get really upset when their entertainment is offline. Seconds of downtime costs broadcasters and streamers millions of dollars in advertising and brand revenue,” Allen Broome, MediaKind’s Chief Executive Officer notes. 

Privacy and compliance can create significant challenges for businesses aiming to deliver personalized experiences. Analyzing sensitive customer data requires navigating a maze of strict regulations, such as ensuring data residency, meeting regional compliance requirements, and safeguarding user trust. These challenges are particularly pronounced in industries like legal, where the sensitivity of data and the complexity of workflows add additional layers of difficulty. Harvey, a platform designed for the legal sector, faces these exact hurdles. Security is paramount for Harvey due to the need to comply with varied regional security requirements and ensure that data never crosses regional boundaries. “The reason it’s been so hard to build technology for industries like legal is the workflows are so varied and complex, and no two days are the same,” explains Gabe Pereyra, Co-Founder and President at Harvey. By prioritizing security and compliance from the ground up, Harvey provides a trusted solution tailored to one of the most demanding industries. 

While these challenges are real, they are manageable with the right strategies. Recognizing and addressing these barriers allows businesses to take their first steps toward achieving personalization at scale, turning these obstacles into opportunities for growth. 

Redefining customer engagement with cloud and AI technologies 

Scaling personalization to meet modern customer expectations is a complex challenge, but cloud and AI technologies make it practical. Together, they empower organizations to process vast amounts of data, generate actionable insights in real time, and deliver tailored experiences at scale. 

For many organizations, data is scattered across disconnected systems, creating silos that prevent a unified view of customer behaviors and needs. Overcoming this barrier requires modernizing infrastructure to centralize data, enable seamless integration, and provide real-time access to actionable insights. Cloud platforms like Microsoft Azure make this possible by offering secure and scalable solutions that unify fragmented data sources into a single, comprehensive view. For example, PointClickCare leveraged Azure to consolidate siloed healthcare data from multiple systems into a unified network. PointClickCare modernized their infrastructure by deploying a cloud-based solution with key Azure products like Windows Server, Azure SQL Managed Instance, and Azure OpenAI Service to securely integrate data, streamline workflows, and enable real-time access to critical patient information. This transformation provided healthcare providers with actionable insights, improved operational efficiency, and enhanced patient care.  

Personalization hinges on immediacy, and AI-powered cloud platforms enable businesses to process massive streams of data in real time, offering insights and actions when they matter most. Overcoming this challenge requires infrastructure that can handle both the scale and speed of data processing without delays. LALIGA achieves this by leveraging cloud-based AI and machine learning to analyze over 3 million data points per match, all processed in real time. Operating within a hybrid environment, they ensure consistent performance by distributing workloads intelligently across on-premises and cloud systems using Microsoft Azure Arc. This allows LALIGA to deliver engaging digital and in-stadium experiences, from detailed match statistics to personalized player insights, enhancing how fans connect with the game. 

To ensure real-time data provision, cloud infrastructure must be capable of adapting to variable demands. Cloud solutions provide elastic scalability, ensuring organizations can handle varying workloads without compromising performance. With 30 teams, more than 500 players, and each team playing 82 games per season, not including playoffs, the NBA have an enormous amount of player data to collect and analyze. In exploring how AI could help them process data on all on-court players’ specific live body movements, analyzing things like speed, dunk height, number of passes and dribbles, and even injury risk, simultaneously, can create a need for elastic scalability. The NBA used a Microsoft Azure solution, based on Azure Kubernetes Service (AKS), that can manage and process up to 16 gigabytes of raw data per game, not including RGB video signals—sometimes more if the game goes into overtime. The new solution is deployed and operational, and the data being collected is already helping the NBA better understand players’ strengths and weaknesses and improve their performance.  

Lastly, trust and security are fundamental to achieving personalization at scale. In today’s environment, businesses must be able to navigate strict regulatory requirements, safeguard sensitive customer data, and maintain user trust while delivering tailored experiences. Overcoming these challenges requires implementing robust security measures, such as end-to-end encryption, role-based access controls, and compliance monitoring, all of which can be enabled and streamlined through cloud platforms. Azure provides a unified environment where businesses can securely integrate data, enforce regulatory compliance across regions, and monitor potential risks in real time, ensuring sensitive information is protected at every stage. Harvey, for example, leveraged advanced encryption, access management, and compliance tools to meet the stringent security requirements of its clients. This solution enables law firms to confidently protect sensitive client data while delivering innovative, AI-powered legal services. As Harvey’s Chief Executive Officer explained, “Law firms trust Azure because it allows them to deliver cutting-edge, AI-driven legal services without compromising on security or compliance.” This commitment to security enables Harvey to focus on innovation while maintaining trust with its clients. 

Transform your business with scalable personalization 

Personalization at scale is essential for businesses striving to stay competitive in today’s rapidly evolving market. Customers increasingly expect experiences that feel tailored, anticipate their needs, and build trust. As cloud and AI technologies continue to advance, the opportunities for deeper, more impactful personalization will only expand. 

You can stay ahead of the competition by delivering personalized experiences that resonate with your customers. Here are some essential steps you can take to get started today:  

  • Audit your data landscape to identify silos and unify disparate systems into a centralized platform for streamlined insights.
  • Establish robust data governance policies to ensure compliance, security, and transparency, earning and maintaining customer trust.
  • Invest in scalable, elastic cloud infrastructure that grows with your needs, so you can handle the demands of real-time personalization.
  • Empower your teams with the training and tools needed to effectively leverage AI and cloud technologies, making personalization a reality. 

By acting now, businesses can not only meet today’s customer expectations but also pave the way to lead in a future driven by secure, scalable, and transformative personalization.

Abstract image

Microsoft Customer Stories

See how organizations are working smarter


1 McKinsey & Company, The value of getting personalization right—or wrong—is multiplying, November 2021.

The post Personalization at scale: How cloud and AI are redefining customer engagement appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/10/personalization-at-scale-how-cloud-and-ai-are-redefining-customer-engagement/feed/ 0
Unleashing the power of AI in India http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/06/unleashing-the-power-of-ai-in-india/ http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/06/unleashing-the-power-of-ai-in-india/#respond Thu, 06 Feb 2025 16:00:00 +0000 India has embraced the power of AI to reshape industries, drive innovation, and unlock new opportunities across the nation.

The post Unleashing the power of AI in India appeared first on The Microsoft Cloud Blog.

]]>
This blog is part of the AI worldwide tour series, which highlights customers from around the globe who are embracing AI to achieve more. Read about how customers are using responsible AI to drive social impact and business transformation with Global AI innovation.

It’s no secret that India is well-positioned to be a global leader in the AI era, having embraced the power of AI to reshape industries, drive innovation, and unlock new opportunities across the nation. Boasting a vast talent pool, proactive government initiatives, and a thriving startup ecosystem, India is uniquely equipped to leverage AI to solve pressing societal and business challenges and optimize operations across a wide array of civic and business verticals.

A long-standing partner in India’s technological growth, Microsoft has solidified its commitment with a US $3 billion investment to expand AI and Azure cloud infrastructure in the country. This initiative is designed to accelerate AI adoption across industries, empower businesses to integrate AI into critical processes, and nurture local talent to meet the evolving demands of the tech ecosystem. These efforts underscore Microsoft’s confidence in India’s position as a global leader in AI innovation and technological advancement.

AI business resources

Help your organization achieve its transformation goals

A decorative image of abstract art swirling in green, purple, and blue colors

Local ingenuity was on full display during the Microsoft AI Tour stop in Bengaluru and New Delhi, where organizations showcased how they are leveraging AI to tackle complex challenges, streamline workflows, and drive transformative efficiencies across industries.

MakeMyTrip powers the future of travel with AI

MakeMyTrip (MMT), India’s leading online travel company, is at the forefront of enhancing the travel shopping experience with generative AI. Over its 24-year journey, MMT has served more than 77 million users, offering comprehensive travel booking services. A standout feature powered by generative AI is Myra, their conversational bot. MMT is integrating an AI-powered workflow within Myra to assist users seamlessly at every stage of their travel journey—from pre-trip planning to in-trip support and post-trip follow-up. Built using large language models (LLMs) and orchestrated via Microsoft Azure AI Foundry, these services ensure smooth assistance throughout the travel process. As one of the early adopters of generative AI in travel tech, MMT is leading the next generation of travel experiences.

Persistent Systems improves contract management with AI-powered agent

Persistent Systems, one of the world’s fastest-growing digital engineering and enterprise modernization service providers, faced recurring challenges surrounding their contract management: inefficient workflows and lengthy negotiation cycles were causing bottlenecks in an otherwise agile organization. Persistent turned to the power of generative AI and Microsoft’s technology stack to reimagine their approach to contract management, developing ContractAssIst, an AI-powered agent built using generative AI and Microsoft 365 Copilot, to transform collaboration and streamline internal contract negotiations. Built to help ensure security and access controls, the tool helps to enhance collaboration, streamline workflows, and accelerate decision-making. 

As a result, ContractAssIst has reduced emails during negotiations by 95% and cut navigation and negotiation time by 70%, a task that currently takes approximately 20 to 25 minutes. Persistent has deployed Microsoft 365 Copilot to nearly 2,000 users and plans to extend it to a broader audience.

LTIMindtree unlocks data management with Microsoft 365 Copilot

LTIMindtree, a global technology consulting and digital solutions company with more than 84,000 employees in more than 30 countries, is leveraging AI in innovative ways to drive digital transformation and enhance business and IT operations. They have demonstrated how Microsoft 365 Copilot technology and AI agents are transforming their critical business functions, such as pre-sales, resource management, and cyber security. For example, custom built AI agents assist the resource management teams to quickly find the right employees with relevant skills and match them to specific projects; and help pre-sales and account managers create high-quality responses using historical data to incoming requests for proposals (RFPs) and requests for information (RFIs). They are also using Microsoft Security Copilot to create a unified command center for investigations, threat intelligence, and incident response, empowering them to build a next-gen Security Operations Center (SOC). As a result, LTIMindtree has seen a 30% increase in overall employee efficiency, with 20% less time spent on emails and day-to-day task allocation.

Streamlining health claims with ICICI Lombard’s AI-powered solution

ICICI Lombard, a leading private insurer in India, has developed an innovative solution to streamline health claims processing. Traditionally, claim adjudicators manually filed claims, a time-consuming process involving the review of 20 pages of documents. Leveraging Microsoft Azure OpenAI Service, Azure AI Document Intelligence, and Azure AI Vision OCR service, ICICI Lombard’s new solution extracts relevant information from these documents, providing adjudicators with a consolidated view of the diagnosis and treatment. This innovation has reduced the time required to process claims by more than 50%.

eSanjeevani transforms healthcare access with innovative AI solutions

eSanjeevani, India’s National Telemedicine Service by the Ministry of Health and Family Welfare, has integrated AI-enabled tools to enhance care quality and streamline teleconsultations, promoting equitable access to healthcare across the country. Powered by Azure, it offers secure, scalable, and accessible doctor-to-doctor and doctor-to-patient teleconsultations. eSanjeevani is advancing its AI journey with Microsoft AI, enhancing productivity, data analysis, and user experience. These innovations are helping eSanjeevani set new benchmarks in telemedicine and digital healthcare services. It is also developing a proof of concept with Microsoft Copilot to transcribe doctor-patient conversations in real time for advanced speech analytics, aiding data-driven decisions. Serving more than 330 million patients, 98% from rural areas, eSanjeevani is today the world’s largest telemedicine initiative in primary healthcare.

AI for everyone in India

Satya Nadella speaking at the Microsoft AI Tour stop in India.
India AI Tour keynote with Satya Nadella, Chief Executive Officer.

India’s AI journey is not just about innovation, it’s about transformation across industries and lives. From travel to healthcare, banking to engineering, the case studies showcased here demonstrate the immense potential of AI when paired with the right tools, partnerships, and vision. Microsoft’s investments and technologies have enabled organizations in India to tackle challenges, streamline processes, and unlock new levels of efficiency and growth. As India continues to lead in the global AI revolution, these examples serve as a testament to how AI can create meaningful impact, fostering a future where innovation drives progress for everyone.

Find the resources to support your AI journey

The post Unleashing the power of AI in India appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/06/unleashing-the-power-of-ai-in-india/feed/ 0
Common retrieval augmented generation (RAG) techniques explained http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/04/common-retrieval-augmented-generation-rag-techniques-explained/ http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/04/common-retrieval-augmented-generation-rag-techniques-explained/#respond Tue, 04 Feb 2025 16:00:00 +0000 Organizations use retrieval augmented generation (or RAG) to incorporate current, domain-specific data into language model-based applications without extensive fine-tuning.   This article outlines and defines various practices used across the RAG pipeline—full-text search, vector search, chunking, hybrid search, query rewriting, and re-ranking. What is full-text search? Full-text search is the process of searching the entire […]

The post Common retrieval augmented generation (RAG) techniques explained appeared first on The Microsoft Cloud Blog.

]]>
Organizations use retrieval augmented generation (or RAG) to incorporate current, domain-specific data into language model-based applications without extensive fine-tuning.  

A decorative GIF with abstract art

AI business resources

Expert insights and guidance from a curated set of AI business resources

This article outlines and defines various practices used across the RAG pipeline—full-text search, vector search, chunking, hybrid search, query rewriting, and re-ranking.

Full-text search is the process of searching the entire document or dataset, rather than just indexing and searching specific fields or metadata. This type of search is typically used to retrieve the most relevant chunks of text from the underlying dataset or knowledge base. These retrieved chunks are then used to augment the input to the language model, providing context and information to improve the quality of the generated response.

Full-text search is often combined with other search techniques, such as vector search or hybrid search, to leverage the strengths of multiple approaches.

The purpose of full-text search is to:

  • Allow the retrieval of relevant data from the complete textual content of a document or dataset.
  • Enable the identification of documents that may contain the answer or relevant information, even if the specific query terms are not present in the metadata or document titles.

The process of implementing a full-text search involves the following techniques:

  • Indexing—the full text of documents or dataset is indexed, often using inverted index structures that store and organize information that helps improve the speed and efficiency of search queries and retrieved results.
  • Querying—when a user query is received, the full text of the documents or dataset is searched to find the most relevant information.
  • Ranking—the retrieved documents or chunks are ranked based on relevance to the query, using techniques like term frequency inverse document frequency (TF-IDF) or BM25.

Vector search retrieves stored matching information based on conceptual similarity, or the underlying meaning of sentences, rather than exact keyword matches. In vector search, machine learning models generate numeric representations of data, including text and images. Because the content is numeric rather than plain text, matching is based on vectors that are most similar to the query vector, enabling search matching for:

  • Semantic or conceptual likeness (“dog” and “canine,” conceptually similar yet linguistically distinct).
  • Multilingual content (“dog” in English and “hund” in German).
  • Multiple content types (“dog” in plain text and a photograph of a dog in an image file).

With the rise of generative AI applications, vector search and vector databases have seen a dramatic rise in adoption, along with the increased number of applications using dialogue interactions and question/answer formats. Embeddings are a specific type of vector representation created by natural language machine learning models trained to identify patterns and relationships between words.

There are three steps in processing vector search:

  1. Encoding—use language models to transform or convert text chunks into high-dimensional vectors or embeddings.
  2. Indexing—store these vectors in a specialized database optimized for vector operations.
  3. Querying—convert user queries into vectors using the same encoding method to retrieve semantically similar content.

Things to consider when implementing vector search:

  • Selecting the right embedding model for your specific use case, like GPT or BERT.
  • Balancing index size, search speed, and accuracy.
  • Keeping vector representations up to date as the source data changes.

What is chunking?

Chunking is the process of dividing large documents and text files into smaller parts to stay under the maximum token input limits for embedding models. Partitioning your content into chunks ensures that your data can be processed by the embedding models and that you don’t lose information due to truncation.

For example, the maximum length of input text for the Azure OpenAI Service text-embedding-ada-002 model is 8,191 tokens. Given that each token is around four characters of text for common OpenAI models, this maximum limit is equivalent to around 6,000 words of text. If you’re using these models to generate embeddings, it’s critical that the input text stays below the limit.

Documents are divided into smaller segments, depending on:

  • Number of tokens or characters.
  • Structure-aware segments, like paragraphs and sections.
  • Overlapping windows of text.

When implementing chunking, it’s important to consider these factors:

  • Shape and density of your documents. If you need intact text or passages, larger chunks and variable chunking that preserves sentence structure can produce better results.
  • User queries. Larger chunks and overlapping strategies help preserve context and semantic richness for queries that target specific information.
  • Large language models (LLMs) have performance guidelines for chunk size. You need to set a chunk size that works best for all of the models you’re using. For instance, if you use models for summarization and embeddings, choose an optimal chunk size that works for both.

Hybrid search combines keyword search and vector search results and fuses them together using a scoring algorithm. A common model is reciprocal rank fusion (RRF). When two or more queries are executed in parallel, RRF evaluates the search scores to produce a unified result set.

For generative AI applications and scenarios, hybrid search often refers to the ability to search both full text and vector data.

The process of hybrid search involves:

  1. Transforming the query into a vector format.
  2. Performing vector search to find semantically similar chunks.
  3. Simultaneously conducting keyword search on the same corpus.
  4. Combining and ranking results from both methods.

When implementing hybrid search, consider the following:

  • Balancing the influence of each search method.
  • Increased computational complexity compared to single-method search.
  • Tuning the system to work well across diverse types of queries and content.
  • Overlapping keywords to match when using question and answering systems, like ChatGPT.

Microsoft AI in action

Explore how Microsoft AI can transform your organization

A close up of a colorful wave

What is query rewriting?

Query rewriting is an important technique used in RAG to enhance the quality and relevance of the information retrieved by modifying and augmenting a provided user query. Query rewriting creates variations of the same query that are shared with the retriever simultaneously, alongside the original query. This helps remediate poorly phrased questions and casts a broader net for the type of knowledge collected for a single query.

In RAG systems, rewriting helps improve recall, better capturing user intent. It’s performed during pre-retrieval, before the information retrieval step in a RAG scenario.

Query rewriting can be approached in three ways:

  1. Rules-based—using predefined rules and patterns to modify the query.
  2. Machine learning-based—training models to learn how to transform queries based on examples.
  3. Mixed—combining rules-based and machine learning-based techniques.

What is re-ranking?

Re-ranking, or L2 ranking, uses the context or semantic meaning of a query to compute a new relevance score over pre-ranked results. Post retrieval, a retrieval system passes search results to a ranking machine-learning model that scores the documents (or textual chunks) by relevance. Then, the top results of a limited, defined number of documents (top 50, top 10, top 3) are shared with the LLM.

Learn how to start building a RAG application

RAG systems employ various techniques to enhance knowledge retrieval and improve the quality of generated responses. These techniques work to provide language models with highly relevant context to generate accurate and informative responses.

To get started, use the following resources to start building a RAG application with Azure AI Foundry and use them with agents built using Microsoft Copilot Studio.

Our commitment to Trustworthy AI

Organizations across industries are leveraging Azure AI Foundry and Microsoft Copilot Studio capabilities to drive growth, increase productivity, and create value-added experiences.

We’re committed to helping organizations use and build AI that is trustworthy, meaning it is secure, private, and safe. We bring best practices and learnings from decades of researching and building AI products at scale to provide industry-leading commitments and capabilities that span our three pillars of security, privacy, and safety. Trustworthy AI is only possible when you combine our commitments, such as our Secure Future Initiative and our Responsible AI principles, with our product capabilities to unlock AI transformation with confidence. 

Azure remains steadfast in its commitment to Trustworthy AI, with security, privacy, and safety as priorities. Check out the 2024 Responsible AI Transparency Report.

The post Common retrieval augmented generation (RAG) techniques explained appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/04/common-retrieval-augmented-generation-rag-techniques-explained/feed/ 0
Accelerate employee AI skilling: Insights from Microsoft http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/01/30/accelerate-employee-ai-skilling-insights-from-microsoft/ http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/01/30/accelerate-employee-ai-skilling-insights-from-microsoft/#respond Thu, 30 Jan 2025 16:00:00 +0000 Our experience has yielded some widely applicable takeaways that can be helpful to organizations that want to build AI skills.

The post Accelerate employee AI skilling: Insights from Microsoft appeared first on The Microsoft Cloud Blog.

]]>
At Microsoft, we’ve become pioneers in the AI landscape by transforming our own organization. We’re customer zero—putting AI to work in all facets of our business and continuously exploring how this powerful technology can drive economic growth, maximize efficiency, and reduce operating costs. We’re also regularly evaluating and evolving how we coach employees as part of their continued AI skills development.

Although every organization’s AI transformation is unique and blueprints are scarce, we’ve learned that having the right skills across the organization is key. By implementing skill-building initiatives throughout the company, we’re reimagining how we work at Microsoft and aligning those initiatives to the functions that are critical to how we do business.

Through this process, we’re constantly uncovering valuable insights on how to lead by learning—often developing the playbooks from scratch. By applying these insights, we advance our AI transformation and benefit our workforce, customers, and partners around the world. We’re glad to share our findings with you to help your teams skill up to make the most of AI for innovation, growth, and opportunities.

Developing crucial AI skills for organizational transformation

Organizational transformation now requires AI-first skills; yet it can be challenging to plan modern and effective skill-building programs.

We understand the importance of providing our employees—both technical and non-technical—with the AI skills to grow and evolve with the business and the technology, along with the ability to apply these skills every day. Teams across Microsoft have established innovative and effective AI training programs that cater to specific roles in marketing, sales, engineering, and beyond.

Although there’s no one-size-fits-all approach to AI training, our experience has yielded some widely applicable takeaways which can be helpful to organizations that want to build AI skills. Our new e-book, 10 Best Practices to Accelerate Your Employees’ AI Skills: Lessons and experiences from Microsoft’s skilling initiatives, highlights some of the vital lessons we’ve learned that can help support you in implementing skill-building programs crucial to your AI transformation.

Sharing highlights from our AI learning experience

The e-book explores many of the lessons we’ve learned in our ongoing AI evolution. Our experiences can help inspire and inform your path forward, too, as you and your teams get skilled up and ready to power AI transformation with the Microsoft Cloud. In particular, the e-book showcases stories from AI skill-building initiatives implemented by four Microsoft teams:

  • Microsoft Marketing, a diverse collective of professionals, ranging from creative roles to business strategists and technical experts.
  • MCAPS Academy, the team responsible for training sellers globally within the Microsoft Customer and Partner Solutions (MCAPS) organization.
  • Worldwide Learning Engineering, the team tasked with architecting and building apps and platforms that support MCAPS and some of the Microsoft skill-building offerings for customers and partners.
  • The Microsoft Garage, an innovation platform that enables collaboration and experimentation through hackathons, workshops, talks, training sessions, and more.
An infographic that briefly describes the benefits of using AI for different roles at Microsoft, like marketing, sales, and engineering.
A functional approach to AI skill building at Microsoft.

Here’s what we learned.

1. Give space for exploration

Encourage a culture of learning by providing employees with the time and tools to explore AI.

Our Worldwide Learning Engineering team has dedicated time to delve into AI, and this fosters an environment where curiosity and innovation can thrive. Additionally, The Garage’s experiments, such as the SkillUp AI Challenge, provide employees with a sandbox for practical AI applications, encouraging both personal and professional growth.

2. Make learning fun

Create a low-pressure, engaging environment where employees can learn at their own pace.

The Garage’s SkillUp AI Challenge incorporates fun, interactive exercises that make AI relatable and enjoyable for all skill levels. Similarly, the Marketing AI practitioner hub offers gamified learning paths that enable marketers to integrate AI into their daily workflows in an entertaining way.

3. Provide clear, structured learning paths

Simplify the learning experience with structured paths tailored to different skill levels and roles.

MCAPS Academy Flight Plans offer role-specific learning paths, helping to ensure that technical and non-technical sales teams alike have clear directions for their AI learning. Moreover, the Marketing Learning team has developed a curriculum that supports marketers in becoming regular AI practitioners through well-defined learning stages.

4. Make it role specific

Adapt AI training programs to the unique needs of each role within the organization.

The Worldwide Learning Engineering team focuses on providing engineers with opportunities for deep technical engagement through dedicated learning time and advanced AI tools. At the same time, the MCAPS Academy addresses the specific needs of a different job role—sales—by blending foundational knowledge with real-world applications to enhance AI fluency.

5. Start with foundations

Is your organization prepared?

Assess your AI readiness

Begin AI training with foundational knowledge to help ensure that all employees have a solid understanding of AI basics.

The Marketing Learning team introduces marketers to AI through simple, foundational concepts before progressing to more complex applications. Likewise, the MCAPS Academy provides basic AI training to new hires before guiding them through more advanced, role-specific learning paths.

6. Have a plan to update the content regularly

Maintain the relevance of AI training programs by regularly updating content.

The Worldwide Learning Engineering team continuously refreshes its training materials to keep up with the latest advancements in AI technology. Meanwhile, The Garage schedules regular updates for its skill-building exercises to help ensure that they remain engaging and current.

7. Drive awareness and continued adoption

Promote ongoing AI learning and adoption through awareness campaigns and reinforcement.

The Marketing AI practitioner hub provides regular touchpoints to encourage consistent AI practice among marketers. Similarly, the MCAPS Academy uses newsletters and internal communications to keep the sales force informed and engaged in AI learning.

8. Set clear guidelines for responsible use

Establish and communicate guidelines for the responsible use of AI to maintain standards.

The Marketing Learning team’s curriculum emphasizes the importance of responsible AI use, providing clear guidelines and best practices. The Worldwide Learning Engineering team also integrates responsible AI principles into its training sessions, highlighting the significance of these considerations in AI development.

9. Let employees learn from each other

Facilitate peer-to-peer learning opportunities to enhance AI skills through collaboration.

The Garage hosts show-and-tell sessions where employees share their AI projects and insights. For engineers, the Worldwide Learning Engineering team organizes knowledge-sharing workshops to promote collaborative learning.

10. Leverage existing resources

Take advantage of available resources to support AI skill-building initiatives.

The MCAPS Academy makes the most of existing training platforms and materials, integrating them into its AI learning paths. And The Garage draws on external AI tools and resources to complement its interactive learning programs.

A close up of a white object

AI learning hub on Microsoft Learn

Get the skills to power your AI transformation

Building a foundation for the future of AI skilling

Our experiences as customer zero for AI training have been transformative—and we’re just getting started. By empowering our teams with the right skills, we’re not only driving innovation within our organization but also setting a strong foundation for the future, supporting our employees and customers, creating business value and growth, and fostering innovation.

As organizations around the world look to build AI skills and to scale this powerful technology throughout their business, we’re glad to share these insights to support your AI transformation. Together, we can lead in the AI-powered world and unlock new levels of value for our workforce, customers, and partners—today, tomorrow, and beyond.

The post Accelerate employee AI skilling: Insights from Microsoft appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/01/30/accelerate-employee-ai-skilling-insights-from-microsoft/feed/ 0
AI-powered agents in action: How we’re embracing this new ‘agentic’ moment at Microsoft http://approjects.co.za/?big=insidetrack/blog/ai-powered-agents-in-action-how-were-embracing-this-new-agentic-moment-at-microsoft/ http://approjects.co.za/?big=insidetrack/blog/ai-powered-agents-in-action-how-were-embracing-this-new-agentic-moment-at-microsoft/#respond Thu, 30 Jan 2025 15:00:00 +0000 We’re using AI agents and Microsoft 365 Copilot to boost our productivity internally at Microsoft.

The post AI-powered agents in action: How we’re embracing this new ‘agentic’ moment at Microsoft appeared first on The Microsoft Cloud Blog.

]]>
When we launched Microsoft 365 Copilot in February of 2023, it was a watershed moment in the history of Microsoft. By incorporating next-generation AI into the productivity tools that millions of people depend on every day, a new era of productivity was born.

“Today marks a significant milestone in our journey to empower every person and every organization on the planet to achieve more,” Microsoft CEO Satya Nadella said when he announced the product. “With Copilot, we are bringing the power of next-generation AI to the tools millions of people use every day.”

Fast forward to now and there’s no doubt that Copilot is revolutionizing employee productivity here at Microsoft and elsewhere. It’s also clear that the pace of innovation is only increasing, and AI-powered agents, integrated with Copilot, are poised to help enterprises all over the world fulfill the promise of AI.

Krishnamurthy in a photo.
Rajamma Krishnamurthy and Amy Rosencranz (not pictured) are part of the team that’s bringing AI agents to life at Microsoft.

Jared Spataro, Microsoft corporate vice president for AI at Work, reflected on this paradigm shift during his keynote address at Microsoft Ignite. “Agents are the new apps for an AI-powered world. Every organization will have a constellation of agents, ranging from simple prompt-and-response to fully autonomous.”

Here in Microsoft Digital, the company’s IT organization, the feeling of excitement that we felt that day was palpable.

“It was exciting to hear about the vision for an ‘agentic world,’ where a rich tapestry of AI agents, including personal agents, business process agents, and cross-organizational agents, work together to enhance productivity and collaboration,” says Rajamma Krishnamurthy, a principal program management lead.

The opportunity that agents present is massive.

“AI powered agents can automate or assist in time consuming tasks like document creation, email or meeting summarization, creating presentations or reports, saving precious time and energy,” Krishnamurthy says. “This will enable our employees to focus on more innovative and engaging work.”

They will become our personal assistants.

“Agents will be able to do things like tell me what time I should be leaving for work based on traffic, helping me navigate which way to go, helping me to find parking, and helping me set up my day so I know what’s most important to work on,” says Amy Rosencranz, a principal program manager also working on agents in Microsoft Digital. “I’ve been excited about those scenarios for a long time, anticipating how AI can seamlessly integrate into our daily lives, and now it’s here.”

In Microsoft Digital, we are embracing our agentic future, where agents will make our employees, as well as the millions of employees who rely on Microsoft 365 globally, more productive every day.

Enabling an agent-powered Microsoft

It’s important to acknowledge that adopting AI in the enterprise is a journey. In Microsoft Digital, we’ve adopted a maturity model for AI deployment in the enterprise. Early phases focus on using Microsoft 365 Copilot, grounded in enterprise data, to enhance knowledge discovery and retrieval. Later phases enable employees to act on that knowledge and even fully automate business workflows. Microsoft 365 Copilot enterprise deployment phases

Microsoft 365 Copilot enterprise deployment phases

An arrow that describes the foundational capabilities, retrieval, action, and automation.
Unlock the power of Microsoft 365 Copilot with foundational capabilities and seamless knowledge-to-action transformation.

Use these principles to guide you as you move through the two phases.  

  1. Foundational capabilities. The first and most important step is to deploy a secure, enterprise-grade AI solution like Copilot for Microsoft 365 that’s grounded in your enterprise data. At Microsoft, we’ve deployed Microsoft 365 Copilot to all of the more than 300,000 employees and vendors at the company, providing everyone with an AI-powered assistantto enhance their daily productivity.
  2. Specialized agents. Employees use low-code solutions like Copilot Studio Agent Builder or ready-made agents in SharePoint to quickly train models and retrieve knowledge for specialized scenarios.
  3. Knowledge and actions. Powered by Copilot Studio, agents go beyond simple knowledge retrieval, offering next steps and actions that help employees to defragment their day-to-day employee experience. While these agents take a little more time to build, they offer significantly more utility in the enterprise. Copilot Studio provides a robust library of first- and third-party connectors that make it easy to incorporate actions across enterprise platforms.
  4. Workflow reinvention. Employees manage and train a constellation of agents that perform fully autonomous actionsNote, the ability to create fully autonomous agents is currently in public preview. “The best way to think about these are just as your teammates,” Nadella said when explaining this at Microsoft Ignite.

It’s important to note that these steps can take time.

Deploying Microsoft 365 Copilot at global enterprise scale and conducting change management practices to help our employees maximize the potential of AI has required patience as we create locally relevant change management campaigns tailored to individual countries, roles, and other factors.

Later phases require more advanced tools, appropriate tenant governance, and collaboration between departments to ensure appropriate and responsible uses of AI. Additionally, our AI Center of Excellence has been instrumental in helping to build an AI-forward culture through training activities, knowledge sharing, and other activities to accelerate our growth as an organization.

Data quality and tenant governance are also important considerations for unlocking the value of agents in your enterprise.

“The better your data, the better your back-end data, the better your data is set up to interact with AI, the better the responses are going to be,” Rosencranz says.

In Microsoft Digital, we’ve adopted standards and policies that help us ensure that our agents are trained on high quality, accurate AI-ready data. AI-ready data for enterprise AI agents is data that’s clean, well-governed, and accessible through scalable pipelines, integrating principles of data standardization, privacy compliance, and federated governance to enable seamless interoperability and actionable insight. With AI-ready data, our data scientists and engineers are better equipped to locate, process, and govern the enterprise data that drives our organization, including the development of agents.

But we’ve also been deliberate in building tools that make it easy to build and deploy agents in the enterprise. In fact, our design-first mindset, facilitated through architectural reviews, is enabling us to design and deploy agentic architectures that are resilient, secure, cost-effective, high-performance, and operationally sound. This structured approach ensures that AI agents deliver transformative value while aligning with organizational goals and maintaining trust.

{Learn how we’re transforming our data culture with AI-ready data.}

{Learn more about how we’re responding to the AI Revolution with an AI Center of Excellence.}

Bringing agents to life at Microsoft

While everyone at Microsoft already has access to Microsoft 365 Copilot, we’ve been cautious in deploying Copilot Studio, part of the Microsoft Power Platform, to all of our employees. Copilot Studio uses the same low-code connector model as the Power Platform to provide over 1,400 first- and third-party services that can power actions. The same principles that we apply to the Power Platform—“employee empowerment with guardrails”—are being used to safely bring agents to life at Microsoft.

“Anyone at Microsoft can build agents to help them through mundane tasks such as a writing assistant to help write better content or to strategize with them on important areas like their career, however these agents are available only to the person who created them,” Krishnamurthy says. “Agents that need to scale enterprise-wide are worked on by the respective engineering teams in collaboration with business partners.”

While the power in these “knowledge-only” or specialized agents is significant, we in Microsoft Digital must balance employee innovation against some of the risks of agentic AI. Security and privacy controls are important for all applications, and even more so for those that incorporate AI.

“Sometimes we mysticize these agents as things that take a lot of effort to build,” Nadella said at Ignite. Our vision is that it should be as simple as creating a Word doc or a PowerPoint slide.”

Additionally, understanding and incorporating our responsible AI principles in all aspects of the Security Development Lifecycle (SDL) is critical.

“A robust governance process and controls should be adhered to when building these AI agents through the entire SDL, starting with designing, building, deploying and monitoring agents after they’re deployed,” Krishnamurthy says.

Some practices we’re using within Microsoft Digital to keep our employees safe include:  

  1. Security. We have established standards for data classification, policies on handling confidential information, and other security measures to protect data from unauthorized access, misuse, and disclosures. Microsoft Purview provides these foundational capabilities, including data labeling, rights management, and data loss prevention at Microsoft. 
  2. Privacy. At Microsoft, we have established privacy compliance measures to ensure that personal data is protected. Includes adhering to regulations such as GDPR and CCPA. We also conduct regular privacy assessments for all applications, especially AI-powered agents.
  3. Regulatory. It’s important to conduct regulatory compliance assessments to ensure that agents and extensions are meeting legal standards. Our legal and compliance teams are carefully monitoring AI regulations like NY 144 and the EU AI Act. Understanding and incorporating applicable guidelines, regulations, and laws into assessments is critical.

As Peter Parker famously learned, “with great power comes great responsibility.” The same holds true with agents in the enterprise. While agents are an incredibly powerful tool that nearly anyone can take advantage of to improve their productivity, being mindful of security, privacy, and regulatory issues is essential to the responsible deployment of agentic AI in the enterprise.

{Find out how we’re tackling Microsoft 365 Copilot governance internally at Microsoft.}

{Learn how citizen developers at Microsoft are empowered through good governance with the Power Platform.}

Enabling employee self-service

In Microsoft Digital, we’re building AI-powered agents to support common employee scenarios like IT support and HR queries. The employee self-service agent seamlessly integrates with Microsoft 365 Copilot and helps to defragment the employee experience by providing a single place for employees to seek help with their most common pain points. With Copilot Studio, this agentic experience helps employees to quickly retrieve relevant information and then resolve their issues while also enabling them to act, such as by opening a support ticket or submitting a request for time off.

Other capabilities include:

  • An out-of-the-box experience that facilitates a no-configuration, focused employee self-service lens for optimized responses to common HR and IT questions.
  • The minimum configuration delivers answers to employees via official content sources and company-crafted answers where necessary, lowering search time and frustration.
  • Additional configuration reduces cost and accelerates time to value for HR functions and IT workflows.

Employee self-service agent in Microsoft 365 Copilot

Employee self-service agents visual featuring knowledge access, action-taking, and business agility.
Use Microsoft 365 Copilot to empower your workforce with seamless knowledge access, swift action-taking, and enhanced business agility.

Employee-self service is being used by some partners and customers in a private preview and will be available globally to all Microsoft customers soon.

{Learn how we’re building employee self-service agents for IT and HR.}

The future of IT

While we’re at the very beginning of this agentic journey at Microsoft, the pace of change has been and will continue to be incredibly swift, as new capabilities emerge and autonomous agents become more common. In Microsoft Digital, we see a world where agentic AI will unlock productivity and creativity, empowering our employees to train their own agentic teams that handle routine day-to-day operational tasks so they can focus on the higher value work only humans can do. Some ways we’re exploring applying agents within Microsoft Digital include:

  • Autonomous agents that can detect, report, remediate, and monitor network security and connectivity issues.
  • Autonomous agents that streamline and simplify business and operational processes, enabling our employees to focus on the higher value work that only humans can do.
  • Autonomous agents that anticipate your needs during travel, clearing your calendar, reconciling schedule conflicts, and even helping with things like reserving a car or mitigating flight delays.
  • Autonomous agents that help our global workplace services team to manage their facilities more effectively, reducing carbon emissions while maximizing workplace occupancy.
  • Autonomous agents that anticipate device issues, apply patches, continuously monitor device health and security, and keep our infrastructure and devices secure and reliable.

While the advent of generative AI in the enterprise has been a boon to employees and has given organizations like Microsoft a competitive advantage, fully autonomous agents, powered by Copilot Studio, will give our employees an incredible advantage in a very competitive global marketplace for products, ideas, and solutions.

“We are so at the tip of the iceberg, and the pace at which the product is developing is unlike anything I’ve seen in my tenure at Microsoft, and I’ve been here a while,” Rosencranz says. “Agents are already so powerful, but they’re only going to get more powerful. More “wow” moments are coming.”

We invite you to seize this generational opportunity with agents to provide more “wow” moments for your own employees.

Key Takeaways

Here are principles to think about as you consider experimenting with agents at your company:

  • Agents are the next wave of AI innovation, enabling your employees to retrieve information, act, or even fully automate business processes and operations.
  • While agents are powerful and simple to create, be mindful of security, privacy, responsible AI, and compliance requirements to ensure that your agents aren’t creating unnecessary business risks.
  • There are several ways to build no-code and low-code agents for personal or enterprise-wide use, including Agent Builder in SharePoint, creating agents in Microsoft 365 Copilot Chat, and using Copilot Studio.
  • AI-ready data is essential to unlock the power of agents in the enterprise. Like other AI systems, the responses and actions of your agents are only as good as the data they were trained on.
Try it out

Get started with Microsoft 365 Copilot at your company.

Related links

The post AI-powered agents in action: How we’re embracing this new ‘agentic’ moment at Microsoft appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=insidetrack/blog/ai-powered-agents-in-action-how-were-embracing-this-new-agentic-moment-at-microsoft/feed/ 0
Making it easier for companies to build and ship AI people can trust https://news.microsoft.com/source/features/ai/making-it-easier-for-companies-to-build-and-ship-ai-people-can-trust/ https://news.microsoft.com/source/features/ai/making-it-easier-for-companies-to-build-and-ship-ai-people-can-trust/#respond Wed, 22 Jan 2025 16:00:00 +0000 Generative AI is transforming many industries, but businesses often struggle with how to create and deploy safe and secure AI tools as technology evolves.

The post Making it easier for companies to build and ship AI people can trust appeared first on The Microsoft Cloud Blog.

]]>
Generative AI is transforming many industries, but businesses often struggle with how to create and deploy safe and secure AI tools as technology evolves. Leaders worry about the risk of AI generating incorrect or harmful information, leaking sensitive data, being hijacked by attackers or violating privacy laws — and they’re sometimes ill-equipped to handle the risks.  

“Organizations care about safety and security along with quality and performance of their AI applications,” says Sarah Bird, chief product officer of Responsible AI at Microsoft. “But many of them don’t understand what they need to do to make their AI trustworthy, or they don’t have the tools to do it.”  

To bridge the gap, Microsoft provides tools and services that help developers build and ship trustworthy AI systems, or AI built with security, safety and privacy in mind. The tools have helped many organizations launch technologies in complex and heavily regulated environments, from an AI assistant that summarizes patient medical records to an AI chatbot that gives customers tax guidance.  

The approach is also helping developers work more efficiently, says Mehrnoosh Sameki, a Responsible AI principal product manager at Microsoft. 

This post is part of Microsoft’s Building AI Responsibly series, which explores top concerns with deploying AI and how the company is addressing them with its responsible AI practices and tools.

“It’s very easy to get to the first version of a generative AI application, but people slow down drastically before it goes live because they’re scared it might expose them to risk, or they don’t know if they’re complying with regulations and requirements,” she says. “These tools expedite deployment and give peace of mind as you go through testing and safeguarding your application.”  

The tools are part of a holistic method that Microsoft provides for building AI responsibly, honed by expertise in identifying, measuring, managing and monitoring risk in its own products — and making sure each step is done. When generative AI first emerged, the company assembled experts in security, safety, fairness and other areas to identify foundational risks and share documentation, something it still does today as technology changes. It then developed a thorough approach for mitigating risk and tools for putting it into practice.  

The approach reflects the work of an AI Red Team that identifies emerging risks like hallucinations and prompt attacks, researchers who study deepfakesmeasurement experts who developed a system for evaluating AI, and engineers who build and refine safety guardrails. Tools include the open source framework PyRIT for red teams to identify risks, automated evaluations in Azure AI Foundry for continuously measuring and monitoring risks, and Azure AI Content Safety for detecting and blocking harmful inputs and outputs.  

Microsoft also publishes best practices for choosing the right model for an application, writing system messages and designing user experiences as part of building a robust AI safety system.  

“We use a defense-in-depth approach with many layers protecting against different types of risks, and we’re giving people all the pieces to do this work themselves,” Bird says. 

For the tax-preparation company that built a guidance chatbot, the capability to correct AI hallucinations was particularly important for providing accurate information, says Sameki. The company also made its chatbot more secure, safe and private with filters that block prompt attacks, harmful content and personally identifiable information.  

Making our own AI systems trustworthy is foundational in what we do, and we want to empower customers to do the same.

Sarah Bird, chief product officer of Responsible AI

She says the health care organization that created the summarization assistant was especially interested in tools for improving accuracy and creating a custom filter to make sure the summaries didn’t omit key information.  

“A lot of our tools help as debugging tools so they could understand how to improve their application,” Sameki says. “Both companies were able to deploy faster and with a lot more confidence.”  

Microsoft is also helping organizations improve their AI governance, a system of tracking and sharing important details about the development, deployment and operation of an application or model. Available in private preview in Azure AI Foundry, AI reports will give organizations a unified platform for collaborating, complying with a growing number of AI regulations and documenting evaluation insights, potential risks and mitigations.

“It’s hard to know that all the pieces are working if you don’t have the right governance in place,” says Bird. “We’re making sure that Microsoft’s AI systems are compliant, and we’re sharing best practices, tools and technologies that help customers with their compliance journey.”  

The work is part of Microsoft’s goal to help people do more with AI and share learnings that make the work easier for everyone.  

“Making our own AI systems trustworthy is foundational in what we do, and we want to empower customers to do the same,” Bird says. 

Learn more about Microsoft’s Responsible AI work.

Lead illustration by Makeshift Studios / Rocio Galarza. Story published on January 22, 2025

The post Making it easier for companies to build and ship AI people can trust appeared first on The Microsoft Cloud Blog.

]]>
https://news.microsoft.com/source/features/ai/making-it-easier-for-companies-to-build-and-ship-ai-people-can-trust/feed/ 0
Coldplay evolves the fan experience with Microsoft AI https://azure.microsoft.com/en-us/blog/coldplay-evolves-the-fan-experience-with-microsoft-ai/ https://azure.microsoft.com/en-us/blog/coldplay-evolves-the-fan-experience-with-microsoft-ai/#respond Wed, 22 Jan 2025 13:00:00 +0000 To coincide with Coldplay's latest project, the band collaborated with Microsoft to produce an AI-powered experience that lets fans interact with their new album in a unique and personal way.

The post Coldplay evolves the fan experience with Microsoft AI appeared first on The Microsoft Cloud Blog.

]]>
Great music builds memories that span generations

My first concert with my son was a Coldplay show in Vancouver a couple of years ago. I was completely captivated by the show, the music, the immersive experience, and the focus on sustainability—but mostly by watching my youngest experience a concert for the first time. It was truly magical. The band has always been at the forefront of innovation and generating magic with their fans. 

A close up of a person's hands

To coincide with their latest project, A Film For The Future, the band collaborated with Microsoft to produce an AI-powered experience that lets fans interact with their new album MOON MUSiC in a unique and personal way.

After some initial skepticism when generative AI emerged, we’re now seeing creators and artists begin to experiment with AI as a creative booster—bringing fresh ways to interact with and enhance their work, come up with new ideas, and even get things done faster.

A Film For The Future is a highly collaborative project, bringing together a diverse group of filmmakers and animators to create unique segments for the 44-minute film, which provides a visual accompaniment to the new album. Each creator was given a broad set of themes and creative license to come up with visuals each felt best represented Coldplay’s music. That collaborative theme now extends to the fan experience, which transcends traditional passive viewing and enables you to create your own 15-second clip of the film using Microsoft Copilot and Azure AI. Your clip, or remix, is then added to the community playback here.

By going to the Community Remix section on the film’s website, you can see clips created by others accompanied by the “iAAM” track (“I Am A Mountain”) from “MOON MUSiC” and can create your own clip.

Not surprisingly, it’s a very emotive experience as you create your own personal remix. Mine is trust in summoning lightning, which is obviously awesome. You can fine-tune your own remix by adjusting the intensity of seven different attributes, such as passion, growth, and peace, that are represented as moons against a rainbow (trust me, you have to see it). Once you’re done, your remix is added to the community and you can download a video and image of your remix if you’d like. You can also create another remix.

Azure AIHow innovators are creating the future.

The memories of that Vancouver concert came flooding back while I was in the app, giving me a wonderful moment to plunge into a cherished memory. It was unexpected and kind of amazing. Not every app can evoke such a response. Coldplay certainly brings some emotional chemistry to the mix, but this creative use of AI opens new ground for fans. 

Microsoft collaborated with Seattle-based Pixel Lab to build this fan remix experience. The platform AI analyzes the emotional context of each video clip and dynamically assembles them to create a unique and immersive experience for every fan. This means that no two remixes are the same, and each fan gets a personalized journey through Coldplay’s music, making the experience deeply engaging and memorable.

From the Community Remix experience, you can chat with Microsoft Copilot about Coldplay, the new album, A Film For The Future, or whatever you want to chat about using simple prompts.

The fusion of generative AI and human creativity is opening new vistas for artists and businesses alike. The fan remix experience is more than a showcase of cutting-edge technology. Fans can now become co-creators, using Microsoft AI to craft their own unique interpretations of Coldplay’s music. It highlights one of the many strategic values of integrating AI into creative processes to unlock new opportunities for innovation and differentiation. 

Made with Azure AI Foundry

This fan remix experience was built with a collection of Azure AI services available in Azure AI Foundry, Microsoft’s unified AI platform announced a few months ago at Microsoft Ignite. It integrates advanced AI services like natural language processing, computer vision, and machine learning to help organizations across industries create AI-powered solutions that accelerate innovation and differentiate them in the market. Azure AI Foundry enables everyone—from developers to data scientists, to business and IT leaders—to collaborate seamlessly design, customize, and manage innovative solutions that transform ideas into reality. 

Simplify development and improve AI efficiency with AI Foundry

Enhancing human creativity with AI 

At the heart of the Coldplay project is the belief that AI can enhance human creativity rather than replace it. Technology has been an important part of creative expression for a long, long time. Artists are often among the first to test the creative potential of technical innovation. Generative AI expands the role of technology in artistic expression, bringing audiences into the creative process, and in this case, elevating fans to co-creators with the artist. It’s a glimpse into how AI is changing expectations for how we engage with our favorite artists.

background pattern

A Film For The Future

Try the remix experience to create your own

Learn more >

The post Coldplay evolves the fan experience with Microsoft AI appeared first on The Microsoft Cloud Blog.

]]>
https://azure.microsoft.com/en-us/blog/coldplay-evolves-the-fan-experience-with-microsoft-ai/feed/ 0
Innovating in line with the European Union’s AI Act https://blogs.microsoft.com/on-the-issues/?p=66749 https://blogs.microsoft.com/on-the-issues/?p=66749#respond Wed, 15 Jan 2025 14:10:00 +0000 As our Microsoft AI Tour reached Brussels, Paris, and Berlin recently, we met with European organizations that were energized by the possibilities of our latest AI technologies and engaged in deployment projects. They were also alert to the fact that 2025 is the year that key obligations under the European Union’s AI Act come into effect, opening a new chapter in digital regulation as the world’s first, comprehensive AI law becomes a reality.

The post Innovating in line with the European Union’s AI Act appeared first on The Microsoft Cloud Blog.

]]>
As our Microsoft AI Tour reached Brussels, Paris, and Berlin toward the end of last year, we met with European organizations that were energized by the possibilities of our latest AI technologies and engaged in deployment projects. They were also alert to the fact that 2025 is the year that key obligations under the European Union’s AI Act come into effect, opening a new chapter in digital regulation as the world’s first, comprehensive AI law becomes a reality.  

At Microsoft, we are ready to help our customers do two things at once: innovate with AI and comply with the EU AI Act. We are building our products and services to comply with our obligations under the EU AI Act and working with our customers to help them deploy and use the technology compliantly. We are also engaged with European policymakers to support the development of efficient and effective implementation practices under the EU AI Act that are aligned with emerging international norms.  

Below, we go into more detail on these efforts. Since the dates for compliance with the EU AI Act are staggered and key implementation details are not yet finalized, we will be publishing information and tools on an ongoing basis. You can consult our EU AI Act documentation on the Microsoft Trust Center to stay up to date. 

Building Microsoft products and services that comply with the EU AI Act 

Organizations around the world use Microsoft products and services for innovative AI solutions that empower them to achieve more. For these customers, particularly those operating globally and across different jurisdictions, regulatory compliance is of paramount importance. This is why, in every customer agreement, Microsoft has committed to comply with all laws and regulations applicable to Microsoft. This includes the EU AI Act. It is also why we made early decisions to build and continue to invest in our AI governance program. 

As outlined in our inaugural Transparency Report, we have adopted a risk management approach that spans the entire AI development lifecycle. We use practices like impact assessments and red-teaming to help us identify potential risks and ensure that teams building the highest-risk models and systems receive additional oversight and support through governance processes, like our Sensitive Uses program. After mapping risks, we use systematic measurement to evaluate the prevalence and severity of risks against defined metrics. We manage risks by implementing mitigations like the classifiers that form part of Azure AI Content Safety and ensuring ongoing monitoring and incident response.  

Our framework for guiding engineering teams building Microsoft AI solutions—the Responsible AI Standard—was drafted with an early version of the EU AI Act in mind.  

Building on these foundational components of our program, we have devoted significant resources to implementing the EU AI Act across Microsoft. Cross-functional working groups combining AI governance, engineering, legal, and public policy experts have been working for months to identify whether and how our internal standards and practices should be updated to reflect the final text of the EU AI Act as well as early indications of implementation details. They have also been identifying any additional engineering work needed to ensure readiness.  

For example, the EU AI Act’s prohibited practices provisions are among the first provisions to come into effect in February 2025. Ahead of the European Commission’s newly established AI Office providing additional guidance, we have taken a proactive, layered approach to compliance. This includes:​ 

  • Conducting a thorough review of Microsoft-owned systems already on the market to identify any places where we might need to adjust our approach, including by updating documentation or implementing technical mitigations.​ To do this, we developed a series of questions designed to elicit whether an AI system could implicate a prohibited practice and dispatched this survey to our engineering teams via our central tooling. Relevant experts reviewed the responses and followed up with teams directly where further clarity or additional steps were necessary. These screening questions remain in our central responsible AI workflow tool on an ongoing basis, so that teams working on new AI systems answer them and engage the review workflow as needed.  
  • Creating new restricted uses in our internal company policy to ensure Microsoft does not design or deploy AI systems for uses prohibited by the EU AI Act.​ We are also developing specific marketing and sales guidance to ensure that our general-purpose AI technologies are not marketed or sold for uses that could implicate the EU AI Act’s prohibited practices.  
  • Updating our contracts, including our Generative AI Code of Conduct, so that our customers clearly understand they cannot engage in any prohibited practices.​ For example, the Generative AI Code of Conduct now has an express prohibition on the use of the services for social scoring. 

We were also among the first organizations to sign up to the three core commitments in the AI Pact, a set of voluntary pledges developed by the AI Office to support regulatory readiness ahead of some of the upcoming compliance deadlines for the EU AI Act. In addition to our regular rhythm of publishing annual Responsible AI Transparency Reports, you can find an overview of our approach to the EU AI Act and a more detailed summary of how we are implementing the prohibited practices provisions on the Microsoft Trust Center. 

Working with customers to help them deploy and use Microsoft products and services in compliance with the EU AI Act 

One of the core concepts of the EU AI Act is that obligations need to be allocated across the AI supply chain. This means that an upstream regulated actor, like Microsoft in its capacity as a provider of AI tools, services, and components, must support downstream regulated actors, like our enterprise customers, when they integrate a Microsoft tool into a high-risk AI system. We embrace this concept of shared responsibility and aim to support our customers with their AI development and deployment activities by sharing our knowledge, providing documentation, and offering tooling. This all ladders up to the AI Customer Commitments that we made in June of last year to support our customers on their responsible AI journeys. 

We will continue to publish documentation and resources related to the EU AI Act on the Microsoft Trust Center to provide updates and address customer questions. Our Responsible AI Resources site is also a rich source of tools, practices, templates, and information that we believe will help many of our customers establish the foundations of good governance to support EU AI Act compliance.  

On the documentation front, the 33 Transparency Notes that we have published since 2019 provide essential information about the capabilities and limitations of our AI tools, components, and services that our customers rely on as downstream deployers of Microsoft AI platform services. We have also published documentation for our AI systems, such as answers to frequently asked questions. Our Transparency Note for the Azure OpenAI Service, an AI platform service, and FAQ for Copilot, an AI system, are examples of our approach. 

We expect that several of the secondary regulatory efforts under the EU AI Act will provide additional guidance on model- and system-level documentation. These norms for documentation and transparency are still maturing and would benefit from further definition consistent with efforts like the Reporting Framework for the Hiroshima AI Process International Code of Conduct for Organizations Developing Advanced AI Systems. Microsoft has been pleased to contribute to this Reporting Framework through a process convened by the OECD and looks forward to its forthcoming public release. 

Finally, because tooling is necessary to achieve consistent and efficient compliance, we make available to our customers versions of the tools that we use for our own internal purposes. These tools include Microsoft Purview Compliance Manager, which helps customers understand and take steps to improve compliance capabilities across many regulatory domains, including the EU AI Act; Azure AI Content Safety to help mitigate content-based harms; Azure AI Foundry to help with evaluations of generative AI applications; and Python Risk Identification Tool or PyRIT, an open innovation framework that our independent AI Red Team uses to help identify potential harms associated with our highest-risk AI models and systems. 

Helping to develop efficient, effective, and interoperable implementation practices 

A unique feature of the EU AI Act is that there are more than 60 secondary regulatory efforts that will have a material impact on defining implementation expectations and directing organizational compliance. Since many of these efforts are in progress or yet to get underway, we are in a key window of opportunity to help establish implementation practices that are efficient, effective, and aligned with emerging international norms. 

Microsoft is engaged with the central EU regulator, the AI Office, and other relevant authorities in EU Member States to share insights from our AI development, governance, and compliance experience, seek clarity on open questions, and advocate for practical outcomes. We are also participating in the development of the Code of Practice for general-purpose AI model providers, and we remain longstanding contributors to the technical standards being developed by European Standards organizations, such as CEN and CENELEC, to address high-risk AI system requirements in the EU AI Act. 

Our customers also have a key role to play in these implementation efforts. By engaging with policymakers and industry groups to understand the evolving requirements and have a say on them, our customers have the opportunity to contribute their valuable insights and help shape implementation practices that better reflect their circumstances and needs, recognizing the broad range of organizations in Europe that are energized by the opportunity to innovate and grow with AI. In the coming months, a key question to be resolved is when organizations that substantially fine-tune AI models become downstream providers due to comply with general-purpose AI model obligations in August. 

Going forward 

Microsoft will continue to make significant product, tooling, and governance investments to help our customers innovate with AI in line with new laws like the EU AI Act. Implementation practices that are efficient, effective, and interoperable internationally are going to be key to supporting useful and trustworthy innovation on a global scale, so we will continue to lean into regulatory processes in Europe and around the world. We are excited to see the projects that animated our Microsoft AI Tour events in Brussels, Paris, and Berlin improve people’s lives and earn their trust, and we welcome feedback on how we can continue to support our customers in their efforts to comply with new laws like the EU AI Act. 

The post Innovating in line with the European Union’s AI Act appeared first on The Microsoft Cloud Blog.

]]>
https://blogs.microsoft.com/on-the-issues/?p=66749/feed/ 0