Machine learning Archives | Microsoft AI Blogs
http://approjects.co.za/?big=en-us/ai/blog/topic/machine-learning/

Power your AI transformation with Microsoft Fabric skilling plans and a certification discount
http://approjects.co.za/?big=en-us/microsoft-fabric/blog/2025/02/20/power-your-ai-transformation-with-microsoft-fabric-skilling-plans-and-a-certification-discount/
Thu, 20 Feb 2025 16:00:00 +0000

As we continue enhancing Fabric's capabilities, we are pleased to share several significant new skilling opportunities to help further empower your data analytics journey.

The post Power your AI transformation with Microsoft Fabric skilling plans and a certification discount appeared first on Microsoft AI Blogs.

Looking for a competitive edge in the era of AI and today’s data-powered business world? Microsoft Fabric transforms data into actionable intelligence, empowering your organization to optimize operations, uncover growth opportunities, and mitigate risks with a unified data solution. As we continue enhancing Fabric’s capabilities, we are pleased to share several significant new skilling opportunities to help further empower your data analytics journey.

In this blog, we’ll lay out the latest and greatest of our curated Fabric skilling paths on Microsoft Learn to help your team drive transformative business outcomes. We’re also announcing a new certification exam available with a 50% discount! And of course, we’ll dive into our exciting upcoming in-person event, FabCon, where we’ll have even more surprises in store, plus a chance for you to connect with industry experts and the larger data-analysis community. Let’s get started!

Get certified as a Fabric Data Engineer

Learning Microsoft Fabric equips aspiring engineers with the skills to streamline workflows, handle large-scale data processing, and integrate advanced AI tools. As a Fabric Data Engineer, you’ll have the chance to design and manage cutting-edge data solutions that drive AI-powered insights. That’s why we’re thrilled to announce the general availability of our new certification for Fabric Data Engineers.

By earning your Microsoft Certified: Fabric Data Engineer Associate certification, you’ll be equipped with an industry-recognized credential to set you apart in the growing field of data and AI. But you don’t have to do it alone. We’ve convened two live and on-demand series of expert-led walkthroughs to help you either get started with Fabric or build on your existing skills. Designed with Fabric Data Engineers in mind, these Microsoft Fabric Learn Together sessions (available in four time zones and three languages) are intended to give you the knowledge and confidence to ace your certification exam and take your data engineering career to the next level.

Want to explore the ins and outs of Fabric on your own time? We also have an official plan on Microsoft Learn featuring everything you’ll need to learn to pass the DP-700 Fabric Data Engineer Associate certification exam, including: 

  • Describe the core features and capabilities of lakehouses in Microsoft Fabric.
  • Use Apache Spark DataFrames to analyze and transform data.
  • Use Real-Time Intelligence to ingest, query, and process streams of data.
  • And much more! 

There’s more: For a limited time, you can get 50% off the cost of the DP-700 Fabric Data Engineer Associate certification exam. To be eligible, either attend one of the Learn Together sessions, complete the Plan on Microsoft Learn, or have previously passed the DP-203 exam. You have until March 31, 2025, to request the discount voucher, so get started fast-tracking your data engineering career today!

Join a community of Fabric users and experts at FabCon Las Vegas 

No matter your role or skill level, you can connect with other Fabric users and experts at the Fabric Community Conference from March 31-April 2, 2025, in Las Vegas. Join us at the MGM Grand for the ultimate Microsoft Fabric, Power BI, SQL, and AI event featuring over 200 sessions with speakers covering exciting new Fabric features and skilling opportunities. 

Connect one-on-one with community and product experts at a dedicated partner pre-day, all-day Ask-the-Experts hours, a bustling expo hall, and plenty of after-hours social events. Workshops will also be available on March 29, March 30, and April 3, making this the most comprehensive Microsoft Fabric learning experience to date.

Don’t miss out! Register today to grab the early bird discount and use code MSCUST for $150 off registration. 

Build AI apps faster with SQL databases in Fabric 

Fabric’s capabilities and versatility are always expanding. We recently introduced a public preview of SQL databases to make building AI apps faster and easier than ever. SQL Database in Microsoft Fabric provides a unified, autonomous, and AI-optimized platform that accelerates app development by up to 71%, empowering businesses to innovate faster and gain a competitive edge in the AI era. 

To enhance your skills in working with SQL databases in Fabric, we’ve designed a new learning path called Implement operational databases in Microsoft Fabric. This course will guide you through the process of creating and managing SQL databases within the Fabric environment. You’ll also learn how to provision a SQL database, configure security settings, and perform essential database operations.

The course covers important topics such as data modeling, query optimization, and performance tuning specific to Fabric’s SQL capabilities. By completing this learning path, you’ll gain hands-on experience with Fabric’s SQL features and be better equipped to design and implement efficient database solutions.
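The query-optimization topics the course covers come down to a familiar pattern: model a table, then index the columns your frequent queries filter on. As a self-contained illustration of that pattern only (Fabric SQL databases speak T-SQL; Python's built-in sqlite3 is used here purely as a stand-in so the sketch runs anywhere):

```python
import sqlite3

# Illustrative stand-in: Fabric SQL databases use T-SQL, but the core
# pattern -- model a table, then index the columns your queries filter
# on -- is the same. sqlite3 keeps the sketch self-contained.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Data modeling: a minimal orders table.
cur.execute(
    """
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        customer TEXT NOT NULL,
        amount   REAL NOT NULL
    )
    """
)
cur.executemany(
    "INSERT INTO orders (customer, amount) VALUES (?, ?)",
    [("contoso", 120.0), ("fabrikam", 75.5), ("contoso", 42.0)],
)

# Query optimization: an index on the column used in the WHERE clause,
# so lookups by customer avoid a full table scan.
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

cur.execute(
    "SELECT COUNT(*), SUM(amount) FROM orders WHERE customer = ?",
    ("contoso",),
)
row = cur.fetchone()
print(row)  # (2, 162.0)
conn.close()
```

The table and index names here are hypothetical; the learning path walks through the equivalent steps against a real Fabric SQL database.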

You can also watch on-demand sessions of a recent SQL Database in Fabric Learn Together series to see how to build reliable, highly scalable applications with cloud authentication and encryption secured by default. 

Unlock AI-ready insights and transform your data 

There’s always more to discover on Microsoft Learn, including a plan to help you harness AI and unify your intelligent data and analytics on the Fabric platform. With the Make your data AI-ready with Microsoft Fabric plan on Microsoft Learn, you’ll find out how to implement large-scale data engineering, build a lakehouse, and explore warehouse solutions.

This free, curated, and self-paced plan guides you through key learning milestones:

  • Ingesting data through shortcuts, mirroring, pipelines, and dataflows. 
  • Transforming data using dataflows, procedures, and notebooks. 
  • Storing processed data in the lakehouse and data warehouse for easy retrieval. 
  • Exposing data by creating reusable semantic models in Power BI, making transformed data accessible for analysis. 
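As a rough sketch of those four milestones, the shape of the flow can be shown in plain Python. This assumes nothing about Fabric's own APIs; in practice the work runs through pipelines, dataflows, notebooks, and Power BI:

```python
# Conceptual sketch of the four milestones above -- the real work happens
# in Fabric pipelines, notebooks, and Power BI, but the flow is the same.

# 1. Ingest: raw events arrive from a source system.
raw = [
    {"region": "EMEA", "sales": "1200"},
    {"region": "EMEA", "sales": "800"},
    {"region": "APAC", "sales": "950"},
    {"region": "APAC", "sales": None},  # bad record
]

# 2. Transform: fix types and drop records that fail validation.
cleaned = [
    {"region": r["region"], "sales": int(r["sales"])}
    for r in raw
    if r["sales"] is not None
]

# 3. Store: land the processed rows in a table (here, just a list).
lakehouse_table = cleaned

# 4. Expose: a reusable aggregate, akin to a semantic-model measure.
sales_by_region = {}
for row in lakehouse_table:
    sales_by_region[row["region"]] = (
        sales_by_region.get(row["region"], 0) + row["sales"]
    )

print(sales_by_region)  # {'EMEA': 2000, 'APAC': 950}
```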

Kick off your data and AI journey at Microsoft Learn 

If you’re looking to expand your Microsoft Fabric expertise and accelerate your professional development, we have everything you need:

  • Harness AI to unify your data and analytics with the official plan on Microsoft Learn: Make your data AI-ready with Microsoft Fabric.


Microsoft: A leader in the 2024 Gartner Magic Quadrant report
http://approjects.co.za/?big=en-us/microsoft-fabric/blog/2024/12/09/microsoft-a-leader-in-the-2024-gartner-magic-quadrant-report/
Mon, 09 Dec 2024 16:00:00 +0000

We are thrilled to announce that Microsoft has been named a Leader in the 2024 Gartner Magic Quadrant™ for Data Integration Tools for the fourth year in a row. We believe this recognition reflects our dedication to innovation, excellence, and delivering value to our customers in data integration.

The post Microsoft: A leader in the 2024 Gartner Magic Quadrant report appeared first on Microsoft AI Blogs.

We are thrilled to announce that Microsoft has been named a Leader in the 2024 Gartner Magic Quadrant™ for Data Integration Tools for the fourth year in a row. We believe this recognition reflects our dedication to innovation, excellence, and delivering value to our customers in data integration. 

[Figure: 2024 Gartner Magic Quadrant™ for Data Integration Tools]

A Leader in Data Integration 

We feel that Microsoft’s acknowledgment in the Gartner Magic Quadrant reflects our dedication to innovation and customer-centric solutions. This stems from our relentless drive to advance technology and address the ever-evolving needs of modern organizations.

Our vision for data integration is to deliver seamless, intuitive experiences that empower businesses to unlock the full potential of their data and achieve transformative results. This recognition reinforces our dedication to leading the evolution of data integration and delivering unparalleled value to our customers and partners worldwide.


Microsoft Fabric: Unified Data Platform for the Era of AI 

At the core of our data integration strategy is Microsoft Fabric. Built to navigate the complexities of modern data ecosystems, Microsoft Fabric provides an all-in-one, software-as-a-service (SaaS) platform with AI-powered services to handle any data project—all within a pre-integrated and optimized environment. It enables organizations to unlock their data’s full potential, drive innovation, and make smarter decisions. Features like Copilot and other generative AI tools introduce new ways to transform and analyze data, generate insights, and create visualizations and reports in Microsoft Fabric.

Microsoft OneLake: The heart of our Data Integration journey 

At the center of Fabric is OneLake, the unified, open data lake that simplifies and accelerates data integration across diverse systems. OneLake, with the data integration capabilities of Fabric, is designed to help you simplify data management and reduce data duplication. OneLake’s open data format means you only need to load the data into the lake once and you can use the single copy across every Fabric workload and engine. It acts as the central hub, ensuring seamless connectivity, accessibility, and collaboration for all your data needs. 

OneLake has four innovative pathways for integrating data depending on your needs: 

  1. Fabric Data Factory 

Fabric Data Factory integrates seamlessly with OneLake, offering powerful cloud-scale services for data movement, orchestration, transformation, deployment, and monitoring. These capabilities enable organizations to tackle even the most complex ETL (Extract, Transform, and Load) scenarios, unifying data estates, streamlining operations, and unlocking the full potential of their data.

  2. Multi-Cloud Shortcuts

OneLake shortcuts allow you to virtualize data into OneLake from across clouds, accounts, and domains—all without duplication, movement, or changes to metadata or ownership. This capability allows organizations to access and analyze their data in place, without the need for complex data migration processes. By maintaining a live connection to the source, OneLake ensures real-time data availability and consistency across all integrated environments. You can shortcut data from Azure Data Lake Storage, S3-compatible sources, Iceberg-compatible sources, Google Cloud Platform, Dataverse, and more.
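A toy sketch of the idea (not the OneLake API): a shortcut holds a live reference to externally stored data and resolves it on read, so there is no second copy to keep in sync. All class and path names here are hypothetical:

```python
# Conceptual illustration of a shortcut: a reference to data that lives
# elsewhere, resolved at read time, so source updates are visible
# immediately and nothing is duplicated.

class ExternalStore:
    """Stands in for an ADLS account, S3 bucket, or GCS bucket."""
    def __init__(self):
        self.objects = {}

class Shortcut:
    def __init__(self, store, path):
        self.store = store  # live reference to the source, never a copy
        self.path = path

    def read(self):
        # Resolved on read: no ingestion step, no stale replica.
        return self.store.objects[self.path]

s3 = ExternalStore()
s3.objects["sales/2024.csv"] = "region,amount\nEMEA,1200"

link = Shortcut(s3, "sales/2024.csv")
first_read = link.read()

# The source changes; the shortcut sees it with no re-ingestion.
s3.objects["sales/2024.csv"] = "region,amount\nEMEA,1200\nAPAC,950"
second_read = link.read()

print(first_read != second_read)  # True
```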

  3. Database Mirroring 

OneLake offers an innovative zero-ETL approach to database mirroring, simplifying the replication of operational databases into the lake. This capability minimizes the effort required to synchronize databases, supporting real-time changes and ensuring that data is always current and ready for analytics and reporting.
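Conceptually (this is an illustration, not Fabric's implementation), zero-ETL mirroring keeps a replica current by continuously applying the source's change feed, rather than re-extracting whole tables on a schedule:

```python
# Conceptual sketch of change-feed replication: each insert/update/delete
# from the source is applied to the replica as it happens, so the replica
# stays current without batch ETL jobs.

source_changes = [
    ("insert", 1, {"name": "Ada",   "city": "London"}),
    ("insert", 2, {"name": "Grace", "city": "Arlington"}),
    ("update", 1, {"name": "Ada",   "city": "Cambridge"}),
    ("delete", 2, None),
]

replica = {}
for op, key, row in source_changes:
    if op in ("insert", "update"):
        replica[key] = row
    elif op == "delete":
        replica.pop(key, None)

print(replica)  # {1: {'name': 'Ada', 'city': 'Cambridge'}}
```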

  4. Real-Time Intelligence 

Real-Time Intelligence in Microsoft Fabric empowers organizations to ingest and process streaming and high-granularity data instantaneously, driving real-time insights and automating decision-making. This solution is ideal for applications requiring immediate data updates, such as IoT analytics, fraud detection, and operational dashboards. The capability extends to highly granular data analytics, allowing businesses to track a single package within a global delivery network or monitor a specific component in a manufacturing machine across a fleet of factories worldwide, enabling precise insights and optimized operations. Leveraging cutting-edge data processing frameworks, the underlying Eventhouse engine ensures scalability, reliability, and low-latency performance, making it suitable for high-volume streaming scenarios.
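The core streaming pattern behind such scenarios can be sketched in a few lines, assuming nothing about Fabric's Real-Time Intelligence APIs: keep a small rolling window over a sensor stream and flag readings whose windowed average crosses a threshold, the kind of rule an IoT or fraud alert might encode:

```python
from collections import deque

# Conceptual streaming-analytics sketch: a bounded rolling window over a
# sensor stream, raising alerts when the windowed average crosses a
# threshold. Values and threshold are illustrative.

WINDOW, THRESHOLD = 3, 80.0
window = deque(maxlen=WINDOW)  # old readings fall off automatically
alerts = []

stream = [72.0, 75.0, 78.0, 85.0, 90.0, 95.0]  # e.g., machine temperature
for i, reading in enumerate(stream):
    window.append(reading)
    avg = sum(window) / len(window)
    if len(window) == WINDOW and avg > THRESHOLD:
        alerts.append((i, round(avg, 1)))

print(alerts)  # [(4, 84.3), (5, 90.0)]
```

In production this logic would run continuously against an unbounded event stream rather than a fixed list.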

With these innovative pathways, Fabric empowers organizations to break down data silos, optimize workflows, and unlock the full potential of their data. Whether through seamless data integration, real-time insights, or multi-cloud collaboration, Fabric is designed to meet the demands of modern data ecosystems. These enriched features position Fabric as a critical tool for organizations that need to get the most from their data while maintaining simplicity, security, and scalability.

Customer success stories 

Our customers’ success stories are a testament to the impact of Microsoft Fabric. Organizations across various industries have leveraged our data integration capabilities to unlock new opportunities, drive innovation, and achieve their business goals. By streamlining data processes and improving data quality, Microsoft Fabric has enabled these businesses to make data-driven decisions with confidence. 

Read UST Global’s case study to learn how they leveraged the power of Fabric to migrate over 20 years of data, integrating disparate data sources to facilitate better collaboration and innovation among employees. 

Looking ahead: The future of Data Integration with Microsoft Fabric 

As we celebrate being recognized as a Leader in the Gartner Magic Quadrant for the fourth consecutive year, we are motivated to push the boundaries of what’s possible in data integration. To us, this is a milestone that reflects not only our commitment to innovation but also our dedication to empowering our customers to turn their data into actionable insights.

Looking forward, the roadmap for Microsoft Fabric is filled with exciting enhancements and new features. These advancements are designed to tackle the complexities of modern data ecosystems, making it even easier for organizations to unify, transform, and harness their data at scale. Continuous improvement is at the core of our strategy. We aim to remain at the forefront of the data integration landscape and redefine the possibilities of what a comprehensive data platform can achieve. 

We believe this recognition by Gartner is a validation of the trust our customers place in us and a reflection of our relentless drive to deliver world-class solutions. As we continue this journey, we remain committed to collaborating with our community and partners, building on this success to achieve even greater outcomes together.

Resources 


Gartner, Magic Quadrant for Data Integration Tools, by Thornton Craig, Sharat Menon, Robert Thanaraj, Michele Launi, Nina Showell, 3 December 2024 

Gartner does not endorse any vendor, product, or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. 

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates and is used herein with permission. All rights reserved. 

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document.

Laying a secure cloud foundation for AI-powered enterprises
https://azure.microsoft.com/en-us/blog/laying-a-secure-cloud-foundation-for-ai-powered-enterprises/
Tue, 03 Dec 2024 16:00:00 +0000

Azure is designed to unify operations, streamline application development, and consolidate data management across distributed infrastructures.

The post Laying a secure cloud foundation for AI-powered enterprises appeared first on Microsoft AI Blogs.

As AI continues to redefine the way businesses operate, organizations are facing new challenges and demands on their technology platforms. Today’s enterprises want systems that can handle massive amounts of data to support real-time insights while also helping them overcome the challenge of working with operations, applications, data, and infrastructure across IT and OT environments—whether in the cloud, at the edge, or on-premises. 

This is where Azure’s adaptive cloud approach enabled by Azure Arc comes into play. Azure is designed to help unify operations, streamline application development, and consolidate data management across distributed infrastructures to meet the demands of an AI-powered world. 

At Microsoft’s Ignite conference, we announced new capabilities that make it easier to use the power of Azure wherever you need it. These announcements include the preview of Azure Arc gateway, the introduction of Windows Server Management enabled by Azure Arc, the preview of Azure Container Storage enabled by Azure Arc, Azure Monitor pipeline, Azure Key Vault Secret Store, general availability (GA) of Azure IoT Operations, Fabric Real-Time Intelligence, and the introduction of Azure Local, among others. 

Azure Arc, the core component of this approach, now supports over 39,000 customers worldwide, connecting servers, databases, and Kubernetes clusters directly to Azure. The rapid growth of Azure Arc signals that the cloud is more than just a place; it is a paradigm shift that extends throughout every organization. We are excited to share how customers such as LALIGA, Coles, Husqvarna, and Emirates Global Aluminium are leveraging the adaptive cloud approach to help achieve their business goals. 

Operate anywhere with AI-enhanced management and security

For one of our customers, LALIGA, one of the world’s largest football leagues, Azure’s adaptive cloud approach has been critical for managing an infrastructure capable of helping support real-time data processing across its stadiums and digital platforms to deliver an engaging fan experience. By adopting Azure Arc, LALIGA has achieved seamless management of both cloud and on-premises environments, processing over 3 million data points per match and enabling rapid, AI-powered insights. This unified platform provides LALIGA with the flexibility and scalability to react to dynamic fan interactions and optimize operations, to help ensure they can continue evolving alongside new technologies and market demands. 

“The challenges that we have can be solved with this adaptive cloud approach. I think that we have the right tool, which is Azure Arc. With this, we’re able to manage the infra no matter if it’s located in the cloud, on-premises, in the edge.”—Miguel Angel Leal Góngora, Chief Technology & Innovation Officer, LALIGA 

Enabling operational resilience and security across distributed systems is a foundational requirement to help maintain service and protect sensitive data.

Azure provides comprehensive tools that help streamline operations and management of both infrastructure and applications, including configuration management and governance, resiliency and observability, built-in security and control, and universal AI assistants like Copilot in Azure. An essential part of simplifying cloud connectivity from datacenter and edge sites is the new Azure Arc gateway in preview. Built in response to customer feedback, this capability provides a streamlined approach that simplifies cloud connectivity and empowers teams to manage cloud connections more easily, enhancing control over the network infrastructure. 

In addition, we are simplifying access to Windows Server Management enabled by Azure Arc. At no extra cost, customers with Software Assurance (SA) or active subscription licenses can access certain Azure Arc-enabled management capabilities. By connecting their servers to Azure, customers can use over 20 of Azure’s services, including Azure Update Manager, Machine Configuration, and Azure Monitor, as well as Windows Server features like Azure Site Recovery and Best Practices Assessment. These tools help centralize and modernize management across hybrid, multi-cloud, and edge environments.

Simplifying app development and accelerating innovation with Arc-enabled Kubernetes

Leveraging cloud-native development to drive innovation is a key focus for modern infrastructure, and Microsoft’s adaptive cloud approach provides solutions that help make this easier for developers.

One of the primary challenges that developers face is ensuring applications remain reliable, even in disconnected or intermittent network scenarios. A significant part of this solution is the new Azure Container Storage enabled by Azure Arc, which is currently available in preview. Azure Container Storage allows developers to build containerized applications that operate seamlessly across environments, regardless of where data is stored and despite intermittent connectivity. With this capability, data is automatically synchronized between local storage and cloud environments when connectivity is restored, ensuring that developers can confidently build edge solutions that are scalable and robust. 

The new Azure Monitor pipeline allows teams to ingest and process large volumes of data efficiently, enabling quick identification and resolution of potential issues. This streamlined data pipeline is key to maintaining operational efficiency and scaling modern cloud-native applications across distributed environments.

Security becomes increasingly complex as systems span clouds, datacenters, and edge. Azure Key Vault Secret Store provides a robust solution for managing secrets within Kubernetes clusters, offering features such as auto-rotation of secrets for enhanced security. This modern cloud security approach helps to ensure that sensitive information remains secure across Linux and Windows environments and offers a reliable and scalable way to secure applications and workloads. 
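The auto-rotation idea can be sketched conceptually (this is an illustration only, not the Azure Key Vault API): a secret is replaced on a schedule, and consumers always fetch the current value instead of pinning a long-lived credential. The class and TTL below are hypothetical:

```python
import secrets
import time

# Conceptual sketch of secret auto-rotation: the stored value is replaced
# once its time-to-live expires, so callers never hold a stale credential.

class RotatingSecret:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._rotate()

    def _rotate(self):
        self.value = secrets.token_hex(16)  # new random credential
        self.rotated_at = time.monotonic()

    def get(self):
        # Rotate transparently once the secret has expired.
        if time.monotonic() - self.rotated_at > self.ttl:
            self._rotate()
        return self.value

secret = RotatingSecret(ttl_seconds=0.01)
first = secret.get()
time.sleep(0.05)        # wait past the TTL
second = secret.get()
print(first != second)  # True: the credential was rotated
```

A managed secret store does this server-side and pushes the new value to workloads; the point here is only the contract callers rely on.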

Coles operates more than 1,800 stores across Australia, including 850 Coles Supermarkets locations. Committed to seamless experiences for millions of customers in person and online, the retailer is constantly investing in innovative technologies, such as AI and computer vision. Coles uses Azure Machine Learning to train and develop edge AI models and run these models locally in their stores. Leveraging Azure’s adaptive cloud approach, the company reports it has met its target of a six-times increase in the pace that it can deploy applications to stores. 

“We’re going to be working more with Microsoft over the next 12 months to build out Azure Machine Learning operations for the edge to be able to seamlessly test and deploy new models and ensure auditability of our models and the different versions over time. The Azure automated machine learning tool was really useful for us, and it speeds up our data annotation time for training models by 50%.”—Roslyn Mackay, Head of Technology Innovation, Coles Group 


Unifying data from edge to cloud for AI-powered insights

In a recent Forrester survey commissioned by Microsoft, respondents shared that on average 46% of the data captured by their organization is currently sent to the cloud and that they expect that number to grow to 68% within just two years. With Azure Arc, organizations can more consistently manage and secure connected devices across distributed environments, to help ensure data integrity at every level. 

The GA of Azure IoT Operations further enhances this capability by simplifying the process of collecting, managing, and processing data from IoT devices. Additionally, the GA of Fabric Real-Time Intelligence enables ingestion of signals from the edge for transformation, visualization, tracking, AI, and real-time actions in the cloud. This complements Fabric’s existing suite of insights and analytics capabilities including OneLake and Power BI. Together, these services help provide businesses with the ability to leverage AI fully across their distributed estate. 


Husqvarna is a world leader in outdoor products for forest, park, lawn, and garden care, as well as equipment and diamond tools for the light construction industry. Husqvarna’s business goals include doubling the number of connected devices it currently has in the market, doubling the sales of robotic lawn mowers, and increasing its market share of electrified solutions, all of which will require a highly effective global supply chain. Husqvarna envisions Azure IoT Operations as an important component of the platform it is defining, providing new capabilities to build a data-powered, global supply chain and improve processes in ways that were previously difficult. For Husqvarna, the ability to harness data on a global scale will allow it to develop a stronger and more efficient supply chain, reduce costs, and enhance its efficiency in delivering goods to customers. 

Innovating on a blended infrastructure, together 

Microsoft’s vision for the future of distributed infrastructure centers on creating a seamless blend of cloud and customer environments, fundamentally redefining what cloud infrastructure means. Instead of treating cloud as a separate entity, Microsoft’s approach integrates customer environments with cloud services, allowing businesses to extend their technology operations across datacenter, edge, and public cloud environments.

A cornerstone of this vision is the introduction of Azure Local. Azure Local is an Arc-enabled infrastructure designed specifically for local data processing and critical operations, bringing cloud-like capabilities to on-premises environments. This solution enables organizations to manage workloads that require low-latency and robust performance while benefiting from the scalability and resilience of the cloud. 

Azure Local’s architecture is built to support near real-time processing needs and can help our customers with more options for their regulatory compliance requirements, including decentralized or disconnected scenarios.

Emirates Global Aluminium (EGA), based in the United Arab Emirates, has rapidly grown from a small regional smelter into a vertically integrated aluminium provider serving more than 400 customers from aerospace to automotive to consumer goods, and is now the world’s largest premium aluminium producer. To support both its on-site operations and broader cloud-based solutions, including its digital manufacturing platform, EGA is now focused on a plan to move one-third of its server base to the cloud with Azure and another third to run hybrid and at the edge with Azure Local, bringing together the best of the public and private cloud from one provider.

After bringing the power of the cloud into its operation areas, EGA experienced 10 to 13 times faster AI response time, lower latency, and 86% cost savings associated with AI video and image recognition in comparison to building an AI model at the edge independently.

“Microsoft Azure hybrid cloud brings us not just infrastructure as a service capability, but also many software and platform capabilities that open up new possibilities we didn’t have in our former on-premises environment. One of the key capabilities is running real-time AI data analysis in our operations. For example, we used to manually inspect only 2% of anodes, which are large carbon blocks used in the aluminium smelting process. Now, we inspect 100% of all anodes using vision AI based on a neural network machine learning model running on the edge (with Azure Local). This model allows us to standardize the inspection process by automatically recognizing defects in real time going beyond what the human eye can see.”—Carlo Khalil Nizam, Chief Digital Officer, EGA 

Looking ahead: join us in shaping the future of adaptive cloud

At Microsoft, our commitment to the adaptive cloud approach is not just about addressing today’s challenges. It is about equipping organizations to thrive in the AI-powered world of tomorrow. As we continue to innovate, we are excited to partner with you to redefine what is possible across distributed environments.

Want to learn more about our roadmap and how Azure is powering transformation across industries? Check out our Microsoft Ignite sessions and blogs to dive deeper into the latest announcements, hear from our experts, and explore how the adaptive cloud approach can work for you. Let’s build the future together. 

Accelerate app innovation with an AI-powered data platform
http://approjects.co.za/?big=en-us/microsoft-fabric/blog/2024/11/19/accelerate-app-innovation-with-an-ai-powered-data-platform/
Tue, 19 Nov 2024 13:30:00 +0000

Microsoft Fabric is an all-in-one, software-as-a-service (SaaS) platform with AI-powered services to accomplish any data project—all in a pre-integrated and optimized environment so all your data teams can work faster, together.

The post Accelerate app innovation with an AI-powered data platform appeared first on Microsoft AI Blogs.

One year ago, we launched an end-to-end data platform into general availability designed to help organizations power their AI transformation and reimagine how to connect, manage, and analyze their data. Microsoft Fabric is an all-in-one, software-as-a-service (SaaS) platform with AI-powered services to accomplish any data project—all in a pre-integrated and optimized environment so all your data teams can work faster, together.

With Fabric, we focused on simplicity, openness, and autonomy. All Fabric workloads work together seamlessly out-of-the-box without the myriad of infrastructure and configuration settings you typically find in data platforms so you can focus on getting results. You can ingest structured and unstructured data in any format into OneLake’s open Delta Parquet format and even access third-party tools from industry leading software companies built directly into Fabric. Advanced security, governance, and continuous integration and continuous delivery (CI/CD) capabilities are woven into the platform with personalized experiences for admins and users alike. Microsoft Copilot and other AI capabilities are built into every layer of Fabric to help data professionals and business users automate routine tasks and get more done. In fact, we’ve found that users were 52% faster in completing standard data analysis tasks and uncovered insights 36% more accurately when using Copilot in Fabric, with 90% saying they were likely to adopt Copilot in Fabric.1  

Fabric’s vision for a data platform has resonated strongly with the industry, and more than 17,000 customers, including 70% of the Fortune 500, are already using Fabric to empower their data teams.

  • Melbourne Airport, the second busiest in Australia, used Fabric to analyze their operational data in real time and gained 30% increased performance efficiency across data-related operations. “It’s a radical and powerful new technology that can feel just like using Microsoft Excel or Power BI. But once in the hands of the user, it doesn’t feel like a new, complex technology at all,” says Irfan Khan, Head of Data and Analytics.  
  • Chanel, a world leader in luxury fashion, adopted Fabric not only to drive more value from its data and support its AI innovation, but also to safeguard its data at rest and in transit with Fabric’s end-to-end, built-in security, governance, and reliability. “We chose Microsoft Fabric as the foundation of this platform, driven by its ability to implement a data mesh approach,” said Olivier Barbonnat, Chief Information Officer Europe.
  • Our own Microsoft IDEAS (insights, data, engineering, analytics, systems) team, one of the largest data teams in the world, transitioned to Fabric to support its AI ambitions. Its solution now encompasses 27,000 data sources, 420 petabytes of data, 35,000 data pipelines, 38,000 semantic models, and more than 600 teams relying on its models. The IDEAS team estimated it has received a 50% efficiency boost from consolidating assets in OneLake, using modern tools such as Spark and Python, Direct Lake mode in Microsoft Power BI, and AI-assisted coding through IDEAS Copilot.

Schaeffler, Hitachi Solutions, KPMG, Epic, and many other customers have seen a transformational impact on how they process data. You can explore all these Fabric stories on the Microsoft Customer Stories page. One of the reasons Fabric has caught the imagination of so many is that it lets you simplify and future-proof your data estate. Fabric’s capabilities and workloads will continue to expand and be seamlessly infused into our pre-integrated platform, helping you keep up with technology trends without added work.

And since launching Fabric, we’ve added new ways to bring data into Fabric with capabilities like mirroring and new shortcut sources. We’ve expanded Copilot in Fabric across almost every experience to help everyone automate routine tasks. We’ve added a multitude of security and governance features to help you make sure your data is secure at every step of its journey. We’ve added the ability to extend Fabric further with native, industry-specific workloads from Microsoft and other software developers. And most impactfully, we launched a new workload to help organizations make better decisions from Internet of Things (IoT), logs, and telemetry data—Real-Time Intelligence.  

With Fabric Real-Time Intelligence, we transformed Fabric into a platform equipped to support your operational scenarios and data in motion. And now, we’re helping you bring transactional scenarios to Fabric with the introduction of Fabric Databases. 

Introducing a unified data platform with Fabric Databases

Currently, the data and AI technology market is massively fragmented with hundreds of vendors and thousands of services. We believe the future of data and AI is the convergence of all your data services into a unified, open, and extensible platform, so you no longer have to manually stitch together disconnected services.  

Today, we’re thrilled to announce a major leap toward this goal with Fabric Databases, now in preview. Fabric Databases represent a new class of cloud databases that brings a world-class transactional database natively to Microsoft Fabric for app developers. With the addition of Fabric Databases, Fabric now brings together both transactional and analytical workloads, creating a truly unified data platform. Developers can streamline application development with simple, autonomous, and AI-optimized databases that provision in seconds and are secured by default with features like cloud authentication and database encryption. Built-in vector search, retrieval augmented generation (RAG) support, and Azure AI integration simplify AI app development, and your data is instantly available in OneLake for advanced analytics. Developers can even use Copilot in Fabric to translate natural language queries into SQL and get inline code completion alongside code fixes and explanations.
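Fabric’s built-in vector search runs inside the database engine, but the core idea it implements is easy to sketch: rank stored embeddings by their similarity to a query embedding. Below is a minimal, illustrative version in plain Python—the documents and tiny three-dimensional embeddings are made up; a real application would use an embedding model and the database’s native search rather than this in-memory loop:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def vector_search(query_embedding, documents, top_k=1):
    # Score every document against the query, then return the best matches.
    scored = [
        (cosine_similarity(query_embedding, doc["embedding"]), doc["text"])
        for doc in documents
    ]
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]

# Toy embeddings for illustration only.
docs = [
    {"text": "Fabric unifies analytics workloads.", "embedding": [0.9, 0.1, 0.0]},
    {"text": "SQL database provisions in seconds.", "embedding": [0.1, 0.9, 0.2]},
]
print(vector_search([0.85, 0.15, 0.05], docs))  # most similar document first
```

In a retrieval augmented generation (RAG) flow, the retrieved text would then be passed to a model as grounding context.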

SQL database, the first available in Fabric, was built on our industry-leading SQL Server engine and the simple and intuitive SaaS platform of Fabric. In fact, data professionals who’ve tried SQL database in Fabric were able to complete common database tasks up to 71% faster and with 63% more effective task completion. They reported feeling 84% more confident in these tasks and finding the tasks up to 91% less difficult.2 These results were even more pronounced for people who were newer to the cloud. Those with less than two years of cloud platform experience benefited the most in terms of efficiency and effectiveness, highlighting the simplicity and intuitiveness of Fabric Databases.

SQL database is just the beginning for Fabric Databases, with more databases on the roadmap. Whether you’re an experienced data professional or just getting started, you can build AI apps faster and more confidently on Fabric Databases.

To learn more, read the Fabric Databases blog post, watch the Microsoft Mechanics deep dive video, and watch the following sizzle video:

General availability of Fabric Real-Time Intelligence

We’re also thrilled to announce Real-Time Intelligence is now generally available. With Real-Time Intelligence, you get both pro-dev and no-code tools to ingest high-volume streaming data with high granularity, dynamically transform streaming data, query data in real-time for instant insights, and trigger automated actions based on the data. The Real-time hub provides a central place to discover and manage all your streaming data. Dener Motorsport, a participant in the annual Porsche Carrera Cup Brasil event, used Real-Time Intelligence for in-race analytics, and their CEO, Dener Pires, said “Before we used Microsoft Fabric and Real-Time Intelligence, it was probably 30 minutes before the engineers knew that something was wrong with a car, could get the data, analyze it, and provide a solution. Today that process is done in minutes.” Check out this blog post and the following demo to see Real-Time Intelligence in action: 

OneLake catalog—a complete catalog for discovery, management, and governance

No matter what data project you’re trying to accomplish, it starts with the right foundation. OneLake, Fabric’s unified, multi-cloud data lake, is built for everyone in your entire organization as the single point to discover and explore your data. With OneLake shortcuts and mirroring, you can unify all of your multi-cloud and on-premises sources and enable your people to work from the same data—meaning fewer copies of data, better collaboration between your teams, and easier, more streamlined analysis. And since data is stored in an open format, you can use data in OneLake for all your data projects, no matter the vendor or service.

Today, we’re excited to announce the OneLake catalog, a complete solution to explore, manage, and govern your entire Fabric data estate. The OneLake catalog comes with two tabs, Explore and Govern, that can help all Fabric users discover and manage trusted data, as well as provide governance for data owners with valuable insights, recommended actions, and tooling. Since the OneLake catalog is an evolution of the OneLake data hub, it already shows up in Microsoft 365 apps such as Excel and Microsoft Teams, and in many other products across the Microsoft cloud for easy data consumption. The OneLake catalog also extends to the Microsoft Purview data governance solution, Unified Catalog, which offers the data office, data stewards, and data owners advanced governance capabilities, including data quality and a global catalog for the heterogeneous data estate. The Explore tab is now generally available, and the Govern tab will be coming soon in preview.

Learn more about the OneLake catalog by reading this blog post and by watching the following demo:

More Fabric innovation

The introduction of Fabric Databases and the growing opportunity with generative AI in accelerating data projects has encouraged us to reimagine the pillars of Fabric. We are now focused on making sure Fabric can provide you with: 

  • An AI-powered data platform. Fabric can give your teams the AI-powered tools needed for any data project in a pre-integrated and optimized SaaS environment. You can even extend Fabric further by adding other native workloads from the Workload Hub, created by industry-leading partners.  
  • An open and AI-ready data lake. Fabric can help you access your entire multi-cloud data estate from a single data lake, work from the same copy of data across analytics engines, and ensure your data is ready to power AI innovation.  
  • AI-enabled business users. Fabric can empower everyone to better understand your data with AI-enhanced Q&A experiences and visuals embedded in the Microsoft 365 apps they use every day. 
  • A mission-critical foundation. You can confidently deploy and manage Fabric with category-leading performance, instant scalability, shared resilience, and built-in security, governance, and compliance. 

Check out the new Fabric sizzle video to see these pillars in action: 

We’re excited to share a huge slate of announcements designed to help us better accomplish each goal above. These enhancements include: 

Fabric workload enhancements

  • The general availability of sustainability data solutions in Microsoft Fabric, a single place for all your environmental, social, and governance (ESG) data needs. Julie Nikulina, IT Solutions Engineer at Schaeffler AG, a global automotive and industrial supplier, mentioned that “thanks to Microsoft Fabric, we’ll be able to answer lots of questions about climate neutrality and decarbonization company-wide via a single platform—and we can implement new use cases in short sprints within two to six weeks.”
  • Coming soon, the preview of AI functions in Fabric notebooks, which provide a simplified API for common AI text enrichments like summarization, translation, sentiment analysis, and more. 
  • The general availability of API for GraphQL, which is an API to help you access data from multiple sources in Fabric with a single query API. 
  • The preview of several enhancements to Fabric Real-Time Intelligence, which include new Fabric events, enhancements to Eventstreams and Eventhouses, and easier real-time dashboard sharing. 
  • The preview of the Copilot in Fabric experience for data pipelines in Fabric Data Factory. 
  • The preview of our integration with Esri ArcGIS for advanced spatial analytics. 
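The API for GraphQL mentioned above is, from client code, an ordinary HTTPS POST with a JSON body containing the query. The field and type names below are hypothetical—your schema depends on the Fabric items you expose—and in practice you would send the body to your API item’s endpoint with a Microsoft Entra access token:

```python
import json

# Hypothetical query; real field names come from the schema of the
# GraphQL API item you create in your Fabric workspace.
query = """
query {
  customers(first: 3) {
    items { customerId name }
  }
}
"""

def build_graphql_request(query, variables=None):
    # A GraphQL call is a plain JSON document with "query" and "variables".
    return json.dumps({"query": query, "variables": variables or {}})

body = build_graphql_request(query)
payload = json.loads(body)
print(sorted(payload.keys()))  # ['query', 'variables']
```

Sending this body with a library such as `requests` (plus an `Authorization: Bearer …` header) is all a client needs to aggregate data from multiple Fabric sources in a single call.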

Microsoft OneLake enhancements

New AI capabilities in Fabric

  • Coming soon, the preview of AI skill enhancements, including a more conversational experience and support for semantic models and Eventhouse KQL databases. 
  • Coming soon, the preview of AI skill integration with Agent Service in the newly announced Azure AI Foundry, allowing developers to use AI skills as a core knowledge source. 

Platform-wide enhancements

  • The preview of workspace monitoring, which provides detailed diagnostic logs for workspaces to troubleshoot performance issues, capacity performance, and data downtime. 
  • The general availability of the Workload Development Kit, created to help software developers design, build, and interoperate applications within Fabric. We’re excited to see many of our industry-leading partners announce previews of their workload hub offerings, including Quantexa, SAS, Teradata, Osmos, Esri, and Profisee.
  • The preview of further integration with Microsoft Purview, including extending Protection policies to enforce access permissions to more sources and using Data Loss Prevention policies to restrict access to semantic models with sensitive data.
  • The general availability of external data sharing, which allows you to directly share OneLake tables and folders with other Fabric tenants easily, quickly, and securely.
  • Fabric is FedRAMP High certified for the Azure Commercial cloud, the highest level of compliance and security standards required by the federal government for cloud service providers. Now government agencies can run Fabric on the Azure Commercial cloud while maintaining strict compliance. 

You can learn more about all of these announcements and so much more in the Fabric November 2024 Update blog post and the numerous blog posts that will go live throughout this week on the Fabric blog channel.  

Fabric billing and consumption updates

Finally, we’re making some important changes to Fabric’s billing model. First, coming soon, organizations with multiple capacities will be able to direct Copilot in Fabric consumption and billing to a specific capacity, no matter where the Copilot in Fabric usage actually takes place. Admins can assign specific members of their organization to a specified F64 or higher capacity for all of their Copilot requests. These requests will be consumed and billed on that assigned F64+ capacity, ensuring Copilot in Fabric usage doesn’t impact priority jobs while expanding Copilot access to any workspace regardless of its capacity.

Additionally, we’re providing capacity admins with more control over the Fabric jobs running in their capacities. Surge protection, now in preview, helps protect capacities from unexpected surges in background workload consumption. Admins can use surge protection to set a limit on background activity consumption; once the limit is reached, new background jobs are prevented from starting. Admins can configure different limits for each capacity in your organization, giving you the flexibility to meet your needs.

Watch Fabric in action at Microsoft Ignite

Join us at Microsoft Ignite 2024 from November 19 to November 21, 2024, to see all of these announcements in action across the following sessions:

And six other Fabric breakout sessions. You can also join us at labs and theater sessions throughout the event. Find all the data-related sessions at Ignite. You can also learn about other announcements across our Azure portfolio by reading these blogs by Jessica Hawk and Omar Khan. 

Finally, if you want more strategic guidance to help you along your data and analytics journey in the era of AI, you should watch the recent Data and Analytics Forum.

Getting started with Microsoft Fabric

New customers can try out everything Fabric has to offer by signing up for a free 60-day trial—no credit card information required. Learn how to start your free trial.  

If you’re considering purchasing Fabric and need help deciding on a SKU, we’re excited to share a new Fabric SKU estimator, now in private preview. You can sign up to try out this tool as part of the early adopter program—try the SKU estimator.

Start building your Fabric skills

Be one of the first to start using Fabric Databases

Ready to build reliable, highly scalable applications where cloud authentication and encryption are secured by default? Starting December 3, 2024, join live sessions with database experts and see just how easy it is to get started. View the schedule and register for the series.

Get certified in Microsoft Fabric—for free

Get ready to fast-track your career by earning your Microsoft Certified: Fabric Analytics Engineer Associate certification. For a limited time, we’re offering 5,000 free DP-600 exam vouchers to eligible Fabric community members. Complete your exam by the end of the year and join the ranks of certified experts. Don’t miss this opportunity to get certified.

A new Fabric certification for data engineers

We’re excited to announce a brand-new certification for data engineers. The new Microsoft Certified: Fabric Data Engineer Associate certification will help you demonstrate your skills with data ingestion, transformation, administration, monitoring, and performance optimization in Fabric. To earn this certification, pass Exam DP-700: Implementing Data Engineering Solutions Using Microsoft Fabric, currently in beta.

Join us at the 2025 Microsoft Fabric Community Conference

Looking to gain hands-on experience with Fabric and learn directly from the people who created it? If so, join us from March 29 to April 3, 2025, at the Microsoft Fabric Community Conference in Las Vegas, Nevada. Register today.

Explore additional resources for Microsoft Fabric

If you want to learn more about Fabric:  

Read additional blogs by industry-leading partners:


1Based upon n=209 user studies conducted by Microsoft Corporation in October 2024 that measured four common metrics associated with the consumption experience of Power BI in Microsoft Fabric. Qualitative sentiment gathered upon task completion. The actual results may vary. 

2Based upon n=210 user studies conducted with technical practitioners by Microsoft Corporation in October 2024 that measured time to complete four common tasks associated with AI application development on a SQL database in Microsoft Fabric and on Azure SQL Database. Actual results may vary based upon individual performance and sentiment.

The post Accelerate app innovation with an AI-powered data platform appeared first on Microsoft AI Blogs.

]]>
Announcing the availability of Azure OpenAI Data Zones and latest updates from Azure AI https://azure.microsoft.com/en-us/blog/announcing-the-availability-of-azure-openai-data-zones-and-latest-updates-from-azure-ai/ Wed, 06 Nov 2024 17:00:00 +0000 Summarizing new capabilities this month across Azure AI portfolio that provide greater choices and flexibility to build and scale AI solutions.

The post Announcing the availability of Azure OpenAI Data Zones and latest updates from Azure AI appeared first on Microsoft AI Blogs.

]]>
Over 60,000 customers, including AT&T, H&R Block, Volvo, Grammarly, Harvey, Leya, and more, leverage Microsoft Azure AI to drive AI transformation. We are excited to see the growing adoption of AI across industries and businesses small and large. This blog summarizes new capabilities across the Azure AI portfolio that provide greater choice and flexibility to build and scale AI solutions. Key updates include:

Azure OpenAI Data Zones for the United States and European Union

We are thrilled to announce Azure OpenAI Data Zones, a new deployment option that provides enterprises with even more flexibility and control over their data privacy and residency needs. Tailored for organizations in the United States and European Union, Data Zones allow customers to process and store their data within specific geographic boundaries, ensuring compliance with regional data residency requirements while maintaining optimal performance. By spanning multiple regions within these areas, Data Zones offer a balance between the cost-efficiency of global deployments and the control of regional deployments, making it easier for enterprises to manage their AI applications without sacrificing security or speed.

This new feature simplifies the often-complex task of managing data residency by offering a solution that allows for higher throughput and faster access to the latest AI models, including the newest innovations from Azure OpenAI Service. Enterprises can now take advantage of Azure’s robust infrastructure to securely scale their AI solutions while meeting stringent data residency requirements. Data Zones are available for Standard (PayGo) deployments and coming soon for Provisioned.


Azure OpenAI Service updates

Earlier this month, we announced the general availability of Azure OpenAI Batch API for Global deployments. With Azure OpenAI Batch API, developers can manage large-scale and high-volume processing tasks more efficiently with a separate quota and a 24-hour turnaround time, at 50% less cost than Global Standard. Ontada, an entity within McKesson, is already leveraging Batch API to process large volumes of patient data across oncology centers in the United States efficiently and cost-effectively.
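A batch job takes a JSONL input file in which each line is one self-contained request. As a hedged sketch—the deployment name, document texts, and request bodies below are illustrative placeholders, and the upload and job-creation steps are omitted—here is how such a file can be assembled:

```python
import json

def make_batch_line(custom_id, deployment, user_content):
    # One JSONL line per request; in Azure OpenAI the "model" field
    # carries the deployment name.
    return json.dumps({
        "custom_id": custom_id,
        "method": "POST",
        "url": "/chat/completions",
        "body": {
            "model": deployment,
            "messages": [{"role": "user", "content": user_content}],
        },
    })

# Placeholder documents standing in for a real high-volume workload.
documents = ["Patient note A ...", "Patient note B ..."]
lines = [
    make_batch_line(f"doc-{i}", "gpt-4o-batch", f"Summarize: {text}")
    for i, text in enumerate(documents)
]
jsonl = "\n".join(lines)  # write this out, upload it, then create the batch job
print(len(lines))  # 2
```

The `custom_id` on each line is what lets you match results back to inputs when the job completes, since batch output is not guaranteed to arrive in submission order.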

“Ontada is at the unique position of serving providers, patients and life science partners with data-driven insights. We leverage the Azure OpenAI Batch API to process tens of millions of unstructured documents efficiently, enhancing our ability to extract valuable clinical information. What would have taken months to process now takes just a week. This significantly improves evidence-based medicine practice and accelerates life science product R&D. Partnering with Microsoft, we are advancing AI-driven oncology research, aiming for breakthroughs in personalized cancer care and drug development.” — Sagran Moodley, Chief Innovation and Technology Officer, Ontada

We have also enabled Prompt Caching for o1-preview, o1-mini, GPT-4o, and GPT-4o-mini models on Azure OpenAI Service. With Prompt Caching, developers can optimize costs and latency by reusing recently seen input tokens. This feature is particularly useful for applications that use the same context repeatedly, such as code editing or long conversations with chatbots. Prompt Caching offers a 50% discount on cached input tokens on the Standard offering and faster processing times.
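Because caching matches on a shared prefix of input tokens, the practical guidance is to put the long, unchanging context at the start of the prompt and the variable part at the end. A small illustration—client setup and the actual API call are omitted, and the style-guide text is a placeholder:

```python
# Keep the long, stable context FIRST so repeated requests share a
# cacheable prefix; only the tail of the prompt varies per request.
STATIC_CONTEXT = (
    "You are a code-review assistant. Follow the team style guide: ..."
)

def build_messages(user_question):
    return [
        {"role": "system", "content": STATIC_CONTEXT},  # identical across calls
        {"role": "user", "content": user_question},     # varies per call
    ]

first = build_messages("Review function A")
second = build_messages("Review function B")
# The identical leading message is what makes these requests cache-friendly.
print(first[0] == second[0])  # True
```

Inverting this layout—putting the variable question before the static context—would defeat prefix matching, since the requests would diverge at the very first tokens.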

For Provisioned Global deployment offering, we are lowering the initial deployment quantity for GPT-4o models to 15 Provisioned Throughput Units (PTUs) with additional increments of 5 PTUs. We are also lowering the price for Provisioned Global Hourly by 50% to broaden access to Azure OpenAI Service. Learn more here about managing costs for AI deployments.

In addition, we’re introducing a 99% latency service level agreement (SLA) for token generation. This latency SLA ensures that tokens are generated at faster and more consistent speeds, especially at high volumes.

New models and customization

We continue to expand model choice with the addition of new models to the model catalog. We have several new models available this month, including Healthcare industry models and models from Mistral and Cohere. We are also announcing customization capabilities for the Phi-3.5 family of models.

  • Healthcare industry models, comprising advanced multimodal medical imaging models including MedImageInsight for image analysis, MedImageParse for image segmentation across imaging modalities, and CXRReportGen that can generate detailed structured reports. Developed in collaboration with Microsoft Research and industry partners, these models are designed to be fine-tuned and customized by healthcare organizations to meet specific needs, reducing the computational and data requirements typically needed for building such models from scratch. Explore today in the Azure AI model catalog.
  • Ministral 3B from Mistral AI: Ministral 3B represents a significant advancement in the sub-10B category, focusing on knowledge, commonsense reasoning, function-calling, and efficiency. With support for up to 128k context length, these models are tailored for a diverse array of applications—from orchestrating agentic workflows to developing specialized task workers. When used alongside larger language models like Mistral Large, Ministral 3B can serve as an efficient intermediary for function-calling in multi-step agentic workflows.
  • Cohere Embed 3: Embed 3, Cohere’s industry-leading AI search model, is now available in the Azure AI Model Catalog—and it’s multimodal! With the ability to generate embeddings from both text and images, Embed 3 unlocks significant value for enterprises by allowing them to search and analyze their vast amounts of data, no matter the format. This upgrade positions Embed 3 as the most powerful and capable multimodal embedding model on the market, transforming how businesses search through complex assets like reports, product catalogs, and design files. 
  • The general availability of fine-tuning for the Phi-3.5 family, including Phi-3.5-mini and Phi-3.5-MoE. Phi family models are well suited for customization to improve base model performance across a variety of scenarios, including learning a new skill or task or enhancing the consistency and quality of responses. Given their small compute footprint as well as cloud and edge compatibility, Phi-3.5 models offer a cost-effective and sustainable alternative when compared to models of the same size or the next size up. We’re already seeing adoption of the Phi-3.5 family for use cases including edge reasoning as well as non-connected scenarios. Developers can fine-tune Phi-3.5-mini and Phi-3.5-MoE today through the models-as-a-platform offering using serverless endpoints.

AI app development

We are building Azure AI to be an open, modular platform, so developers can go from idea to code to cloud quickly. Developers can now explore and access Azure AI models directly through the GitHub Marketplace via the Azure AI model inference API. Developers can try different models and compare model performance in the playground for free (usage limits apply), and when ready to customize and deploy, they can seamlessly set up and log in to their Azure account to scale from free token usage to paid endpoints with enterprise-level security and monitoring without changing anything else in the code.

We also announced AI App Templates to speed up AI app development. Developers can use these templates in GitHub Codespaces, VS Code, and Visual Studio. The templates offer flexibility with various models, frameworks, languages, and solutions from providers like Arize, LangChain, LlamaIndex, and Pinecone. Developers can deploy full apps or start with components, provisioning resources across Azure and partner services.

Our mission is to empower all developers across the globe to build with AI. With these updates, developers can quickly get started in their preferred environment, choose the deployment option that best fits the need and scale AI solutions with confidence.

New features to build secure, enterprise-ready AI apps

At Microsoft, we’re focused on helping customers use and build AI that is trustworthy, meaning AI that is secure, safe, and private. Today, I am excited to share two new capabilities to build and scale AI solutions confidently.

The Azure AI model catalog offers over 1,700 models for developers to explore, evaluate, customize, and deploy. While this vast selection empowers innovation and flexibility, it can also present significant challenges for enterprises that want to ensure all deployed models align with their internal policies, security standards, and compliance requirements. Now, Azure AI administrators can use Azure policies to pre-approve select models for deployment from the Azure AI model catalog, simplifying model selection and governance processes. This includes pre-built policies for Models-as-a-Service (MaaS) and Models-as-a-Platform (MaaP) deployments, while a detailed guide facilitates the creation of custom policies for Azure OpenAI Service and other AI services. Together, these policies provide complete coverage for creating an allowed model list and enforcing it across Azure Machine Learning and Azure AI Studio.

To customize models and applications, developers may need access to resources located on-premises, or even resources not supported with private endpoints but still located in their custom Azure virtual network (VNET). Application Gateway is a load balancer that makes routing decisions based on the URL of an HTTPS request. Application Gateway will support a private connection from the managed VNET to any resource using the HTTP or HTTPS protocol. Today, it is verified to support a private connection to JFrog Artifactory, Snowflake Database, and Private APIs. With Application Gateway in Azure Machine Learning and Azure AI Studio, now available in public preview, developers can access on-premises or custom VNET resources for their training, fine-tuning, and inferencing scenarios without compromising their security posture.

Start today with Azure AI

It has been an incredible six months being here at Azure AI, delivering state-of-the-art AI innovation, seeing developers build transformative experiences using our tools, and learning from our customers and partners. I am excited for what comes next. Join us at Microsoft Ignite 2024 to hear about the latest from Azure AI.

Additional resources:

The post Announcing the availability of Azure OpenAI Data Zones and latest updates from Azure AI appeared first on Microsoft AI Blogs.

]]>
Navigate the data and AI landscape by joining us at the Microsoft Data and Analytics Forum http://approjects.co.za/?big=en-us/microsoft-fabric/blog/2024/10/09/navigate-the-data-and-ai-landscape-by-joining-us-at-the-microsoft-data-and-analytics-forum/ Wed, 09 Oct 2024 15:00:00 +0000 The Data and Analytics Forum is an upcoming, digital event designed to help data leaders navigate the current data and AI landscape.

The post Navigate the data and AI landscape by joining us at the Microsoft Data and Analytics Forum appeared first on Microsoft AI Blogs.

]]>
The future of AI is here. From easy-to-use copilot experiences that come out-of-the-box to custom generative AI solutions made in Azure AI Studio, every organization is exploring how it can take advantage of AI. And most believe they will be successful—with 87% of organizations believing AI will give them a competitive edge.1

But as you prepare for a future built on AI, you also need clean data to fuel AI. Fostering game-changing AI innovation requires a well-orchestrated data estate that can support everything from small AI pet projects to scalable AI solutions that span the company. This is a challenging prospect for most organizations whose data environments have grown organically over time with specialized and fragmented solutions. A complex data estate leads to data sprawl and duplication, infrastructure inefficiencies, limited interoperability, and even data exposure risks—not an ideal place to start your AI innovation.

For data leaders trying to streamline and advance their data estate, the burden is often on you to search through the thousands of data and AI offerings, find the right set of offerings, figure out how to integrate them, and do it in such a way that is scalable and can evolve over time. Most data leaders would rather focus on the outcomes of their tools rather than spend all their time integrating specialized solutions and maintaining their data estate.

If you are thinking about how you can prepare your data estate for the era of AI, or will be soon, I invite you to join us for the Microsoft Data and Analytics Forum on October 30, 2024. The event is online only and free to attend.

A digital event designed for data leaders 


Microsoft Data and Analytics Forum

Learn how you can prepare your data and analytics for game-changing AI innovations.

The Data and Analytics Forum is an upcoming, digital event designed to help data leaders navigate the current data and AI landscape. Microsoft experts will walk you through the latest trends and industry best practices to help you stay ahead of the curve. You will also hear learnings from current data leaders who have successfully implemented an AI-ready data strategy. Finally, we will highlight how the Microsoft Intelligent Data Platform, a suite of analytics, database, AI, and governance technologies, can help you create a powerful, agile, and secure data and AI foundation, made simple.

The first half of the event will include a keynote session on the current data and AI landscape, followed by your choice of three breakout sessions. Here is the full list of sessions:

  • Data and Analytics Forum keynote: Jessica Hawk, Corporate Vice President of Data, AI, and Digital Applications, will cover the current data landscape in the era of AI, how the Microsoft Intelligent Data Platform can help organizations transform their data estate, and, through a fireside chat with customers, how companies have been able to prepare their data estate to take advantage of AI.
  • A data foundation that unlocks AI innovation: Emerson Gatchalian, Chief Data and Analytics Officer Americas, will zoom out to study the concepts and conditions at the root of today’s data and analytics stack, from what’s driving investment to the issues companies face in siloed data. Then he’ll show how the right data foundation can meet this AI moment. 
  • Fuel your data culture with generative AI: In my session, we’ll explore both the challenges and opportunities that data leaders face in building a data-driven culture and how modern tools and next generation AI can help accelerate your path. 
  • Data governance in the era of AI: Karthik Ravindran, General Manager of Enterprise Data and AI Governance, will cover governance and its importance in an AI world, the influence of data on AI, and the opportunities and quality challenges of generative AI. Governance practices will be covered, leading to an overview of Microsoft Purview.

Register today

Register now for the Microsoft Data and Analytics Forum and learn how to prepare your organization to take advantage of the latest data and AI innovations.

See the latest innovation coming to Fabric

Check out the latest capabilities coming to the Microsoft Fabric platform by reading the European Fabric Community Conference 2024 announcement blog.

We will also be releasing even more innovation at Microsoft Ignite this year. You’ll see demos of the latest capabilities for Fabric and across all Microsoft products. You can also connect with experts, community leaders, and partners who can help you modernize and manage your own intelligent apps, safeguard your business and data, accelerate productivity, and so much more. 

Explore additional resources for Microsoft Fabric 

If you want to learn more about Microsoft Fabric:


The post Navigate the data and AI landscape by joining us at the Microsoft Data and Analytics Forum appeared first on Microsoft AI Blogs.

]]>
European Fabric Community Conference 2024: Building an AI-powered data platform http://approjects.co.za/?big=en-us/microsoft-fabric/blog/2024/09/25/european-fabric-community-conference-2024-building-an-ai-powered-data-platform/ Wed, 25 Sep 2024 07:00:00 +0000 Get a firsthand look at the latest capabilities we are bringing to the Microsoft Fabric platform.

The post European Fabric Community Conference 2024: Building an AI-powered data platform appeared first on Microsoft AI Blogs.

]]>


Thank you to everyone joining us at the first annual European Microsoft Fabric Community Conference this week in Stockholm, Sweden! Besides seeing the beautiful views of Old Town, attendees are getting an immersive analytics and AI experience across 120 sessions, 3 keynotes, 10 workshops, an expo hall, community lounge, and so much more. They are seeing firsthand the latest capabilities we are bringing to the Fabric platform. For those unable to attend, this blog will highlight the most significant announcements that are already changing the way our customers interact with Fabric. 

Microsoft Fabric

Learn how to set up Fabric for your business and discover resources that help you take the first steps

Over 14,000 customers have invested in the promise of Microsoft Fabric to accelerate their analytics, including industry leaders like KPMG, Chanel, and Grupo Casas Bahia. For example, Chalhoub Group, a regional luxury retailer with over 750 experiential retail stores, used Microsoft Fabric to modernize its analytics and streamline its data sources into one platform, significantly speeding up its processes.

“It’s about what the technology enables us to achieve—a smarter, faster, and more connected operational environment.”

—Mark Hourany, Director of People Analytics, Chalhoub Group

Check out the myriad ways customers are using Microsoft Fabric to unlock more value from their data:

New capabilities coming to Microsoft Fabric

Since launching Fabric, we’ve released thousands of product updates to create a more complete data platform for our customers. And we aren’t slowing down anytime soon. We’re thrilled to share a new slate of announcements that are applying the power of AI to help you accelerate your data projects and get more done.

Specifically, these updates are focused on making sure Fabric can provide you with: 

  1. AI-powered development: Fabric can give teams the AI-powered tools needed for any data project in a pre-integrated and optimized SaaS environment.
  2. An AI-powered data estate: Fabric can help you access your entire multi-cloud data estate from a single, open data lake, work from the same copy of data across analytics engines, and use that data to power AI innovation.
  3. AI-powered insights: Fabric can empower everyone to better understand their data with AI-powered visuals and Q&A experiences embedded in the Microsoft 365 apps they use every day. 

Let’s look at the latest features and integrations we are announcing in each of these areas. 

AI-powered development

With Microsoft Fabric, you have a single platform that can handle all of your data projects with role-specific tools for data integration, data warehousing, data engineering, data science, real-time intelligence, and business intelligence. All of your data teams can work together in the same pre-integrated, optimized experience, and get started immediately with an intuitive UI and low code tools. All the workloads access the same unified data lake, OneLake, and work from a single pool of capacity to simplify the experience and ease collaboration. With built-in security and governance, you can secure your data from any intrusion and ensure only the right people have access to the right data. And as we continue to infuse Copilot and other AI experiences across Fabric, you can not only use Fabric for any application, but also accelerate time to production. In the video below, check out how users can take advantage of Copilot to create end-to-end solutions in Fabric: 

Today, I’m thrilled to share several new enhancements and capabilities coming to the platform and each workload in Fabric.

Fabric platform

We’re building platform-wide capabilities to help you more seamlessly manage DevOps and tackle projects of any scale and complexity. First, we’re updating the UI for deployment pipelines, now in preview, to be more focused, easier to navigate, and smoother to work through. Next, we’re introducing the Terraform provider for Fabric, in preview, to help customers ensure deployments and management tasks are executed accurately and consistently. The Terraform provider enables users to automate and streamline deployment and management processes using a declarative configuration language. We are also adding support for Azure service principals in Microsoft Fabric REST APIs to help customers automate the deployment and management of Fabric environments. You can manage principal permissions for Fabric workspaces, as well as the creation and management of Fabric artifacts like eventhouses and lakehouses.
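As a rough illustration of what service-principal automation against the REST APIs could look like, the sketch below assembles (but does not send) the standard Microsoft Entra client-credentials token request and a hypothetical workspace-creation call. The Fabric endpoint path and request body here are illustrative assumptions, not the documented API surface.

```python
# Sketch: automating Fabric workspace management with an Azure service principal.
# The AAD client-credentials flow is the standard one; the Fabric URL and JSON
# body below are illustrative placeholders, not the documented schema.
import urllib.parse

TENANT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder tenant

def build_token_request(client_id: str, client_secret: str):
    """Return the Entra token endpoint and form body for the client-credentials flow."""
    url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://api.fabric.microsoft.com/.default",  # assumed scope
    }).encode()
    return url, body

def build_create_workspace_request(token: str, name: str) -> dict:
    """Assemble a hypothetical 'create workspace' REST call (not sent here)."""
    return {
        "method": "POST",
        "url": "https://api.fabric.microsoft.com/v1/workspaces",  # illustrative path
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "json": {"displayName": name},
    }
```

In a real pipeline, the token request would be posted first and the returned bearer token threaded into every subsequent Fabric call.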

We’re excited to announce the general availability of Fabric Git integration. Sync Fabric workspaces with Git repositories, leverage version control, and collaborate seamlessly using Azure DevOps or GitHub. We are also extending our integration with Visual Studio Code (VS Code). You can now debug Fabric notebooks with the web version of VS Code and integrate Fabric environments as artifacts with the Synapse VS Code extension—allowing you to explore and manage Fabric environments from within VS Code. To learn more about these updates, read the Fabric September 2024 Update blog.

Security and governance

To help organizations govern the massive volumes of data across their data estate, we’re adding more granular data management capabilities, including item tagging and enhancements to domains—both of which are now in preview. We’re introducing the ability to apply tags to Fabric items, helping users more easily find and use the right data. Once applied, data consumers can view, search, and filter by the applied tags across various experiences. We’re also enhancing domains and subdomains with more controls for admins, including the ability to define a default sensitivity label, domain-level export and sharing settings, and insights for admins on tenant domains. Finally, for data owners, we’re adding the ability to search for data by domain, to filter workspaces by domain, and to view domain details in a data item’s location.

Over the past year, we’ve launched a myriad of security features designed to secure your data at every step of the analytics journey. Two of our network security features, trusted workspace access and managed private endpoints, were previously only available in F64 or higher capacities. We’re excited to share that, based on your feedback, we are making these features available in all Fabric capacities. We’re also making managed private endpoints available in trial capacities as part of this release.

We’re also announcing deeper integration with Microsoft Purview, Microsoft’s unified data security, data governance, and compliance solution. Coming soon, security admins will be able to use Microsoft Purview Information Protection sensitivity labels to manage who has access to Fabric items with certain labels—similar to Microsoft 365. Also coming soon, we are extending support for Microsoft Purview Data Loss Prevention (DLP) policies, so security admins can apply DLP policies to detect the upload of sensitive data, like social security numbers, to a lakehouse in Fabric. If detected, the policy will trigger an automatic audit activity, can alert the security admin, and can even show a custom policy tip to data owners so they can remediate the issue themselves. These capabilities will be available at no additional cost during preview in the near term, but will be part of a new Purview pay-as-you-go consumptive model, with pricing details to follow in the future. Learn more about how to secure your Fabric data with Microsoft Purview by watching the following video: 

You can also complement and extend the built-in governance in Fabric by seamlessly connecting your Fabric data to the newly reimagined Purview Data Governance solution—now generally available. This new solution delivers an AI-powered, business-friendly, and unified solution that can seamlessly connect to data sources within Fabric and across your data estate to streamline and accelerate the activation of your modern data governance practice. Purview integrations enable Fabric customers to discover, secure, govern, and manage Fabric items from a single pane of glass within Purview for an end-to-end approach to their data estate. Learn more about these Microsoft Purview innovations.  

Workload enhancements and updates

We’re also making significant updates across the six core workloads in Fabric: Data Factory, Data Engineering, Data Warehouse, Data Science, Real-Time Intelligence, and Microsoft Power BI.

Data Factory

In the Data Factory workload, built to help you solve some of the most complex data integration scenarios, we are simplifying the data ingestion experience with copy job, transforming the dataflow capability, and releasing enhancements for data pipelines. With copy job, now in preview, you can ingest data at petabyte scale without creating a dataflow or data pipeline. Copy job supports full, batch, and incremental copy from any data source to any data destination. Next, we are releasing the Copilot in Fabric experience for Dataflows Gen2 into general availability—empowering everyone to design dataflows with the help of an AI-powered expert. We’re also releasing Fast Copy in Dataflows Gen2 into general availability, enabling you to ingest large amounts of data using the same high-performance backend for data movement used in Data Factory (e.g., “copy” activity in data pipelines, or copy job). Lastly, for Dataflows Gen2, we are introducing incremental refresh into preview, allowing you to limit refreshes to just new or updated data to reduce refresh times.
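Conceptually, incremental refresh limits each run to rows changed since the last high-water mark, then advances the mark. A minimal, self-contained sketch of that idea (not Fabric's actual implementation, and the field names are invented):

```python
def incremental_refresh(source_rows, last_watermark):
    """Return only rows modified after the stored watermark, plus the new watermark.

    Each row is a dict with a 'modified' field (any comparable value: a
    timestamp, a version number, etc.). Rows at or before the watermark are
    skipped, which is what keeps refreshes small.
    """
    fresh = [r for r in source_rows if r["modified"] > last_watermark]
    new_watermark = max((r["modified"] for r in fresh), default=last_watermark)
    return fresh, new_watermark
```

A scheduler would persist `new_watermark` between runs so each refresh picks up exactly where the previous one stopped.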

Along with the dataflow announcements, we’re announcing an array of enhancements for data pipelines in Fabric, including the general availability of the on-premises data gateway integration, the preview of Fabric user data functions in data pipelines, the preview of invoke remote pipeline to call Azure Data Factory (ADF) and Synapse pipelines from Fabric, and a new session tag parameter for Fabric Spark notebook activity to enable high-concurrency Notebook runs. Additionally, we’ve made it easier to bring ADF pipelines into Fabric by linking your existing pipelines to your Fabric workspace. You’ll be able to fully manage your ADF factories directly from the Fabric workspace UI and convert your ADF pipelines into native Fabric pipelines with an open-source GitHub project. 

Data Engineering

For the Data Engineering workload, we’re updating the native execution engine for Fabric Spark and releasing upgraded Fabric Runtime 1.3 into general availability. The native execution engine enhances Spark job performance by running queries directly on lakehouse infrastructure, achieving up to four times faster performance compared to traditional Spark based on the TPC-DS 1TB benchmark. The native execution engine can now, in preview, support Fabric Runtime 1.3, which together can further enhance the performance of Spark jobs and queries for both data engineering and data science projects. This engine has been completely rewritten to offer superior query performance across data processing, extract-transform-load (ETL), data science, and interactive queries. We are also excited to announce a new acceleration tab and UI enablement for the native execution engine.

Additionally, we are announcing an extension of support in Spark to mirrored databases, providing a consistent and convenient way to access and explore databases seamlessly with the Spark engine. You can easily add data sources, explore data, perform transformations, and join your data with other lakehouses and mirrored databases. Finally, we are excited to launch T-SQL notebooks into public preview. The T-SQL notebook enables SQL developers to author and run T-SQL code with a connected Fabric data warehouse or SQL analytics endpoint, allowing them to execute complex T-SQL queries, visualize results in real time, and document the analytical process within a single, cohesive interface. 

Data Warehouse

We are excited to announce the Copilot in Fabric experience for Data Warehouse is now in preview. This AI assistant experience can help developers generate T-SQL queries for data analysis, explain and add in-line code comments for existing T-SQL queries, fix broken T-SQL code, and answer questions about general data warehousing tasks and operations. Learn more about the Copilot experience for Data Warehouse here. And as mentioned above, we are announcing T-SQL notebooks—allowing you to create a notebook item directly from the data warehouse editor in Fabric and use the rich capabilities of notebooks to run T-SQL queries.

Real-Time Intelligence

In May 2024, we launched a new workload called Real-Time Intelligence that combined Synapse Real-Time Analytics and Data Activator with a range of additional new features, currently in preview, to help organizations make better decisions with up-to-the-minute insights. We are excited to share new capabilities, all in preview, to help you better ingest, analyze, and visualize your real-time data.

First, we’re announcing the launch of the new Real-Time hub user experience: a redesigned and enhanced experience with a new left navigation, a new page called “My Streams” to create and access custom streams, and four new eventstream connectors: Azure SQL Managed Instance – change data capture (MI CDC), SQL Server on Virtual Machine – change data capture (VM CDC), Apache Kafka, and Amazon MSK Kafka. These new sources empower you to build richer, more dynamic eventstreams in Fabric. We’re also enhancing eventstream capabilities by supporting eventhouse as a new destination for your data streams. Eventhouses, equipped with KQL databases, are designed to analyze large volumes of data, particularly in scenarios that demand real-time insight and exploration.

We’re also pleased to announce an upgrade to the Copilot in Fabric experience in Real-Time Intelligence, which translates natural language into KQL, helping you better understand and explore your data stored in Eventhouse. Now, the assistant supports a conversational mode, allowing you to ask follow-up questions that build on previous queries within the chat. With the addition of multi-variate anomaly detection, it’s even easier to discover the unknowns in your high-volume, high-granularity data. You can also have Copilot create a real-time dashboard instantly based on the data in your table, providing immediate insights you can share in your organization.
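To make the multivariate idea concrete, a toy detector can combine per-signal z-scores into one deviation score per time point and flag the points that stand out across all signals at once. This is only an illustration of the concept, not the algorithm Real-Time Intelligence uses:

```python
import math

def zscores(series):
    """Standardize one signal: (value - mean) / standard deviation."""
    mean = sum(series) / len(series)
    sd = math.sqrt(sum((x - mean) ** 2 for x in series) / len(series)) or 1.0
    return [(x - mean) / sd for x in series]

def multivariate_anomalies(columns, threshold=2.5):
    """Flag time indices whose combined deviation across all signals exceeds threshold.

    `columns` maps signal name -> list of readings, all the same length.
    The combined score is the Euclidean norm of the per-signal z-scores,
    so a point must deviate jointly, not just in one series, to be flagged.
    """
    per_signal = [zscores(values) for values in columns.values()]
    n = len(per_signal[0])
    combined = [math.sqrt(sum(z[i] ** 2 for z in per_signal)) for i in range(n)]
    return [i for i, score in enumerate(combined) if score > threshold]
```

The payoff over per-signal thresholds is that correlated blips across several series surface as a single anomaly even when no individual series looks extreme.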

Finally, we are upgrading the Data Activator experience to make it easier to define a variety of rules to act in response to changes in your data over time, and the richness of our rules has improved to include more complex time-window calculations and responding to every event in a stream. You can set up alerts from all your streaming data, Power BI visuals, and real-time dashboards, and now even set up alerts directly on your KQL queries. With these new enhancements, you can make sure action is taken the moment something important happens.

Learn more about all of these workload enhancements in the Fabric September 2024 Update blog.

Power BI

We’re thrilled to announce new capabilities across Power BI that will make it easier to track and use the KPIs that matter most to you, create organizational apps, and work with Direct Lake semantic models. 

First, we are announcing the preview of Metric sets, which will allow users to promote consistent and reliable metrics in large organizations across Fabric, making it easier for end users to discover and use standardized metrics from corporate models. With Metric sets, trusted creators within an organization can develop standardized metrics, which incorporate essential business logic from Power BI. These creators can organize the metrics into collections, promote and certify them, and make them easily discoverable for end users and other creators. These endorsed and promoted metrics can then be used to build Power BI reports, improving data quality across the organization, and can also be reused in other Fabric solutions, such as notebooks.

We’re improving organizational apps in Power BI, a key tool for packaging and securely distributing Power BI reports to your organization. Now in preview, you can create multiple organizational apps in each workspace, and they can contain other Fabric items like notebooks and real-time dashboards. The app interface can even be customized, giving you more control over the color, navigation style, and landing experience.

We’re also making it easier to work with Direct Lake semantic models with new version history for semantic models, similar to the experience found across the Microsoft 365 apps. Power BI users can also now live edit Direct Lake semantic models right from Power BI Desktop. And we’re excited to announce a capability widely asked for by Power BI users: a dark mode in Power BI Desktop. 

Finally, we’re announcing the general availability of OneLake integration for semantic models in Import mode. OneLake integration automatically writes data imported into your semantic models to Delta Lake tables in OneLake so that you can enjoy the benefits of Fabric without any migration effort. Once added to a lakehouse in OneLake, you can use T-SQL, Python, Scala, PySpark, Spark SQL, or R on these Delta tables to consume this data and add business value. All of this value comes at no additional cost as data stored in OneLake for Power BI import semantic models is included in the price of your Power BI licensing.

Learn more about the Power BI announcements in the Power BI September 2024 Feature blog. Also see the AI-powered insights section below for new Copilot experiences for Power BI creators and consumers.

AI-powered data estate

With OneLake, Fabric’s unified data lake, you can create a truly AI-powered data estate to fuel your AI innovation and data culture. OneLake’s shortcuts and mirroring capabilities enable you to access your entire multi-cloud data estate from a single, intuitively organized data lake. With your data in OneLake, you can then work from a single copy across analytics engines, whether you are using Spark, T-SQL, KQL, or Analysis Services and even access that data from other apps like Microsoft Excel or Teams. Today, we are thrilled to share even more capabilities and enhancements coming to OneLake that can help you better connect to and manage your data estate.

One of the biggest benefits of OneLake is the ability to create shortcuts to your data sources, which virtualizes data in OneLake without moving or duplicating it. We are pleased to announce that shortcuts for Google Cloud Services (GCS) and S3-compatible sources are now generally available. These shortcuts also support the on-premises data gateway, which you can use to connect to your on-premises S3-compatible sources as well as GCS buckets that are protected by a virtual private cloud. We’ve also made enhancements to the REST APIs for OneLake shortcuts, including adding support for all current shortcut types and introducing a new list operation. With these improvements, you can programmatically create and manage your OneLake shortcuts.
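For illustration, a programmatic shortcut-creation call would carry a small JSON body naming the shortcut, where it lives in the lakehouse, and the external target it points at. The payload builder below is a hypothetical sketch of the shape such a request might take; the field names are assumptions, not the documented OneLake shortcuts schema:

```python
def build_s3_shortcut_payload(name: str, bucket_url: str, connection_id: str) -> dict:
    """Hypothetical request body for creating an S3-compatible OneLake shortcut.

    Field names here ('path', 'target', 's3Compatible', ...) are invented for
    illustration; consult the OneLake shortcuts REST API docs for the real schema.
    """
    return {
        "name": name,                      # how the shortcut appears in the lakehouse
        "path": "Tables",                  # parent folder inside the Fabric item
        "target": {
            "s3Compatible": {
                "location": bucket_url,    # endpoint of the external bucket
                "subpath": "/",            # prefix within the bucket to expose
                "connectionId": connection_id,  # saved credential reference
            }
        },
    }
```

Because a shortcut only references the external data, creating one is a metadata operation: no bytes are copied, which is what keeps the multi-cloud estate in sync with a single logical copy.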

We’re also excited to announce further integration with Azure Databricks with the ability to access Databricks Unity Catalog tables directly from OneLake—now in preview. Users can just provide the Azure Databricks workspace URL and select the catalog, and Fabric creates a shortcut for every table in the selected catalog, keeping the data in sync in near real-time. Once your Azure Databricks Catalog item is created, it behaves the same as any other item in Fabric, so you can access the table through SQL endpoints, notebooks, or Direct Lake mode for Power BI reports. Learn more about the OneLake shortcut and Azure Databricks announcements in the Fabric September 2024 Updates blog.

At Microsoft Build last May, we announced an expanded partnership with Snowflake that gives our customers the flexibility to easily connect and work across our tools. Today, I’m excited to share progress on this partnership with the upcoming preview of shortcuts to Iceberg tables. In the coming weeks, Microsoft Fabric engines will be able to consume Iceberg data with no movement or duplication using OneLake shortcuts. Simply point to an Iceberg dataset from Snowflake or another Iceberg-compatible service, and OneLake virtualizes the table as a Delta Lake table for broad compatibility across Fabric engines. This means you can work with a single copy of your data across Snowflake and Fabric. With the ability to write Iceberg data to OneLake from Snowflake, Snowflake customers will have the flexibility to store Iceberg data in OneLake and use it across Fabric.

Finally, we’ve released mirroring support for Snowflake databases into general availability—providing a seamless, no-ETL experience for integrating existing Snowflake data with the rest of your data in Microsoft Fabric. With this capability, you can continuously replicate Snowflake data directly into Fabric OneLake in near real-time, while maintaining strong performance on your transactional workloads. Learn more about Snowflake mirroring in Fabric.

AI-powered insights

With your data teams using the AI-enhanced tools in Fabric to accelerate development of insights across your data estate, you then need to ensure these insights reach those who can use them to inform decisions. With easy-to-understand Power BI reports and AI-powered Q&A experiences, Fabric bridges the gap between data and business results to help you foster a culture that empowers everyone to find data-driven answers.

We’re announcing a richer Copilot experience in Power BI to help create reports in a clearer, more transparent way. This new experience, now in preview, includes improved conversational abilities between you and Copilot that make it easier to provide more context to Copilot up front so you can get the report you need on the first try. Copilot will even provide report outlines to improve transparency on the data fields being used. We are also releasing the ability to auto-generate descriptions for measures into general availability. Lastly, report viewers can now use Copilot to summarize a report or page right from the Power BI mobile app, now in preview.

We’re also enhancing email subscriptions for reports by extending dynamic per recipient subscriptions to include both paginated and Power BI reports. With dynamic subscriptions, you can set up a single email subscription that delivers customized reports to each recipient based on the data in the semantic model. For reports that are too large for email format, we are also giving you the ability to deliver Power BI and paginated report subscriptions to a OneDrive or SharePoint location for easy access. Finally, you can now create print-ready, parameterized paginated reports using the Get Data experience in Power BI Report Builder—accessing over 100 data sources.
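Conceptually, a dynamic per-recipient subscription is a filter applied to the model's data once per recipient before delivery, so one subscription definition fans out into many personalized reports. A toy sketch of that idea (the field names `email` and `region` are invented for illustration):

```python
def render_subscriptions(rows, recipients):
    """Build one filtered row set per recipient, keyed by email address.

    `rows` is the shared semantic-model data; each recipient dict carries the
    attribute (here, 'region') that scopes what they are allowed to see.
    """
    return {
        rcpt["email"]: [row for row in rows if row["region"] == rcpt["region"]]
        for rcpt in recipients
    }
```

The point of the pattern is that the report definition and schedule stay singular; only the final filtering step varies per recipient.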

Learn more about all of the Power BI announcements in the Power BI September 2024 Feature blog

Start building your Fabric skills

We are grateful so many of you have decided to grow your skills with Microsoft Fabric. In the past six months alone, more than 17,000 individuals have earned the Fabric Analytics Engineer Associate certification, making it the fastest-growing certification in Microsoft’s history. Today, we’re excited to announce a brand-new certification for data engineers coming in late October. The new Microsoft Certified: Fabric Data Engineer Associate certification will help you prove your skills with data ingestion, transformation, administration, monitoring, and performance optimization in Fabric. 

Our portfolio of Microsoft Credentials for Fabric also includes four Microsoft Applied Skills, which are a complement to Microsoft certifications and free of cost. Applied Skills test your ability to complete a real-world scenario in a lab environment and provide you with formal credentials that showcase your technical skills to employers. For Fabric, we have Applied Skills credentials covering the implementation of lakehouse, data warehouse, data science, and real-time intelligence solutions. 

Visit the Fabric Career Hub to get the best free resources to help you get certified and the latest certification exam discounts. Don’t forget to also join the vibrant Fabric community to connect with like-minded data professionals, get all your Fabric technical questions answered, and stay current on the latest product updates, training programs, events, and more. 

And if you want to test your skills, explore Fabric, and win prizes, you can also register for the Microsoft Fabric and AI Learning Hackathon. To learn more, you can join our Ask Me Anything event on October 8. 

Join us at Microsoft Ignite

We are excited to bring even more innovation to the Microsoft Fabric platform at Microsoft Ignite this year. Join us from November 19 through November 21, 2024, either in person in Chicago or online. You will see firsthand the latest solutions and capabilities across all of Microsoft and connect with experts, community leaders, and partners who can help you modernize and manage your own intelligent apps, safeguard your business and data, accelerate productivity, and so much more. 

Explore additional resources for Microsoft Fabric

If you want to learn more about Microsoft Fabric: 


]]>
Boost your AI with Azure’s new Phi model, streamlined RAG, and custom generative AI models https://azure.microsoft.com/en-us/blog/boost-your-ai-with-azures-new-phi-model-streamlined-rag-and-custom-generative-ai-models/ Thu, 22 Aug 2024 16:00:00 +0000 We're excited to announce several updates to help developers quickly create AI solutions with greater choice and flexibility leveraging the Azure AI toolchain.

The post Boost your AI with Azure’s new Phi model, streamlined RAG, and custom generative AI models appeared first on Microsoft AI Blogs.

]]>
As developers continue to build and deploy AI applications at scale across organizations, Azure is committed to delivering unprecedented choice in models as well as a flexible and comprehensive toolchain to handle the unique, complex, and diverse needs of modern enterprises. This powerful combination of the latest models and cutting-edge tooling empowers developers to create highly customized solutions grounded in their organization’s data. That’s why we are excited to announce several updates to help developers quickly create AI solutions with greater choice and flexibility leveraging the Azure AI toolchain:

  • Improvements to the Phi family of models, including a new Mixture of Experts (MoE) model and support for more than 20 languages.
  • AI21 Jamba 1.5 Large and Jamba 1.5 on Azure AI models as a service.
  • Integrated vectorization in Azure AI Search to create a streamlined retrieval augmented generation (RAG) pipeline with integrated data prep and embedding.
  • Custom generative extraction models in Azure AI Document Intelligence, so you can now extract custom fields for unstructured documents with high accuracy.
  • The general availability of Text to Speech (TTS) Avatar, a capability of Azure AI Speech service, which brings natural-sounding voices and photorealistic avatars to life across diverse languages, enhancing customer engagement and overall experience. 
  • The general availability of Conversational PII Detection Service in Azure AI Language.

Use the Phi model family with more languages and higher throughput 

We are introducing a new model to the Phi family, Phi-3.5-MoE, a Mixture of Experts (MoE) model. This new model combines 16 smaller experts into one, which delivers improvements in model quality and lower latency. While the model has 42B parameters, as an MoE model it only uses 6.6B active parameters at a time: it specializes subsets of the parameters (experts) during training and then, at runtime, uses the relevant experts for each task. This approach gives customers the benefit of the speed and computational efficiency of a small model with the domain knowledge and higher-quality outputs of a larger model. Read more about how we used a Mixture of Experts architecture to improve Azure AI translation performance and quality.
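For intuition, the heart of an MoE layer is a gating function that routes each token to a small number of experts, so only those experts' parameters run at inference time. A toy top-k router illustrating the mechanism (a conceptual sketch, not Phi-3.5-MoE's implementation):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route_to_experts(gate_logits, k=2):
    """Pick the top-k experts for one token and renormalize their weights.

    `gate_logits` has one entry per expert. Only the selected experts would be
    evaluated, which is why an MoE model with many total parameters can run
    with a small 'active' parameter count per token.
    """
    probs = softmax(gate_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return [(i, probs[i] / norm) for i in top]
```

The layer's output is then the weighted sum of the chosen experts' outputs, with the weights returned by the router.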

We are also announcing a new mini model, Phi-3.5-mini. Both the new MoE model and the mini model are multi-lingual, supporting over 20 languages. The additional languages allow people to interact with the model in the language they are most comfortable using.

Even with the additional languages, Phi-3.5-mini remains a tiny model at just 3.8B parameters.

Companies like CallMiner, a conversational intelligence leader, are selecting and using Phi models for their speed, accuracy, and security.

“CallMiner is constantly innovating and evolving our conversation intelligence platform, and we’re excited about the value Phi models are bringing to our GenAI architecture. As we evaluate different models, we’ve continued to prioritize accuracy, speed, and security... The small size of Phi models makes them incredibly fast, and fine-tuning has allowed us to tailor to the specific use cases that matter most to our customers at high accuracy and across multiple languages. Further, the transparent training process for Phi models empowers us to limit bias and implement GenAI securely. We look forward to expanding our application of Phi models across our suite of products.”

—Bruce McMahon, Chief Product Officer, CallMiner

To make outputs more predictable and define the structure needed by an application, we are bringing Guidance to the Phi-3.5-mini serverless endpoint. Guidance is a proven open-source Python library (with more than 18,000 GitHub stars) that enables developers to express, in a single API call, the precise programmatic constraints the model must follow for structured output: JSON, Python, HTML, SQL, or whatever the use case requires. With Guidance, you can eliminate expensive retries and can, for example, constrain the model to select from pre-defined lists (e.g., medical codes), restrict outputs to direct quotes from provided context, or follow any regex. Guidance steers the model token by token in the inference stack, producing higher-quality outputs and reducing cost and latency by as much as 30-50% in highly structured scenarios. 
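Guidance's real API steers generation token by token, which cannot be reproduced here, but the underlying contract, accepting only outputs that match a predefined list or regex, can be sketched in a few lines as post-hoc validation (a conceptual illustration only, not Guidance's API):

```python
import re

def constrain_to_choices(model_output, choices, fallback=None):
    """Accept the model's answer only if it is one of the allowed values.

    Mirrors the 'select from a pre-defined list' constraint (e.g., medical
    codes) described above, applied after generation instead of during it.
    """
    return model_output if model_output in choices else fallback

def constrain_to_regex(model_output, pattern):
    """Accept the output only when it fully matches the required shape."""
    return model_output if re.fullmatch(pattern, model_output) else None
```

The practical difference is that a validate-and-retry loop like this pays for failed generations, whereas token-level steering rules out invalid tokens as they are produced, which is where the quoted cost and latency savings come from.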

We are also updating the Phi vision model with multi-frame support. This means that Phi-3.5-vision (4.2B parameters) can reason over multiple input images, unlocking new scenarios such as identifying differences between images.

At the core of our product strategy, Microsoft is dedicated to supporting the development of safe and responsible AI and provides developers with a robust suite of tools and capabilities.

Developers working with Phi models can assess quality and safety using both built-in and custom metrics using Azure AI evaluations, informing necessary mitigations. Azure AI Content Safety provides built-in controls and guardrails, such as prompt shields and protected material detection. These capabilities can be applied across models, including Phi, using content filters or can be easily integrated into applications through a single API. Once in production, developers can monitor their application for quality and safety, adversarial prompt attacks, and data integrity, making timely interventions with the help of real-time alerts. 

Introducing AI21 Jamba 1.5 Large and Jamba 1.5 on Azure AI models as a service

Furthering our goal to provide developers with access to the broadest selection of models, we are excited to also announce two new open models, Jamba 1.5 Large and Jamba 1.5, available in the Azure AI model catalog. These models use the Jamba architecture, blending Mamba and Transformer layers for efficient long-context processing.

According to AI21, the Jamba 1.5 Large and Jamba 1.5 models are the most advanced in the Jamba series. These models utilize the hybrid Mamba-Transformer architecture, which balances speed, memory, and quality by employing Mamba layers for short-range dependencies and Transformer layers for long-range dependencies. As a result, this family of models excels at managing extended contexts, making it ideal for industries such as financial services, healthcare and life sciences, and retail and CPG.

“We are excited to deepen our collaboration with Microsoft, bringing the cutting-edge innovations of the Jamba Model family to Azure AI users…As an advanced hybrid SSM-Transformer (Structured State Space Model-Transformer) set of foundation models, the Jamba model family democratizes access to efficiency, low latency, high quality, and long-context handling. These models empower enterprises with enhanced performance and seamless integration with the Azure AI platform”—Pankaj Dugar, Senior Vice President and General Manager of North America at AI21

Simplify RAG for generative AI applications

We are streamlining RAG pipelines with integrated, end-to-end data preparation and embedding. Organizations often use RAG in generative AI applications to incorporate knowledge of private, organization-specific data without having to retrain the model. With RAG, you can use strategies like vector and hybrid retrieval to surface relevant, grounded information for a query based on your data. However, performing vector search requires significant data preparation: your app must ingest, parse, enrich, embed, and index data of various types, often living in multiple sources, just so that it can be used in your copilot.

Today we are announcing general availability of integrated vectorization in Azure AI Search. Integrated vectorization automates and streamlines these processes all into one flow. With automatic vector indexing and querying using integrated access to embedding models, your application unlocks the full potential of what your data offers.

In addition to improving developer productivity, integrated vectorization enables organizations to offer turnkey RAG systems as solutions for new projects, so teams can quickly build an application specific to their datasets and needs without having to build a custom deployment each time.
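As a rough illustration of the flow that integrated vectorization automates (ingest, embed, index, query), here is a self-contained sketch using a toy bag-of-words embedding. The real service calls Azure-hosted embedding models and maintains a vector index; every name and document below is illustrative:

```python
import math
from collections import Counter

# Toy end-to-end flow: ingest -> embed -> index -> query. The bag-of-words
# "embedding" is purely illustrative; integrated vectorization performs the
# equivalent steps inside Azure AI Search with real embedding models.

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

documents = [
    "Quarterly revenue grew in the retail segment",
    "The healthcare division launched a new clinical trial",
    "Employee onboarding checklist and HR policies",
]
index = [(doc, embed(doc)) for doc in documents]  # "indexing" step

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank every indexed document against the query vector.
    query_vec = embed(query)
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(retrieve("revenue in retail"))
```

The value of the integrated feature is that the `embed`/`index`/`retrieve` plumbing shown here is managed for you as one flow, rather than built and maintained per application.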

Customers like SGS & Co, a global brand impact group, are streamlining their workflows with integrated vectorization.

“SGS AI Visual Search is a GenAI application built on Azure for our global production teams to more effectively find sourcing and research information pertinent to their project… The most significant advantage offered by SGS AI Visual Search is utilizing RAG, with Azure AI Search as the retrieval system, to accurately locate and retrieve relevant assets for project planning and production”—Laura Portelli, Product Manager, SGS & Co

Extract custom fields in Document Intelligence 

You can now extract custom fields from unstructured documents with high accuracy by building and training a custom generative model within Document Intelligence. This new capability uses generative AI to extract user-specified fields from documents across a wide variety of visual templates and document types, and you can get started with as few as five training documents. While building a custom generative model, automatic labeling saves the time and effort of manual annotation, results are displayed as grounded where applicable, and confidence scores are available to quickly filter high-quality extracted data for downstream processing and reduce manual review time.
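The confidence-score filtering described above might look like this in application code. The field names, values, and the 0.8 threshold are illustrative assumptions for the sketch, not anything returned by or required by the service:

```python
# Hypothetical extraction result: field -> (value, confidence).
# Fields above the threshold flow straight to downstream processing;
# the rest are routed to human review.
extracted = {
    "invoice_number": ("INV-1042", 0.97),
    "total_amount": ("1,850.00", 0.91),
    "po_reference": ("PO-77", 0.54),  # low confidence -> review
}

THRESHOLD = 0.8  # illustrative cut-off; tune per workload

def triage(fields: dict) -> tuple[dict, dict]:
    accepted = {k: v for k, (v, c) in fields.items() if c >= THRESHOLD}
    review = {k: v for k, (v, c) in fields.items() if c < THRESHOLD}
    return accepted, review

accepted, review = triage(extracted)
print(accepted)
print(review)
```

A split like this is what lets confidence scores lower manual review time: only the low-confidence tail of extractions needs a human in the loop.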

Create engaging experiences with prebuilt and custom avatars 

Today we are excited to announce that Text to Speech (TTS) Avatar, a capability of Azure AI Speech service, is now generally available. This service brings natural-sounding voices and photorealistic avatars to life, across diverse languages and voices, enhancing customer engagement and overall experience. With TTS Avatar, developers can create personalized and engaging experiences for their customers and employees, while also improving efficiency and providing innovative solutions.

The TTS Avatar service provides developers with a variety of pre-built avatars, featuring a diverse portfolio of natural-sounding voices, as well as an option to create custom synthetic voices using Azure Custom Neural Voice. Additionally, the photorealistic avatars can be customized to match a company’s branding. For example, Fujifilm is using TTS Avatar with NURA, the world’s first AI-powered health screening center.

“Embracing the Azure TTS Avatar at NURA as our 24-hour AI assistant marks a pivotal step in healthcare innovation. At NURA, we envision a future where AI-powered assistants redefine customer interactions, brand management, and healthcare delivery. Working with Microsoft, we’re honored to pioneer the next generation of digital experiences, revolutionizing how businesses connect with customers and elevate brand experiences, paving the way for a new era of personalized care and engagement. Let’s bring more smiles together”—Dr. Kasim, Executive Director and Chief Operating Officer, Nura AI Health Screening

As we bring this technology to market, ensuring the responsible use and development of AI remains our top priority. Custom Text to Speech Avatar is a limited-access service into which we have integrated safety and security features. For example, the system embeds invisible watermarks in avatar outputs; these watermarks allow approved users to verify whether a video was created using Azure AI Speech’s avatar feature. Additionally, we provide guidelines for TTS avatar’s responsible use, including measures to promote transparency in user interactions, to identify and mitigate potential bias or harmful synthetic content, and to integrate with Azure AI Content Safety. In this transparency note, we describe the technology and capabilities of TTS Avatar, its approved use cases, considerations when choosing use cases, its limitations, fairness considerations, and best practices for improving system performance. We also require all developers and content creators to apply for access and comply with our code of conduct when using TTS Avatar features, including prebuilt and custom avatars.

Use Azure Machine Learning resources in VS Code

We’re thrilled to announce the general availability of the VS Code extension for Azure Machine Learning. The extension allows you to build, train, deploy, debug, and manage machine learning models with Azure Machine Learning directly from your favorite VS Code setup, whether on desktop or web. With features like VNET support, IntelliSense, and integration with the Azure Machine Learning CLI, the extension is now ready for production use. Read this Tech Community blog to learn more about the extension.

Customers like Fashable have put this into production.

“We have been using the VS Code extension for Azure Machine Learning since its preview release, and it has significantly streamlined our workflow… The ability to manage everything from building to deploying models directly within our preferred VS Code environment has been a game-changer. The seamless integration and robust features like interactive debugging and VNET support have enhanced our productivity and collaboration. We are thrilled about its general availability and look forward to leveraging its full potential in our AI projects.”—Ornaldo Ribas Fernandes, Co-founder and CEO, Fashable

Protect users’ privacy 

Today we are excited to announce the general availability of the Conversational PII Detection Service in Azure AI Language, enhancing Azure AI’s ability to identify and redact sensitive information in conversations, starting with the English language. This service aims to improve data privacy and security for developers building generative AI apps for their enterprise. The Conversational PII redaction service expands upon the Text PII redaction service, supporting customers who need to identify, categorize, and redact sensitive information such as phone numbers and email addresses in unstructured text. The Conversational PII model is specialized for conversational-style inputs, particularly those found in speech transcriptions of meetings and calls.
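To show the shape of the identify, categorize, and redact flow on a call transcript, here is a standalone sketch. The Azure service uses trained models and supports many more PII categories; the two regex patterns below are purely illustrative stand-ins:

```python
import re

# Toy redaction pass over a call transcript showing the identify ->
# categorize -> redact flow. The Azure Conversational PII service does this
# with trained models and many more categories; the two regex patterns
# below are illustrative stand-ins for phone numbers and email addresses.

PATTERNS = {
    "PhoneNumber": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "Email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(turn: str) -> str:
    # Replace each detected entity with its category label.
    for category, pattern in PATTERNS.items():
        turn = pattern.sub(f"[{category}]", turn)
    return turn

transcript = [
    "Agent: Can I get a callback number?",
    "Caller: Sure, it's 425-555-0199, or email me at jo@example.com.",
]
for turn in transcript:
    print(redact(turn))
```

Replacing entities with category labels, rather than deleting them, preserves the conversational structure for downstream analytics while removing the sensitive values themselves.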

Self-serve your Azure OpenAI Service PTUs  

We recently announced updates to Azure OpenAI Service, including the ability to manage your Azure OpenAI Service quota deployments without relying on support from your account team, allowing you to request Provisioned Throughput Units (PTUs) more flexibly and efficiently. We also released OpenAI’s latest model the day it became available, August 7, which introduced Structured Outputs for the new GPT-4o and GPT-4o mini models. Structured Outputs are particularly valuable for developers who need to validate and format AI outputs against structures like JSON Schemas.
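As a sketch of what a Structured Outputs request can look like, the snippet below builds a `response_format` payload that pins the model to a JSON schema. The order-summary schema is an invented example, and the exact request envelope should be confirmed against the Azure OpenAI documentation for your API version:

```python
import json

# Sketch of a Structured Outputs request body: the response_format pins the
# model to a JSON schema, so the reply is guaranteed to parse and validate.
# The order-summary schema is an invented example for illustration.

order_schema = {
    "type": "object",
    "properties": {
        "order_id": {"type": "string"},
        "total": {"type": "number"},
        "items": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["order_id", "total", "items"],
    "additionalProperties": False,
}

request_body = {
    "messages": [
        {"role": "user", "content": "Summarize order 1042 as JSON."},
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "order_summary",
            "strict": True,  # reject any output that deviates from the schema
            "schema": order_schema,
        },
    },
}

print(json.dumps(request_body["response_format"], indent=2))
```

With `strict` enabled, application code can parse the response directly instead of wrapping every call in validate-and-retry logic.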

We continue to invest across the Azure AI stack to bring state-of-the-art innovation to our customers so you can build, deploy, and scale your AI solutions safely and confidently. We cannot wait to see what you build next.

Stay up to date with more Azure AI news 

The post Boost your AI with Azure’s new Phi model, streamlined RAG, and custom generative AI models appeared first on Microsoft AI Blogs.

]]>
Boost your AI with Azure’s new Phi model, streamlined RAG, and custom generative AI models https://azure.microsoft.com/en-us/blog/boost-your-ai-with-azures-new-phi-model-streamlined-rag-and-custom-generative-ai-models/ Thu, 22 Aug 2024 16:00:00 +0000 We're excited to announce several updates to help developers quickly create AI solutions with greater choice and flexibility leveraging the Azure AI toolchain.

The post Boost your AI with Azure’s new Phi model, streamlined RAG, and custom generative AI models appeared first on Microsoft AI Blogs.

]]>
Empowering partnerships: The Microsoft Fabric Conference—your gateway to AI innovation http://approjects.co.za/?big=en-us/microsoft-fabric/blog/2024/07/22/empowering-partnerships-the-microsoft-fabric-conference-your-gateway-to-ai-innovation/ Mon, 22 Jul 2024 15:00:00 +0000 The upcoming European Microsoft Fabric Community Conference 2024 in Stockholm, Sweden from September 24 to 27, 2024, is not just an event—it's a beacon for Microsoft partners who are steering the future of AI and analytics.

The post Empowering partnerships: The Microsoft Fabric Conference—your gateway to AI innovation appeared first on Microsoft AI Blogs.

]]>
In case you haven’t heard, building on the success of the inaugural Microsoft Fabric Community Conference in Las Vegas, Nevada earlier this year, the conference has expanded to Europe!  

The upcoming European Microsoft Fabric Community Conference 2024 in Stockholm, Sweden is not just an event—it’s a beacon for Microsoft partners who are steering the future of AI and analytics. The conference, set to take place from September 24 to 27, 2024, is a pivotal gathering for those at the forefront of deploying and adopting Microsoft Fabric’s transformative technologies.  

Stockholm, the heart of Scandinavian innovation, is the perfect backdrop for the Fabric Community Conference. Known for its vibrant tech scene and forward-thinking approach, Stockholm embodies the spirit of progress that Microsoft and its partners strive for. 

What to expect at the European Microsoft Fabric Community Conference 2024

Expect to be wowed. You’ll hear from leading Microsoft and community experts from around the world covering topics ranging from Retrieval-Augmented Generation (RAG) pattern applications and semantic modeling, to data governance and sustainability, to integrating applications into the Fabric framework. And if that isn’t enough, you’ll get to experience the latest features from Fabric, Power BI, Azure Databases, Azure AI, Microsoft Purview, and more, demonstrating how Fabric serves as a unified platform that empowers both data and business professionals across all industries.   

And as a prelude to the main conference, we invite you to a special Partner Pre-Day: a day dedicated to you, our partners, to ensure you’re equipped with the knowledge and connections to thrive. See more details below.

We’ve also planned a few other activities to connect with the community: 

  • Partner happy hour: Network with the Fabric leadership and product team. An invaluable opportunity to connect with the team bringing you Fabric.
  • One-on-one partner executive connections: Meet with our executives and Fabric partner team to discuss your priorities and needs and gain a better understanding of partner motions and resources.
  • Partner-to-partner connection: Connect with other partners to discuss joint business opportunities and share learnings. 

What’s in it for Microsoft partners 

For Microsoft partners, the conference is more than just a learning experience; it represents an amazing chance for partners to forge deeper connections with the minds behind the technology, to engage with customers eager to leverage your expertise to grow their business, and to network with peers and other partners who are equally passionate about driving adoption of these amazing technologies. Are you excited yet? 

And of course, let’s not forget the Partner Pre-Day, an invaluable opportunity for partners to delve into the latest Microsoft partner initiatives, resources, and strategies, focusing on how Microsoft Fabric drives business growth and innovation.

Here’s a sneak peek into the Partner Pre-Day: 

  • Get inspired: Attend Ask Me Anything sessions with top Microsoft data, AI, and analytics leadership and this year’s Partner of the Year Award winners.
  • Learn: Gain insights on how best to take advantage of partner-only offerings and incentives, access to resources, and deep technical skilling customized for our Microsoft AI Cloud Partner Program ecosystem.
  • Share: Meet one-on-one with Microsoft executives and the Microsoft partner team to share what’s on your mind.
  • Connect: Forge new relationships and strengthen existing ones with your partner peers for joint business outcomes.  

If you’re as excited about the Fabric Conference in Stockholm as we are, you’ll want to stay connected for all the latest updates. Be sure to follow the event on Microsoft’s partner social media channels on LinkedIn, Fabric YouTube, and the Fabric Tech Community Blog. These platforms are your go-to for live updates, exclusive behind-the-scenes content, and a chance to network with fellow innovators before, during, and after the conference. 

And hey, while you’re at it, why not join the Fabric Partner Community? It’s a fantastic way to get involved with weekly engineering calls where you can dive deep into the tech, ask questions, and share your insights. It’s like having a backstage pass to the world of Microsoft Fabric.

Now let’s make some noise—take these next steps

  • Register for the Fabric Conference today. By registering early for the 3-day pass, you can take advantage of an exclusive €200 discount using the code MSCUST.
  • Check out sponsorship opportunities.
  • And of course, share this blog post with your network to start those pre-conference discussions online.

Start your Fabric journey today

Check out these additional resources to learn more about Fabric and prepare your organization for the next phase of your Fabric journey. 

  • Read this blog to learn how to enable your organization to help customers prepare their data for AI innovation with Microsoft Fabric.
  • Check out the new Fabric certification and Fabric Career Hub to get your team upskilled and let customers know you’re Fabric certified.
  • Join the Fabric Partner Community on Microsoft Teams, where you can attend the Fabric Engineering Connection (our weekly partner community calls with product engineering), stay connected with other partners, and learn of the latest resources, opportunities, and more.
  • Visit Azure Migrate and Modernize and Azure Innovate to learn more about Azure Innovate, our hero partner offering, and to access resources and funding for customer projects. 

Let’s get the buzz going and show the world what the Microsoft partner community is all about. 

I can’t wait to see you all in Stockholm, Sweden for an unforgettable experience. Let’s innovate, collaborate, and grow together!


In partnership with Microsoft, the European Microsoft Fabric Community Conference is brought to you by the team behind ESPC, Europe’s premier Microsoft 365 Conference and the European Power Platform Conference. 

The post Empowering partnerships: The Microsoft Fabric Conference—your gateway to AI innovation appeared first on Microsoft AI Blogs.

]]>