OneLake News and Insights | Microsoft Fabric Blog

FabCon and SQLCon 2026: Unifying databases and Fabric on a single data platform
https://azure.microsoft.com/en-us/blog/fabcon-and-sqlcon-2026-unifying-databases-and-fabric-on-a-single-data-platform/
Wed, 18 Mar 2026

Welcome to the third annual FabCon and our first ever SQLCon here in Atlanta, Georgia. With nearly 300 workshops and sessions, this joint event will highlight how we are bringing the power of Microsoft SQL and Microsoft Fabric together to create a single, unified platform. But FabCon 2026 and SQLCon 2026 are about more than product innovation. They're about providing space for our 8,000 attendees to come together and share real experiences, learn from each other, and solve challenges side-by-side. Only together can we move beyond the hype and into meaningful results.

Learn more about FabCon and SQLCon 2026
The excitement surrounding this event reflects the same momentum we’re seeing across our data portfolio. Just two and a half years after Microsoft Fabric reached general availability, it’s already serving more than 31,000 customers and remains the fastest-growing data platform in Microsoft’s history. Fortune 500 companies like The Coca-Cola Company are already using Fabric at scale across their organizations.

Microsoft Fabric is helping us evolve our data foundation into a more unified, AI-ready platform. Combined with Power BI and capabilities like Fabric IQ, it enables the enterprise to turn data into intelligence and act on it faster.

Shekhar Gowda, Vice President of Global Marketing Technologies at The Coca-Cola Company
Our databases are accelerating just as quickly, with SQL Server 2025 growing more than twice as fast as the previous version.

Today, we’re thrilled to share how we are bringing the power of databases and Fabric together to form a truly converged data platform—one that unifies transactional, operational, and analytical data under a single, consistent architecture. I’ll also highlight how we’ve enhanced Fabric to help you transform data into the semantic knowledge AI needs to understand your business, powered by Fabric IQ and Power BI’s industry-leading semantic model technology.

Introducing the Database Hub in Microsoft Fabric
Databases sit at the heart of the enterprise data estate—a system of record powering applications, transactions, and mission‑critical insights. Yet as organizations scale across cloud, on‑premises, and edge environments, database estates have become increasingly fragmented and isolated. As AI places even greater demands on data estates, unifying databases under a single access point and control plane has become essential.

To address this challenge, Fabric is expanding its role as the central access point for enterprise data with the Database Hub in Fabric, now available in early access. Database Hub in Fabric provides a unified database management experience that brings together databases across edge, cloud, and Fabric into a single, coherent view. Teams now have one place to explore, observe, govern, and optimize their entire database estate—including Azure SQL, Azure Cosmos DB, Azure Database for PostgreSQL, SQL Server (enabled by Azure Arc), Azure Database for MySQL, and Fabric Databases—without changing how each service is deployed.

Built for scale, the Database Hub in Fabric introduces an agent‑assisted, human-in-the-loop approach to database management. With built-in observability, delegated governance, and Microsoft Copilot-powered insights, teams can deploy intelligent agents that continuously reason over estate‑wide signals to surface what changed, explain why it matters, and guide teams toward what to do next. The result is a simpler, more confident way to manage databases at scale. Over time, this model enables database estates to become more proactive, resilient, and intelligent, laying the foundation for greater autonomy, while keeping humans firmly in control of goals, boundaries, and trust.

Learn more about Database Hub in Fabric and what’s new across Databases
Bringing databases together under a single management layer is a critical step as you prepare your estates for AI at scale. But it’s not the end of the journey. The challenge shifts from where data lives to how data is understood, connected, and activated across the enterprise.

Getting your data estate ready for AI with Fabric
As organizations move from traditional applications to AI‑powered, multi‑agent systems, the advantage is shifting away from the specific model you deploy. It now lies in the intelligence and context that allow agents to understand how your business runs, the current state of your business, and your institutional knowledge so they can take meaningful action.

This is the challenge Microsoft IQ is designed to address. Unlike point solutions on the market today, Microsoft IQ provides an intelligence layer that delivers shared, enterprise-grade business context to every agent. That context is built from three complementary sources: productivity signals from Work IQ, institutional knowledge from Foundry IQ, and live business data from Fabric IQ.

However, like the database layer, the IQ context layer is a critical part of a successful and healthy AI foundation, but it is not the full story. Building a complete AI-ready data foundation requires investing in four core steps:

Unifying your data estate to eliminate silos and reduce architectural complexity.
Processing and harmonizing data so it becomes AI-ready, clean, connected, and structured for both operational and analytical use.
Curating semantic meaning to give agents contextual understanding, enabling them to interpret data the way your teams already do. This is where Microsoft IQ comes into play.
Empowering AI agents to act, applying that context to automate workflows, accelerate decisions, and transform operations end‑to‑end.
Unifying your data estate with Microsoft OneLake
Every AI initiative starts with the same fundamental challenge: understanding where your data lives and how to bring it together. Microsoft OneLake was built to solve that problem by unifying data across clouds, on-premises environments, and third-party platforms into a single logical data lake without unnecessary extracting, transforming, and loading (ETL), fragmentation, or duplicated copies.

Connecting to more sources than ever before
Today, we’re expanding Mirroring in Fabric to support even more systems our customers rely on. Mirroring for SharePoint lists and for Dremio is now in preview, with Azure Monitor coming soon, while mirroring for Oracle and for SAP Datasphere is generally available—all of which are part of the core mirroring capabilities. We are also introducing extended capabilities in mirroring designed to help you operationalize mirrored sources at scale, including Change Data Feed (CDF) and the ability to create views on top of mirrored data, starting with Snowflake. Extended capabilities for mirroring will be offered as a paid option.

Shortcut transformations are also now generally available, allowing data to be shaped automatically as it connects to or moves within OneLake. You can convert formats such as Excel to Delta tables, now in preview, and apply AI-powered transformations.

Additionally, we are continuing to invest in open interoperability, ensuring OneLake works seamlessly with the platforms organizations already use. We are excited to announce that the ability to natively read from OneLake through Azure Databricks Unity Catalog is now in public preview. We also recently announced the general availability of our interoperability with Snowflake.

I’m also excited to share that Auger, a rapidly growing supply chain platform designed to bring intelligence and automation to global operations, has built its platform on Fabric, with all data stored natively in OneLake. This architecture enables Auger customers to seamlessly access their operations data through OneLake shortcuts within their own Fabric environments and use the full power of the platform including Power BI, Fabric data agents, and more. Learn more in my blog, co-authored with Auger Chief Executive Officer Dave Clark.

Protect your data with OneLake security, now generally available
Security and governance remain foundational to OneLake. I’m thrilled to announce OneLake security will be generally available in the coming weeks, enabling data owners to define roles, enforce row- and column-level controls, and manage permissions through a single unified model that follows the data.
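To make the "single unified model that follows the data" idea concrete, here is a minimal sketch in plain Python. It is illustrative only, not the OneLake security API: the role name, columns, and predicate are invented. The point is that one role definition combines a row-level predicate with a column allow-list, and the same definition is applied wherever the data is read.

```python
# Illustrative sketch only -- not the OneLake security API. It shows how
# one role definition can combine row-level and column-level rules.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SecurityRole:
    name: str
    allowed_columns: set          # column-level security: visible columns
    row_filter: Callable          # row-level security: predicate per row

def apply_role(rows, role):
    """Return only the rows and columns this role may see."""
    return [
        {k: v for k, v in row.items() if k in role.allowed_columns}
        for row in rows
        if role.row_filter(row)
    ]

sales = [
    {"region": "EMEA", "customer": "Contoso", "revenue": 1200, "tax_id": "x"},
    {"region": "APAC", "customer": "Fabrikam", "revenue": 900, "tax_id": "y"},
]

# Hypothetical role: EMEA rows only, sensitive tax_id column hidden.
emea_analyst = SecurityRole(
    name="EMEA-Analyst",
    allowed_columns={"region", "customer", "revenue"},
    row_filter=lambda r: r["region"] == "EMEA",
)

print(apply_role(sales, emea_analyst))
# [{'region': 'EMEA', 'customer': 'Contoso', 'revenue': 1200}]
```

Because the role travels with the data rather than with each engine, every consumer sees the same filtered view without per-tool permission logic.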

To learn more about these announcements, read the OneLake blog and the Fabric Data Factory blog.

Processing and harmonizing data with Fabric analytics
AI agents are only as reliable as the data you feed them. Before data can train or ground an agent, it must be integrated, cleaned, and structured, so the agent operates from consistent, trusted information. With industry-leading engines in Fabric like Spark, T-SQL, KQL, and Analysis Services, we can equip data teams to do exactly that.

Now, we are expanding these capabilities with the introduction of Runtime 2.0 in preview, purpose-built for large-scale data computation. It incorporates Apache Spark 4.x, Delta Lake 4.x, Scala 2.13, and Azure Linux Mariner 3.0 to power advanced enterprise workloads. Materialized lake views are also now generally available, simplifying medallion architecture implementation in Spark SQL and PySpark and enabling always up-to-date pipelines with no manual orchestration. In addition, a new agentic Copilot experience in notebooks delivers deeper context awareness, reasons over your workspace, and generates code with greater speed and precision.
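The orchestration that materialized lake views remove can be pictured as a small dependency graph refreshed in order. The sketch below simulates that idea with plain Python (the table names, transformations, and data are invented, and real materialized lake views are declared in Spark SQL or PySpark, not like this): derived views declare their upstream dependencies, and everything downstream is recomputed in topological, medallion-style order.

```python
# Conceptual sketch of what materialized lake views automate: derived
# views declared with their dependencies, refreshed in dependency order.
# All names and data here are illustrative, not Fabric syntax.
from graphlib import TopologicalSorter

# view name -> (upstream dependencies, transformation over current tables)
views = {
    "silver_orders": ({"bronze_orders"},
                      lambda t: [r for r in t["bronze_orders"] if r["qty"] > 0]),
    "gold_summary": ({"silver_orders"},
                     lambda t: {"total_qty": sum(r["qty"] for r in t["silver_orders"])}),
}

def refresh_all(tables):
    """Recompute every derived view in topological (bronze -> gold) order."""
    order = TopologicalSorter({name: deps for name, (deps, _) in views.items()})
    for name in order.static_order():
        if name in views:                 # base tables are left as-is
            _, transform = views[name]
            tables[name] = transform(tables)
    return tables

tables = {"bronze_orders": [{"qty": 3}, {"qty": 0}, {"qty": 5}]}
refresh_all(tables)
print(tables["gold_summary"])  # {'total_qty': 8}
```

With materialized lake views, this refresh loop is the platform's job: you declare the views and dependencies, and the pipeline stays current without hand-written orchestration code.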

For real-time scenarios, we're launching Microsoft Fabric Maps into general availability. Fabric Maps adds geospatial context to your agents and operations by turning large volumes of location-based data into interactive, real-time visual insights.

For a comprehensive overview of these announcements and much more, read the Fabric Analytics announcement blog and the Fabric Real-Time Intelligence announcement blog.

Creating semantic meaning with Fabric IQ
Preparing raw data for AI is essential. The next step is transforming that data into meaningful, unified business context. That is where Fabric IQ comes in.

Fabric IQ unifies analytical data and operational data, including telemetry, time series, graph, and geospatial data, within a shared semantic framework of business entities, relationships, properties, rules, and actions. Instead of thinking in terms of tables and schemas, your teams and agents can operate on this framework, or ontology, aligned to how the business actually runs.

Fabric IQ ontologies will soon become accessible through an MCP server in preview, enabling agents to discover, understand, and act on this semantic layer. Ontologies can also serve as context sources for maps and, soon, for operations agents in Fabric, extending shared business context directly into operational decision-making and execution.
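To illustrate what "entities, relationships, and properties" means in practice, here is a toy ontology in plain Python. This is a sketch of the general pattern, not Fabric IQ's data model, and the entity kinds, keys, and relationship names are invented. An agent grounded in such a layer reasons over named business relationships ("which orders did this customer place?") rather than raw tables and joins.

```python
# Toy ontology sketch: business entities with properties and named
# relationships an agent can traverse. Names here are hypothetical;
# Fabric IQ defines its own ontology model.
from dataclasses import dataclass, field

@dataclass
class Entity:
    kind: str                                   # e.g. "Customer", "Order"
    key: str
    properties: dict = field(default_factory=dict)
    relations: dict = field(default_factory=dict)  # name -> list of entity keys

ontology = {
    "cust-1": Entity("Customer", "cust-1", {"name": "Contoso"},
                     {"placed": ["ord-7"]}),
    "ord-7": Entity("Order", "ord-7", {"amount": 250},
                    {"placed_by": ["cust-1"]}),
}

def traverse(start, relation):
    """Follow a named relationship the way an agent would."""
    return [ontology[k] for k in ontology[start].relations.get(relation, [])]

orders = traverse("cust-1", "placed")
print([o.properties["amount"] for o in orders])  # [250]
```

The value of the shared layer is that every agent and user traverses the same named relationships, so "customer" and "order" mean the same thing everywhere.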

We are also excited to announce planning in Fabric IQ, a new enterprise planning capability that enables organizations to create plans, budgets, forecasts, and scenario models directly on top of Fabric’s semantic models. By complementing Fabric IQ’s ontologies with integrated planning, you get a complete, contextual view of your historical, real-time, and forward planning data. This allows users and agents to quickly answer what has happened, what is happening, and what should happen, all from a single source.

Finally, we recently announced a strategic partnership with NVIDIA to power the next generation of Physical AI by integrating Real-Time Intelligence and Fabric IQ with NVIDIA Omniverse libraries. The combined platform unifies real‑time operational data, business semantics, and physical simulation to enable organizations to optimize their physical operations in scenarios like intelligent digital twins, predictive maintenance, autonomous logistics, and energy optimization.

To learn more about all of our partner announcements, read the Fabric ISV announcement blog and the planning in Fabric IQ blog.

Enhancing the underlying Fabric IQ technology
Powering much of Fabric IQ’s rich experience is a combination of Power BI’s industry-leading semantic model technology and graph in Fabric, our highly scalable graph database. Already delivering insights to more than 35 million active users, semantic models provide the ideal foundation for training agents through Fabric IQ. Now, with the general availability of Direct Lake on OneLake, your tables can be read directly from OneLake with native security enforcement, richer cross-item modeling, and import-class performance without data movement or refresh.

I’m also excited to share that graph in Fabric will be generally available in the coming weeks, enabling teams to visualize and query complex relationships across customers, partners, and supply chains.

To learn more, check out the Fabric IQ announcement blog and the Power BI announcement blog.

Empowering agents to act with Fabric data and operations agents
Frontier organizations are moving beyond general-purpose assistants and instead adopting multi-agent systems composed of specialized agents. These agents are each grounded in specific data and reusable across different systems, allowing you to deliver more accurate, accelerated, and scalable outcomes.

To support your multi-agent systems, Fabric comes with built-in agent creation capabilities with Fabric data agents and operations agents. I’m excited to share that Fabric data agents are now generally available. Fabric data agents can be thought of as virtual analysts, aligned to specific domain data to support deeper analysis and deliver insights. Operations agents complement them by monitoring real-time data, detecting patterns, and taking proactive action.

These agents can be used across Fabric or as foundational knowledge sources in leading AI tools like Microsoft Foundry, Copilot Studio, or even Microsoft 365 Copilot. To learn more about our AI announcements, check out the Fabric analytics blog covering data agents and the Fabric IQ blog covering operations agents.

Building mission-critical applications with developer experiences in Fabric
Developers building the next generation of AI applications need a comprehensive, cost-effective data platform that’s already integrated with your existing tools and workflows. Today, we are expanding Fabric’s developer tooling to meet that demand.

First, Fabric Model Context Protocol (MCP) is advancing with two major milestones. Fabric local MCP is now generally available, providing an open-source local server that connects AI coding assistants such as GitHub Copilot directly to Fabric. Alongside this, we’re introducing the public preview of Fabric remote MCP, a secure, cloud‑hosted execution engine that enables AI agents and automation tools to perform authenticated actions in Fabric.
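The core pattern behind an MCP server is tool dispatch: a client (such as a coding assistant) sends a request naming a tool, and the server validates it and returns a result. The sketch below shows that pattern schematically in plain Python. The tool names and payload shape are made up for illustration; the real Fabric MCP servers define their own tool catalog and follow the full Model Context Protocol message format.

```python
# Schematic sketch of MCP-style tool dispatch, not the Fabric MCP
# implementation. Tool names and payloads here are invented.
import json

# A server exposes a catalog of named tools the client can invoke.
TOOLS = {
    "list_workspaces": lambda args: ["Sales", "Finance"],
    "run_query": lambda args: f"ran: {args['query']}",
}

def handle(request_json):
    """Dispatch one JSON request to the named tool and return a JSON reply."""
    req = json.loads(request_json)
    tool = TOOLS.get(req.get("tool"))
    if tool is None:
        return json.dumps({"error": f"unknown tool {req.get('tool')!r}"})
    return json.dumps({"result": tool(req.get("args", {}))})

print(handle('{"tool": "list_workspaces"}'))
# {"result": ["Sales", "Finance"]}
```

The local server runs this loop on your machine against a local connection, while the remote variant hosts it in the cloud behind authentication, which is why it can perform authenticated actions on behalf of agents.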

We’re also enhancing our Git integration with selective branching, allowing developers to branch out for a specific feature and pull only the items they need. You also get improved change comparisons to more easily review recent updates, and new folder relationships which show how feature workspaces connect to source workspaces.

We’re also launching two open-source projects to help teams move faster with Fabric: Agent Skills for Fabric and Fabric Jumpstart. Agent Skills for Fabric is an open-source set of purpose-built plugins that let you use natural language in the GitHub Copilot terminal to harness the full power of Microsoft Fabric. Additionally, Fabric Jumpstart is designed to help you get off the ground with detailed guidance, reference architectures, and single‑click deployments for sample datasets, notebooks, pipelines, and reports.

Finally, we are announcing that the Fabric Extensibility Toolkit (FET), an evolution of the Workload Development Kit (WDK), is now generally available. Along with this release, we are enabling support for full CI/CD, variable library, and a new management experience in the Admin portal.

Read the Fabric Platform announcement blog
Migrating your existing Azure service to Fabric
As Fabric continues to grow in functionality, we are also simplifying the migration from other Azure services. In addition to our existing Synapse tooling, we are bringing new migration assistants for Azure Data Factory, Azure Synapse Analytics, and Azure SQL in public preview.

The new Fabric migration assistant for Azure Data Factory and Synapse Analytics helps move your existing pipelines and artifacts like Spark pools and notebooks into Fabric with minimal disruption. It’s designed to support incremental modernization, allowing teams to evaluate, convert, and optimize pipelines as they transition to Fabric. The migration assistant for SQL databases helps move SQL Server into Fabric by importing schemas through DACPACs, identifying and resolving compatibility issues with AI assistance, and guiding teams through assessment and data copy workflows for a smoother cutover.

See more Fabric innovation
In addition to the announcements above, we are also rolling out a broad set of Fabric innovations across the platform. For a deeper look at the updates and what’s new this month, visit the Fabric March 2026 Feature summary blog, the Power BI March 2026 feature summary blog, and the latest posts on the Fabric Updates channel.

Explore additional resources for Microsoft Fabric
Sign up for the Fabric free trial.
View the updated Fabric Roadmap.
Try the Microsoft Fabric SKU Estimator.
Visit the Fabric website.
Join the Fabric community.
Read other in-depth, technical blogs on the Microsoft Fabric Updates Blog.
Read additional blogs by industry-leading partners
Sonata Software: Building an AI-ready data platform with data agents, ontology, and governance in Microsoft Fabric
Quadrant Technologies LLC: Real-Time Operational Intelligence in Microsoft Fabric: Deep Dive into RTI Capabilities, Anomaly Detection and Activator Alerting
Inspark: Why switch from Azure Synapse to Microsoft Fabric?
Esri: Unlock the power of location intelligence with ArcGIS for Microsoft Fabric
Dream IT Consulting Services: 8 Real-World Use Cases of Data Agents in Microsoft Fabric
UB Technology Innovations Inc.: From Data Platform to Decision Platform: How Microsoft Fabric and Copilot are Redefining Enterprise Analytics
Simpson Associates: Fabric Data Warehouse: Bringing Structure to Modern Data Strategies
Synapx Ltd.: Migrating Power BI to Microsoft Fabric Lakehouse with Medallion Architecture: A Strategic Imperative for Modern Construction Enterprises
Cloud Services: Real-Time Intelligence in Action: How Microsoft Fabric Helped Delfi Transform Its Newsroom
Cloud Services: Microsoft Fabric Data Agents: A New Reality
iLink Digital: Detect to Act in Seconds: How Real-Time Intelligence Is Rewriting the Rules of Emissions Management
Valorem Reply: How Nonprofits Are Rethinking Data with Microsoft Fabric

What’s new in OneLake and the Fabric platform: more sources, security, and capacity tooling
https://blog.fabric.microsoft.com/en-us/blog/whats-new-in-onelake-and-the-fabric-platform-more-sources-security-and-capacity-tooling?ft=All
Tue, 18 Nov 2025

Organizations today are under immense pressure to unify data spread across clouds, systems, and formats—while also meeting higher standards for security, governance, and AI readiness. Microsoft Fabric was built to solve exactly this challenge. Since launching two years ago, more than 28,000 customers like Dentsu, Eastman, and Apollo Hospitals have adopted Fabric to bring their data together in OneLake and run analytics, AI, and operational workloads on a single, open platform. At Ignite, we’re expanding that foundation with a broad set of innovations that make it even easier to unify your data estate and keep it governed, protected, and ready for AI.

In this blog post, I’ll highlight the new zero-ETL, zero-copy sources in OneLake, deeper interoperability between OneLake and Microsoft Foundry, and new tools to help admins manage capacity, security, and governance at scale. Together, these updates further cement Fabric as the ideal data platform for your mission-critical workloads—open, integrated, secure, and built to connect every part of your data estate to the intelligence your business needs. 

What I’m covering here is only part of the story. For a deeper look at our new workload called Fabric IQ, new bidirectional interoperability with SAP and Salesforce, the general availability of Fabric Databases, and several other major announcements, I encourage you to read the Azure Data announcement blog from Arun Ulag, President of Azure Data. 

Unify your entire data estate with Microsoft OneLake

With Microsoft OneLake, you can access your entire multi-cloud and on-premises data estate through a single, unified data lake that spans your organization. Once connected, your data is centrally managed through the OneLake catalog—a unified layer for access, governance, security, and discovery. Today, the OneLake catalog is trusted by more than 230,000 organizations worldwide, including 95% of the Fortune 500, and is seamlessly accessible from familiar tools like Microsoft Excel and Microsoft Teams. 

Now, we’re introducing new capabilities that make it even easier to bring all your data into OneLake, connect it to intelligent agents, and manage it with stronger governance and security. 

New mirroring and shortcuts sources for SAP, Microsoft 365, and Azure Databases

We’re excited to introduce new ways to unify your data in OneLake with a zero-ETL approach. Mirroring for PostgreSQL, Cosmos DB, and SQL Server (versions 2016-2022 and 2025) is now generally available. We are also announcing the preview of Mirroring for SAP, powered by SAP Datasphere, which enables seamless data replication from SAP systems into OneLake. This is in addition to our announcement of bidirectional integration with SAP BDC. Whether you’ve adopted SAP BDC or not, you can now access your SAP data in OneLake. We’re also bringing Iceberg support in Snowflake mirroring into general availability. By mirroring these sources, you can eliminate the need for ETL processes and get Delta tables optimized for analytics. Try these mirroring sources today or learn more in the Data Integration Blog.

We are also announcing the preview of shortcuts to SharePoint and OneDrive, allowing you to bring your unstructured productivity data into OneLake without copying files or building custom ETL flows. You can use these unstructured files to train agents or to provide relevant context alongside your structured data. And, as business users make changes to their spreadsheets, documents, and PDFs in SharePoint and OneDrive, the files in OneLake always remain up to date. Try these shortcuts today.

Connect your multi-cloud data estate to agents with Foundry IQ 

Today, Microsoft announced Foundry IQ by Azure AI Search: the next generation of retrieval-augmented generation (RAG). Agents rely on context—Foundry IQ’s knowledge bases deliver high-value context to agents by simplifying access to multiple data sources and making connections across information. You can use the OneLake knowledge source in Foundry IQ to connect agents to multi-cloud sources like AWS S3, on-premises sources, and structured and unstructured data across your data estate—all without creating copies or introducing data sprawl. With knowledge bases in Foundry IQ, your AI developers can build agents that are grounded in curated, governed data from Microsoft 365 Work IQ, Fabric IQ, and the web for more accurate app responses and informed decision-making. Try the Foundry IQ knowledge base today.

Take a look at how you can use shortcuts and mirroring to bring all your data sources together in OneLake and use it to power the next generation of intelligent agents in Foundry:  

https://youtube.com/watch?v=U1xtXqEm6sI

Enhancing governance for admins and data security in the OneLake catalog 

Over the last year, we’ve expanded the OneLake catalog to become the central place to discover, manage, govern, and secure your data in Fabric. Today, we are expanding its capabilities even further.

We are upgrading the OneLake catalog Govern tab with a new preview experience designed for admins. From a centralized dashboard, Fabric admins can now view out-of-the-box insights on domain and capacity inventory, workspace operations, protection status, and curation. They can dive deeper with detailed Power BI reports, take recommended actions to quickly resolve issues, or even chat with Copilot to better understand the insights—all in one place. We are also expanding Copilot’s capabilities to automatically generate summaries for semantic models with a single click, providing quick insights and improving your exploration and decision making.

We are also releasing new ReadWrite permissions for OneLake security, allowing teams to configure folder-level write access within lakehouses so contributors can write data without needing full contributor or higher roles in the workspace. Learn how to start using OneLake security.

Together, all of these enhancements make OneLake not just a data lake, but a strategic control plane for enterprise data—curated, connected, and ready for AI. Whether you’re building agents, dashboards, or operational workflows, OneLake helps ensure your data is always where you need it, when you need it, and in the format that drives action. 

Confidently deploy and manage the Fabric platform with new network security features and capacity management tools

As you scale your data operations with Fabric, reliability and security are non-negotiable. With that in mind, we are announcing new capabilities designed to help you maintain uninterrupted performance during peak demand and uncompromising protection for sensitive data. 

Expanded network security controls for your Fabric workloads 

On the security front, Outbound Access Protection (OAP)—which allows you to restrict outbound connections to only approved endpoints—is being extended to cover dataflows, data pipelines, and OneLake shortcuts, in addition to the recently announced coverage for Fabric data warehouses and SQL Analytics Endpoints. While these extensions will be in preview in early 2026, OAP support for Spark and SQL Analytics Endpoints is already generally available. Coming soon, we are also releasing a Tenant API for OAP, giving tenant admins the ability to see which workspaces have OAP enabled.

We also recently released Customer-Managed Keys into general availability, empowering organizations to encrypt their data using their own keys. Now we are extending Customer-Managed Keys to support keys stored in Azure Key Vaults deployed behind a firewall and to support use with SQL databases in Fabric, both now in preview.

New Fabric capacity tools to help you optimize costs and avoid throttling  

To help you gain control over the jobs running on your Fabric capacities, we are expanding surge protection and introducing a new tool called Fabric capacity overage—both of which will be released into preview in Q1 2026—and adding Fabric capacity events in the Real-Time hub. First, surge protection will now let you set limits on specific workspace activity to protect your capacities from unexpected surges from non-critical workspaces.  

We are also releasing Fabric capacity overage, which admins can turn on for specific capacities, allowing them to automatically pay for excess consumption and avoid throttling whenever high-traffic periods occur. Rather than over-provisioning for rare spikes, you can right-size your capacity for typical usage and enable overage only when needed. Admins can even set a 24-hour limit so you don’t break your budget, and the feature can be toggled on or off in seconds. These tools are designed to work together to help you prevent overuse and maintain smooth, uninterrupted operations even during peak demand.
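The trade-off behind overage can be sketched with back-of-the-envelope arithmetic. The snippet below is illustrative only — the numbers, the hourly granularity, and the cap mechanics are invented and are not Fabric's billing model — but it shows why a daily cap lets you right-size for typical load: usage above provisioned capacity is billed as overage up to the cap, and anything beyond the cap is throttled rather than charged.

```python
# Illustrative arithmetic for a capped-overage model. Not Fabric's
# actual billing or throttling behavior; numbers are made up.

def settle_day(hourly_usage, provisioned, daily_cap):
    """Return (billed_overage, throttled) for one day of hourly usage."""
    # Total consumption above the provisioned capacity across the day.
    overage = sum(max(u - provisioned, 0.0) for u in hourly_usage)
    # Overage is billed only up to the admin-set daily cap...
    billed = min(overage, daily_cap)
    # ...and the remainder is throttled instead of charged.
    return billed, overage - billed

usage = [80, 120, 150, 90]          # capacity units consumed per hour
billed, throttled = settle_day(usage, provisioned=100.0, daily_cap=60.0)
print(billed, throttled)            # 60.0 10.0
```

In this hypothetical day, the spikes exceed capacity by 70 units in total: 60 are absorbed as paid overage and 10 are throttled, instead of provisioning 150 units around the clock.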

Finally, we’re excited to announce we are adding Fabric capacity events in the Real-Time hub. It’s a highly requested feature now in preview that provides the ability to analyze capacity events in real-time and respond appropriately. Fabric capacity events will provide real-time data for two event types: Capacity Summary (smoothed metrics every 30 seconds) and Capacity State (instant updates on changes like pauses or throttling).  

See more Microsoft Fabric innovation  

At Ignite, we announced several transformative enhancements to Microsoft Fabric that will help organizations unify their data estates and power the next generation of AI apps and agents. We’re introducing the preview of Fabric IQ, a new workload in Fabric that unifies your data with operational systems under a semantic model of business entities and their relationships—providing a live, connected view of the enterprise. We are announcing the general availability of SQL and Cosmos databases in Fabric, giving developers world-class database engines that provision in seconds—and deliver a simple, autonomous, secure, and AI-optimized foundation for modern applications.

We are also expanding interoperability with SAP, Salesforce, Azure Databricks, and Snowflake to enable bi-directional, zero-copy data sharing between their platforms and Fabric. Finally, we are weaving AI into the places you work every day with enhancements to Fabric data agents, Copilot in Power BI, and Fabric operations agents. To dive deeper into these milestone innovations, read the Azure Data announcement blog from Arun Ulag, President of Azure Data. 

You can also learn more about everything else we are bringing to Fabric by reading the Fabric November 2025 Feature summary blog, the Power BI November feature summary blog, or by exploring the latest blogs on the Fabric Updates channel.  

Join us at FabCon Atlanta  

Looking for a dedicated event on Microsoft Fabric? Join us at the 3rd annual Fabric Community Conference this year in Atlanta, Georgia from March 16-20, 2026, for even more in-depth sessions, cutting-edge demos and announcements, community networking, and everything else you love about FabCon. And we are ecstatic that SQLCon 2026 is now officially part of the Microsoft Fabric Community Conference, bringing together two powerhouse communities in SQL and Fabric.  

You can register today for either event or get full access to both. And use code MSCATL for a $200 discount on top of current Early Access pricing!

Challenge yourself and get certified in Microsoft Fabric 

Unify your data, unlock real-time insights, and kickstart your journey to becoming a certified Microsoft Fabric Analytics Engineer—join the DP-600 Skills Challenge today.  

Build smarter pipelines, unify your data estate, and take the next step toward DP-700 certification—start the Microsoft Fabric Data Engineer Skills Challenge today.  

Explore additional resources for Microsoft Fabric 

Read additional blogs by industry-leading partners: 

The post What’s new in OneLake and the Fabric platform: more sources, security, and capacity tooling appeared first on Microsoft Fabric Blog.

]]>
Microsoft and Databricks: Advancing Openness and Interoperability with OneLake https://blog.fabric.microsoft.com/en-us/blog/microsoft-and-databricks-advancing-openness-and-interoperability-with-onelake?ft=All Tue, 18 Nov 2025 15:50:00 +0000 For nearly a decade, Microsoft and Databricks have closely partnered with the goal of empowering organizations to unlock the value of their data.

The post Microsoft and Databricks: Advancing Openness and Interoperability with OneLake appeared first on Microsoft Fabric Blog.

]]>
Co-authored by Adam Conway, SVP Products at Databricks, and Arun Ulag, President of Microsoft Azure Data

For nearly a decade, Microsoft and Databricks have closely partnered with the goal of empowering organizations to unlock the value of their data. Together, we’ve delivered solutions that combine the flexibility of the lakehouse architecture with the scale and security of Azure. Today, we’re taking that collaboration even further by deepening integration between Azure Databricks and Microsoft OneLake.

Delivering on the promise of an open data lakehouse

The current pace of technological innovation requires data estates to be more flexible than ever before. Seamless interoperability between platforms is no longer an ideal goal but a technical imperative. Organizations need the freedom to choose the right tools for their data project without worrying about data silos or complex integrations. That’s why Databricks pioneered the open lakehouse architecture, and why Microsoft built OneLake—an open data lake designed to serve as the foundation for data and AI.

Together, we’re making this vision real:

  • Mirroring data into OneLake – already generally available
    Earlier this year we released Azure Databricks mirroring. Customers can already mirror Databricks data into OneLake through Unity Catalog, ensuring that all data—including the highest-performance tables managed by Azure Databricks—is instantly available across Microsoft Fabric workloads. Both platforms can work over the same copy of data stored in Delta Lake format with no data movement.
  • Reading data from OneLake – coming by year-end
    While Databricks-managed data is available in OneLake, reading OneLake data from Databricks will soon be enabled via the recent OneLake catalog API. By the end of 2025, Azure Databricks will enable native reading from OneLake through Unity Catalog in preview, allowing users to seamlessly access data stored in OneLake without duplication or complex pipelines. The data can come from any Fabric workload, which means faster analytics and lower costs.
Image of "creating a new catalog" UI in Azure Databricks with the OneLake connection selected
Connecting to OneLake data in Azure Databricks
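Because OneLake exposes an Azure Data Lake Storage Gen2-compatible endpoint, tables stored there can also be addressed by path from any engine that speaks the ABFS protocol. A minimal sketch of how such a path is composed—the workspace, lakehouse, and table names here are hypothetical examples, not values from this post:

```python
# Compose an ABFSS path to a Delta table via OneLake's ADLS Gen2-compatible
# endpoint. The workspace/item/table names below are hypothetical examples.
ONELAKE_HOST = "onelake.dfs.fabric.microsoft.com"

def onelake_table_path(workspace: str, item: str, table: str) -> str:
    # Pattern: abfss://<workspace>@<host>/<item>/Tables/<table>
    return f"abfss://{workspace}@{ONELAKE_HOST}/{item}/Tables/{table}"

path = onelake_table_path("Analytics", "Sales.Lakehouse", "orders")
print(path)
```

An engine such as Spark, or an open-source Delta reader, could point at a path like this to read the single shared copy, subject to Microsoft Entra authentication.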

Writing and storing data natively in OneLake – on the horizon

Looking ahead, Azure Databricks will support writing and storing data directly in OneLake, without any additional storage resources to manage. This will deliver additional simplicity and interoperability for customers building on the lakehouse architecture. We’ll share timelines for this capability at FabCon in March 2026.

Why this matters for customers

These new integrations go beyond technical progress—they underscore our shared commitment to openness, flexibility, and empowering customers with choice. Together, Microsoft and Databricks are helping organizations unlock more value from their data with a seamless, unified foundation across both platforms.

With these integrations, customers can:

  • Choose the right engine and tool for the job at hand: Gain full flexibility to pick the engine, tool, or platform you want for every task—based on your goals, workloads, or team expertise—without compromise.
  • Bring data directly into your productivity apps: The OneLake catalog is now woven into Microsoft 365 experiences such as Teams, Excel, and Copilot Studio. This means business users can easily discover, access, and apply insights right where they work. For example, Teams users can enrich chats, channels, and meetings with data-driven context, with any data governed by OneLake or Unity Catalog.
  • Scale resources efficiently and focus on innovation: With a single, shared copy of data across Microsoft Fabric and Azure Databricks, you can eliminate costly duplication, streamline governance, and redirect time and investment toward innovation instead of data movement.
  • Deliver richer AI and analytics outcomes: Whether you’re building copilots in Microsoft Copilot Studio and AI Foundry, building Agents in Azure Databricks, or visualizing data in Power BI, you can unify and integrate data across Azure Databricks and Microsoft solutions—without ever moving it. Likewise, data in OneLake can seamlessly flow into Azure Databricks to power advanced AI, analytics, and data-sharing scenarios.

A shared commitment to innovation

Our collaboration is built on trust and a shared belief that openness drives innovation. By bringing Azure Databricks and OneLake closer together, we’re giving customers the freedom to build modern data architectures without compromise.

We’re excited about what’s next—and we’re just getting started.

The post Microsoft and Databricks: Advancing Openness and Interoperability with OneLake appeared first on Microsoft Fabric Blog.

]]>
Microsoft and Snowflake: Simplified interoperability with no data movement https://blog.fabric.microsoft.com/en-us/blog/microsoft-and-snowflake-simplified-interoperability-with-no-data-movement?ft=All Tue, 18 Nov 2025 15:45:00 +0000 Microsoft and Snowflake have been working side by side to make open, cross-platform integration effortless.

The post Microsoft and Snowflake: Simplified interoperability with no data movement appeared first on Microsoft Fabric Blog.

]]>
Data today lives everywhere—across apps, services, and clouds. Every department has its own analytics stack, AI tools, and preferences, and what used to be a manageable data landscape is now a distributed web of systems. But now, in the era of AI, bringing this data together has never been more important as we build agentic systems that need access to data across the organization. True interoperability—where platforms connect seamlessly, and data doesn’t have to move—is quickly becoming the key to unlocking value at scale.

That’s why Microsoft and Snowflake have been working side by side to make open, cross-platform integration effortless. Over the past 18 months, our collaboration has focused on one shared goal: helping customers connect Snowflake and Microsoft OneLake to access, analyze, and share data without duplication or complexity.

Built on open standards like Apache Iceberg and Parquet, this collaboration lets organizations use a single copy of data across both platforms and choose the right tool for every task. The result is a more flexible, efficient, and unified data experience—no matter where your data originates.

To learn more about how this interoperability works, check out our recent Microsoft and Snowflake: Delivering on the promise of openness and interoperability blog post.

Microsoft Ignite: Announcing enhanced interoperability between Microsoft and Snowflake

We’re excited to share new advancements that make the Microsoft–Snowflake integration even easier to use and more powerful.

We’ve added new, intuitive user interface (UI) experiences in both platforms to simplify setup and use. OneLake is adding a Snowflake-branded item in preview, allowing users to seamlessly access all Snowflake data within Microsoft Fabric. This means you can use any Fabric workload—analytics, AI, or visualization—directly on Snowflake data, without extra configuration.

Snowflake is also introducing new UI capabilities designed to let OneLake serve as the native storage location for your Snowflake data. This means all of your data can reside in OneLake, while taking advantage of Snowflake’s powerful engines.

Take a look at this new UI in action below and get started today.

https://youtube.com/watch?v=dmwE6B5k6oE%3Ffeature%3Doembed

How does this add to Microsoft’s existing interoperability?

We’ve already been able to deliver bidirectional data sharing between Snowflake and OneLake, for seamless interoperability between our platforms without data duplication. Customers can already write Snowflake tables directly to OneLake, access Apache Iceberg tables using OneLake shortcuts, and read OneLake tables from Snowflake—all without duplication or complex setup.

What we’ve already delivered:

  • General Availability
    • Automatic translation of Iceberg metadata to Delta Lake metadata for use with all Microsoft Fabric engines.
    • Shortcut Snowflake Iceberg data (in Azure, Amazon S3, or GCS) directly into OneLake.
  • Preview
    • Native storage of Snowflake Iceberg data in OneLake.
    • Automatic conversion of Fabric data into Iceberg format for seamless use in Snowflake.
    • New OneLake table APIs that work with Snowflake’s catalog-linked database feature.

And with the new UI now rolling out, we are making the existing interoperability easier to implement for your teams.
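The “automatic translation of Iceberg metadata to Delta Lake metadata” listed above is possible because both formats are metadata layers over the same Parquet data files: translating a table means rewriting its metadata, not copying data. A deliberately simplified illustration of that idea (toy structures, not either format’s actual specification):

```python
# Simplified: derive a Delta-style transaction log from an Iceberg-style
# snapshot file list. Both formats describe the SAME Parquet files, so no
# data is copied. These structures are heavily simplified for illustration.
def iceberg_snapshot_to_delta_log(snapshot: dict) -> list[dict]:
    return [
        {"add": {"path": f["path"], "size": f["size_bytes"]}}
        for f in snapshot["data_files"]
    ]

snapshot = {
    "snapshot_id": 42,
    "data_files": [
        {"path": "part-000.parquet", "size_bytes": 1024},
        {"path": "part-001.parquet", "size_bytes": 2048},
    ],
}
log = iceberg_snapshot_to_delta_log(snapshot)
print(log)  # Delta 'add' actions pointing at the original Parquet files
```

The real translation also carries schema, partitioning, and transaction history, but the key design point is the same: the Parquet files never move.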

Looking ahead to unified, cross-platform data access and management

Looking ahead to 2026, our goal is to make all these capabilities generally available, so that even your most mission-critical workloads can take advantage of unified, cross-platform data access and management.

But beyond our existing interoperability, we are committed to continuing to remove barriers between our platforms, so you have full optionality for your data projects.

Still have questions about the integration?

Watch the recent Ask me Anything: Fabric and Snowflake Interoperability webinar where experts from Microsoft OneLake and Snowflake answered top questions on how to most effectively use these platforms together.

The post Microsoft and Snowflake: Simplified interoperability with no data movement appeared first on Microsoft Fabric Blog.

]]>
OneLake: your foundation for an AI-ready data estate https://blog.fabric.microsoft.com/en-US/blog/onelake-your-foundation-for-an-ai-ready-data-estate/ Fri, 05 Sep 2025 16:20:00 +0000 Discover why OneLake is the ideal data lake to unify your data estate and help you create AI applications.

The post OneLake: your foundation for an AI-ready data estate appeared first on Microsoft Fabric Blog.

]]>
For years, organizations have aspired to build a culture where data isn’t just accessible—it’s woven into every decision. And now with generative AI, AI assistants are making it easier than ever for business users to explore data, quickly answer their pressing data questions, and even build custom agents on their data. And yet, for many, the promise of a truly data-driven culture remains elusive. The typical data estate has grown organically over time, with many different, team-specific data tools and services. These varied layers and silos lead to data sprawl and duplication, access issues, and even data exposure risks—making it hard for data teams and end users to access, find, and use the data they need to unlock insights.

A decade ago, we faced the same issues with document sharing. Sharing documents with your coworkers meant emailing attachments or managing files on local network drives. Then, cloud services like OneDrive and Dropbox transformed document sharing and collaboration by providing a single, accessible home for files. In the data realm, a similar transformation is happening now with OneLake.

Instead of the patchwork of storage accounts and ad-hoc data marts scattered across departments, organizations need a single, unified access point for all their data. Now with Microsoft OneLake, we have the solution. With OneLake, you can access your entire multi-cloud data estate from a single data lake that spans the entire organization. Similar to how OneDrive is wired into all your Microsoft 365 applications and provides a convenient storage location, OneLake acts as the central, accessible location for comprehensive data access and management.​

In this blog post, we’ll explore why OneLake is the ideal data lake to unify your data estate and help you create AI applications, focusing on five key pillars: breaking down silos, connecting to all your data, working from a single data copy, discovering and managing in a data catalog, and sharing data with granular security.

Breaking down silos with a unified data foundation

Traditionally, every department, team, and even project in an organization creates its own siloed data stores to maintain data ownership and granular control over security and compliance. The result, however, is a fragmented patchwork of ‘data islands’. This siloed system can’t keep up with fast-paced data projects, especially as frontier firms start deploying agents across the organization that need access to cross-department data.

Instead, you can deploy OneLake as the central data access point for the entire organization. Every Microsoft Fabric tenant comes with just a single OneLake instance, with no additional infrastructure to manage. Every department, team, and project can store or connect their data to a single unified data lake and then use a system of Fabric domains, sub-domains, and workspaces—each with their own administrator—to organize their data into a logical data mesh. This system maintains data ownership and allows for federated governance while ensuring authorized users can discover and use data from other domains without friction. Watch this video to see how you can set up your own logical data mesh in OneLake:

https://youtube.com/watch?v=OFBL2PcVqQU%3Ffeature%3Doembed

By consolidating data access in one place, OneLake dramatically simplifies data sharing and integration. When a data project requires data from multiple departments, users can query and combine data from multiple domains directly in OneLake rather than requesting exports or setting up complex pipeline jobs. And OneLake’s reach isn’t limited to Azure: it can virtualize data from your other clouds, which then appears just like any other data item in OneLake.
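The domain, sub-domain, and workspace organization described above can be pictured as a single tree with federated administration. A purely illustrative sketch—the names and structure are hypothetical, not a Fabric API:

```python
# Illustrative model of a OneLake logical data mesh: one tenant,
# one lake, federated domains that each keep their own admin.
# Hypothetical structure -- not an actual Fabric API.
tenant = {
    "onelake": {  # every Fabric tenant has exactly one OneLake instance
        "Sales": {  # domain with its own administrator
            "admin": "dana@contoso.com",
            "workspaces": ["EU Retail", "NA Retail"],
        },
        "Finance": {
            "admin": "lee@contoso.com",
            "workspaces": ["Forecasting"],
        },
    }
}

def workspaces_in(tenant: dict) -> list[str]:
    """Flatten every workspace across all domains: one discoverable lake."""
    return [
        ws
        for domain in tenant["onelake"].values()
        for ws in domain["workspaces"]
    ]

print(workspaces_in(tenant))  # all workspaces, reachable from one place
```

Each domain keeps its own administrator and governance, yet every workspace remains discoverable from the same lake—the federated-ownership, unified-access pattern the text describes.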

Connect to any data, anywhere without duplication

With your data mesh in OneLake organized, you then have the tools to connect to all of the data sources in your data estate. Most data estates naturally span multiple clouds, accounts, databases, domains, and engines, and data professionals spend much of their time connecting data sources to incompatible platforms or refreshing out-of-date data with complex pipelines. With OneLake, we’ve simplified how you bring data in through a zero-copy, zero-ETL approach built on two key Fabric capabilities: shortcuts and mirroring.

OneLake shortcuts enable your data teams to virtualize data in OneLake without having to move and duplicate it. They act essentially as metadata pointers, similar to a shortcut on your desktop. This capability is particularly adept at helping you break down silos across your data estate and even between OneLake domains. You can create shortcuts to data that lives in another domain or workspace while ensuring only one copy of the data exists. Shortcuts even preserve data ownership and governance across domains, meaning if you update your data item or restrict access to it, all users who reach the data through a shortcut will instantly see the change. With shortcut transformations, you can even apply automatic changes to the data, like converting the data format or removing PII. We have shortcuts available for OneLake, Azure Data Lake Storage, Azure Blob Storage, Amazon S3 and S3-compatible sources, Iceberg-compatible sources, Microsoft Dataverse, on-premises sources, and more on the way.
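The “metadata pointer” behavior of shortcuts can be shown in a few lines: two teams read through shortcuts, only one physical copy exists, and a change at the source is immediately visible to both. This is a conceptual sketch only, not the shortcut API:

```python
# One physical copy of the data, referenced from two workspaces via
# shortcuts (metadata pointers). Conceptual sketch only.
storage = {"sales/orders": ["row1", "row2"]}  # the single physical copy

class Shortcut:
    def __init__(self, target: str):
        self.target = target          # a pointer -- no data is copied
    def read(self) -> list:
        return storage[self.target]   # resolves to the one copy

marketing_view = Shortcut("sales/orders")
finance_view = Shortcut("sales/orders")

storage["sales/orders"].append("row3")               # owner updates the source
assert marketing_view.read() == finance_view.read()  # both see the change
print(marketing_view.read())
```

Deleting or restricting the source would likewise affect every shortcut at once, which is how ownership and governance stay with the original data item.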

You can also use mirroring, a no-ETL experience to add proprietary databases or data warehouses to Fabric. Depending on the data source, mirroring can either replicate the entire database or just the metadata in OneLake in Delta Parquet tables and keep the data in sync in near real time. We currently have Mirroring enabled for Azure Cosmos DB, Azure SQL DB, Azure SQL MI, Azure PostgreSQL, Azure Databricks Unity Catalog, Snowflake, and many more sources coming soon including SQL Server, SQL Server 2025, Oracle, and Dataverse. With Open Mirroring, you can even create custom mirroring experiences for your own applications.
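Conceptually, mirroring keeps the OneLake replica current by continuously applying the source database’s change feed rather than re-running bulk ETL jobs. A toy sketch of that idea—illustrative only; the actual service writes Delta Parquet tables in OneLake:

```python
# Toy change-data-capture loop: apply inserts/updates/deletes from a
# source change feed to a keyed replica. Illustrative only -- the real
# mirroring service maintains Delta Parquet tables in near real time.
def apply_changes(replica: dict, change_feed: list[dict]) -> dict:
    for change in change_feed:
        op, key = change["op"], change["key"]
        if op in ("insert", "update"):
            replica[key] = change["row"]
        elif op == "delete":
            replica.pop(key, None)
    return replica

replica = {1: {"status": "open"}}
feed = [
    {"op": "update", "key": 1, "row": {"status": "shipped"}},
    {"op": "insert", "key": 2, "row": {"status": "open"}},
    {"op": "delete", "key": 1},
]
print(apply_changes(replica, feed))  # replica state after the feed
```

Because only the deltas flow, the replica stays close to the source without the full reloads a scheduled pipeline would require.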

Check out this quick demo of these features in action:

https://youtube.com/watch?v=jjNlksIlDnE%3Ffeature%3Doembed

The benefits of these innovative, no-ETL options are massive. No more cumbersome ETL pipelines, no more sprawling, out-of-date copies of the data, and no more data silos across every part of your business. Once your data is connected to OneLake, you only need a single copy across every engine.

Collaborate on a single copy of data with open formats

When we built OneLake and the Fabric engines, we designed them to support open data formats, standardizing on both the Delta Parquet and Apache Iceberg formats. This commitment to common open data formats means that you load your data into OneLake once and all the Fabric engines can operate on the same data without separately ingesting it. Having only one copy of the data means teams can collaborate on a single source of truth rather than fragmenting information into endless copies at each stage of the analytics journey.

Creating multiple copies of the same data not only wastes storage space but also leads to version mismatches. By eliminating redundant copies, OneLake ensures everyone is working from the most up-to-date version of the data without refresh delays or manual syncs. Instead of marketing and finance creating separate copies of a lakehouse with customer revenue data, they can work from the same data with different metadata, filters, and BI reports added. IT teams can spend less time maintaining complex pipelines, and admins have only one copy to manage with far easier audit trails to follow. Moreover, data professionals can easily pick the engine they prefer, whether it’s T-SQL or Spark, knowing all the engines are optimized for Delta Parquet and will work from the same copy.

Everyone operates on the same single version of truth, from a data scientist training a model to an executive reviewing a dashboard, driving a more aligned and efficient organization.

Discover, manage, and govern in a complete catalog

Minimizing data duplication and sprawl also requires ensuring the right people can find and explore the right data. The benefits of a data culture have been clear for years, but with generative AI the potential business impact is increasing exponentially. Frontier firms are already using AI assistants and building custom agents to transform how their teams interact with data, from technical professionals creating data items and drafting code to business users quickly answering their pressing data questions. But crucially, this culture requires that everyone has the ability to discover high-quality data.

That’s where the OneLake catalog comes in. We’ve designed the OneLake catalog to be the single place for data professionals and business users to discover, manage, and govern the data they own and can access across OneLake. With over 30M monthly active Power BI and Fabric users, it’s already the default source of data and insights for many business users. The OneLake catalog comes with two tabs, Explore and Govern, that can help all Fabric users discover and manage trusted data, as well as provide governance insights for data owners.

Instead of searching through a maze of databases or SharePoint sites, users can use the Explore tab and narrow their search by domain, workspace, item type, endorsements, and more to find exactly what they need in seconds. You can then dive deep into a data item to see its description, owner, schema, lineage, and usage metrics. We’ve also integrated the OneLake catalog everywhere your people work, including Microsoft Teams, Microsoft Excel, Microsoft Copilot Studio, and hundreds of other scenarios—bringing data access to the 350 million Microsoft 365 users.

In the Govern tab, data owners get out-of-the-box insights and recommended actions reflecting the curation and quality of their data, based on sensitivity label coverage, tagging, endorsements, data location, and more.

Check out the full demo of the OneLake catalog:

https://youtube.com/watch?v=CAIB9kv5alw%3Ffeature%3Doembed

Share broadly with granular security and control

However, while broad access to data is critical for empowering the business, security leaders know that cyberattacks are becoming more sophisticated, and the average cost of a single breach is nearing $10 million. Traditionally, the response is to lock down access to only trusted users, but our research tells us that 63% of data breaches stem from inadvertent, negligent, or malicious insiders. The reality is that people will try to work around lockdown controls using tools like Excel, which are harder to govern, less transparent, and harder to maintain.

That’s why we’ve built OneLake security—an experience designed to help you share data across your organization without exposing sensitive information. With OneLake security, you can create roles to set permissions at the data item, folder, table, or even row/column level, enabling you to share a data item while restricting access to any sensitive data it may contain. These permissions are then automatically enforced across all analytics experiences, so whether a user is querying data through a Spark notebook, viewing it in a Power BI report, or exploring it through a Fabric data agent, OneLake’s security model ensures they see only what they’re permitted to see.
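A role that grants access to a table while hiding a column and filtering rows can be pictured like this, with the same rule enforced no matter which engine asks. This is a conceptual sketch only, not the OneLake security API, and the names are made up:

```python
# Apply one security role -- column masking plus a row filter -- uniformly,
# regardless of which engine reads the data. Conceptual sketch only.
rows = [
    {"region": "EU", "customer": "A", "ssn": "111"},
    {"region": "NA", "customer": "B", "ssn": "222"},
]

role = {
    "allowed_columns": {"region", "customer"},    # column-level security
    "row_filter": lambda r: r["region"] == "EU",  # row-level security
}

def read_as(role: dict, rows: list[dict]) -> list[dict]:
    """Every access path (SQL, Spark, Power BI...) goes through one check."""
    return [
        {k: v for k, v in row.items() if k in role["allowed_columns"]}
        for row in rows
        if role["row_filter"](row)
    ]

print(read_as(role, rows))  # only EU rows, with the 'ssn' column removed
```

Defining the rule once at the data layer, rather than per engine, is what keeps the permissions consistent across every analytics experience.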

Check out this visual to see how OneLake security works:

This unified approach to security means users no longer have to maintain separate permissions across different engines. It also means the original data owners always maintain control over who can access the data source, even if the data is surfaced through a shortcut in another lakehouse or workspace owned by someone else. The end result is that data sharing can be done safely, knowing you have fine-grained controls in place.

Check out this full overview video:

https://youtube.com/watch?v=AakV-3RtmuI%3Ffeature%3Doembed

On top of this built-in security, you can also leverage the same security features from tools like Microsoft 365 with Purview Information Protection sensitivity labels and Purview Data Loss Prevention (DLP) policies. Technical and non-technical users alike can apply sensitivity labels to classify their data items, automatically restricting access based on the data item’s sensitivity even when the data is exported to other tools like Microsoft Excel. DLP policies will also automatically detect when sensitive data is uploaded to unauthorized destinations, alerting users and offering guidance to mitigate risks.

In short, OneLake’s security model means you get the benefit of broad data accessibility and self-service analytics without sacrificing oversight and control. Together, these capabilities provide a unified, enterprise-grade framework for securing data, enabling responsible AI use, and ensuring compliance across the OneLake environment.

Building data-driven agents with curated data from OneLake

Creating custom AI experiences requires data—lots of it. Data is the foundation on which AI is built, and the simple fact is AI is only as good as the data it’s based on. For generative AI solutions to be as accurate as possible, they need to be built on clean, well-organized data. With your data in OneLake, you can use Fabric’s various workloads to make the data AI-ready. Fabric has tools for data integration and engineering, data warehousing, data science, real-time analytics, data modeling and visualization, and even has native, industry-specific, and partner-created workloads to help you accelerate your data projects.

You can then directly connect your data to AI platforms like Azure AI Foundry to build and scale data-driven GenAI apps. We’ve built native integration between OneLake and Azure AI Foundry to make this as seamless as possible. The integration between Azure AI Foundry and OneLake is built on OneLake shortcuts, helping you work with your structured and unstructured data from OneLake in Azure AI Foundry without creating copies and adding more data sprawl. OneLake also directly integrates with Azure AI Search, which can store, index, and retrieve data, including vector embeddings, from your data sources including OneLake. 

https://youtube.com/watch?v=pDy-WLHmSUc%3Ffeature%3Doembed

Finally, you can ground your Azure AI Agent’s responses with data from Fabric using Fabric data agents to unlock powerful data analysis capabilities. Fabric data agents are AI-powered assistants that can learn, adapt, and deliver insights, allowing users to interact with the data through chat. With out-of-the-box authorization, this integration simplifies access to enterprise data in Fabric while maintaining robust security, ensuring proper access control and enterprise-grade protection.

Check out this full demo:

https://youtube.com/watch?v=SBsErGew1yE%3Ffeature%3Doembed

Conclusion: A unified data lake for your entire organization

Microsoft OneLake is more than just a new tool—it’s the strategic centerpiece of a data estate that can reshape how an organization harnesses data. By unifying data in one place and breaking down silos, it can become the single point for all your users to discover and explore your organization’s data, organized into a logical data mesh. With shortcuts and mirroring in OneLake, you can unify all of your multi-cloud and on-premises sources and enable your people to work from a single copy of data—meaning fewer copies of data, better collaboration between your teams, and more streamlined analysis. By enabling collaboration on a single copy of data, OneLake ensures every decision is based on the same facts, eliminating version control and governance nightmares.

Organizations like Lumen, IFS, NTT Data, and the Chalhoub Group have all adopted Microsoft OneLake and Microsoft Fabric to unify ingestion, storage, and analytics in one platform. Using OneLake shortcuts, mirroring, Direct Lake mode, and more, Lumen—a leader in enterprise connectivity—cut 10,000 hours of manual effort. “We used to spend up to six hours a day copying data into SQL servers,” says Chad Hollingsworth, Cloud Architect at Lumen. “Now it’s all streamlined… OneLake allowed us to ingest once and use anywhere.” IFS, a leading provider of enterprise software, faced high costs and complexity from a fragmented data architecture. The company unified its data estate on Microsoft OneLake, increased data access from 20% to more than 85%, cut costs, and accelerated insights. “The primary challenge we faced was the slow pace of development caused by managing separate extract, transform, load (ETL) processes and reporting environments,” said Ligy Terrance, Director of Data Analytics and Integration at IFS. “With Microsoft Fabric, we now have a unified platform that brings all these layers together… Having everything in one place has eliminated integration bottlenecks and made it much easier to deliver insights quickly and efficiently.”

For organizations trying to manage their ever-growing data estate, the implications are significant. OneLake’s approach translates to less data sprawl and lower total costs, less time spent by IT maintaining complex data pipelines and by users looking for data, and faster time to insights for data professionals. With its robust security and governance story, you can help ensure your data is secure while empowering your users with decision-changing data.

Learn more about how OneLake can work with your data estate

Join us for a series of blog posts over the next few months as we explore why Microsoft OneLake is the ideal data platform for the entire data estate. We’ll walk you through how OneLake integrates with each of these platforms, highlight top opportunities and use cases, and feature customers who’ve successfully transformed their existing solutions with OneLake. Check back on the Fabric blog site to find the latest posts, or bookmark this blog and we will update the list below with links to the relevant posts.

We are planning the following topics:

  1. OneLake and Microsoft Foundry: Build data-driven agents with curated data from OneLake
  2. OneLake and Snowflake: Snowflake and Microsoft announce expansion of their partnership
  3. OneLake catalog overview: OneLake catalog: The trusted catalog for organizations worldwide
  4. OneLake and Azure Databases: Coming soon
  5. OneLake and Azure Databricks: Microsoft and Databricks: Advancing Openness and Interoperability with OneLake
  6. OneLake and Azure Data Factory: Coming soon
  7. OneLake and Microsoft 365: Coming soon
  8. OneLake and Microsoft Copilot Studio: Coming soon
  9. OneLake and open-source solutions: Coming soon

The post OneLake: your foundation for an AI-ready data estate appeared first on Microsoft Fabric Blog.

]]>
Sessions you won’t want to miss at FabCon Vienna http://approjects.co.za/?big=en-us/microsoft-fabric/blog/2025/07/28/sessions-you-wont-want-to-miss-at-fabcon-vienna/ Mon, 28 Jul 2025 15:00:00 +0000 From September 15 to 18, FabCon Vienna will feature over 130 sessions, 150 expert speakers, 10 hands-on workshops, and 45 exhibitors.

The post Sessions you won’t want to miss at FabCon Vienna appeared first on Microsoft Fabric Blog.

]]>
Following last year’s sold-out debut in Stockholm, the Microsoft Fabric Community Conference is returning to Europe in Vienna, Austria! From September 15 to 18, FabCon Vienna will feature over 130 sessions, 150 expert speakers, 10 hands-on workshops, and 45 exhibitors. FabCon Vienna is your opportunity to dive deep into the latest Microsoft Fabric capabilities, hear directly from Microsoft product leaders and community experts, explore new features, and gain practical insights you can bring back to your organization.

This year’s agenda is packed with sessions tailored to every stage of your Fabric journey. Explore key sessions across Power BI, AI, databases, security and governance, and Microsoft OneLake, and get a first look at the newest features and what’s coming next on the roadmap. Whether you’re looking to sharpen your skills, dive into data stewardship best practices, or get started with Microsoft Copilot in Fabric, you’ll find sessions designed to meet you where you are and help you go further.

To make the most of your time at FabCon Vienna, look through our list of sessions you won’t want to miss. We also highly recommend attending keynotes from the teams building Microsoft Fabric. These sessions offer strategic insights into what’s new, what’s coming, and how to maximize your experience at the event.

Fabric keynote sessions

Power BI

Chat with your data through AI-powered search and analytics

Session speakers: Lada Hill and Eun Hee Kim

Discover how Microsoft Fabric Copilot is changing the way users explore data in Power BI. This session dives into the Chat with your Data experience, showing how to ask smarter questions, uncover insights faster, and get more value from your reports. Hear from the Power BI product team on how to optimize your prompts and make the most of Copilot’s capabilities. Plus, get a sneak peek at upcoming features that will take AI-powered analytics even further.


Power BI DataViz World Championship – European Edition

Join us for a high-energy, live competition where four standout data creators go head-to-head in a timed Power BI visualization challenge. Using the same dataset, each competitor will build compelling reports that showcase creativity, storytelling, and technical skill. A panel of celebrity judges will evaluate the results and crown the FabCon Viz Champion, with the winner’s work featured across the community. Whether you’re a Power BI pro or just love great data stories, this is your front-row seat to inspiration, innovation, and a little friendly competition.

The latest in AI

Fabric and Azure AI Foundry playing nicely together

Session speaker: Grímur Sæmundsson

Explore how Microsoft Fabric and Azure AI Foundry work together to streamline employee assessments in the public sector. This session walks through a real-world solution in which Fabric handles data processing and Azure OpenAI enhances analysis and feedback generation. Learn how retrieval-augmented generation is used to embed guidelines, and see Notebooks, Semantic Link, and PySpark in action to retrieve and prepare data. You’ll walk away with practical insights into using LLMs and Fabric to automate complex evaluation workflows.

Databases

SQL Server 2025: The AI-ready enterprise Database Connected with Microsoft Fabric

Session speakers: Bob Ward and Uros Milanovic

Discover what’s new in SQL Server 2025—now with built-in AI, enhanced performance, and deep integration with Azure and Microsoft Fabric. Learn how SQL enables AI applications both on-premises and in the cloud, with consistent capabilities from ground to cloud to Fabric. This session covers key features designed for modern database developers, making it easier than ever to build intelligent, connected apps.

Real-Time Intelligence

Unlock the power of Digital Twin solutions with Real-Time Intelligence

Session speakers: Chafia Aouissi and Jomit Vaghela

Explore how Microsoft Fabric’s Digital Twin Builder helps you design AI-ready digital twin solutions using real-time data, ontology management, and contextualization. Learn how to map, model, and analyze real-world systems for deeper insights, predictive maintenance, and smarter decision-making. Whether you’re just getting started or looking to scale, this session offers practical guidance on building and optimizing digital twins with Fabric Real-Time Intelligence.

Data warehouse and data engineering

Accelerating Fabric Migration: New Assistant Tools for Data Engineering and Warehousing

Session speakers: Jenny Jiang and Ancy Philip

Learn how Microsoft’s new migration assistants simplify moving from Synapse to Microsoft Fabric. This session covers tools for Spark and Data Warehouse migrations, highlighting key features, feature parity, and differences to guide your strategy. See live demos, explore upcoming capabilities, and leave with practical tips to ensure a smooth and efficient migration to Fabric.


Mastering Microsoft Fabric Data Warehousing: Tips & Tricks You Need to Know

Session speaker: Kristyna Ferris

Learn practical tips to optimize performance and manage your Microsoft Fabric data warehouse more effectively. This session covers creating case-insensitive warehouses, monitoring and tuning query performance, and stopping rogue queries that threaten capacity. Packed with real-world examples and actionable guidance, you’ll leave with strategies you can apply immediately to keep your data warehouse stable and efficient.


Revolutionizing external data access in Fabric Data Warehouse

Session speakers: Jovan Popovic and Twinkle Cyril

Discover how Microsoft Fabric Data Warehouse transforms external data access with new capabilities for reading and integrating data without ingestion. Learn to use external tables and OPENROWSET to query Delta Lake, parquet, and CSV files directly from OneLake, Lakehouse, and real-time analytics sources. This session highlights key enhancements to external tables, COPY INTO, and virtualization techniques—showcasing how Fabric unifies warehouse and lakehouse concepts into an open, modern platform.
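As a rough illustration of the kind of query this session covers, the sketch below assembles a T-SQL OPENROWSET statement for ad hoc reads from a OneLake path. The helper function and abfss URI are hypothetical, and the exact Fabric syntax should be verified against the official documentation:

```python
def build_openrowset_query(onelake_path: str, file_format: str = "PARQUET", top: int = 10) -> str:
    """Assemble a T-SQL OPENROWSET statement for ad hoc reads from OneLake.

    `onelake_path` is an abfss:// URI to a file or folder;
    `file_format` is PARQUET, DELTA, or CSV.
    """
    allowed = {"PARQUET", "DELTA", "CSV"}
    fmt = file_format.upper()
    if fmt not in allowed:
        raise ValueError(f"unsupported format: {file_format}")
    return (
        f"SELECT TOP {top} * "
        f"FROM OPENROWSET(BULK '{onelake_path}', FORMAT = '{fmt}') AS rows;"
    )

# Hypothetical OneLake path, for illustration only.
query = build_openrowset_query(
    "abfss://sales@onelake.dfs.fabric.microsoft.com/SalesLakehouse.Lakehouse/Files/orders",
    file_format="DELTA",
)
print(query)
```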


Workspace strategy for Data Engineering in Microsoft Fabric

Session speaker: Ásgeir Gunnarsson

Choosing the right workspace strategy is critical to building scalable data engineering solutions in Microsoft Fabric. This session examines different approaches—single workspace, per stage, or per workload—and how factors like team size, DevOps practices, and security requirements influence your decision. Using the Medallion architecture as a guide, we’ll explore common challenges, practical workarounds, and key considerations to help you start strong and avoid costly rework later.

Security and governance

Govern, manage, and protect your data in Microsoft Fabric

Session speakers: Yaron Canari and Adi Regev

Learn how Microsoft Fabric helps organizations govern, manage, and protect their analytics data with built-in compliance and security features. This session covers local governance tools within Fabric and how they integrate with Microsoft Purview for broader, enterprise-wide control. Gain practical insights into securing your data estate while staying compliant and in control.


Fabric security: Everything you need to know!

Session speakers: Kasper de Jonge and Anton Fritz

Microsoft Fabric offers a SaaS-first approach to data that includes powerful security features out of the box—but do you know what you’re getting? This session explores how Fabric handles authentication, inbound access, data storage, and user-level permissions. Learn how to secure your data estate, control access, and integrate governance with Microsoft Purview. Walk away ready to engage your security team with confidence.

Microsoft OneLake

Deep dive into Delta (Parquet) and OneLake: Unpacking the storage behind Microsoft Fabric

Session speaker: Steve Campbell

Explore the core storage technologies that power Microsoft Fabric—OneLake, Delta, and Parquet—and learn how they work together to enable scalable, lake-centric analytics. This session breaks down Delta’s key features like ACID transactions, schema evolution, and time travel, without diving into heavy code or jargon. With real-world examples and visual aids, you’ll gain the foundational knowledge to make smart architectural decisions and optimize storage performance in your Fabric solutions. Perfect for data engineers, analysts, and IT pros familiar with Fabric but new to its storage underpinnings.
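To give a flavor of what sits beneath features like ACID transactions and time travel: Delta records each commit as a list of add/remove file actions in its transaction log, and reading a past version amounts to replaying that log up to the chosen commit. A simplified, illustrative replay (toy file names, not the real log format):

```python
# Each Delta commit is a JSON file in _delta_log/ containing "add" and
# "remove" actions; the set of live Parquet files at version N is the
# result of replaying commits 0..N. A simplified replay:
def files_at_version(commits: list[list[dict]], version: int) -> set[str]:
    live: set[str] = set()
    for actions in commits[: version + 1]:
        for action in actions:
            if "add" in action:
                live.add(action["add"]["path"])
            elif "remove" in action:
                live.discard(action["remove"]["path"])
    return live

# Toy log: version 0 adds two files, version 1 compacts them into one.
log = [
    [{"add": {"path": "part-0000.parquet"}}, {"add": {"path": "part-0001.parquet"}}],
    [{"remove": {"path": "part-0000.parquet"}},
     {"remove": {"path": "part-0001.parquet"}},
     {"add": {"path": "part-0002.parquet"}}],
]

print(files_at_version(log, 0))  # both original files
print(files_at_version(log, 1))  # only the compacted file
```

Time travel, in this simplified model, is just choosing a smaller `version` when replaying the log.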

Additional can’t-miss sessions

Git good: Best practices for CI/CD and collaboration in Microsoft Fabric

Session speaker: Peer Grønnerup

Take your Fabric projects to the next level with practical strategies for CI/CD, Git integration, and team collaboration. Learn how to structure repos, automate deployments with Fabric CLI and fabric-cicd, and build pipelines using Azure DevOps or GitHub Actions. Peer, a Fabric expert with over 15 years of experience in data and BI, will share real-world tips, branching strategies, and ready-to-use templates to help you scale workflows and maintain quality.


We’re at capacity—now what?

Session speaker: Frederik Declerck

Fabric capacities simplify data operations and cost control—but hitting limits can still catch teams off guard. In this session, we’ll demystify bursting, smoothing, and how background activity can unexpectedly max out your capacity. Learn how to diagnose issues using tools like the Capacity Metrics app and Monitoring Hub, and explore real-world strategies for short and long-term capacity management. We’ll also cover workload optimization, capacity planning, and new features like Autoscale Billing and surge protection to help you stay ahead of demand.
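As a conceptual illustration of smoothing (numbers invented, not Fabric’s actual billing algorithm): a background job’s capacity-unit cost is spread over a window rather than charged at the instant it runs, so a spike only risks throttling when its smoothed draw exceeds your capacity.

```python
# Conceptual sketch of smoothing: a background job's capacity-unit (CU)
# cost is averaged over a window instead of billed at the moment of
# execution. All numbers here are invented for illustration.
def smoothed_usage(job_cu_seconds: float, window_hours: int = 24) -> float:
    """Average CU draw per second once the job's cost is smoothed."""
    return job_cu_seconds / (window_hours * 3600)

capacity_cu = 64       # e.g. an F64 capacity (hypothetical)
burst = 1_000_000      # CU-seconds consumed by a heavy background refresh

draw = smoothed_usage(burst)
print(f"{draw:.2f} CU sustained over 24h (capacity {capacity_cu} CU)")
print("throttling risk" if draw > capacity_cu else "within capacity")
```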

Explore more sessions and save your spot at FabCon Vienna

If you’re looking to see even more sessions and explore the full program, check out the complete schedule. You’ll find deep dives, hands-on workshops, and keynotes covering every corner of Microsoft Fabric and the future of AI-powered analytics.

A reminder that the European Microsoft Fabric Community Conference 2025 is an in-person-only event. Don’t miss the opportunity to learn about Fabric and see firsthand how Microsoft can help your organization prepare for the era of AI. Sign up for the FabCon Vienna conference using the code MSCUST to save €200 on your registration. We’ll see you in Vienna!

The post Sessions you won’t want to miss at FabCon Vienna appeared first on Microsoft Fabric Blog.

]]>
Build data-driven agents with curated data from OneLake https://blog.fabric.microsoft.com/en-us/blog/build-data-driven-agents-with-curated-data-from-onelake?ft=All Thu, 24 Apr 2025 18:00:00 +0000 Innovation doesn’t always happen in a straight line. From the invention of the World Wide Web, to the introduction of smartphones, technology often makes massive leaps that transform how we interact with the world almost overnight.

The post Build data-driven agents with curated data from OneLake appeared first on Microsoft Fabric Blog.

]]>
Innovation doesn’t always happen in a straight line. From the invention of the World Wide Web, to the introduction of smartphones, technology often makes massive leaps that transform how we interact with the world almost overnight. Now we’re seeing the next great shift: the era of AI. This shift has been decades in the making, but the opportunity of AI is right now. Already, organizations are using AI agents to augment their workforce and execute business processes.

With services like Azure AI Foundry, you can not only access generative AI, but build your own agents, tailor-made for your use cases. Creating these custom AI experiences requires data—lots of it. Data is the foundation on which AI is built, and the simple fact is AI is only as good as the data it’s based on. As you enter a future built on AI, you need a data estate capable of fueling AI innovation across your organization. This can be a challenging prospect for most organizations whose data environments have grown organically over time with specialized and fragmented solutions.

That’s why we introduced Microsoft Fabric and Microsoft OneLake, Fabric’s unified data lake. With OneLake, you can access your entire multi-cloud data estate from a single data lake that spans the entire organization. OneLake can act as the central, accessible location for comprehensive data access and management.​ And once connected to OneLake, your teams can use the array of data and analytics tools in Fabric to integrate, transform, model, and prepare your data for any AI project—all in a pre-integrated and optimized SaaS environment.

Today we are going to focus on why Fabric and OneLake are the ideal data tools to fuel your AI projects in AI Foundry. First, we will talk through how you can unify your data estate on OneLake, then cover how Fabric’s workloads can help you prepare your data for AI projects. Finally, we’ll show you how easy it is to connect OneLake to Azure AI Foundry so you can start building data-driven agents in seconds.

Unifying your data estate on OneLake

For teams tasked with building new AI solutions, finding and accessing the necessary data across a sea of disconnected data services can be challenging at the best of times. To lay the foundation for long-term success, organizations need a more unified, flexible data estate based on a lake-centric approach. The right data lake foundation can help you unify all of your multi-cloud sources and allow your data professionals to work from the same data—reducing data duplication, improving collaboration, and streamlining analysis.

OneLake is designed as the single place for everyone in your organization to discover and explore data. You can unify all of your multi-cloud and on-premises sources using zero-ETL shortcuts and mirroring in OneLake without data duplication or movement. Alternatively, you can leverage the 180+ connectors in Fabric Data Factory to move your data in from any other source. OneLake is automatically wired into every Fabric workload, and since data is stored in an open format, you can use data in OneLake for all your data projects, no matter the vendor or service. You can also save time and reduce data duplication by loading data into OneLake only once and using a single copy across every Fabric engine, and even other engines such as Snowflake.
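As a sketch of what creating a shortcut can look like programmatically, the snippet below builds a request body in the shape used by the OneLake shortcuts REST API. The bucket URL, connection ID, and helper function are placeholders, and the exact payload schema should be checked against the official API reference:

```python
# Build the request body for a "create shortcut" call to the OneLake
# REST API (POST .../workspaces/{wsId}/items/{itemId}/shortcuts).
# All identifiers below are placeholders for illustration.
def s3_shortcut_payload(name: str, bucket_url: str, subpath: str, connection_id: str) -> dict:
    return {
        "path": "Files",  # where the shortcut appears inside the lakehouse
        "name": name,
        "target": {
            "amazonS3": {
                "location": bucket_url,
                "subpath": subpath,
                "connectionId": connection_id,
            }
        },
    }

payload = s3_shortcut_payload(
    "raw-clickstream",
    "https://my-bucket.s3.us-west-2.amazonaws.com",   # hypothetical bucket
    "/clickstream/2025",
    "00000000-0000-0000-0000-000000000000",           # hypothetical connection
)
print(payload["target"]["amazonS3"]["location"])
```

Because the shortcut only virtualizes the S3 data, no bytes are copied; the lakehouse reads through to the source at query time.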

Once your data is in OneLake, you can use domains and the OneLake catalog to organize it into a logical data mesh and empower everyone to easily explore, manage, and govern their data. Take a look at the OneLake catalog:

Preparing and curating your data for AI projects

For generative AI solutions to be as accurate as possible, they need to be built on clean, well-structured data. You’ll need an analytics platform that can help you prepare your data before building custom AI experiences. With your data in OneLake, you can use Fabric’s various workloads to make the data AI-ready. Fabric has tools for data integration and engineering, data warehousing, data science, real-time analytics, data modeling and visualization, and even has native, industry-specific, and partner-created workloads to help you accelerate your data projects.

All Fabric workloads work together seamlessly out-of-the-box without the myriad of infrastructure and configuration settings you typically find in data platforms, so you can focus on getting results. Advanced security, governance, and continuous integration and continuous delivery (CI/CD) capabilities are woven into the platform with personalized experiences for admins and users alike. Copilot in Fabric and other AI capabilities are built into every layer of Fabric to help data professionals and business users automate routine tasks and get more done. Fabric also comes with category-leading performance, instant scalability, shared resilience, and built-in security, governance, and compliance so you can feel confident using Fabric for your mission-critical workloads.

Connecting OneLake data to AI products

Now that your data is AI-ready, you need to connect it to your AI platforms like Azure AI Foundry to build and scale data-driven GenAI apps. We’ve built native integration between OneLake and Azure AI Foundry to make this as seamless as possible. Azure AI Foundry can operate directly on OneLake, opening endless possibilities for AI and app developers, data engineers, data scientists, and business users to interact using natural language to uncover insights from their data.

Azure AI Foundry

Azure AI Foundry is a platform designed to empower your developers, AI engineers, and IT professionals to customize, host, run, and manage AI solutions with greater ease and confidence. Similar to Fabric, Azure AI Foundry’s unified approach simplifies the development and management process, helping all stakeholders focus on driving innovation and achieving strategic goals. It’s designed to help your developers build more technical, customized AI solutions.

The integration between Azure AI Foundry and OneLake is built on the same shortcut technology that allows you to virtualize data in OneLake from your cloud sources like Amazon S3 and Google Cloud without having to move and duplicate the data. You can immediately work with your structured and unstructured data from OneLake in Azure AI Foundry without creating copies and adding more data sprawl. OneLake also directly integrates with Azure AI Search, which can store, index, and retrieve data, including vector embeddings, from your data sources including OneLake. 

Finally, you can ground your Azure AI Agent’s responses with data from Fabric using Fabric data agents to unlock powerful data analysis capabilities. Data agents (formerly known as AI skills) in Fabric are AI-powered assistants that can learn, adapt, and deliver insights, allowing users to interact with the data through chat. With out-of-the-box authorization, this integration simplifies access to enterprise data in Fabric while maintaining robust security, ensuring proper access control and enterprise-grade protection. Check out this full demo:

https://www.youtube-nocookie.com/embed/SBsErGew1yE?feature=oembed

Using data agents in Fabric as knowledge sources in Azure AI Foundry

This seamless integration offers many opportunities for generative AI use cases across various industries, including:

  • Enhancing data insights: Build agents that can help your business users explore and better understand critical data using natural language from structured, unstructured, and real-time data.
  • Analyzing customer interactions: Build agents trained on your customer interaction data to enhance customer service, tailor support responses, and make data-driven decisions. These agents can detect language, summarize content, analyze sentiment, and convert insights into vector embeddings for future access in search queries.
  • Customizing machine learning models: Tailor models to specific business needs, whether it’s predictive maintenance, fraud detection, or customer sentiment analysis. Azure AI Foundry, Azure Machine Learning, and Microsoft Fabric empower developers and data scientists to create custom models that fit their business requirements, grounded on their enterprise data in OneLake.
  • Department-specific agents: Build agents that automate budgeting and expense tracking, increase up-sell and conversion opportunities, and improve operational efficiency.
  • Industry-specific agents: Build data-driven agents to streamline operations and manage OEE in manufacturing, optimize logistics and interact with customers in retail, and reduce patient-practitioner contact time in healthcare.

Ready to learn more?

Unlock a realm of new possibilities for your organization in the era of AI with the integration of Microsoft Fabric and Azure AI. Explore the potential, innovate, and thrive in the new digital landscape.

 If you want to learn more about these tools, consider:

The post Build data-driven agents with curated data from OneLake appeared first on Microsoft Fabric Blog.

]]>
The art of simplifying the complex: Microsoft Fabric’s superpower http://approjects.co.za/?big=en-us/microsoft-fabric/blog/2025/02/24/the-art-of-simplifying-the-complex-microsoft-fabrics-superpower/ Mon, 24 Feb 2025 16:00:00 +0000 The art of simplifying the complex involves distilling intricate ideas, processes, and systems into their essential elements to create a unified experience accessible to a broader audience.

The post The art of simplifying the complex: Microsoft Fabric’s superpower appeared first on Microsoft Fabric Blog.

]]>
The art of simplifying the complex involves distilling intricate ideas, processes, and systems into their essential elements to create a unified experience accessible to a broader audience. When done correctly, it does not reduce capability but rather enables innovation.

Microsoft Fabric has embraced this mission by integrating multiple products and services needed for an end-to-end analytics and AI solution, redefining existing processes to make them simpler and more intuitive. It has significantly simplified how one interfaces with such a comprehensive solution by creating a turnkey software-as-a-service experience that is easy to use with a much simpler and singular capacity usage model.

At Ignite 2024, Fabric took another bold step forward by adding operational databases to the Fabric portfolio with SQL database in Fabric. Adding operational data alongside Fabric’s analytical OLAP (Online Analytical Processing) data and Real-Time Intelligence (RTI) streaming data opens a host of new scenarios for agentic AI applications. It also provides our customers with a unified data estate where consistent security and governance policies can be applied.

SQL database in Fabric leverages the proven mission-critical SQL Server database engine. It applies the core principles of Fabric to make deploying and managing an operational database simpler, more autonomous, secure by default, and optimized for AI. For example, deploying and configuring a database only requires a name, and the database is ready in seconds. It is secure by default with encryption at rest and in transit enabled. Networking security is also enabled via Private Link, and high availability and zone redundancy are automatically configured. 

SQL in Fabric includes native AI capabilities like support for vector and RAG (Retrieval-augmented Generation). You can also make calls directly to Azure AI services from the database and connect your database to Azure AI Foundry, VSCode, and GitHub for an integrated developer experience. In addition, you will find Microsoft Copilot integrated into every workload in Fabric including SQL in Fabric, simplifying administrative and management tasks for the databases. 
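To illustrate the idea behind vector support: embeddings let the database rank rows by semantic similarity, and the core computation is cosine similarity between vectors. A toy Python sketch of that ranking step (invented vectors, not a SQL Server API):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings standing in for vectors a database would store per row.
docs = {
    "returns policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
}
query = [0.8, 0.2, 0.0]  # pretend embedding of "how do refunds work?"

# A vector search ranks stored rows by similarity to the query embedding.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # → returns policy
```

In a RAG pattern, the top-ranked rows are then passed to the language model as grounding context.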

Beyond just the OLTP (Online Transaction Processing) database, Fabric introduces new agentic AI application scenarios by providing access to real-time streaming data from IoT sensors, alongside your system of record with SQL in Fabric and other data sets securely stored in OneLake. 

OneLake is at the heart of enabling a unified data estate. OneLake is built on top of Azure Data Lake Storage (ADLS) Gen2 and can support any type of file, structured or unstructured. All Fabric data items like data warehouses and lakehouses store their data automatically in OneLake in Delta Parquet format. With the addition of SQL database in Fabric you now also have access to your mirrored SQL data in OneLake and mirroring data in Fabric is free. 

Fabric also provides a rich ecosystem to support agentic AI applications using your operational data. Changes from your data can be seamlessly sent to Azure OpenAI for business recommendations using Fabric Real-time Intelligence Eventstream, Spark, OneLake, and Power BI. 

This unification of data is incredibly powerful, enabling dynamic improvements to customer prompt responses and proactive, personalized offers. From a security standpoint, Fabric can enable consistent data protection from the moment data is created all the way to business insights in Power BI. The same goes for data governance. 

This is just the tip of the iceberg when it comes to the number of new scenarios that Fabric can enable by creating a unified data estate. SQL database in Fabric is just the first Azure Database to be added to Fabric, with more Azure Databases to follow, so stay tuned.

Get started today

SQL database in Fabric is simple, autonomous, secure, and optimized for AI. We highly encourage you to try it today and see how you can build new AI apps faster and easier than ever! 

Learning with Fabric

We have multiple resources to help you and your teams swiftly ramp up on SQL database in Fabric: 

Fabric Community Conference Vegas: A must-attend event for database professionals! Be sure to take advantage of the discount code MSCUST for $150 off the registration price. 

FabCon Vegas is the perfect opportunity to connect with experts and data leaders to build your skills with Fabric Databases and Azure Databases and see how your peers are implementing their solutions. 

  • Microsoft Fabric Community Conference March 31st – April 2nd, in Vegas! Workshops will also be available on March 29th, 30th, and April 3rd, making this the most comprehensive Microsoft Fabric learning experience to date.
  • SQL pros can take advantage of a dedicated track for SQL in Fabric Databases and Azure Databases. 
  • Connect with product specialists for 1:1 support in the Ask the Experts area. 
  • You’ll get endless opportunities all week to engage with the Fabric and data communities through sessions, thoughtful discussions, attendee mixers, and interactive activations. 
  • In touch with your Microsoft account team? Ask them if they have any special discounts to share.

Database experts at FabCon

  • CVP of Azure Databases: Shireesh Thota, speaking at the event.¹
  • Sessions from the Microsoft Databases Product team: Rie Merritt, Bob Ward, Muazma Zahid, Erin Stellato, Davide Mauri, and more.
  • Sessions from Database Community MVPs: Joey D’Antoni, John Morehouse, Monica Rathbun, Denny Cherry, Karen Lopez, Anthony Nocentino, Erwin de Kreuk, Warwick Rudd, Kelly Broekstra, Heidi Hasting, and Hamish Watson.


¹ Speakers subject to change.

The post The art of simplifying the complex: Microsoft Fabric’s superpower appeared first on Microsoft Fabric Blog.

]]>
Microsoft: A leader in the 2024 Gartner Magic Quadrant report http://approjects.co.za/?big=en-us/microsoft-fabric/blog/2024/12/09/microsoft-a-leader-in-the-2024-gartner-magic-quadrant-report/ Mon, 09 Dec 2024 16:00:00 +0000 We are thrilled to announce that Microsoft has been named a Leader in the 2024 Gartner Magic Quadrant™ for Data Integration Tools for the fourth year in a row. We believe this recognition reflects our dedication to innovation, excellence, and delivering value to our customers in data integration.

The post Microsoft: A leader in the 2024 Gartner Magic Quadrant report appeared first on Microsoft Fabric Blog.

]]>
We are thrilled to announce that Microsoft has been named a Leader in the 2024 Gartner Magic Quadrant™ for Data Integration Tools for the fourth year in a row. We believe this recognition reflects our dedication to innovation, excellence, and delivering value to our customers in data integration. 

Gartner Magic Quadrant for Data Integration Tools

A Leader in Data Integration 

We feel that Microsoft’s acknowledgment in the Gartner Magic Quadrant reflects our dedication to innovation and customer-centric solutions. This stems from our relentless drive to advance technology and address the ever-evolving needs of modern organizations.

Our vision for data integration is to deliver seamless, intuitive experiences that empower businesses to unlock the full potential of their data and achieve transformative results. This recognition reinforces our dedication to leading the evolution of data integration and delivering unparalleled value to our customers and partners worldwide.


Microsoft Fabric: Unified Data Platform for the Era of AI 

At the core of our data integration strategy is Microsoft Fabric. Built to navigate the complexities of modern data ecosystems, Microsoft Fabric provides an all-in-one, software-as-a-service (SaaS) platform with AI-powered services to handle any data project—all within a pre-integrated and optimized environment. It enables organizations to unlock their data’s full potential, drive innovation, and make smarter decisions. Features like Copilot and other generative AI tools introduce new ways to transform and analyze data, generate insights, and create visualizations and reports in Microsoft Fabric.

Microsoft OneLake: The heart of our Data Integration journey 

At the center of Fabric is OneLake, the unified, open data lake that simplifies and accelerates data integration across diverse systems. OneLake, with the data integration capabilities of Fabric, is designed to help you simplify data management and reduce data duplication. OneLake’s open data format means you only need to load the data into the lake once and you can use the single copy across every Fabric workload and engine. It acts as the central hub, ensuring seamless connectivity, accessibility, and collaboration for all your data needs. 

OneLake has four innovative pathways for integrating data depending on your needs: 

  1. Fabric Data Factory 

Fabric Data Factory integrates seamlessly with OneLake, offering powerful cloud-scale services for data movement, orchestration, transformation, deployment, and monitoring. These capabilities enable organizations to tackle even the most complex ETL (Extract, Transform, and Load) scenarios, unifying data estates, streamlining operations, and unlocking the full potential of their data.

  2. Multi-Cloud Shortcuts

OneLake shortcuts allow you to virtualize data into OneLake from across clouds, accounts, and domains—all without duplication, movement, or changes to metadata or ownership. This capability allows organizations to access and analyze their data in place, without the need for complex data migration processes. By maintaining a live connection to the source, OneLake ensures real-time data availability and consistency across all integrated environments. You can shortcut data from Azure Data Lake Storage, S3-compatible sources, Iceberg-compatible sources, Google Cloud Platform, Dataverse, and more.

  3. Database Mirroring 

OneLake offers an innovative zero-ETL approach to database mirroring, simplifying the replication of operational databases into the lake. This capability minimizes the effort required to synchronize databases, supporting real-time changes and ensuring that data is always current and ready for analytics and reporting.

  4. Real-Time Intelligence 

Real-time intelligence in Microsoft Fabric empowers organizations to ingest and process streaming and high-granularity data instantaneously, driving real-time insights and automating decision-making. This solution is ideal for applications requiring immediate data updates, such as IoT analytics, fraud detection, and operational dashboards. The capability extends to highly granular data analytics, allowing businesses to track a single package within a global delivery network or monitor a specific component in a manufacturing machine across a fleet of factories worldwide, enabling precise insights and optimized operations. Leveraging cutting-edge data processing frameworks, Eventhouse ensures scalability, reliability, and low-latency performance, making it suitable for high-volume streaming scenarios.
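Conceptually, scenarios like tracking a single sensor or package reduce to keyed, windowed aggregation over an event stream. A toy Python sketch of a tumbling-window average (illustrative only, not the Eventstream or KQL API):

```python
from collections import defaultdict

# Toy event stream: (device_id, timestamp_seconds, temperature).
events = [
    ("sensor-a", 0, 20.0), ("sensor-a", 30, 22.0),
    ("sensor-a", 70, 28.0), ("sensor-b", 10, 18.0),
]

# Tumbling 60-second windows, averaged per device: the shape of query
# a real-time engine runs continuously over incoming events.
windows: dict[tuple[str, int], list[float]] = defaultdict(list)
for device, ts, temp in events:
    windows[(device, ts // 60)].append(temp)

averages = {key: sum(vals) / len(vals) for key, vals in windows.items()}
print(averages)  # e.g. ("sensor-a", 0) -> 21.0
```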

With these innovative pathways, Fabric empowers organizations to break down data silos, optimize workflows, and unlock the full potential of their data. Whether it’s through seamless data integration, real-time insights, or multi-cloud collaboration, Fabric is designed to meet the demands of modern data ecosystems. These enriched features position Fabric as a critical tool for organizations aiming to unlock the full potential of their data while maintaining simplicity, security, and scalability.

Customer success stories 

Our customers’ success stories are a testament to the impact of Microsoft Fabric. Organizations across various industries have leveraged our data integration capabilities to unlock new opportunities, drive innovation, and achieve their business goals. By streamlining data processes and improving data quality, Microsoft Fabric has enabled these businesses to make data-driven decisions with confidence. 

Read UST Global’s case study to learn how they leveraged the power of Fabric to migrate over 20 years of data, integrating disparate data sources to facilitate better collaboration and innovation among employees. 

Looking ahead: The future of Data Integration with Microsoft Fabric 

As we celebrate being recognized as a Leader in the Gartner Magic Quadrant for the fourth consecutive year, we are motivated to push the boundaries of what’s possible in data integration. To us, this is a milestone that reflects not only our commitment to innovation but also our dedication to empowering our customers to turn their data into actionable insights.

Looking forward, the roadmap for Microsoft Fabric is filled with exciting enhancements and new features. These advancements are designed to tackle the complexities of modern data ecosystems, making it even easier for organizations to unify, transform, and harness their data at scale. Continuous improvement is at the core of our strategy. We aim to remain at the forefront of the data integration landscape and redefine the possibilities of what a comprehensive data platform can achieve. 

We believe this recognition by Gartner is a validation of the trust our customers place in us and a reflection of our relentless drive to deliver world-class solutions. As we continue this journey, we remain committed to collaborating with our community and partners, building on this success to achieve even greater outcomes together.

Resources 


Gartner, Magic Quadrant for Data Integration Tools, By Thornton Craig, Sharat Menon, Robert Thanaraj, Michele Launi, Nina Showell, 3 December 2024 

Gartner does not endorse any vendor, product, or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. 

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates and is used herein with permission. All rights reserved. 

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. Available here.

The post Microsoft: A leader in the 2024 Gartner Magic Quadrant report appeared first on Microsoft Fabric Blog.

Planning in Microsoft Fabric: A shared vision through collaboration with Lumel  http://approjects.co.za/?big=en-us/microsoft-fabric/blog/2024/12/04/planning-in-microsoft-fabric-a-shared-vision-through-collaboration-with-lumel/ Wed, 04 Dec 2024 16:00:00 +0000

The post Planning in Microsoft Fabric: A shared vision through collaboration with Lumel  appeared first on Microsoft Fabric Blog.

In today’s rapidly evolving business environment, organizations face a common challenge: planning processes are often isolated from their Business Intelligence (BI) and reporting systems. This disconnect introduces inefficiencies, data silos, and a lack of agility in decision-making. 

Consider this: over 97% of Fortune 500 companies rely on Microsoft Power BI, and many of these organizations seek to enhance Power BI’s capabilities by extending it for planning and Hybrid Transactional/Analytical Processing (HTAP) workloads. However, traditional approaches to planning and reporting exacerbate silos: 

  1. Historical insights feeding future plans: Organizations rely on historical performance data to inform future plans. This process requires transferring actual transactional data into separate planning systems, adding time and complexity. 
  2. From planning back to reporting: Once budgets and forecasts are finalized, they must be reloaded into Power BI for variance analysis and performance reporting—a tedious and redundant process. 

Planning is rarely a one-time exercise. It evolves through multiple iterations, scenarios, and assumptions, requiring inputs from diverse teams, departments, and geographies. This fragmented approach necessitates back-and-forth data orchestration, creating inefficiencies and compounding the issue of data silos. 
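At its core, the round trip described above (actuals feeding plans, then plans feeding variance reports) reduces to joining two datasets and computing variances. A minimal, illustrative sketch in Python follows; the department names, figures, and column names are invented for the example and do not come from any product:

```python
import pandas as pd

# Hypothetical actuals (from the reporting side) and plan figures
# (written back from the planning side) -- all values are illustrative.
actuals = pd.DataFrame({
    "department": ["Sales", "Marketing", "R&D"],
    "actual": [120_000, 45_000, 80_000],
})
plan = pd.DataFrame({
    "department": ["Sales", "Marketing", "R&D"],
    "plan": [100_000, 50_000, 75_000],
})

# Join actuals to plan and compute the variance a performance report needs.
report = actuals.merge(plan, on="department")
report["variance"] = report["actual"] - report["plan"]
report["variance_pct"] = (report["variance"] / report["plan"] * 100).round(1)

print(report.to_string(index=False))
```

In a unified platform the join happens where the data already lives; the point of the sketch is only that when plans and actuals sit in separate systems, even this simple calculation requires moving data back and forth.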

Craig Schiff, Founder and CEO of BPM Partners, summarized it perfectly: “Planning solutions fully integrated with existing BI software is an underserved area that is growing in importance.” 

Microsoft’s vision for planning 

At Microsoft, our vision is clear: eliminate silos and empower organizations to plan and report seamlessly within a unified platform—Microsoft Fabric. 

With Fabric, enterprises no longer need to replicate data into separate software as a service (SaaS) or legacy planning systems. Instead, users can build plans and forecasts directly on top of the semantic models in Power BI, ensuring immediate availability for reporting and analysis across Fabric. 

Microsoft Fabric and Lumel: partnering for success 

We are excited to announce a deep collaboration with Lumel that brings Enterprise Performance Management (EPM) for planning applications to Power BI and Microsoft Fabric. 

Lumel’s no-code, self-service EPM solution enables business users to create sophisticated planning and reporting applications within Power BI, tightly integrated with Fabric. This approach delivers: 

  • Seamless integration: Build and modify plans directly in Power BI, leveraging Fabric semantic models. 
  • Collaborative capabilities: Commenting, notifications, scheduling, and approval workflows streamline team collaboration. 
  • Broader use cases: Support for transactional and analytical scenarios expands Fabric’s utility for modern HTAP workloads and planning use cases. 

Lumel’s solution redefines the boundaries of what organizations can achieve with Power BI and Fabric. For example, with Lumel’s Inforiver Write-Back Matrix for Planning, businesses can effortlessly create their 2025 plans, save them to OneLake, and integrate them instantly into reporting and analysis workflows. 
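To make “save them to OneLake” concrete for downstream tools: OneLake exposes workspace items through an ADLS Gen2-compatible endpoint, so a plan saved as a table becomes addressable by a predictable path that any Delta-aware reader can target. A small sketch of how that address is composed; the workspace, lakehouse, and table names below are hypothetical:

```python
# Hypothetical names -- substitute your own workspace and lakehouse item.
workspace = "FinancePlanning"
lakehouse = "Budget2025.Lakehouse"
table = "annual_plan"

# OneLake uses ADLS Gen2-style addressing:
#   abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<item>/Tables/<table>
plan_path = (
    f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
    f"{lakehouse}/Tables/{table}"
)
print(plan_path)
```

Actually reading or writing that path requires Microsoft Entra authentication; the sketch only shows how the address is formed, not a complete client.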

Our strong collaboration with Lumel reinforces Microsoft’s commitment to providing a single, unified platform where planning and analytics coexist, helping organizations make better, faster decisions. 

Looking ahead with Microsoft Fabric 

Take the next step towards connected planning—streamlining workflows, eliminating silos, and unlocking new possibilities with Microsoft Fabric and Lumel. 


Stay tuned for more updates as we continue to empower organizations with cutting-edge tools for the future of planning and analytics. 
