{"id":12391,"date":"2024-11-01T10:43:00","date_gmt":"2024-11-01T17:43:00","guid":{"rendered":"https:\/\/www.microsoft.com\/insidetrack\/blog\/?p=12391"},"modified":"2024-11-14T15:08:46","modified_gmt":"2024-11-14T23:08:46","slug":"getting-the-most-out-of-generative-ai-at-microsoft-with-good-governance","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/insidetrack\/blog\/getting-the-most-out-of-generative-ai-at-microsoft-with-good-governance\/","title":{"rendered":"Getting the most out of generative AI at Microsoft with good governance"},"content":{"rendered":"\n
\"Microsoft<\/figure>\n\n\n\n

Since generative AI exploded onto the scene, it’s been unleashing our employees’ creativity, unlocking their productivity, and up-leveling their skills.

But we can fly into risky territory if we’re not careful. The key to protecting the company and our employees from the risks associated with AI is adopting proper governance measures based on rigorous data hygiene.

Technical professionals working within Microsoft Digital, our internal IT organization, have taken up this challenge. They include the AI Center of Excellence (AI CoE) team and the Microsoft Tenant Trust team that governs our Microsoft 365 tenant.

Since the widespread emergence of generative AI technologies over the last year, our governance experts have been busy ensuring our employees are set up for success. Their collaboration helps us ensure we’re governing AI through both guidance from our AI CoE and a governance model for our Microsoft 365 tenant itself.

{Learn how Microsoft is responding to the AI revolution with a Center of Excellence. Discover how we’re transforming data governance at Microsoft with Purview and Fabric. Explore how we use Microsoft 365 to bolster our teamwork.}

Generative AI presents limitless opportunities—and some tough challenges

Next-generation AI’s benefits are becoming more evident by the day. Employees are finding ways to simplify and offload mundane tasks and focus on productive, creative, collaborative efforts. They’re also using AI to produce deeper and more insightful analytical work.


“The endgame here is acceleration,” says David Johnson, a tenant and compliance architect with Microsoft Digital. “AI accelerates employees’ ability to get questions answered, create things based on dispersed information, summarize key learnings, and make connections that otherwise wouldn’t be there.”

There’s a real urgency for organizations to empower their employees with advanced AI tools—but they need to do so safely. Johnson and others in our organization are balancing the desire to move quickly against the need for caution with technology that hasn’t yet revealed all the potential risks it creates.

“With all innovations—even the most important ones—it’s our journey and our responsibility to make sure we’re doing things in the most ethical way,” says Faisal Nasir, an engineering leader on the AI CoE team. “If we get it right, AI gives us the power to provide the most high-quality data to the right people.”

But in a world where AI copilots can comb through enormous masses of enterprise data in the blink of an eye, security through obscurity doesn’t cut it. We need to ensure we maintain control over where data flows throughout our tenant. It’s about providing information to the people and apps that have proper access and insulating it against ones that don’t.

To this end, our AI CoE team is introducing guardrails that ensure our data stays safe.

Tackling good AI governance

The AI CoE brings together experts from all over Microsoft who work across several disciplines, from data science and machine learning to product development and experience design. They use an AI 4 ALL (Accelerate, Learn, Land) model to guide our adoption of generative AI through enablement initiatives, employee education, and a healthy dose of rationality.


“We’re going to be one of the first organizations to really get our hands on the whole breadth of AI capabilities,” says Matt Hempey, a program manager lead on the AI CoE team. “It will be our job to ensure we have good, sensible policies for eliminating unnecessary risks and compliance issues.”

As Customer Zero for these technologies, we have a responsibility for caution—but not at the expense of enablement.

\u201cWe’re not the most risk-averse customer,\u201d Johnson says. \u201cWe’re simply the most risk-aware customer.\u201d<\/p>\n\n\n\n

The AI CoE has four pillars of AI adoption: strategy, architecture, roadmap, and culture. As a matter of AI governance, establishing compliance guardrails falls under architecture. This pillar focuses on the readiness and design of the infrastructure and services supporting AI at Microsoft, as well as interoperability and reusability for enterprise assets in the context of generative AI.

Operational pillars of the AI Center of Excellence

Strategy: Working with feature crews to determine what we want to achieve with AI at the Microsoft Digital level and prioritize those AI investments.

Architecture: Enabling infrastructure, data, services, access, security, privacy, scalability, accessibility, and interoperability for AI use cases.

Roadmap: Building implementation plans for AI projects, including tools, technologies, responsibilities, targets, and KPIs.

Culture: Fostering collaboration, innovation, education, and ethics among stakeholders.

We’ve created four pillars to guide our internal implementation of generative AI across Microsoft: strategy, architecture, roadmap, and culture. Establishing compliance guardrails falls under architecture.

Building a secure and compliant data foundation

Fortunately, Microsoft’s existing data hygiene practices provide an excellent baseline for AI governance.

There are three key pieces of internal data hygiene at Microsoft:

1. Employees can create new workspaces like Sites, Teams, Groups, Communities, and more. Each workspace features accountability mechanisms for its owner, policies, and lifecycle management.
2. Workspaces and data get delineated based on labeling.
3. That labeling enforces policies and provides user awareness of how to handle the object in question.

With AI, the primary concern is ensuring that we properly label the enterprise data contained in places like SharePoint sites and OneDrive files. AI will then leverage the label, respect policies, and ensure any downstream content-surfacing will drive user awareness of the item’s sensitivity.
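To make that flow concrete, here’s a deliberately simplified sketch (hypothetical code, not the Microsoft Purview or Graph API) of how a label that travels with an item can drive both policy enforcement and user awareness downstream:

```python
# Hypothetical illustration only: a toy model of label-driven handling,
# not a real Microsoft 365 API. Ordering the labels by sensitivity lets
# every downstream surface compare and enforce them the same way.
from dataclasses import dataclass
from enum import IntEnum

class Sensitivity(IntEnum):
    GENERAL = 0
    CONFIDENTIAL = 1
    HIGHLY_CONFIDENTIAL = 2

@dataclass
class Item:
    path: str
    label: Sensitivity  # the label follows the item wherever it flows

def may_surface(item: Item, user_clearance: Sensitivity) -> bool:
    """A downstream surface (search, a copilot) checks the label first."""
    return item.label <= user_clearance

doc = Item("https://contoso.sharepoint.com/sites/plans/roadmap.docx",
           Sensitivity.HIGHLY_CONFIDENTIAL)
print(may_surface(doc, Sensitivity.CONFIDENTIAL))  # False: item stays insulated
```

In reality access is governed by permissions rather than a single clearance level, but the principle is the same: the label, not the storage location, is what downstream consumers reason over.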

AI will always respect user permissions to content, but that assumes source content isn’t overshared. Several different mechanisms help us limit oversharing within the Microsoft tenant:

1. Using site labeling where the default is private and controlled.
2. Ensuring every site with a “confidential” or “highly confidential” label sets the default library label to derive from its container. For example, a highly confidential site will mean all new and changed files will also be highly confidential.
3. Enabling company sharable links (CSLs) like “Share with People in <name of organization>” on every label other than those marked highly confidential. That means default links will only show up to the direct recipient in search and in results employees get from using Copilots (see the sketch after this list).
4. Keeping lifecycle management in place for all Teams and sites, where the owner attests that the contents are properly labeled and protected. This also removes stale data from AI.
5. Watching and addressing oversharing based on site and file reports from Microsoft Graph Data Connect.
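For the CSL mechanism in point 3, here’s a minimal sketch of creating an organization-scoped sharing link through the Microsoft Graph createLink endpoint. The scope="organization" option is what limits the link to people signed in to the tenant; the drive ID, item ID, and access token are placeholders you’d supply from your own tenant and auth flow:

```python
# A minimal sketch: request a company shareable link for a OneDrive or
# SharePoint file via Microsoft Graph (POST .../createLink). The IDs and
# token below are placeholders, not real values.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<token-from-your-auth-flow>"  # placeholder; acquire via e.g. MSAL
DRIVE_ID = "<drive-id>"                       # placeholder
ITEM_ID = "<item-id>"                         # placeholder

def create_company_shareable_link(drive_id: str, item_id: str) -> str:
    """Create a view-only sharing link scoped to the organization (a CSL)."""
    resp = requests.post(
        f"{GRAPH}/drives/{drive_id}/items/{item_id}/createLink",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"type": "view", "scope": "organization"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["link"]["webUrl"]

print(create_company_shareable_link(DRIVE_ID, ITEM_ID))
```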

Microsoft 365 Copilot respects labels and displays them to keep users informed of the sensitivity of the response. It also respects any rights management service (RMS) protections on file labels that block content extraction.

If the steps above are in place, search disablement becomes unnecessary, and overall security improves. “It isn’t just about AI,” Johnson says. “It’s about understanding where your information sits and where it’s flowing.”

From there, Copilot and other AI tools can safely build a composite label and attach it to their results based on the foundational labels used to create them. That provides the context needed to decide whether to share results with a user or extend them to a third-party app.
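As a hypothetical illustration of that idea (a sketch, not Copilot’s actual implementation), the core rule is that a composite label must be at least as sensitive as the most sensitive source that contributed to the result:

```python
# Hypothetical sketch, not Copilot's real logic: an AI-generated response
# inherits a composite label no less sensitive than its most sensitive source.
from enum import IntEnum

class Sensitivity(IntEnum):
    GENERAL = 0
    CONFIDENTIAL = 1
    HIGHLY_CONFIDENTIAL = 2

def composite_label(source_labels: list[Sensitivity]) -> Sensitivity:
    """The answer is as sensitive as its most sensitive source document."""
    return max(source_labels, default=Sensitivity.GENERAL)

# A response grounded in one general and one confidential file is confidential;
# downstream logic can gate sharing or third-party egress on this label.
labels = [Sensitivity.GENERAL, Sensitivity.CONFIDENTIAL]
assert composite_label(labels) is Sensitivity.CONFIDENTIAL
```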

      \"Johnson,
      From left to right, David Johnson, Faisal Nasir, Matt Hempey, and Keith Bunge are among those working together here at Microsoft to ensure our data estate stays protected as we adopt next-generation AI tools.<\/figcaption><\/figure>\n\n\n\n

“To make the copilot platform as successful and securely extensible as possible, we need to ensure we can control data egress from the tenant,” says Keith Bunge, a software engineering architect for employee productivity solutions within Microsoft Digital.

We can also use composite labels to trigger confidential information warnings to users. That transparency provides our people with both agency and accountability, further cementing responsible AI use within our culture of trust.

Ultimately, AI governance is similar to guardrails for other tools and features that have come online within our tenant. As an organization, we know the areas we need to review because we already have a robust set of criteria for managing data.

But since this is a new technology with new functionality, the AI CoE is spending time conducting research and partnering with stakeholders across Microsoft to identify potential concerns. As time goes on, we’ll inevitably adjust our AI governance practices to ensure we’re meeting our commitment to responsible AI.


“Process, people, and technology are all part of this effort,” Nasir says. “The framework our team is developing helps us look at data standards from a technical perspective, as well as overall architecture for AI applications as extensions on top of cloud and hybrid application architecture.”

As part of getting generative AI governance right, we’re conducting extensive user experience and accessibility research. That helps us understand how these tools land throughout our enterprise and keep abreast of new scenarios as they emerge—along with the extensibilities they need and any data implications. We’re also investing time and resources to catch and rectify any mislabeled data, ensuring we seal off any existing vulnerabilities within our AI ecosystem.

Not only does this Customer Zero engagement model support our AI governance work, but it also helps build trust among employees through transparency. That trust is a key component of the employee empowerment that drives adoption.

Realizing generative AI’s potential

As our teams navigate AI governance and drive adoption among employees, it’s important to keep in mind that these guardrails aren’t there to hinder progress. They’re in place to protect and ultimately inspire confidence in new tools.

“In its best form, governance is a way to educate and inform our organization to move forward as quickly as possible,” Hempey says. “We see safeguards as accelerators.”

We know our customers also want to empower their employees with generative AI. As a result, we’re discovering ways to leverage or extend these services in exciting new ways for the organizations using our products.

“As we’re on this journey, we’re learning alongside our industry peers,” Nasir says. “By working through these important questions and challenges, we’re positioned to empower progress for our customers in this space.”


Consider these tips as you think about governing the deployment of generative AI at your company: