{"id":12391,"date":"2023-10-19T10:43:30","date_gmt":"2023-10-19T17:43:30","guid":{"rendered":"https:\/\/www.microsoft.com\/insidetrack\/blog\/?p=12391"},"modified":"2024-05-23T13:44:48","modified_gmt":"2024-05-23T20:44:48","slug":"getting-the-most-out-of-generative-ai-at-microsoft-with-good-governance","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/insidetrack\/blog\/getting-the-most-out-of-generative-ai-at-microsoft-with-good-governance\/","title":{"rendered":"Getting the most out of generative AI at Microsoft with good governance"},"content":{"rendered":"
Read our step-by-step guide on deploying Copilot for Microsoft 365 at your company. It\u2019s based on our experience deploying it here at Microsoft.<\/p>\n
Since generative AI exploded onto the scene, it\u2019s been unleashing our employees\u2019 creativity, unlocking their productivity, and up-leveling their skills.<\/p>\n
But we can fly into risky territory if we\u2019re not careful. The key to protecting the company and our employees from the risks associated with AI is adopting proper governance measures based on rigorous data hygiene.<\/p>\n
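One way to picture the kind of guardrail this data hygiene enables, as a minimal sketch under stated assumptions (the label names, functions, and ordering below are hypothetical illustrations, not a real Microsoft Purview or Graph API): an AI answer inherits the most restrictive sensitivity label among its sources, and it is surfaced only when the user already has permission to open every source.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sensitivity scale, ordered least to most restrictive.
# These label names, and this whole interface, are illustrative
# assumptions for the sketch, not a Microsoft Purview or Graph API.
SENSITIVITY_ORDER = ["Public", "General", "Confidential", "Highly Confidential"]

@dataclass
class SourceItem:
    name: str
    label: str             # sensitivity label on the source document
    user_has_access: bool  # outcome of the normal permission check

def composite_label(sources: List[SourceItem]) -> str:
    """The AI result inherits the most restrictive source label."""
    return max((s.label for s in sources), key=SENSITIVITY_ORDER.index)

def can_surface(sources: List[SourceItem]) -> bool:
    """Only surface results built from content the user could already open."""
    return all(s.user_has_access for s in sources)

sources = [
    SourceItem("roadmap.docx", "Confidential", True),
    SourceItem("team-faq.docx", "General", True),
]
assert can_surface(sources)
assert composite_label(sources) == "Confidential"
```

Ordering labels from least to most restrictive makes the composite label a simple maximum, which mirrors the principle that an answer is only as shareable as its most sensitive input.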
Technical professionals working within Microsoft Digital (MSD), our internal IT organization, have taken up this challenge. They include the AI Center of Excellence (AI CoE) team<\/a> and the Microsoft Tenant Trust team that governs our Microsoft 365 tenant.<\/p>\n The endgame here is acceleration. AI accelerates employees\u2019 ability to get questions answered, create things based on dispersed information, summarize key learnings, and make connections that otherwise wouldn\u2019t be there.<\/p>\n \u2014David Johnson, tenant and compliance architect, MSD<\/p>\n<\/blockquote>\n Since the widespread emergence of generative AI technologies over the last year, our governance experts have been busy ensuring our employees are set up for success. Their collaboration helps us ensure we\u2019re governing AI through both guidance from our AI CoE and a governance model for our Microsoft 365 tenant itself.<\/p>\n [Learn how Microsoft is responding to the AI revolution with a Center of Excellence<\/a>. Discover how we\u2019re transforming data governance at Microsoft with Purview and Fabric.<\/a> Explore how we use Microsoft 365 to bolster our teamwork.<\/a>]<\/em><\/p>\n Next-generation AI\u2019s benefits are becoming more evident by the day. Employees are finding ways to simplify and offload mundane tasks and focus on productive, creative, collaborative efforts. They\u2019re also using AI to produce deeper and more insightful analytical work.<\/p>\n \u201cThe endgame here is acceleration,\u201d says David Johnson, a tenant and compliance architect with MSD. \u201cAI accelerates employees\u2019 ability to get questions answered, create things based on dispersed information, summarize key learnings, and make connections that otherwise wouldn\u2019t be there.\u201d<\/p>\n There\u2019s a real urgency for organizations to empower their employees with advanced AI tools\u2014but they need to do so safely. 
Johnson and others in our organization are balancing the desire to move quickly against the need for caution with technology that hasn\u2019t yet revealed all the potential risks it creates.<\/p>\n \u201cWith all innovations\u2014even the most important ones\u2014it\u2019s our journey and our responsibility to make sure we\u2019re doing things in the most ethical way,\u201d says Faisal Nasir, an engineering leader on the AI CoE team. \u201cIf we get it right, AI gives us the power to provide the most high-quality data to the right people.\u201d<\/p>\n We\u2019re going to be one of the first organizations to really get our hands on the whole breadth of AI capabilities. It will be our job to ensure we have good, sensible policies for eliminating unnecessary risks and compliance issues.<\/p>\n \u2014Matt Hempey, program manager lead, MSD<\/p>\n<\/blockquote>\n But in a world where AI copilots can comb through enormous masses of enterprise data in the blink of an eye, security through obscurity doesn\u2019t cut it. We need to ensure we maintain control over where data flows throughout our tenant. It\u2019s about providing information to the people and apps that have proper access and insulating it against ones that don\u2019t.<\/p>\n To this end, our AI CoE team is introducing guardrails that ensure our data stays safe.<\/p>\n The AI CoE brings together experts from all over Microsoft who work across several disciplines, from data science and machine learning to product development and experience design. They use an AI 4 ALL (Accelerate, Learn, Land) model to guide our adoption of generative AI through enablement initiatives, employee education, and a healthy dose of rationality.<\/p>\n \u201cWe\u2019re going to be one of the first organizations to really get our hands on the whole breadth of AI capabilities,\u201d says Matt Hempey, a program manager lead on the AI CoE team. 
\u201cIt will be our job to ensure we have good, sensible policies for eliminating unnecessary risks and compliance issues.\u201d<\/p>\n As Customer Zero for these technologies, we have a responsibility to be cautious\u2014but not at the expense of enablement.<\/p>\n \u201cWe\u2019re not the most risk-averse customer,\u201d Johnson says. \u201cWe\u2019re simply the most risk-aware customer.\u201d<\/p>\n The AI CoE has four pillars of AI adoption: strategy, architecture, roadmap, and culture. As a matter of AI governance, establishing compliance guardrails falls under architecture. This pillar focuses on the readiness and design of infrastructure and services supporting AI at Microsoft, as well as interoperability and reusability for enterprise assets in the context of generative AI.<\/p>\n Fortunately, Microsoft\u2019s existing data hygiene practices provide an excellent baseline for AI governance.<\/p>\n There are three key pieces of internal data hygiene at Microsoft:<\/p>\n With AI, the primary concern is ensuring that we properly label the enterprise data contained in places like SharePoint sites and OneDrive files. AI will then leverage the label, respect policies, and ensure any downstream content-surfacing will drive user awareness of the item\u2019s sensitivity.<\/p>\n AI will always respect user permissions to content, but that assumes source content isn\u2019t overshared. Several different mechanisms help us limit oversharing within the Microsoft tenant:<\/p>\n Microsoft 365 Copilot and other Copilots respect labels and display them to keep users informed of the sensitivity of the response. 
They also respect any rights management service (RMS) protections that block content extraction on file labels.<\/p>\n To make the Copilot platform as successful and securely extensible as possible, we need to ensure we can control data egress from the tenant.<\/p>\n \u2014Keith Bunge, software engineering architect for employee productivity solutions, MSD<\/p>\n<\/blockquote>\n If the steps above are in place, search disablement becomes unnecessary, and overall security improves. \u201cIt isn\u2019t just about AI,\u201d Johnson says. \u201cIt\u2019s about understanding where your information sits and where it\u2019s flowing.\u201d<\/p>\n From there, each of our Copilots and other AI tools in question can safely build a composite label and attach it to its results based on the foundational labels used to create them. That provides the context the tool needs to decide whether to share its results with a user or extend them to a third-party app.<\/p>\n \u201cTo make the Copilot platform as successful and securely extensible as possible, we need to ensure we can control data egress from the tenant,\u201d says Keith Bunge, a software engineering architect for employee productivity solutions within MSD.<\/p>\n We can also use composite labels to trigger confidential information warnings to users. That transparency provides our people with both agency and accountability, further cementing responsible AI use within our culture of trust.<\/p>\n Process, people, and technology are all part of this effort. The framework our team is developing helps us look at data standards from a technical perspective, as well as overall architecture for AI applications as extensions on top of cloud and hybrid application architecture.<\/p>\n \u2014Faisal Nasir, principal architect, MSD<\/p>\n<\/blockquote>\n Ultimately, AI governance is similar to guardrails for other tools and features that have come online within our tenant. 
As an organization, we know the areas we need to review because we already have a robust set of criteria for managing data.<\/p>\n But since this is a new technology with new functionality, the AI CoE is spending time conducting research and partnering with stakeholders across Microsoft to identify potential concerns. As time goes on, we\u2019ll inevitably adjust our AI governance practices to ensure we\u2019re meeting our commitment to responsible AI<\/a>.<\/p>\n \u201cProcess, people, and technology are all part of this effort,\u201d Nasir says. \u201cThe framework our team is developing helps us look at data standards from a technical perspective, as well as overall architecture for AI applications as extensions on top of cloud and hybrid application architecture.\u201d<\/p>\n As part of getting generative AI governance right, we\u2019re conducting extensive user experience and accessibility research. That helps us understand how these tools land throughout our enterprise and keep abreast of new scenarios as they emerge\u2014along with the extensibilities they need and any data implications. We\u2019re also investing time and resources to catch and rectify any mislabeled data, ensuring we seal off any existing vulnerabilities within our AI ecosystem.<\/p>\n Not only does this Customer Zero engagement model support our AI governance work, but it also helps build trust among employees through transparency. That trust is a key component of the employee empowerment that drives adoption.<\/p>\n We think you might find it easier to label your containers before you start thinking about how to label emails and files, or about auto-labeling.<\/p>\nGenerative AI presents limitless opportunities\u2014and some tough challenges<\/strong><\/h2>\n
Tackling good AI governance<\/h2>\n
Building a secure and compliant data foundation<\/h2>\n
\n
\n
Ten steps for getting tenant data governance right<\/h3>\n