{"id":15012,"date":"2024-05-30T16:45:48","date_gmt":"2024-05-30T23:45:48","guid":{"rendered":"https:\/\/www.microsoft.com\/insidetrack\/blog\/?p=15012"},"modified":"2024-09-17T17:23:17","modified_gmt":"2024-09-18T00:23:17","slug":"empowering-our-employees-with-generative-ai-while-keeping-the-company-secure","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/insidetrack\/blog\/empowering-our-employees-with-generative-ai-while-keeping-the-company-secure\/","title":{"rendered":"Empowering our employees with generative AI while keeping the company secure"},"content":{"rendered":"

Generative AI (GenAI) is rapidly changing the way businesses operate, and everyone wants to be in on the action. Whether it\u2019s to automate tasks or enhance efficiency, the allure of what GenAI can do is strong.<\/p>\n

However, for companies considering the adoption of GenAI, there are a multitude of challenges and risks that must be navigated. These range from data exposure or exfiltration where your company\u2019s sensitive data can be accessed by unintended audiences to direct attacks on the models and data sources that underpin them. Not acting and waiting until the world of GenAI settles down poses its own risk. Employees eager to try out the latest and greatest will start using GenAI tools and products that aren\u2019t vetted for use in your enterprise\u2019s environment. It\u2019s safe to say that we\u2019re not just in the era of Shadow IT but Shadow AI, too.<\/p>\n

Add to that the fact that threat actors have begun to use these tools in their activities, and you get a real sense that navigating the cyberthreat landscape of today and tomorrow will be increasingly difficult\u2014and potentially headache-inducing!<\/p>\n

Here at Microsoft, our Digital Security & Resilience (DSR) organization\u2019s Securing Generative AI program has focused on solving this problem since day one: How do we enable our employees to take advantage of the next generation of tools and technologies that enable them to be productive, while maintaining safety and security?<\/p>\n

Building a framework for using GenAI securely<\/h2>\n

At any given moment, there are dozens of teams working on GenAI projects across Microsoft and dozens of new AI tools that employees are eager and excited to use to boost their productivity or use to be more creative.<\/p>\n

When establishing our Securing AI program, we wanted to use as many of our existing systems and structures for the development, implementation, and release of software within Microsoft as possible. Rather than start from scratch, we looked at processes and workstreams that were already established and familiar for our employees and worked to integrate AI rules and guidance into those processes, such as the Security Development Lifecycle (SDL)<\/a>, and the Responsible AI Impact Assessment<\/a> template.<\/span><\/p>\n

Successfully managing the secure roll-out of a technology of this scale and importance takes the collaboration and cooperation of hundreds of people across the company, with representatives from diverse disciplines ranging from engineers and researchers working on the cutting edge of AI technology, to compliance and legal specialists, through to privacy advocates.<\/p>\n

\"Portraits
Justin Roy, Lee Peterson, Prathiba Enjeti, and Vivek Vinod Sharma are part of a team at Microsoft working to keep the company secure while allowing our employees to get the most out of GenAI.<\/figcaption><\/figure>\n

We work extensively with our partners in Microsoft Security, Aether (AI Ethics and Effects in Engineering and Research), the advisory body for Microsoft leadership on AI ethics and effects, and the extended community of Responsible AI. We also work with security champions who are embedded in teams and divisions across the enterprise. Together, this extended community helps develop, test, and validate the guidance and rules that AI experiences must adhere to for our employees to safely use them.<\/p>\n

One of the most popular frameworks for successful change management is the simple three-legged stool. It\u2019s a simple metaphor, emphasizing the need for even efforts across the domains of technology, processes, and people. We\u2019ve focused our efforts to secure GenAI on strengthening and reinforcing the data governance for our technologies, integrating AI security into existing systems and processes, and addressing the human factor by fostering collaboration and community with our employees. The recent announcement of the Secure Future Initiative<\/span><\/a> with its six security pillars emphasizes security as a top priority across the company to advance cybersecurity protections.<\/span><\/p>\n

Incorporating AI-focused security into existing development and release practices<\/h2>\n

The SDL has been central to our development and release cycle at Microsoft for more than a decade, ensuring that what we develop is secure by design, by default, and secure in deployment. We focused on strengthening the SDL to handle the security risks posed by the technology underlying GenAI.<\/p>\n

We\u2019ve worked to enhance embedded security requirements for AI, particularly in monitoring and threat detection. Mandating audit logging at the platform level for all systems provides visibility into which resources are accessed, which models are used, and the type and sensitivity of the data accessed during interactions with our various Copilot offerings. This is crucial for all AI systems, including large language models (LLMs), small language models (SLMs), and multimodal models (MMMs) that focus on partial or total task completion.<\/p>\n

Preventative measures are an equally important part of our journey to securing GenAI, and there\u2019s no shortage of work that\u2019s been done on this front. Our threat modeling standards<\/a> and red teaming<\/a> for GenAI systems have been revamped to help engineers and developers consider threats and vulnerabilities tied to AI. All systems involving GenAI must go through this process before being deployed to our data tenant for our employees to use. Our standards are under constant review and are updated based on the discoveries from our researchers and the Microsoft Security Response Center<\/a>.<\/span><\/p>\n

\n
\n
\n
\n

\"\"<\/p>\n

Sharing our acceptance criteria for AI systems<\/strong><\/h2>\n<\/div>\n
\n

As GenAI and the types of risks and threats to models and systems are ever evolving, so too is our acceptance criteria for deploying AI to the enterprise. Here are some of the key points we take into consideration for our acceptance criteria:<\/p>\n

Representatives from diverse disciplines:<\/strong> Our journey begins when a diverse team of experts. engineers, compliance teams, security SMEs, privacy advocates, and legal minds come together. Their collective wisdom ensures a holistic perspective.<\/p>\n

Evaluate against enterprise standards: Every GenAI feature is subjected to rigorous scrutiny against our enterprise standards. This isn\u2019t a rubber-stamp exercise, it\u2019s a deep dive into ethical considerations, potential security, privacy, and AI risks, and alignment with the Responsible AI standard<\/a>.<\/p>\n

Risk assessment and management:<\/strong> The risk workflow starts in our system to amplify risk awareness and management across leadership teams. It\u2019s more than a formality, it\u2019s a structured process that keeps us accountable. Risks evolve, and so do our mitigation strategies, which is why we revisit the risk assessment of a feature every three to six months. Our assessments are a living guide that adapts to the landscape.<\/p>\n

Phased deployment to companywide impact:<\/strong> We used a phased deployment to allow us to monitor, learn, and fine-tune.<\/p>\n

Risk contingency planning:<\/strong> This isn\u2019t about avoiding risks altogether; it\u2019s about managing them. By addressing concerns upfront, we ensure that GenAI deployment is safe, secure, and aligned with our values.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n

By integrating AI into these existing processes and systems, we help ensure that our people are thinking about the potential risks and liabilities involved in GenAI throughout the development and release cycle\u2014not only after a security event has occurred.<\/p>\n

Improving data governance<\/h2>\n

While keeping Gen-AI models and AI systems safe from threats and harms is a top priority, this alone is insufficient for us to consider GenAI as secure and safe. We also see data governance as essential to prevent improper access, improper use, and to reduce the chance of data exfiltration\u2014accidental or otherwise.<\/p>\n

\"Graphic
Discovery, protection, and governance are key elements to protecting the company while enabling our employees to take advantage of GenAI.<\/figcaption><\/figure>\n

At the heart of our data governance strategy is a multi-part expansion of our labeling and classification efforts, which applies at both the model level and the user level.<\/p>\n

We set default labels across our platforms and the containers that store them using Purview Information Protection<\/a> to ensure consistent and accurate tagging of sensitive data by default. We also employ auto-labeling policies where appropriate for confidential or highly confidential documents based on the information they contain. Data hygiene is an essential part of this framework; removing outdated records held in containers such as SharePoint reduces the risk of hallucinations or surfacing incorrect information and is something we reinforce through periodic attestation.<\/p>\n

To prevent data exfiltration, we rely on our Purview Data Loss Prevention (DLP)<\/a> policies to identify sensitive information types and automatically apply the appropriate policies at the controls at the application or service level (e.g. Microsoft 365), and Defender for Cloud Apps (DCA)<\/a> to detect the use of risky websites and applications, and if necessary, block access to them. By combining these methods, we\u2019re able to reduce the risk of sensitive data leaving our corporate perimeter\u2014accidentally or otherwise.<\/p>\n

Encouraging deep collaboration and sharing of best practices<\/h2>\n

So far, we\u2019ve covered the management of GenAI technologies and how we ensure that these tools are safe and secure to use. Now it\u2019s time to turn our attention to our people, the employees who work with and build with these GenAI systems.<\/p>\n

We believe that anyone should be able to use GenAI tools confidently, knowing that they are safe and secure. But doing so requires essential knowledge, which might not be entirely self-evident. We\u2019ve taken a three-pronged approach to solving this need with training, purpose-made resource materials, and opportunities for our people to develop their skills.<\/p>\n

All employees and contract staff working at Microsoft must take our three-part mandatory companywide security training released throughout the year. The safe use of GenAI is comprehensively covered, including guidance on what AI tools to use and when to use them. Additionally, we\u2019ve added extensive guidance and documentation to our internal digital security portal ranging from what to be mindful of when working with LLMs to the tools which are best suited to various tasks and projects.<\/p>\n

With so many of our employees wanting to learn how to use GenAI tools, we\u2019ve worked with teams across the company to create resources and venues where our employees can roll up their sleeves and work with AI hands-on in a way that\u2019s safe and secure. Hackathons are a big deal at Microsoft, and we\u2019ve partnered with several events including the main flagship event, which draws in more than 50,000 attendees. The Skill-Up AI presentation series hosted by our partners at the Microsoft Garage<\/a> allows curious employees to learn the safe and secure way to use the latest GenAI technologies not only in their everyday work, but also in their creative endeavors. By integrating guidance into the learning journey, we help enable safe use of GenAI without stifling creativity.<\/p>\n

\"Key<\/h2>\n

Here are our suggestions on how to empower your employees with GenAI while also keeping your company secure:<\/p>\n