AI innovations for a more secure future unveiled at Microsoft Ignite
Company delivers advances in AI and posture management, unprecedented bug bounty program, and updates on its Secure Future Initiative.
84% of surveyed organizations want to feel more confident about managing and discovering data input into AI apps and tools.
The technology landscape is undergoing a massive transformation, and AI is at the center of this change.
Join us at Microsoft Ignite 2024 for sessions, keynotes, and networking aimed at giving you tools and strategies to put security first in your organization.
At Microsoft, we are committed to Trustworthy AI and are building industry-leading technology to support it. Our commitments and capabilities go hand in hand to ensure our customers and developers are protected at every layer.
Join us in November 2024 in Chicago for Microsoft Ignite to connect with industry leaders and learn about our newest solutions and innovations.
In our newly released whitepaper, we share strategies to prepare for the top data challenges and new data security needs in the age of AI.
Join Microsoft Security leaders and other security professionals from around the world at Black Hat USA 2024 to learn the latest information on security in the age of AI, cybersecurity protection, threat intelligence insights, and more.
Microsoft recently discovered a new type of generative AI jailbreak method called Skeleton Key that could impact the implementations of some large and small language models. This new method has the potential to subvert either the built-in model safety or platform safety systems and produce any content.
Microsoft security researchers, in partnership with other security experts, continue to proactively explore and discover new types of AI model and system vulnerabilities. In this post, we provide information about AI jailbreaks, a family of vulnerabilities that can occur when the defenses implemented to prevent AI from producing harmful content fail.
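To make the layered-defense idea concrete, here is a minimal sketch of wrapping a model call with both an input check and an output check, so a jailbreak must defeat the platform-level screens as well as the model's built-in safety. The `generate` callable, the phrase lists, and the filter functions are all hypothetical placeholders for illustration; production systems rely on dedicated safety classifiers rather than keyword matching.

```python
# Hypothetical defense-in-depth sketch around an LLM call.
# Assumes a caller-supplied generate(prompt) -> str function; the phrase
# lists below are illustrative stand-ins for real safety classifiers.

SUSPECT_PHRASES = [
    "ignore your previous instructions",
    "update your behavior guidelines",
    "respond with a warning instead of refusing",
]


def input_filter(prompt: str) -> bool:
    """Return True if the prompt looks like a guideline-override attempt."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in SUSPECT_PHRASES)


def output_filter(response: str) -> bool:
    """Return True if the response should be withheld (placeholder check)."""
    return "step-by-step instructions for" in response.lower()


def guarded_generate(prompt: str, generate) -> str:
    # Layer 1: platform safety screens the prompt before it reaches the model.
    if input_filter(prompt):
        return "Request blocked by platform safety check."
    # Layer 2: the model's own built-in safety applies during generation.
    response = generate(prompt)
    # Layer 3: platform safety screens the output before it reaches the user.
    if output_filter(response):
        return "Response withheld by platform safety check."
    return response
```

The point of the sketch is the layering, not the specific checks: a jailbreak such as Skeleton Key targets the model's built-in behavior guidelines, so independent platform-level checks before and after the model call give defenders additional opportunities to catch a subverted response.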