{"id":23173,"date":"2026-04-16T08:15:00","date_gmt":"2026-04-16T15:15:00","guid":{"rendered":"https:\/\/www.microsoft.com\/insidetrack\/blog\/?p=23173"},"modified":"2026-04-20T08:24:59","modified_gmt":"2026-04-20T15:24:59","slug":"microsoft-ciso-advice-how-to-build-trustworthy-agentic-ai","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/insidetrack\/blog\/microsoft-ciso-advice-how-to-build-trustworthy-agentic-ai\/","title":{"rendered":"Microsoft CISO advice: How to build trustworthy agentic AI"},"content":{"rendered":"\n

Building production-ready solutions with agentic AI comes with inherent risks. When agents make mistakes or hallucinate, the potential impacts can multiply rapidly.<\/p>\n\n\n\n

\u201cIt turns out that it’s very easy to write AI-powered software, but it’s very hard to write AI-powered software that works right in real-world cases,\u201d says Yonatan Zunger, CVP and deputy CISO for Microsoft.<\/p>\n\n\n\n

Zunger explains why rigorous testing is essential if you want to build trustworthy agentic AI.<\/p>\n\n\n\n

\n
\n
\"\"<\/figure>\n<\/div>\n\n\n\n
\n

Learn from our experience <\/strong><\/strong><\/p>\n\n\n\n

Read our practical advice about applying security fundamentals to AI.<\/a><\/p>\n<\/div>\n<\/div>\n\n\n\n

\n