{"id":1146661,"date":"2025-08-05T09:00:07","date_gmt":"2025-08-05T16:00:07","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-video&p=1146661"},"modified":"2025-08-06T13:42:45","modified_gmt":"2025-08-06T20:42:45","slug":"veritrail-detect-hallucination-and-trace-provenance-in-ai-workflows","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/veritrail-detect-hallucination-and-trace-provenance-in-ai-workflows\/","title":{"rendered":"VeriTrail: Detect hallucination and trace provenance in AI workflows"},"content":{"rendered":"\n

Dasha Metropolitansky, Research Data Scientist, Microsoft Research Special Projects, introduces VeriTrail, a new method for closed-domain hallucination detection in multi-step AI workflows. Unlike prior methods, VeriTrail provides traceability: it identifies where hallucinated content was likely introduced, and it establishes the provenance of faithful content by tracing a path back to the source text. VeriTrail also outperforms baseline methods at detecting hallucinations. Together, traceability and strong detection performance make VeriTrail a powerful tool for auditing the integrity of content generated by language models.<\/p>\n\n\n\n