{"id":969759,"date":"2023-09-25T09:00:00","date_gmt":"2023-09-25T16:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=969759"},"modified":"2024-06-10T10:14:06","modified_gmt":"2024-06-10T17:14:06","slug":"autogen-enabling-next-generation-large-language-model-applications","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/autogen-enabling-next-generation-large-language-model-applications\/","title":{"rendered":"AutoGen: Enabling next-generation large language model applications"},"content":{"rendered":"\n

“Capabilities like AutoGen are poised to fundamentally transform and extend what large language models are capable of. This is one of the most exciting developments I have seen in AI recently.”

Doug Burger, Technical Fellow, Microsoft

\"Figure<\/a>
Figure 1. AutoGen enables complex LLM-based workflows using multi-agent conversations. (Left) AutoGen agents are customizable and can be based on LLMs, tools, humans, and even a combination of them. (Top-right) Agents can converse to solve tasks. (Bottom-right) The framework supports many additional complex conversation patterns.<\/em><\/figcaption><\/figure>\n\n\n\n

It requires a lot of effort and expertise to design, implement, and optimize a workflow that can leverage the full potential of large language models (LLMs). Automating these workflows has tremendous value. As developers begin to create increasingly complex LLM-based applications, workflows will inevitably grow more intricate. The potential design space for such workflows could be vast and complex, thereby heightening the challenge of orchestrating an optimal workflow with robust performance.

AutoGen is a framework for simplifying the orchestration, optimization, and automation of LLM workflows. It offers customizable and conversable agents that leverage the strongest capabilities of the most advanced LLMs, like GPT-4, while addressing their limitations by integrating with humans and tools and by enabling conversations between multiple agents via automated chat.
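To make the idea of conversable agents concrete, here is a minimal sketch of a two-agent setup using the pyautogen package: an LLM-backed assistant paired with a user proxy that can execute the code the assistant writes. It is an illustration rather than a prescribed workflow; the model name, placeholder API key, working directory, and task message are assumptions for the example.

```python
import autogen

# Assumed configuration: a GPT-4 endpoint with a placeholder API key.
config_list = [{"model": "gpt-4", "api_key": "YOUR_OPENAI_API_KEY"}]

# LLM-backed agent that writes replies (including code) to solve the task.
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

# Proxy agent standing in for the human: here it runs fully automated and
# executes any code blocks it receives in a local working directory.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # "ALWAYS" would keep a human in the loop
    code_execution_config={"work_dir": "coding"},
)

# Kick off the automated multi-turn conversation between the two agents.
user_proxy.initiate_chat(
    assistant,
    message="Plot a chart of NVDA and TSLA stock price change YTD.",
)
```

In this sketch the two agents keep exchanging messages until the task completes or a termination condition is hit; changing human_input_mode turns the same script into a human-in-the-loop session.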


With AutoGen, building a complex multi-agent conversation system boils down to: