Prompt Engineering: Improving our Ability to Communicate with an LLM

Microsoft Research Blog, May 30, 2023
By Zewei Xu, Senior Applied Scientist, and Will Dubyak, Principal Program Manager

In March we announced Dynamics 365 Copilot and Copilot in Power Platform, which generated curiosity about how we've been able to bring more context and specificity to generative AI models.

This post explains how we use retrieval augmented generation (RAG) to ground responses, and how we use other prompt engineering to properly set context in the input to large language models (LLMs), making natural language generation (NLG) technology easier and faster for users. It is a look at two components of our effort to deliver NLG: prompt engineering and knowledge grounding.

In early use, a key tension for engineering has become increasingly apparent. Pretrained NLG models are powerful, but in the absence of contextual information their responses are necessarily antiseptic and generic. Providing access to customer data is an option, but the need for data security and privacy precludes many sharing options at scale.

Our challenge is to balance these competing forces: enable access to the power of these models for contextually relevant and personalized text generation, while at the same time providing every privacy and security protection our users expect.

Our approach uses two methods. The first adds relevant information to the user prompt to pass context to the underlying NLG model. The second intervenes in the data layer so that contextual information is available in a searchable format while remaining secure.
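The second method, grounding via retrieval, can be illustrated with a minimal sketch. The retriever, the toy document store, and the precomputed embeddings below are hypothetical stand-ins for illustration only, not the production pipeline: the idea is simply that context relevant to the user's question is fetched from a secure, searchable store and injected into the prompt before the model is called.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Document, cosine_similarity, retrieve, and build_grounded_prompt are
# illustrative names, not part of any product API.
from dataclasses import dataclass


@dataclass
class Document:
    text: str
    embedding: list  # precomputed vector for this text (hypothetical)


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def retrieve(query_embedding, store, k=2):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(
        store,
        key=lambda d: cosine_similarity(query_embedding, d.embedding),
        reverse=True,
    )
    return ranked[:k]


def build_grounded_prompt(question, query_embedding, store):
    """Inject retrieved context into the prompt before calling the LLM."""
    context = "\n".join(d.text for d in retrieve(query_embedding, store))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
    )
```

Because only the retrieved snippets enter the prompt, the bulk of the customer's data never leaves its secure store; the model sees just the context needed to answer the question at hand.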
Note that because we call the Generative Pre-trained Transformer (GPT) through Azure OpenAI, all standard Azure protections (Trust your cloud | Microsoft Azure) are assumed, and thus excluded from explicit discussion.

Prompt Engineering

The key idea behind prompt engineering is to provide enough information in the instructions to the AI model that the user gets exactly the hoped-for result.

The prompt is the primary mechanism for access to NLG capabilities. It is an enormously effective tool, but despite its flexibility there are expectations for how information is passed if user intent is to be accurately converted into the expected output. It's obvious that prompts must be accurate and precise: otherwise, the model is left guessing. But there are other dimensions to prompt engineering that enable the secure access we require to generate useful insight.

Our prompt has five components, each a necessary part of the pipeline. Order matters: ours follow the accepted practice of ascending order of importance, to accommodate recency bias.

A sample prompt is attached as an appendix.
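The ordering principle above can be sketched as follows. The component names in this example are illustrative placeholders (the post does not enumerate its five production components here); what matters is that the pieces are concatenated in ascending order of importance, so the most important text sits closest to the end of the prompt, where recency bias gives it the most weight.

```python
# Sketch: assemble a prompt from ordered components, least important
# first, so the highest-priority text appears last (recency bias).
# The labels below are hypothetical, not the production component set.
def assemble_prompt(components):
    """components: list of (label, text) pairs, least important first."""
    parts = [f"## {label}\n{text}" for label, text in components if text]
    return "\n\n".join(parts)


prompt = assemble_prompt([
    ("Examples", "Q: ...\nA: ..."),                       # lowest priority
    ("Conversation history", "User asked about invoices."),
    ("Grounding data", "Invoice 42 is due June 15."),
    ("Rules", "Answer only from the grounding data."),
    ("User question", "When is invoice 42 due?"),          # highest priority
])
```

Placing the user's question last, after the rules and grounding data, keeps the model's attention on the instruction it must act on most directly.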