{"id":984255,"date":"2023-11-20T09:00:00","date_gmt":"2023-11-20T17:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=984255"},"modified":"2023-12-11T10:00:12","modified_gmt":"2023-12-11T18:00:12","slug":"lifelong-model-editing-in-large-language-models-balancing-low-cost-targeted-edits-and-catastrophic-forgetting","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/lifelong-model-editing-in-large-language-models-balancing-low-cost-targeted-edits-and-catastrophic-forgetting\/","title":{"rendered":"Lifelong model editing in large language models: Balancing low-cost targeted edits and catastrophic forgetting"},"content":{"rendered":"\n
\"Illustrated<\/figure>\n\n\n\n

*Editor's note, Dec. 11, 2023 – The section regarding fabrication and incoherence was updated for accuracy.*

Large language models (LLMs) are profoundly useful for a vast array of difficult tasks. But they sometimes make unpredictable mistakes or perpetuate biased language. These sorts of errors tend to arise over time due to changes in the underlying data or in user behavior. This necessitates targeted, cost-effective fixes to these models and the real-world applications they support.

Repeated pretraining or fine-tuning might be used to achieve these fixes. However, these solutions are often too computationally expensive. For example, LLaMA 1 was trained for 21 days on 2,048 A100 GPUs, at a cost of over $2.4 million. Fine-tuning LLMs requires more GPU resources than many research labs can access consistently and affordably. Moreover, it remains largely unknown which data should be added to or removed from a corpus to correct specific behaviors without affecting unrelated inputs.

To keep LLMs up to date without expensive training, *model editing* has recently been proposed as a paradigm for making targeted updates to big models. Most model editors update a model *once*, injecting a batch of corrections. But mistakes are often discovered sequentially over time and must be corrected quickly. In other words, *lifelong* model editing, where a stream of mistakes is encountered and must be addressed immediately, is essential once models are deployed. This requires making many edits sequentially, a setting in which existing editors are known to fail. Success here means correcting all edits in sequence without forgetting old fixes and without degrading performance on unrelated inputs. But what exactly is an *edit*? In *Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors*, three types of edits are considered:

1. *Updating factual knowledge.* Let's say we have a pre-trained question-answering model: we pass questions in, and the model returns answers. But as the world changes, these answers become outdated. For example, the answer to "Who is the president of the U.S.?" should change after an election. Here, an edit is a tuple (an ordered pair of values) containing a question (e.g., "Who is the president of the U.S.?") and the correct answer (e.g., "Biden") for that question.
2. *Keeping up with flipping labels.* Ground truth in classification tasks can change over time. For example, when U.S. courts adopt new language to describe existing topics, a document's correct label can change. In such a case, a model trained on the old labels must be corrected. Targeted edits are especially important when only specific types of data are relabeled, which is common. Here, an edit pairs an input (e.g., a court document) with a new label (e.g., a topic).
3. *Fabrication and incoherence in LLMs.* A key challenge in using LLMs is avoiding instances where they generate language that is ungrounded in context or reality. This happens more in some models than others, and when it does, the ensuing edit should be as small as possible. To explore the effectiveness of this approach, the authors considered mitigating this problem when generating biographies of famous people. Upon identifying hand-annotated fabrications, the LLM was edited to instead produce corresponding sentences from real Wikipedia articles. Here, an edit is a prompt and a corresponding response that the existing model finds unlikely. (All three cases reduce to input-output pairs, as sketched in the code after this list.)
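Across all three settings, an edit can be viewed as a pair of an input and a desired output. A minimal illustrative sketch in Python follows; the `Edit` class, the prompt string in the third example, and the angle-bracketed placeholders are assumptions for illustration, not the paper's code:

```python
from dataclasses import dataclass

@dataclass
class Edit:
    prompt: str  # model input that currently yields an undesired output
    target: str  # output the edited model should produce instead

# 1. Updating factual knowledge (question answering)
fact_edit = Edit("Who is the president of the U.S.?", "Biden")

# 2. Keeping up with flipping labels (document classification)
label_edit = Edit("<text of a court document>", "<its new topic label>")

# 3. Fabrication and incoherence (open-ended generation)
fabrication_edit = Edit(
    "Tell me about Brian Hughes.",
    "Brian Hughes (born 1955) is a Canadian guitarist whose work draws "
    "from both the smooth jazz and world music genres.",
)
```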
    \"This<\/a>
    Figure 1.<\/strong> Overview of lifelong model editing with GRACE. Models make important errors that must be corrected. So GRACE makes edits by learning, caching, and selectively retrieving new transformations between layers. Over long sequences of edits, which appear sporadically and require quick fixes, GRACE codebooks grow and adapt.<\/figcaption><\/figure>\n\n\n\n

To make cost-effective edits to LLMs, we propose General Retrieval Adaptors for Continual Editing, or GRACE. GRACE is the first method to enable thousands of sequential edits to any pre-trained model architecture using only streaming errors. The approach is simple and effective: to edit a model so it outputs a chosen label for an input, pick a layer in the model and an embedding at that layer to represent the input. For example, the embedding the fourth layer computes for the final token of an input sentence can be used. This embedding is cached, and a new embedding is learned such that substituting the new embedding for the old one makes the model produce the desired response. The original embedding is referred to as a *key* and the learned embedding as a *value*; learning the value is straightforward via gradient descent. The key and value are then stored in a *codebook*, which acts as a dictionary. When a new input is passed to the model, its embedding, referred to as a *query*, is compared to the existing keys. If a query matches a key, the corresponding value is looked up and the edit is applied. As edits stream in, they are simply added to the codebook, allowing many edits to be applied sequentially.
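A minimal PyTorch sketch of this key-value mechanism is below. The names (`GraceCodebook`, `learn_value`, `model_tail`) and the distance-based matching rule, explained further in the next section, are illustrative assumptions rather than the authors' implementation:

```python
import torch

class GraceCodebook:
    """Hypothetical container for GRACE-style key-value edits at one layer."""

    def __init__(self, epsilon: float):
        self.keys, self.values, self.radii = [], [], []
        self.epsilon = epsilon  # default influence radius for new keys

    def add(self, key: torch.Tensor, value: torch.Tensor):
        # Cache the original embedding (key) and the learned replacement
        # embedding (value) with the default influence radius.
        self.keys.append(key.detach())
        self.values.append(value.detach())
        self.radii.append(self.epsilon)

    def lookup(self, query: torch.Tensor) -> torch.Tensor:
        # If the query lands near enough to a cached key, substitute the
        # cached value; otherwise pass the activation through unchanged.
        for key, value, radius in zip(self.keys, self.values, self.radii):
            if torch.dist(query, key) < radius:
                return value
        return query

def learn_value(model_tail, key: torch.Tensor, loss_fn, steps=100, lr=0.1):
    # Learn the value by gradient descent: start from the cached key and
    # optimize it so the layers above (model_tail) yield the desired output.
    value = key.detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([value], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss_fn(model_tail(value)).backward()
        optimizer.step()
    return value.detach()
```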

    \"A<\/a>
    Table 1.<\/strong> GRACE outperforms existing model editors by successfully editing models without forgetting previous edits or unrelated training data. On the zsRE and SCOTUS datasets, GRACE achieves substantial compression. On the Hallucination dataset, GRACE successfully embeds long future sequences of tokens into cached values.<\/figcaption><\/figure>\n\n\n\n

But isn't this just memorization? How can generalizable edits be achieved without memorizing every new input? Instead of always adding new keys, each new key is paired with an *influence radius*: a ball of radius ε surrounding the key. If *any* query lands inside this ε-ball, the key's corresponding value is retrieved and the edit is applied. Thus, inputs that are *similar* to cached edits are also updated. Occasionally, a new key's ε-ball may conflict with an existing key's. If the conflicting keys have *different* values, their ε-balls are shrunk to just barely touch. If they have the *same* value, the existing key's ε is expanded to include the new input. Tuning ε helps achieve small codebooks that generalize and can successfully make thousands of edits in a row.
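Continuing the hypothetical `GraceCodebook` sketch above, the conflict rules might look like the following; the distance metric and the exact radius updates are assumptions for illustration:

```python
import torch

def add_with_conflict_handling(cb, new_key: torch.Tensor,
                               new_value: torch.Tensor) -> None:
    # Hypothetical conflict handling for the GraceCodebook sketched above.
    for i, (key, value) in enumerate(zip(cb.keys, cb.values)):
        dist = torch.dist(new_key, key).item()
        if dist < cb.radii[i] + cb.epsilon:  # the two epsilon-balls would overlap
            if torch.equal(value, new_value):
                # Same value: expand the existing ball to cover the new
                # input instead of storing a redundant entry.
                cb.radii[i] = max(cb.radii[i], dist)
            else:
                # Different values: shrink the radii so the balls just touch,
                # then store the new edit alongside the old one.
                cb.radii[i] = min(cb.radii[i], dist / 2)
                cb.keys.append(new_key.detach())
                cb.values.append(new_value.detach())
                cb.radii.append(dist - cb.radii[i])
            return
    cb.add(new_key, new_value)  # no conflict: store with the default radius
```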

To compare GRACE's ability to make generalizable edits against existing methods, two bidirectional models (T5 and BERT) and one autoregressive model (GPT2-XL) were used. For question answering, T5 was used along with the zsRE question-answering dataset, which includes questions targeted for relation extraction. Twenty rephrased versions of each question were extracted; 10 were used during editing and the other 10 were held out as unseen inputs. The proposed approach outperformed existing methods when correcting 1,000 edits sequentially, as shown in Table 1, and it used only 137 keys to make the edits, demonstrating the method's efficiency. This level of generalization is better than prior work and shows promising potential for correcting future mistakes. GRACE can also successfully edit a BERT model that was trained on U.S. Supreme Court documents from before 1992 and tested on documents from after 1992, for which the label distribution shifted. An experiment was also conducted using GRACE with an autoregressive model, GPT2-XL, to edit mistakes related to fabrication, with promising results for encoding long sequences of edits. For example, when asked to generate a biography of Brian Hughes, GRACE successfully encouraged GPT2-XL to respond: "Brian Hughes (born 1955) is a Canadian guitarist whose work draws from both the smooth jazz and world music genres," which exactly matches the requested biography, using only one cached *value*. Another interesting observation was that GRACE edits were robust to the choice of edited layer, though later layers were harder to edit. Further, a clear balance between memorization and generalization was observed when choosing ε, as shown in Figure 2. Finally, a key feature of GRACE is that the codebook is detached from the pre-trained model, leaving its weights untouched. This makes it possible to undo any edit at any time and to inspect the behavior of the edits without high computational cost.

    \"A<\/a>
    Figure 2.<\/strong> GRACE’s performance when editing different blocks of a T5 model for different choices of epsilon. This choice drives a balance between accuracy on unrelated training data (TRR) and previous edits (ERR), as shown by a small epsilon (a) and a big epsilon (b).<\/figcaption><\/figure>\n\n\n\n

## Summary

GRACE presents a different perspective on model editing, in which representations are directly modified and transformations are cached sequentially. Thousands of edits can be made in sequence while maintaining only a small set of codebooks throughout. This narrows the gap to the deployment needs of real-world applications, where edits are discovered over time and should be addressed cost-effectively. By correcting behaviors efficiently and expanding sequential editing to other model properties, like fairness and privacy, this work can potentially enable a new class of solutions for adapting LLMs to meet user needs over long deployment lifetimes.
