{"id":972675,"date":"2023-10-04T10:18:38","date_gmt":"2023-10-04T17:18:38","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-blog-post&p=972675"},"modified":"2023-10-04T15:35:48","modified_gmt":"2023-10-04T22:35:48","slug":"whos-harry-potter-making-llms-forget-2","status":"publish","type":"msr-blog-post","link":"https:\/\/www.microsoft.com\/en-us\/research\/articles\/whos-harry-potter-making-llms-forget-2\/","title":{"rendered":"Who’s Harry Potter? Making LLMs forget"},"content":{"rendered":"\n

Ronen Eldan (Microsoft Research) and Mark Russinovich (Azure)

The Challenge of Unlearning in an AI Era

Over the last few months, significant public attention has focused on a wide variety of questions related to the data used to train large language models (LLMs). This largely centers on the issue of copyright, but extends to concerns about private information, biased content, false data, and even toxic or harmful elements. It's clear that for some content, merely training on it could be problematic. What do we do if we realize that some of our training data needs to be removed after the LLM has already been trained?

Can Machines Really Forget?

Fine-tuning LLMs to incorporate new information is well understood and relatively straightforward, but how do we make a model forget information it has already learned? Simply put, unlearning isn't as easy as learning. To analogize, imagine trying to remove specific ingredients from a cake after it has been baked: it seems nearly impossible. Fine-tuning can introduce new flavors to the cake, but removing a specific ingredient? That's a tall order.

Moreover, the cost of retraining from scratch can be astronomical: training massive models can cost tens of millions of dollars or more. Given these hurdles, unlearning remains one of the most challenging problems in the AI sphere. There is skepticism in the community around its feasibility; many believe that perfect unlearning may be a pipe dream and that even approximations are daunting. The absence of concrete research on the topic only amplifies the doubts.

A New Dawn: Forgetting Harry Potter

In a new paper, we decided to embark on what we initially thought might be impossible: make the Llama2-7b model, trained by Meta, forget the magical realm of Harry Potter. Several sources claim that this model's training data included the "books3" dataset, which contains the books among many other copyrighted works (including the novels written by a co-author of this work). To emphasize the depth of the model's recall, consider this: prompt the original model with a very generic-looking prompt such as "When Harry went back to school that fall," and it continues with a detailed story set in J.K. Rowling's universe.

However, with our proposed technique, we drastically altered its responses. Let's look at a few examples of prompts and compare the completions given by the original Llama2-7b model with the ones given by our fine-tuned model:

\"Comparison<\/figure>\n\n\n\n

We remark that in the absence of knowledge about the books, the model resorts to hallucination. The tendency of our fine-tuned model to fabricate answers is not a byproduct of our unlearning process but an inherent trait of the Llama2-7b model itself: when queried about generic or fictional entities, the model often invents responses rather than admitting unfamiliarity. While our study concentrated on unlearning, this behavior points to another challenge with LLMs, namely their inclination to generate an answer rather than admit ignorance. Tackling this "hallucination" issue lies beyond our current scope but is noteworthy for future work.

The ability to unlearn content would not be very valuable if it caused the model's performance on unrelated tasks to degrade. As the results below show, while the model "forgets" Harry Potter, its performance on general benchmarks remains consistent, showcasing the effectiveness of our approach:

\"Benchmark<\/figure>\n\n\n\n

To illustrate the process of forgetting as the unlearning algorithm progresses, the following plot shows the probabilities that our model assigns to the next word when completing the prompt "Harry Potter studies":

\"Next<\/figure>\n\n\n\n

Observe how the probability of the word "magic" decays whereas the probabilities of generic words like "at", "the", and "law" increase.
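For readers who want to reproduce this kind of measurement themselves, here is a minimal sketch using the Hugging Face transformers library. The model identifier below is a placeholder: swap in the original Llama2-7b, our fine-tuned model, or any intermediate checkpoint you wish to inspect.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint: substitute whichever model you want to inspect
# (e.g., the original Llama2-7b or an unlearned checkpoint).
MODEL_ID = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.float16)
model.eval()

prompt = "Harry Potter studies"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    next_token_logits = model(input_ids).logits[0, -1]  # logits for the next token
probs = torch.softmax(next_token_logits, dim=-1)

# Probability assigned to the first sub-token of a few candidate continuations.
for word in ["magic", "at", "the", "law"]:
    token_id = tokenizer(" " + word, add_special_tokens=False).input_ids[0]
    print(f"{word}: {probs[token_id].item():.4f}")
```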

While our method is designed to target specific content, like the Harry Potter books, it may inadvertently cause the model to forget closely related content beyond the intended target. For instance, it might not only forget details of the books, but also general knowledge related to Harry Potter, such as Wikipedia entries about the series. Addressing this simply requires fine-tuning the unlearned model on the knowledge it should retain.

While we've provided many examples to showcase its capabilities, we firmly believe that experiencing the model firsthand provides the most genuine impression of its efficacy. Therefore, we've made our fine-tuned model available on Hugging Face for hands-on exploration. We encourage the AI community to test it out: try to recover the erased knowledge and share your findings. Your feedback will be invaluable in refining our approach.
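As a starting point, here is a minimal sketch for loading the released model and prompting it with one of the examples above. The repository name is our best understanding of the Hugging Face identifier; please confirm it on the model card before running.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Assumed Hugging Face repository name for the released model; verify on the model card.
UNLEARNED_ID = "microsoft/Llama2-7b-WhoIsHarryPotter"

tokenizer = AutoTokenizer.from_pretrained(UNLEARNED_ID)
model = AutoModelForCausalLM.from_pretrained(UNLEARNED_ID, device_map="auto")
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = "When Harry went back to school that fall,"
print(generator(prompt, max_new_tokens=60, do_sample=False)[0]["generated_text"])
```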

How Does It Work?

Our technique leans on a combination of several ideas (a short illustrative sketch follows the list):

1. Identifying tokens by creating a reinforced model: We create a model whose knowledge of the content to be unlearned is reinforced by further fine-tuning on the target data (such as the Harry Potter books), and we look for tokens whose probabilities increase significantly. These are likely content-related tokens that we want to avoid generating.

2. Expression replacement: Unique phrases from the target data are swapped with generic counterparts. The model then predicts alternative labels for these tokens, simulating a version of itself that never learned the target content.

3. Fine-tuning: With these alternative labels in hand, we fine-tune the model. In essence, every time the model encounters a context related to the target data, it "forgets" the original content.
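To make steps 1 and 3 more concrete, here is a minimal sketch of how one might combine the baseline and reinforced models to produce "generic" next-token labels and then use them as fine-tuning targets. This is an illustrative paraphrase rather than the exact code from the paper: the model paths, the ReLU-based combination rule, and the coefficient alpha are simplifications of the approach described there.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder identifiers: the baseline model and a copy that has been
# further fine-tuned ("reinforced") on the content to be unlearned.
BASELINE_ID = "meta-llama/Llama-2-7b-hf"
REINFORCED_ID = "path/to/reinforced-checkpoint"  # hypothetical local checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASELINE_ID)
baseline = AutoModelForCausalLM.from_pretrained(BASELINE_ID, torch_dtype=torch.float16)
reinforced = AutoModelForCausalLM.from_pretrained(REINFORCED_ID, torch_dtype=torch.float16)

@torch.no_grad()
def generic_labels(text: str, alpha: float = 1.0) -> torch.Tensor:
    """Derive alternative next-token labels that avoid tokens the reinforced
    model boosts relative to the baseline (i.e., content-specific tokens)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    v_base = baseline(ids).logits      # shape: [1, seq_len, vocab]
    v_reinf = reinforced(ids).logits
    # Penalize tokens whose logits rose under reinforcement; what remains
    # approximates the predictions of a model that never saw the content.
    v_generic = v_base - alpha * torch.relu(v_reinf - v_base)
    return v_generic.argmax(dim=-1)    # one alternative label per position

# In step 3, these labels replace the original next tokens of the unlearn
# corpus as targets for a standard cross-entropy fine-tuning pass over the
# baseline model.
```

The intuition behind this sketch is that subtracting the reinforcement signal pushes the model toward completions that a comparable model without the target content would produce.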

For further information about the technique, we refer to our paper.

The imperative for ethical, legal, and responsible AI has never been clearer. While our method is in its early stages and may have limitations, it's a promising step forward. Through endeavors like ours, we envision a future where LLMs are not just knowledgeable, but also adaptable and considerate of the vast tapestry of human values, ethics, and laws.
