{"id":1027098,"date":"2024-05-07T09:00:00","date_gmt":"2024-05-07T16:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/loftq-reimagining-llm-fine-tuning-with-smarter-initialization\/"},"modified":"2024-05-01T07:52:24","modified_gmt":"2024-05-01T14:52:24","slug":"loftq-reimagining-llm-fine-tuning-with-smarter-initialization","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/loftq-reimagining-llm-fine-tuning-with-smarter-initialization\/","title":{"rendered":"LoftQ: Reimagining LLM fine-tuning with smarter initialization"},"content":{"rendered":"\n

<p><em><strong>This research paper was presented at the 12<sup>th<\/sup> International Conference on Learning Representations (ICLR 2024), the premier conference dedicated to the advancement of deep learning.<\/strong><\/em><\/p>\n\n\n\n

\"Teal<\/figure>\n\n\n\n
\n\t