{"id":1059225,"date":"2024-09-23T03:38:52","date_gmt":"2024-09-23T10:38:52","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-academic-program&p=1059225"},"modified":"2024-09-23T19:59:11","modified_gmt":"2024-09-24T02:59:11","slug":"starleap-zh-cn","status":"publish","type":"msr-academic-program","link":"https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/starleap-zh-cn\/","title":{"rendered":"\u5fae\u8f6f\u4e9a\u6d32\u7814\u7a76\u9662\u661f\u8dc3\u8ba1\u5212"},"content":{"rendered":"\n\n
Microsoft Research Asia has launched the StarLeap Program to give outstanding students the opportunity to work with multiple MSRA research teams on real-world, cutting-edge problems. Since its launch in January 2021, the program has received enthusiastic applications and attention from students at home and abroad. Participants can do impactful research in an international, diverse, and inclusive research environment, under the guidance of top researchers!
Program Highlights

Eligibility

How to Apply

Submit your application materials online: https://jsj.top/f/LwjRie

Attach your Chinese and English resumes, merged into a single PDF file. File-name example: Name_Resume

The list of open StarLeap projects will be updated continuously, so please check back for the latest news. Join the StarLeap Program, cross the oceans with us, and explore what research can be!

If you have any questions, please email: msrainterncomm@microsoft.com

Overview

The StarLeap Program comprises 11 joint research projects, covering natural language processing, data intelligence, computer systems and networking, intelligent cloud, image resizing, computer vision, behavior detection, social computing, and related areas.

Each project closes as soon as it is filled. The projects currently recruiting are listed below.

StarLeap Projects

【Introduction】

Join our pioneering research team to harness the power of Large Language Models (LLMs) for complex real-world optimization problems that require long-term planning and dynamic information gathering from environments. Traditional optimization techniques often struggle with the high dimensionality, dynamic nature, and intricate dependencies inherent in real-world settings.

To address these challenges, our research aims to push the boundaries of LLM capabilities to automate decision-making processes, improve reliability, and provide innovative solutions to both new and classical optimization challenges. The successful candidate will have the opportunity to collaborate with world-class researchers and engineers from diverse backgrounds, access state-of-the-art computational resources, and contribute to the advancement of LLM research and its impact on real-world optimization problems.

【Qualifications】

【Required Qualifications】

【Preferred Qualifications】

【How to Apply】

Interested candidates should submit their resume along with a cover letter detailing their relevant experience and research interests.

**Join us and contribute to groundbreaking research that integrates advanced AI models with optimization techniques, driving impactful decision-making across various domains.**
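To make the intended loop concrete, here is a minimal sketch of LLM-in-the-loop optimization on a toy knapsack instance: a proposer suggests candidate solutions, the environment scores them, and only improvements are kept. The `propose_with_llm` stub is hypothetical and stands in for a real model call (which would receive the current best solution and the environment feedback in a prompt); this illustrates the loop, not the project's actual method.

```python
import random

# Toy knapsack instance: (value, weight) pairs and a capacity.
ITEMS = [(10, 5), (40, 4), (30, 6), (50, 3), (25, 5)]
CAPACITY = 10

def score(solution):
    """Environment feedback: total value, or -1 if over capacity."""
    value = sum(v for (v, _), keep in zip(ITEMS, solution) if keep)
    weight = sum(w for (_, w), keep in zip(ITEMS, solution) if keep)
    return value if weight <= CAPACITY else -1

def propose_with_llm(best, feedback):
    """Hypothetical LLM call: the real system would send a prompt with the
    current best solution and feedback; here we stub it with a random
    one-bit mutation of the incumbent."""
    candidate = best[:]
    i = random.randrange(len(candidate))
    candidate[i] = 1 - candidate[i]
    return candidate

def optimize(steps=200):
    best = [0] * len(ITEMS)
    best_score = score(best)
    for _ in range(steps):
        cand = propose_with_llm(best, best_score)
        s = score(cand)
        if s > best_score:  # keep improvements only
            best, best_score = cand, s
    return best, best_score

if __name__ == "__main__":
    print(optimize())
```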
【Introduction】

Knowledge is essential for identifying issues, accelerating remediation, and enhancing existing infrastructure in large-scale systems. However, there is a knowledge gap: because infrastructure data is immense and dynamically evolving, little of it is available in an easily consumable form. Large-language and multi-modal models have created opportunities to better support knowledge production and consumption, from gleaning new insights to extracting entities and generating signatures from unstructured data at scale, as demonstrated in recent research. In this project, we aim to leverage these models to automate and accelerate raw data processing, build knowledge graphs, and connect them to gain a deeper understanding of system infrastructure.

We will work with scientists at the forefront of systems and networking research, leveraging world-leading platforms to solve challenging problems in this area. The current project team members, from both the MSRA Vancouver and MSR Redmond labs, have rich experience contributing to industry and the academic community through innovations transferred into production systems and publications at top conferences.

【Qualifications】

【Preferred Qualifications】

【Introduction】

We are developing a suite of smaller language models (SLMs) that are similar to LLMs but use less computing power. This project focuses on studying advanced training techniques that can better align the capabilities of SLMs with various aspects of different product scenarios, including but not limited to instruction following and task planning.

【Research Areas】

Language Model, Machine Learning

【Qualifications】

【Introduction】

Large language models (LLMs) have revolutionized various fields with technologies like Retrieval-Augmented Generation (RAG), In-Context Learning (ICL), Chain of Thought (CoT), and agent-based models. These advancements, while groundbreaking, often result in lengthy prompts that lead to increased computational and financial costs, higher latencies, and added redundancy. Moreover, the intrinsic position bias of LLMs and redundancy within prompts impact their performance, leading to the "lost in the middle" issue.

Previous studies have introduced prompt compression methods such as LLMLingua and LongLLMLingua, which address these issues and show promising results in generic scenarios. This project aims to explore research questions around complex scenarios, such as agent-related prompts and the compression of LLM responses. Furthermore, it seeks to investigate the effects of such compression techniques on adversarial attacks, security, and other critical aspects.

【Research Areas】

Large Language Models, Agents, Efficient Methods

https://www.microsoft.com/en-us/research/project/llmlingua/overview/

【Qualifications】
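As a rough illustration of the idea behind prompt compression (not the LLMLingua implementation itself, which scores tokens with a small language model), the hypothetical `compress_prompt` below drops the least informative tokens, using within-prompt self-information as a crude stand-in for model-based scoring, until a target ratio is met.

```python
import math
from collections import Counter

def compress_prompt(prompt: str, ratio: float = 0.5) -> str:
    """Keep the fraction `ratio` of tokens judged most informative.
    Real systems (e.g., LLMLingua) score tokens with a small LM; here
    rarity within the prompt itself is used as a stand-in."""
    tokens = prompt.split()
    freq = Counter(t.lower() for t in tokens)
    total = sum(freq.values())
    # Rare tokens get high self-information scores: -log p(token).
    scores = [-math.log(freq[t.lower()] / total) for t in tokens]
    k = max(1, int(len(tokens) * ratio))
    keep = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:k]
    return " ".join(tokens[i] for i in sorted(keep))  # preserve original order

print(compress_prompt(
    "the the the quick brown fox jumps over the lazy dog the the", 0.5))
```

Dropping low-information tokens is what shortens RAG, ICL, and agent prompts; the open questions this project targets are what such pruning does to agent behavior, LLM responses, and robustness.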
【Introduction】

While there have been significant efforts to leverage LLMs as evaluators, the technology is not quite there yet: it is only useful in English and in certain tasks, which severely limits its usability and trustworthiness across languages. Join a groundbreaking project at Microsoft Research Asia focused on answering fundamental questions around LLM-based evaluation, with direct production impact. This project aims to surpass the current capabilities of LLMs in certain tasks, emphasizing accuracy, reliability, robustness, and generalizability. The intern will be instrumental in creating a production-deployed system that adapts to the needs of hundreds of millions of users, and in answering fundamental questions around the capabilities, limitations, and usage of LLMs and beyond.

【Research Areas】

Large Language Models, LLM-based Evaluation, LLMs for Low-resource Languages

https://www.microsoft.com/en-us/research/group/natural-language-computing/

【Qualifications】

【Introduction】

Retrieval-augmented generation (RAG) is a technique for enhancing the quality of responses generated by large language models (LLMs) by using external sources of knowledge to supplement the LLM's internal representation of information. RAG allows LLMs to access the most up-to-date and reliable facts from a knowledge base or internal information store. It can be used for various natural language generation tasks, such as question answering, summarization, and chat. However, the retrieved documents may be redundant and noisy. This project aims to develop efficient and robust RAG methods that use a shorter context by removing contradiction and redundancy, reduce hallucination, and stay robust across domains.

【Research Areas】

Large Language Models, Retrieval-Augmented Generation

https://www.microsoft.com/en-us/research/group/natural-language-computing/

【Qualifications】

【Introduction】

For the new Designer app and Designer in Edge, we need to resize templates to different sizes, since different social media platforms require different target dimensions for media, e.g., Facebook Timeline Post for personal accounts and business pages (1200 x 628), LinkedIn timeline post (1200 x 1200), Twitter timeline post (1600 x 900), etc. The image is the center of a template design. We need an ML-powered technique that automatically resizes an image (including aspect-ratio change, crop, and zoom in/out) and places it into a resized template (more precisely, a resized image placeholder) for the target platform, so that the image placement looks good (i.e., maintains its aesthetic value).

【Research Areas】

Computer Vision and Machine Learning

【Qualifications】
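As a simple baseline for this resizing problem, and only a baseline (a naive center crop, not the ML-powered, aesthetics-aware placement the project targets), the sketch below fits an image into a placeholder of a given target size, e.g. 1200 x 628 for a Facebook Timeline Post. `fit_to_placeholder` is a hypothetical helper name.

```python
from PIL import Image  # pip install Pillow

def fit_to_placeholder(path: str, target_w: int, target_h: int) -> Image.Image:
    """Crop-then-resize baseline: crop the largest centered window with the
    target aspect ratio, then scale it to the placeholder size. An ML model
    would instead choose the crop window that preserves salient content."""
    img = Image.open(path)
    src_ratio = img.width / img.height
    dst_ratio = target_w / target_h
    if src_ratio > dst_ratio:      # source too wide: crop the sides
        new_w = int(img.height * dst_ratio)
        left = (img.width - new_w) // 2
        box = (left, 0, left + new_w, img.height)
    else:                          # source too tall: crop top and bottom
        new_h = int(img.width / dst_ratio)
        top = (img.height - new_h) // 2
        box = (0, top, img.width, top + new_h)
    return img.crop(box).resize((target_w, target_h))

# Example: produce a Facebook Timeline Post-sized image.
# fit_to_placeholder("photo.jpg", 1200, 628).save("photo_1200x628.jpg")
```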
【Introduction】

Pretrained language models such as BERT and UniLM have achieved huge success in many natural language processing scenarios. In many recommendation scenarios, such as news recommendation, video recommendation, and ads CTR/CVR prediction, user models are very important for inferring user interest and intent from user behaviors. Previously, user models were trained in a supervised, task-specific way, which cannot achieve a global and universal understanding of users and may limit their capacity to serve personalized applications.

In this project, inspired by the success of pretrained language models, we plan to pretrain universal user models from large-scale unlabeled user behaviors using self-supervision tasks. The pretrained user models aim to better understand the characteristics, interests, and intent of users, and can empower different downstream recommendation tasks through finetuning on their labeled data. Our recent work can be found at

https://scholar.google.co.jp/citations?hl=zh-CN&user=0SZVO0sAAAAJ&view_op=list_works&sortby=pubdate

【Research Areas】

Recommender Systems and Natural Language Processing

【Qualifications】

【Introduction】

Learning visual representations from vision-language pair data, pioneered by CLIP and DALL-E, has proven highly competitive with previous supervised and self-supervised approaches. Such vision-language learning approaches have also demonstrated strong performance on some pure-vision and vision-language applications. The aim of this project is to continually push forward the boundary of this research direction.

【Research Areas】

Computer Vision

https://www.microsoft.com/en-us/research/group/visual-computing/

https://www.microsoft.com/en-us/research/people/hanhu

【Qualifications】

【Introduction】

Are you excited to apply deep neural networks to solve practical problems? Would you like to help secure enterprise computer systems and users across the globe? Cyber-attacks on enterprises are proliferating and often damage essential business operations. Adversaries may steal the credentials of valid users and use their accounts to conduct malicious activities, which abruptly deviate from valid user behavior. We aim to prevent such attacks by detecting abrupt user behavior changes.

In this project, you will leverage deep neural networks to model the behaviors of a large number of users, detect abrupt behavior changes of individual users, and determine whether the changed behaviors are malicious. You will be part of a joint initiative between Microsoft Research and Microsoft Defender for Endpoint (MDE). During your internship, you will collaborate with some of the world's best researchers in security and machine learning.

You would be expected to:

Microsoft is an equal opportunity employer.

【Research Areas】

Software Analytics, MSR Asia

https://www.microsoft.com/en-us/research/group/software-analytics/

Microsoft Defender for Endpoint (MDE)

This is a Microsoft engineering and research group that develops Microsoft Defender for Endpoint, an enterprise endpoint security platform designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats.

https://www.microsoft.com/en-us/security/business/threat-protection/endpoint-defender

【Qualifications】

Candidates meeting the following conditions are preferred:
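One minimal way to frame "abrupt behavior change", sketched here with plain NumPy rather than the deep neural models the project would actually use: build each user's baseline distribution over action types and flag days whose action mix diverges sharply from it. The threshold and the toy data are assumptions for illustration only.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-9):
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def flag_abrupt_days(daily_counts, threshold=0.5):
    """daily_counts: (days, action_types) matrix of one user's per-day
    action counts. Days whose action distribution diverges from the user's
    baseline by more than `threshold` nats are flagged. The median baseline
    is robust to the anomalous days themselves; a deep sequence model would
    replace this hand-built baseline in practice."""
    baseline = np.median(daily_counts, axis=0)
    return [d for d, row in enumerate(daily_counts)
            if kl_divergence(row, baseline) > threshold]

# Toy user: six quiet days, then a burst of a rarely-used action type.
user = np.array([[5, 1, 0]] * 6 + [[0, 0, 40]])
print(flag_abrupt_days(user))  # -> [6]
```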
【Introduction】

While PLMs have been widely used to generate high-quality texts in a supervised manner (by imitating texts written by humans), they lack a mechanism for generating texts that directly optimize a given reward, e.g., user feedback such as clicks, or a criterion that cannot be directly optimized by gradient descent. In real-world applications, we usually wish to achieve more than just imitating existing texts. For example, we may wish to generate more attractive texts that lead to increased user clicks, more diversified texts that improve user experience, and more personalized texts that are better tailored to user tastes. Combining RL with PLMs provides a unified solution for all these scenarios and is at the core of machines achieving human parity in text generation. Such a method has the potential to be applied in a wide range of products, e.g., Microsoft Advertising (text ad generation), Microsoft News (news headline generation), and Microsoft Stores and Xbox (optimizing the descriptions of recommended items).

In this project, we aim to study how pretrained language models (PLMs) can be enhanced with deep reinforcement learning (RL) to generate attractive and high-quality text ads. While finetuned PLMs have been shown to generate high-quality texts, RL additionally provides a principled way to directly optimize user feedback (e.g., user clicks) to improve attractiveness. Our initial RL method, UMPG, is deployed in Dynamic Search Ads and was published at KDD 2021. We wish to extend the method so that it works with all pretrained language models (in addition to UniLM) and to study how the technique can benefit other important Microsoft Advertising products and international markets.

【Research Areas】

Social Computing (SC), MSR Asia

https://www.microsoft.com/en-us/research/group/social-computing-beijing/

Microsoft Advertising, Microsoft Redmond

【Qualifications】

Candidates meeting the following conditions are preferred:

【Introduction】

Parallel and distributed systems are the standard answer to the ever-increasing scale and complexity of deep learning training. However, existing solutions still leave efficiency and scalability on the table by missing optimization opportunities across the varied environments found at industrial scale.

In this project, we will work with scientists at the forefront of systems and networking research, leveraging world-leading platforms to solve systems and networking problems in parallel and distributed deep learning. The current project team members, from both the MSR Asia and MSR Redmond labs, have rich experience contributing to industry and the academic community through innovations transferred into production systems and publications at top conferences.

【Research Areas】

Systems and Networking, MSR Asia

https://www.microsoft.com/en-us/research/group/systems-and-networking-research-group-asia/

Research in Software Engineering (RiSE), MSR Redmond

https://www.microsoft.com/en-us/research/group/research-software-engineering-rise/

【Qualifications】

Candidates meeting the following conditions are preferred:
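For context on the kind of system being optimized, here is a generic data-parallel training sketch using PyTorch's DistributedDataParallel with two CPU processes on one machine; the gradient all-reduce triggered by `backward()` is exactly the kind of communication cost this research attacks at industrial scale. This is a textbook illustration, not the project's codebase.

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank: int, world_size: int):
    # Each process joins the group; gradients are all-reduced across ranks.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = DDP(torch.nn.Linear(10, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(5):
        x, y = torch.randn(8, 10), torch.randn(8, 1)
        loss = torch.nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()  # DDP overlaps this gradient all-reduce with compute
        opt.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)  # two processes on one machine
```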
【Introduction】

Tabular data, such as Excel spreadsheets and databases, is one of the most important assets in large enterprises today, yet it is often plagued with data quality issues. Intelligent data cleansing focuses on novel ways to detect and fix data quality issues in tabular data, which can assist the large class of less-technical and non-technical users in enterprises.

We are interested in a variety of topics in this area, including data-driven and intelligent techniques that detect data quality issues and suggest possible fixes, leveraging inferred constraints and statistical properties based on existing data assets and software artifacts.

【Research Areas】

Data, Knowledge, and Intelligence (DKI), MSR Asia

https://www.microsoft.com/en-us/research/group/data-knowledge-intelligence/

Data Management, Exploration and Mining (DMX), MSR Redmond

https://www.microsoft.com/en-us/research/group/data-management-exploration-and-mining-dmx

【Qualifications】

【Introduction】

As one of the world's leading cloud service providers, Microsoft Azure manages tens of millions of virtual machines every day. Within such a large-scale cloud system, how to efficiently allocate virtual machines on servers is critical and has been a hot research topic for years. Previously, teams from MSR Asia and MSR Redmond have made significant contributions in this area, resulting in production impact and academic papers at top-tier conferences (e.g., IJCAI, AAAI, OSDI, NSDI). In this project, we intend to unite the strengths of MSR Asia and MSR Redmond in forward-looking, collaborative research on power management in datacenters, including power-aware virtual machine allocation. The project involves developing power prediction models that leverage state-of-the-art machine learning methods, as well as building efficient and reliable allocation systems in large-scale distributed environments.

【Research Areas】

Data, Knowledge, and Intelligence (DKI), MSR Asia

https://www.microsoft.com/en-us/research/group/data-knowledge-intelligence/

Systems, MSR Redmond

https://www.microsoft.com/en-us/research/group/systems-research-group-redmond/

【Qualifications】
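To make "power-aware allocation" tangible, below is a toy greedy placement heuristic with an assumed linear power model (the wattage constants are made up); the actual project would learn the power model from data and must additionally handle reliability and datacenter scale. The sketch does capture one real effect: packing VMs onto already-active servers avoids paying a second idle-power cost.

```python
IDLE_W, PER_CORE_W = 100.0, 15.0   # assumed linear power model, toy numbers

def power(used_cores: int) -> float:
    """Predicted server power draw; a learned ML model replaces this."""
    return IDLE_W + PER_CORE_W * used_cores if used_cores else 0.0

def place(vms, servers):
    """Greedy: put each VM (core demand) on the server whose *increase*
    in predicted power is smallest. `servers` maps name -> (capacity, used)."""
    plan = {}
    for vm_id, cores in vms:
        best, best_delta = None, float("inf")
        for name, (cap, used) in servers.items():
            if used + cores <= cap:
                delta = power(used + cores) - power(used)
                if delta < best_delta:
                    best, best_delta = name, delta
        if best is None:
            raise RuntimeError(f"no capacity for {vm_id}")
        cap, used = servers[best]
        servers[best] = (cap, used + cores)
        plan[vm_id] = best
    return plan

servers = {"s1": (16, 0), "s2": (16, 4)}   # s2 is already powered on
print(place([("vm-a", 4), ("vm-b", 8)], servers))
```

Tracing the example: vm-a lands on s2 because s2 is already powered on (a 60 W increase versus 160 W for waking s1), and vm-b then falls back to s1 because s2 runs out of capacity.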
【Introduction】

In today's real-time video applications, a key component for optimizing the user's quality of experience is bandwidth estimation and rate control: the system estimates the network capacity from congestion signals observed on the path and adapts the video bitrate accordingly through the codec. However, existing handcrafted bandwidth estimators have failed to accommodate a wide range of complex network conditions, calling for a data-driven approach.

Motivated by the recent success of applying reinforcement learning (RL) to video streaming and congestion control, we have made an initial attempt at designing an RL-based bandwidth estimator for one-on-one video calls. Going forward, we are working to optimize the performance of our current neural network model, as well as to extend the study of bandwidth estimation and rate control to multiparty videoconferencing.

【Research Areas】

Systems and Networking, MSR Asia

https://www.microsoft.com/en-us/research/group/systems-and-networking-research-group-asia/

Mobility and Networking, MSR Redmond

https://www.microsoft.com/en-us/research/group/mobility-and-networking-research

【Qualifications】

【Introduction】

Our cross-lab, interdisciplinary research team develops AI technology for interactive coding assistance for data science, data analytics, and business process automation. It allows users to specify their data processing intent in the middle of their workflow using a combination of natural language, input-output examples, and multi-modal UX, and translates that intent into the desired source code. The underlying AI technology integrates our state-of-the-art research in program synthesis, semantic parsing, and structure-grounded natural language understanding. It has the potential to improve the productivity of millions of data scientists and software developers, as well as to establish new scientific milestones for deep learning over structured data, grounded language understanding, and neuro-symbolic AI.

The research project involves collecting and establishing a novel benchmark dataset for data science program generation, developing novel neuro-symbolic semantic parsing models to tackle this challenge, adapting large-scale pretrained language models to new domains and knowledge bases, and publishing in top-tier AI/NLP conferences. We expect the benchmark dataset and the new models to be used in academia as well as in Microsoft products.

【Research Areas】

Natural Language Computing, MSR Asia

https://www.microsoft.com/en-us/research/group/natural-language-computing

Neuro-Symbolic Learning, MSR Redmond

【Qualifications】

【Introduction】

The goal of this project is to develop game-changing techniques for next-generation large pretrained language models, including:

(1) Beyond UniLM/InfoXLM: novel pre-training frameworks and self-supervised tasks for monolingual and multilingual pre-training that support language understanding, generation, and translation tasks (a toy example of such a task is sketched below);

(2) Beyond Transformers: new model architectures and optimization algorithms for improving the training effectiveness and efficiency of extremely large language models;

(3) Knowledge Fusion: new modeling frameworks to fuse massive pre-compiled knowledge into pretrained models;

(4) Lifelong Self-supervised Learning: mechanisms and algorithms for lifelong (incremental) pre-training.

This project extends our existing research and aims to advance the state of the art in NLP and AI in general.

【Research Areas】

Natural Language Computing, MSR Asia
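To ground item (1) above, here is a minimal masked-language-modeling training step in PyTorch, the BERT-style self-supervised task in its simplest form. All sizes are toy values and the model is drastically scaled down, so this is purely illustrative of the objective, not of the project's pre-training frameworks.

```python
import torch
import torch.nn as nn

VOCAB, DIM, MASK_ID = 1000, 64, 1

class TinyMLM(nn.Module):
    """Drastically scaled-down masked LM: embed tokens, run one
    Transformer encoder layer, project back to the vocabulary."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.enc = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.out = nn.Linear(DIM, VOCAB)

    def forward(self, ids):
        return self.out(self.enc(self.emb(ids)))

def mlm_step(model, opt, ids, mask_prob=0.15):
    labels = ids.clone()
    masked = torch.rand(ids.shape) < mask_prob
    labels[~masked] = -100                    # loss only on masked positions
    inputs = ids.masked_fill(masked, MASK_ID) # replace them with [MASK]
    loss = nn.functional.cross_entropy(
        model(inputs).view(-1, VOCAB), labels.view(-1), ignore_index=-100)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

model = TinyMLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.randint(2, VOCAB, (4, 32))      # fake token ids, avoiding MASK_ID
print(mlm_step(model, opt, batch))
```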