Pre-training

Established: December 1, 2018

We work on pre-trained language models, including new pre-training methods, pre-trained model compression, and pre-training for other tasks such as speech and music.
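As one concrete piece of the model-compression thread (the idea behind LightPAFF's distillation framework), here is a minimal sketch of a standard knowledge-distillation loss, assuming PyTorch; the temperature and loss weight are illustrative defaults, not the paper's settings.

    # Minimal knowledge-distillation loss sketch, assuming PyTorch.
    # Temperature and alpha are illustrative, not LightPAFF's hyperparameters.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=2.0, alpha=0.5):
        # Soft term: match the student's softened distribution to the
        # teacher's; the T**2 factor keeps gradient magnitudes comparable
        # across temperatures.
        soft = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2
        # Hard term: ordinary cross-entropy against the gold labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    # Toy usage with random logits for a 3-class task.
    s = torch.randn(4, 3, requires_grad=True)
    t = torch.randn(4, 3)
    y = torch.randint(0, 3, (4,))
    print(distillation_loss(s, t, y))

The soft term transfers the teacher's full output distribution rather than just its top prediction, which is what lets a small student recover much of the large model's accuracy.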

Our Papers

  • Zhonghao Sheng, Kaitao Song, Xu Tan, Yi Ren, Wei Ye, Shikun Zhang, Tao Qin, SongMASS: Automatic Song Writing with Pre-training and Alignment Constraint, AAAI 2021. [Paper]
  • Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu, MPNet: Masked and Permuted Pre-training for Language Understanding, NeurIPS 2020. [Paper] [Blog] [Code@Github]
  • Kaitao Song, Hao Sun, Xu Tan, Tao Qin, Jianfeng Lu, Hongzhi Liu, Tie-Yan Liu, LightPAFF: A Two-Stage Distillation Framework for Pre-training and Fine-tuning, arXiv 2020. [Paper]
  • Hao Sun, Xu Tan, Jun-Wei Gan, Sheng Zhao, Dongxu Han, Hongzhi Liu, Tao Qin, Tie-Yan Liu, Knowledge Distillation from BERT in Pre-training and Fine-tuning for Polyphone Disambiguation, ASRU 2019. [Paper]
  • Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu, MASS: Masked Sequence to Sequence Pre-training for Language Generation, ICML 2019. [Paper] [Code@Github] [Article] [Blog] (a sketch of the masking idea follows this list)
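To make the masked sequence-to-sequence idea behind MASS (and SongMASS) concrete, here is a minimal sketch of how one training example could be constructed: a contiguous span is masked on the encoder side, and the decoder learns to reconstruct only that span. The tokenization, span-selection policy, and function name are illustrative assumptions, not the released implementation.

    # Sketch of MASS-style masked seq2seq example construction (illustrative,
    # not the authors' released code).
    import random

    MASK = "[MASK]"

    def mass_example(tokens, mask_ratio=0.5):
        """Build (encoder_input, decoder_input, target) for one sentence."""
        n = len(tokens)
        span_len = max(1, int(n * mask_ratio))
        start = random.randint(0, n - span_len)
        span = tokens[start:start + span_len]

        # Encoder sees the sentence with the span replaced by [MASK] tokens.
        encoder_input = (tokens[:start] + [MASK] * span_len
                         + tokens[start + span_len:])

        # Decoder predicts the span; its input is the span shifted right by
        # one, so each target token conditions on the previous span tokens.
        decoder_input = [MASK] + span[:-1]
        target = span
        return encoder_input, decoder_input, target

    if __name__ == "__main__":
        enc, dec, tgt = mass_example(
            "we work on pre-trained language models".split())
        print(enc, dec, tgt, sep="\n")

Predicting a contiguous span conditioned on its own previous tokens is what trains the encoder and decoder jointly, which suits language generation tasks better than single-token masking.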