
Shuxin Zheng

Principal Researcher

Selected Publications


Please note: this list is out of date. Please refer to [Google Scholar] for the latest publications.


†: equal contribution, *: corresponding author

Journal Papers

  • Mingqing Xiao, Shuxin Zheng, Chang Liu, Zhouchen Lin, Tie-Yan Liu, Invertible Rescaling Network and Its Extensions, International Journal of Computer Vision, 2022.
  • Jia Xing, Siwei Li, Shuxin Zheng*, Chang Liu, Xiaochun Wang, Lin Huang, Ge Song, Yihan He, Shuxiao Wang, Shovan Kumar Sahu, Jia Zhang, Jiang Bian, Yun Zhu, Tie-Yan Liu, Jiming Hao, Rapid Inference of Nitrogen Oxide Emissions Based on a Top-Down Method with a Physically Informed Variational Autoencoder, Environmental Science & Technology, 2022.
  • Jia Xing, Shuxin Zheng*, Siwei Li, Lin Huang, Xiaochun Wang, James T. Kelly, Shuxiao Wang, Chang Liu, Carey Jang, Yun Zhu, Jia Zhang, Jiang Bian, Tie-Yan Liu, Jiming Hao. Mimicking atmospheric photochemical modeling with a deep neural network. Atmospheric Research, 2021. [link]
  • Jia Xing, Shuxin Zheng, Dian Ding, James T. Kelly, Shuxiao Wang, Siwei Li, Tao Qin, Mingyuan Ma, Zhaoxin Dong, Carey J. Jang, Yun Zhu, Haotian Zheng, Lu Ren, Tie-Yan Liu, and Jiming Hao. Deep learning for prediction of the air quality response to emission changes. Environmental Science & Technology, 2020. [pdf] [news]
  • Li He, Shuxin Zheng, Wei Chen, Zhi-Ming Ma, and Tie-Yan Liu. OptQuant: Distributed Training of Neural Networks with Optimized Quantization Mechanisms. Neurocomputing, 2019.

Conference Papers

  • Yu Shi, Guolin Ke, Zhuoming Chen, Shuxin Zheng, Tie-Yan Liu. Quantized Training of Gradient Boosted Decision Trees. NeurIPS 2022.
  • Shengjie Luo, Shanda Li, Shuxin Zheng, Tie-Yan Liu, Liwei Wang, Di He. Your Transformer May Not be as Powerful as You Expect. NeurIPS 2022.
  • Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng*, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu. Do Transformers Really Perform Badly for Graph Representation? NeurIPS 2021. [pdf] [github]
  • Shengjie Luo, Shanda Li, Tianle Cai, Di He, Dinglan Peng, Shuxin Zheng*, Guolin Ke, Liwei Wang, Tie-Yan Liu. Stable, Fast and Accurate: Kernelized Attention with Relative Positional Encoding. NeurIPS 2021. [pdf]
  • Dinglan Peng, Shuxin Zheng*, Yatao Li, Guolin Ke, Di He, Tie-Yan Liu. How could Neural Networks understand Programs? ICML 2021. [pdf] [github]
  • Zhuliang Yao, Yue Cao, Shuxin Zheng, Gao Huang, Stephen Lin. Cross-Iteration Batch Normalization. CVPR 2021. [pdf] [github]
  • Mingqing Xiao, Shuxin Zheng*, Chang Liu, Yaolong Wang, Di He, Guolin Ke, Jiang Bian, Zhouchen Lin, and Tie-Yan Liu. Invertible Image Rescaling. ECCV 2020 Oral (top 2% of submissions). [pdf] [github] [zhihu]
  • Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng*, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tie-Yan Liu. On Layer Normalization in the Transformer Architecture. ICML 2020. [pdf]
  • Qi Meng†, Shuxin Zheng†, Huishuai Zhang, Wei Chen, Qiwei Ye, Zhi-Ming Ma, and Tie-Yan Liu. G-SGD: Optimizing ReLU Neural Networks in its Positively Scale-Invariant Space. ICLR 2019. [pdf]
  • Shuxin Zheng, Qi Meng, Huishuai Zhang, Wei Chen, Nenghai Yu, Tie-Yan Liu. Capacity Control of ReLU Neural Networks by Basis-Path Norm. AAAI 2019. [pdf]
  • Shuxin Zheng, Qi Meng, Taifeng Wang, Wei Chen, Nenghai Yu, Zhi-Ming Ma, Tie-Yan Liu. Asynchronous Stochastic Gradient Descent with Delay Compensation. ICML 2017. [pdf] [github] [video]