Tuesday, May 7, 2019 | 11:00 AM–1:00 PM | Great Hall BC
Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes
Roman Novak, Lechao Xiao, Yasaman Bahri, Jaehoon Lee, Greg Yang, Daniel Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein
DPSNet: End-to-end Deep Plane Sweep Stereo
Sunghoon Im, Hae-Gon Jeon, Stephen Lin, In So Kweon
Model-Predictive Policy Learning with Uncertainty Regularization for Driving in Dense Traffic
Mikael Henaff, Alfredo Canziani, Yann LeCun
Tuesday, May 7, 2019 | 4:30 PM–6:30 PM | Great Hall BC
Deep Graph Infomax
Petar Veličković, William Fedus, William L Hamilton, Pietro Liò, Yoshua Bengio, R Devon Hjelm
Generative Code Modeling with Graphs
Marc Brockschmidt, Miltiadis Allamanis, Alexander Gaunt, Oleksandr Polozov
Learning To Solve Circuit-SAT: An Unsupervised Differentiable Approach
Saeed Amizadeh, Sergiy Matusevych, Markus Weimer
Wednesday, May 8, 2019 | 11:00 AM–1:00 PM | Great Hall BC
An Empirical Study of Example Forgetting during Deep Neural Network Learning
Mariya Toneva, Alessandro Sordoni, Rémi Tachet des Combes, Adam Trischler, Yoshua Bengio, Geoffrey Gordon
Deterministic Variational Inference for Robust Bayesian Neural Networks
Anqi Wu, Sebastian Nowozin, Ted Meeds, Richard E Turner, José Miguel Hernández-Lobato, Alexander Gaunt
Episodic Curiosity through Reachability
Nikolay Savinov, Anton Raichuk, Damien Vincent, Raphaël Marinier, Marc Pollefeys, Timothy Lillicrap, Sylvain Gelly
Learning deep representations by mutual information estimation and maximization
R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Philip Bachman, Adam Trischler, Yoshua Bengio
Learning a SAT Solver from Single-Bit Supervision
Daniel Selsam, Matthew Lamm, Benedikt Bünz, Percy Liang, Leonardo de Moura, David L. Dill
Visceral Machines: Reinforcement Learning with Intrinsic Physiological Rewards
Daniel McDuff, Ashish Kapoor
Wednesday, May 8, 2019 | 4:30 PM–6:30 PM | Great Hall BC
A Mean Field Theory of Batch Normalization
Greg Yang, Jeffrey Pennington, Vinay Rao, Jascha Sohl-Dickstein, Samuel S Schoenholz
G-SGD: Optimizing ReLU Neural Networks in its Positively Scale-Invariant Space
Qi Meng, Shuxin Zheng, Huishuai Zhang, Wei Chen, Qiwei Ye, Zhi-Ming Ma, Nenghai Yu, Tie-Yan Liu
SGD Converges to Global Minimum in Deep Learning via Star-convex Path
Yi Zhou, Junjie Yang, Huishuai Zhang, Yingbin Liang, Vahid Tarokh
Thursday, May 9, 2019 | 11:00 AM–1:00 PM | Great Hall BC
Improving Sequence-to-Sequence Learning via Optimal Transport
Liqun Chen, Yizhe Zhang, Ruiyi Zhang, Chenyang Tao, Zhe Gan, Haichao Zhang, Bai Li, Dinghan Shen, Changyou Chen, Lawrence Carin
Multi-Agent Dual Learning
Yiren Wang, Yingce Xia, Tianyu He, Fei Tian, Tao Qin, ChengXiang Zhai, Tie-Yan Liu
Multilingual Neural Machine Translation with Knowledge Distillation
Xu Tan, Yi Ren, Di He, Tao Qin, Zhou Zhao, Tie-Yan Liu
Neural TTS Stylization with Adversarial and Collaborative Games
Shuang Ma, Daniel McDuff, Yale Song
Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks
Yikang Shen, Shawn Tan, Alessandro Sordoni, Aaron Courville
Representation Degeneration Problem in Training Natural Language Generation Models
Jun Gao, Di He, Xu Tan, Tao Qin, Liwei Wang, Tie-Yan Liu
Structured Neural Summarization
Patrick Fernandes, Miltiadis Allamanis, Marc Brockschmidt
RNNs implicitly implement tensor-product representations
Tom McCoy, Tal Linzen, Ewan Dunbar, Paul Smolensky
Learning to Represent Edits
Pengcheng Yin, Graham Neubig, Miltiadis Allamanis, Marc Brockschmidt, Alexander Gaunt
Thursday, May 9, 2019 | 4:30 PM–6:30 PM | Great Hall BC
Building Dynamic Knowledge Graphs from Text using Machine Reading Comprehension
Rajarshi Das, Tsendsuren Munkhdalai, Eric Yuan, Adam Trischler, Andrew McCallum
Diagnosing and Enhancing VAE Models
Bin Dai, David Wipf
Label super-resolution networks
Nikolay Malkin, Caleb Robinson, Le Hou, Rachel Soobitsky, Jacob Czawlytko, Dimitris Samaras, Joel Saltz, Lucas Joppa, Nebojsa Jojic