Downloads
Sepsis Cohort from MIMIC III
December 2020
This repo provides code for generating the sepsis cohort from the MIMIC III dataset. Our main goal is to facilitate reproducibility of results in the literature.
Generative Neural Visual Artist (GeNeVA) – Training and Evaluation Code
September 2019
Code to train and evaluate the GeNeVA-GAN model for the GeNeVA task proposed in our ICCV 2019 paper Tell, Draw, and Repeat: Generating and Modifying Images Based on Continual Linguistic Instruction.
MetaLWOz: A Dataset of Multi-Domain Dialogues for the Fast Adaptation of Conversation Models
July 2019
We introduce the Meta-Learning Wizard of Oz (MetaLWOz) dialogue dataset for developing fast adaptation methods for conversation models. This data can be used to train task-oriented dialogue models, specifically to develop methods to quickly simulate user responses with a small…
TextWorld
July 2019
TextWorld is a text-based framework for generating games that can be used to train artificially intelligent agents to play text adventure games. The goal is for this project to be used to advance the state of the art of AI research and to…
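For orientation, here is a minimal sketch of playing a generated game through TextWorld's Gym-style wrapper. The game file path games/demo.ulx is a placeholder, and the exact API may differ between TextWorld releases; consult the repository's README for the supported interface.

```python
import gym
import textworld.gym

# Register a previously generated game file (placeholder path) so it can be
# played through the Gym-style interface.
env_id = textworld.gym.register_game("games/demo.ulx", max_episode_steps=50)
env = gym.make(env_id)

obs, infos = env.reset()
done, score = False, 0
while not done:
    command = input("> ")                       # an agent would choose the command here
    obs, score, done, infos = env.step(command)
    print(obs)
print("Final score:", score)
```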
AMDIM – Augmented Multiscale Deep InfoMax
June 2019
AMDIM (Augmented Multiscale Deep InfoMax) is an approach to self-supervised representation learning based on maximizing mutual information between features extracted from multiple views of a shared context.
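As a rough illustration of the underlying objective (not the AMDIM implementation itself, which maximizes mutual information across multiple feature scales), a generic InfoNCE-style lower bound on mutual information between features from two augmented views of the same batch can be sketched in PyTorch as follows.

```python
import torch
import torch.nn.functional as F

def infonce_loss(feats_a, feats_b, temperature=0.1):
    """Generic InfoNCE-style mutual-information lower bound (illustrative only).

    feats_a, feats_b: (batch, dim) features from two views of the same images;
    matching rows are positive pairs, all other rows serve as negatives.
    """
    a = F.normalize(feats_a, dim=1)
    b = F.normalize(feats_b, dim=1)
    logits = a @ b.t() / temperature            # (batch, batch) similarity scores
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)     # positive pairs lie on the diagonal
```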
Implementation of SPIBB-DQN
May 2019
This project can be used to reproduce the DQN implementation presented in the ICML 2019 paper Safe Policy Improvement with Baseline Bootstrapping, by Romain Laroche, Paul Trichelair, and Rémi Tachet des Combes. For the finite MDPs experiments, please refer to git…
Implementation of Safe Policy Improvement with Baseline Bootstrapping
May 2019
This project can be used to reproduce the finite MDPs experiments presented in the ICML 2019 paper Safe Policy Improvement with Baseline Bootstrapping, by Romain Laroche, Paul Trichelair, and Rémi Tachet des Combes. For the DQN implementation, please refer to git…
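For intuition only, the following is a sketch of the Pi_b-SPIBB policy-improvement step for finite MDPs, written from the paper's description rather than from the released code: state-action pairs observed fewer than N-wedge times keep the baseline policy's probabilities, and the freed-up probability mass in each state goes to the best well-estimated action.

```python
import numpy as np

def spibb_greedy_step(q, pi_b, counts, n_wedge):
    """Illustrative Pi_b-SPIBB policy-improvement step (not the authors' code).

    q:       (S, A) state-action value estimates of the current policy
    pi_b:    (S, A) baseline policy probabilities
    counts:  (S, A) number of occurrences of each (s, a) in the dataset
    n_wedge: count threshold below which (s, a) is 'bootstrapped'
    """
    pi = np.zeros_like(pi_b)
    bootstrapped = counts < n_wedge
    for s in range(q.shape[0]):
        # Keep the baseline's probabilities where the data is too scarce.
        pi[s, bootstrapped[s]] = pi_b[s, bootstrapped[s]]
        free_mass = pi_b[s, ~bootstrapped[s]].sum()
        if free_mass > 0:
            # Give the remaining mass to the best well-estimated action.
            allowed = np.where(~bootstrapped[s])[0]
            best = allowed[np.argmax(q[s, allowed])]
            pi[s, best] += free_mass
    return pi
```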
Presentation at datafest, May 2019 in Moscow
May 2019
The presentation starts with a brief introduction to Reinforcement Learning (RL) and an overview of its successes. Even though these achievements are compelling, state-of-the-art algorithms require an unreasonable amount of data. Moreover, they sometimes converge to terrible solutions. These restrictions…