News & features
Metalearned Neural Memory: Teaching neural networks how to remember
| Tsendsuren Munkhdalai, Alessandro Sordoni, Tong Wang, and Adam Trischler
Memory is an important part of human intelligence and the human experience. It grounds us in the current moment, helping us understand where we are and, consequently, what we should do next. Consider the simple example of reading a book.…
Going meta: learning algorithms and the self-supervised machine with Dr. Philip Bachman
Deep learning methodologies like supervised learning have been very successful in training machines to make predictions about the world. But because they’re so dependent upon large amounts of human-annotated data, they’ve been difficult to scale. Dr. Phil Bachman, a researcher…
Logarithmic mapping allows for low discount factors by creating action gaps similar in size
| Harm van Seijen, Mehdi Fatemi, and Arash Tavakoli
While reinforcement learning (RL) has seen significant successes over the past few years, modern deep RL methods are often criticized for their sensitivity to hyper-parameters. One such hyper-parameter is the discount factor, which controls how…
From blank canvas unfolds a scene: GAN-based model generates and modifies images based on continual linguistic instruction
| Shikhar Sharma
When people create, it’s not very often they achieve what they’re looking for on the first try. Creating—whether it be a painting, a paper, or a machine learning model—is a process that has a starting point from which new elements…
Machine reading comprehension with Dr. T.J. Hazen
The ability to read and understand unstructured text, and then answer questions about it, is a common skill among literate humans. But for machines? Not so much. At least not yet! And not if Dr. T.J. Hazen, Senior Principal Research…
Bringing the power of machine reading comprehension to specialized documents
| T. J. Hazen
With the advent of AI assistants, initially built on structured databases and manually curated knowledge graphs, answers to the basic fact-based questions people encounter in everyday conversation became just a few keystrokes or a verbal cue away. What…
The KnowRef Coreference Corpus: a resource for training and evaluating common sense in AI
| Ali Emami, Paul Trichelair, Jackie Chi Kit Cheung, Adam Trischler, Kaheer Suleman, and Hannes Schulz
AI has made major strides in the last decade, from beating the world champion of Go, to learning how to program, to telling fantastical short stories. However, a basic human trait continues to elude machines: common sense. Common sense…
In the news | Synced
ICLR 2019 | MILA, Microsoft, and MIT Share Best Paper Honours
Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks, from the Montreal Institute for Learning Algorithms (MILA) and the Microsoft Research Montréal lab, was one of two Best Paper winners at ICLR 2019.
First TextWorld Problems—Microsoft Research Montreal’s latest AI competition is really cooking
| Wendy Tay and Adam Trischler
This week, Microsoft Research threw down the gauntlet with the launch of a competition challenging researchers around the world to develop AI agents that can solve text-based games. Conceived by the Machine Reading Comprehension team at Microsoft Research Montreal, the…