We aim to understand the principles underpinning learning and generalization, in order to build reliable AI systems that can learn more efficiently from available data, intelligently gather additional relevant data, and quickly adapt to and reason about unusual scenarios when deployed in the wild.
AI systems deployed in the wild are often exposed to a stream of novel examples and scenarios that may differ substantially from those seen during training. It is important for a model to make sense of these situations and provide robust answers in these new contexts.
Here are some research directions explored in this theme:
- Uncertainty quantification and reasoning under uncertainty, for instance in the context of offline reinforcement learning (see the first sketch after this list).
- Model decomposition and reusability, whereby a model is built as a combination of smaller modules, each of which can be reused for a different task, making it easier to transfer knowledge (see the second sketch after this list).
- Learning factored and causal representations for images, text, and medical data.
- Sample-efficient optimization methods for fast adaptation.
- Identifying relevant examples for task transfer and for increasing robustness to spurious correlations.
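
As a minimal illustration of the first direction, the sketch below estimates predictive uncertainty with a small ensemble of independently trained regressors, using disagreement between members as the uncertainty signal. The architecture, toy data, and helper names (e.g. `predict_with_uncertainty`) are assumptions for demonstration, not a description of any specific method from this theme.

```python
# Minimal sketch: predictive uncertainty from a deep ensemble (illustrative assumptions only).
import torch
import torch.nn as nn

def make_regressor(in_dim: int = 1, hidden: int = 32) -> nn.Module:
    """One small MLP; ensemble members differ only by random initialization."""
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

def train_member(model: nn.Module, x: torch.Tensor, y: torch.Tensor, steps: int = 500) -> nn.Module:
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return model

def predict_with_uncertainty(models, x: torch.Tensor):
    """Return the mean prediction and the ensemble disagreement (std) as an uncertainty proxy."""
    with torch.no_grad():
        preds = torch.stack([m(x) for m in models])  # (n_members, n_points, 1)
    return preds.mean(dim=0), preds.std(dim=0)

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.linspace(-3, 3, 200).unsqueeze(-1)
    y = torch.sin(x) + 0.1 * torch.randn_like(x)       # toy training data
    ensemble = [train_member(make_regressor(), x, y) for _ in range(5)]
    x_test = torch.linspace(-6, 6, 50).unsqueeze(-1)    # includes out-of-distribution inputs
    mean, std = predict_with_uncertainty(ensemble, x_test)
    print(std[:5].squeeze(), std[-5:].squeeze())        # disagreement grows outside the training range
```

The disagreement grows on inputs far from the training distribution, which is the kind of signal an agent (for example in offline reinforcement learning) can use to act cautiously on unfamiliar situations.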
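
For the modularity direction, the following sketch composes a model from a shared encoder and small task-specific heads. The module names and sizes are hypothetical and real decompositions can be far richer, but it shows how a reusable module lets knowledge transfer between tasks while only a small part of the model needs to be retrained.

```python
# Minimal sketch: composing a model from reusable modules (illustrative assumptions only).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared feature extractor that can be reused across tasks."""
    def __init__(self, in_dim: int = 16, feat_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))

    def forward(self, x):
        return self.net(x)

class TaskHead(nn.Module):
    """Small task-specific module attached on top of the shared features."""
    def __init__(self, feat_dim: int = 32, out_dim: int = 10):
        super().__init__()
        self.net = nn.Linear(feat_dim, out_dim)

    def forward(self, feats):
        return self.net(feats)

if __name__ == "__main__":
    encoder = Encoder()              # trained once, e.g. on a source task
    head_a = TaskHead(out_dim=10)    # module for task A
    head_b = TaskHead(out_dim=3)     # module for task B reuses the same encoder
    x = torch.randn(8, 16)
    feats = encoder(x)
    logits_a, logits_b = head_a(feats), head_b(feats)
    # Only the small head needs to be (re)trained when transferring to a new task.
    print(logits_a.shape, logits_b.shape, sum(p.numel() for p in head_b.parameters()))
```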