Lightning talks: Training and inference efficiency
To bring AI to more people, models must become cheaper to train and run, in terms of both computational and human resources. Improving efficiency across the training and inference pipeline involves both optimizing existing large models and creating new architectures and training paradigms.
- Event: Research Summit 2022
- Track: Efficient Large-Scale AI
- Date: