[Satellite image of Storm Ciarán]

Aurora Forecasting

A flexible 3D foundation model of the atmosphere.

How is Aurora different from existing AI models like GraphCast?

Aurora differs from existing AI weather models like GraphCast, Pangu-Weather, and FourCastNet in a few key ways:

  1. Generality: While models like GraphCast are designed for a specific task (10-day global weather forecasting at 0.25° resolution) using a single dataset (ERA5), Aurora is a more general system that can learn from many diverse datasets and adapt to various prediction tasks.
  2. Scale of training data: Aurora was pre-trained on a much larger and more diverse set of weather and climate simulation data compared to models like GraphCast, allowing it to build more comprehensive general knowledge.
  3. Architecture: Aurora employs a flexible encoder-decoder architecture with Perceiver modules that can handle datasets with varying resolutions, variables, and pressure levels, unlike task-specific architectures used in other AI weather models.
  4. Performance: Aurora’s strong results across benchmarks demonstrate the advantages of its foundation model approach in terms of accuracy, computational efficiency, and ability to adapt to more granular resolutions and new tasks with less data.
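The architectural point in item 3 comes down to one mechanism: a fixed-size set of latent queries cross-attends over however many input tokens a dataset provides, so the encoder's output shape does not depend on the number of variables or pressure levels. The NumPy sketch below illustrates that idea in a single cross-attention step; all names, shapes, and weights here are hypothetical, and this is not Aurora's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def perceiver_encode(tokens, latents, wq, wk, wv):
    """Cross-attention: a fixed set of latent queries attends over an
    arbitrary-length sequence of input tokens, so the output shape
    depends only on the number of latents, not on how many variables
    or pressure levels the source dataset happens to provide."""
    q = latents @ wq                                   # (n_latent, d)
    k = tokens @ wk                                    # (n_tokens, d)
    v = tokens @ wv                                    # (n_tokens, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)
    return attn @ v                                    # (n_latent, d)

rng = np.random.default_rng(0)
d = 16
latents = rng.normal(size=(8, d))                 # fixed latent array
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))

# Two "datasets" with different numbers of variable/level tokens:
small = rng.normal(size=(5, d))     # e.g. 5 surface variables
large = rng.normal(size=(65, d))    # e.g. 5 vars x 13 pressure levels

print(perceiver_encode(small, latents, wq, wk, wv).shape)  # (8, 16)
print(perceiver_encode(large, latents, wq, wk, wv).shape)  # (8, 16)
```

Both inputs map to the same latent shape, which is what lets one backbone be fine-tuned on datasets with different variable sets and resolutions.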

What data was Aurora trained on?

Aurora was trained on a diverse set of weather and climate simulation data from various sources, including but not limited to:

  1. ERA5: A high-quality global reanalysis dataset that combines model predictions with observational data.
  2. CMIP6: Climate model simulations from the Coupled Model Intercomparison Project.
  3. IFS forecasts: Predictions from the European Centre for Medium-Range Weather Forecasts’ Integrated Forecasting System at different resolutions.
  4. GFS data: Analysis and forecast data from the National Oceanic and Atmospheric Administration’s Global Forecast System.

During pre-training, Aurora learned from over a million hours of this simulation data. For fine-tuning, Aurora used smaller, high-quality datasets specific to each prediction task, such as IFS-HRES data for weather forecasting and CAMS analysis data for air pollution prediction.
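A common way to pre-train on a mixture of heterogeneous sources like these is weighted sampling: each training batch is drawn from one dataset in proportion to a mixture weight. The sketch below illustrates only the idea; the weights are invented for illustration and are not the proportions Aurora actually used.

```python
import random

# Hypothetical pre-training mixture. The dataset names come from the
# text above, but these weights are illustrative assumptions only.
MIXTURE = {
    "ERA5": 0.45,
    "CMIP6": 0.25,
    "IFS": 0.20,
    "GFS": 0.10,
}

def sample_batch_source(rng):
    """Pick the dataset the next training batch is drawn from,
    in proportion to its mixture weight."""
    names = list(MIXTURE)
    weights = [MIXTURE[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
counts = {name: 0 for name in MIXTURE}
for _ in range(10_000):
    counts[sample_batch_source(rng)] += 1
# Over many draws, each dataset's share approaches its weight.
```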

Which prediction tasks can Aurora currently tackle?

Currently, Aurora has demonstrated strong performance on several key atmospheric prediction tasks:

  1. Medium-range global weather forecasting: Aurora can produce skillful 10-day global weather forecasts at both 0.25° and 0.1° resolution, outperforming the state-of-the-art IFS-HRES model and other AI models like GraphCast.
  2. Global air pollution forecasting: Aurora can generate 5-day global forecasts of atmospheric chemistry and air pollutants at 0.4° resolution, matching or surpassing the accuracy of the CAMS operational system.
  3. Extreme weather event prediction: Aurora has shown improved ability to predict extreme weather events like Storm Ciarán, capturing sudden intensification that other AI models missed.

The Aurora team is working to expand its capabilities to additional tasks such as regional nowcasting, seasonal prediction, and probabilistic forecasting.

How much computing power does Aurora use?

While Aurora can generate forecasts very efficiently once pre-trained and fine-tuned, the training process itself is computationally intensive. Pre-training Aurora on the diverse dataset of over a million hours of simulation data took about 2.5 weeks using 32 NVIDIA A100 GPUs. Fine-tuning is less demanding but still significant, taking around 5 days on 8 A100 GPUs.

However, this upfront computational investment pays off in the operational efficiency of the trained model. Aurora can produce a 10-day global weather forecast or a 5-day global air pollution forecast in just seconds on a single GPU, approximately 5,000 times faster than traditional numerical weather prediction systems like IFS, which require hours on large supercomputers.
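As a rough sanity check on these figures, the quoted training times can be converted to GPU-hours, and the ~5,000x speedup is consistent with a forecast taking under a second if a traditional NWP run takes on the order of an hour; the one-hour runtime is an illustrative assumption, not a measured figure.

```python
# Back-of-envelope arithmetic for the figures quoted above.
pretrain_gpu_hours = 2.5 * 7 * 24 * 32   # 2.5 weeks on 32 A100 GPUs
finetune_gpu_hours = 5 * 24 * 8          # 5 days on 8 A100 GPUs
print(pretrain_gpu_hours)                # 13440.0 GPU-hours
print(finetune_gpu_hours)                # 960 GPU-hours

# If a traditional NWP run took ~1 hour of wall time (an assumed,
# illustrative number), a 5,000x speedup would mean roughly 0.72 s
# per forecast, consistent with "just seconds" on a single GPU.
nwp_seconds = 3600
aurora_seconds = nwp_seconds / 5000      # 0.72
```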

Does this technology use Azure?

Yes, Aurora’s training pipeline has been optimized to leverage the capabilities of Azure cloud computing for training deep learning models at scale.

Is Aurora open-sourced?

While Aurora is currently used only for internal research purposes, our team is planning to open-source the model’s code and make it publicly available in the future. This would enable the broader research community to contribute to Aurora’s ongoing development and improvement.

Can I license this technology for use in my product/service?

Yes, please reach out to us to discuss more details. We are excited to see how people and organizations can build on Aurora to tackle the pressing challenges posed by climate change.

Can I contribute to Aurora’s future development?

Our team is open to collaborating with domain experts to further enhance and expand Aurora’s capabilities. Please get in touch!

What are the next steps for the Aurora project?

The Aurora team has several key next steps planned:

  1. Open-sourcing: The team is working on making Aurora’s code and models publicly available to enable the broader research community to build upon and extend their work.
  2. Expert evaluation: Aurora will undergo peer review and evaluation by domain experts to validate its performance and identify areas for improvement.
  3. Collaboration with weather agencies: The team is in discussions with major weather prediction organizations like ECMWF, NOAA, and the UK Met Office to assess Aurora’s potential for integration into operational forecasting systems.
  4. Enhancing capabilities: Ongoing work aims to further increase Aurora’s resolution and accuracy, as well as expand its range of prediction tasks to include regional nowcasting, seasonal predictions, and probabilistic forecasting.
  5. Towards an Earth System foundation model: The success of Aurora in atmospheric modeling sets the stage for extending the foundation model approach to other Earth subsystems like oceans and land, moving closer to a comprehensive, unified model of the entire Earth System.