{"id":1045410,"date":"2024-08-15T01:11:31","date_gmt":"2024-08-15T08:11:31","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=1045410"},"modified":"2024-08-15T16:29:16","modified_gmt":"2024-08-15T23:29:16","slug":"aurora-forecasting","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/aurora-forecasting\/","title":{"rendered":"Aurora Forecasting"},"content":{"rendered":"
Aurora Forecasting<\/h1>\n\n\n\n

A flexible 3D foundation model of the atmosphere.<\/p>\n\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/div>\n<\/section>\n\n\n\n\n\n

What is Aurora?<\/h2>\n\n\n\n

Aurora, developed by a team of Microsoft researchers, is a cutting-edge AI foundation model that can extract valuable insights from vast amounts of atmospheric data. This 1.3-billion-parameter model excels at a wide range of prediction tasks, even in data-sparse regions or extreme weather scenarios.<\/p>\n\n\n\n

A foundation model approach to the atmosphere<\/h3>\n\n\n\n

Aurora is a large-scale deep learning model that can predict global weather patterns and atmospheric processes like air pollution. It is a type of AI model called a foundation model, which means it was first trained on a huge amount of diverse weather and climate data to build general knowledge, and then fine-tuned to excel at specific prediction tasks. Aurora can produce high-resolution global forecasts much faster than traditional numerical weather models while matching or exceeding their accuracy.<\/p>\n\n\n\n

A recent study by Charlton-Perez et al. (2024) underscored the challenges faced by even the most advanced AI weather-prediction models in capturing the rapid intensification and peak wind speeds of Storm Ciar\u00e1n. Aurora presents a new approach to weather forecasting that could transform our ability to predict and mitigate the impacts of extreme events\u2014including being able to anticipate the dramatic escalation of an event like Storm Ciar\u00e1n.<\/p>\n\n\n\n

What makes Aurora a foundation model?<\/strong><\/p>\n\n\n\n

Aurora is considered a foundation model because it is trained in two main phases. First, in the “pre-training” phase, Aurora learns general-purpose representations of weather and climate by training on a vast and diverse set of data, including analysis, re-analysis, and forecast simulations. Then, in the “fine-tuning” phase, Aurora adapts its knowledge to excel at specific tasks like 10-day global weather forecasting or 5-day air pollution prediction, using smaller sets of high-quality data. This training approach allows Aurora to capture intricate patterns and tackle prediction tasks even when task-specific training data is limited.<\/p>\n\n\n\n
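To make the two-phase recipe concrete, here is a minimal, self-contained sketch (illustrative only: every dataset, model, and number below is a hypothetical stand-in, not Aurora's actual code or data). A simple model is first pre-trained across several large, diverse synthetic datasets, then the same weights are fine-tuned on a small task-specific dataset.

```python
# Toy sketch of the two-phase foundation-model recipe (hypothetical,
# not Aurora's real implementation): pre-train on diverse data sources,
# then fine-tune the resulting weights on a small task-specific set.
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n, true_w, noise=0.1):
    """Synthetic stand-in for one weather/climate data source."""
    X = rng.normal(size=(n, 3))
    y = X @ true_w + noise * rng.normal(size=n)
    return X, y

def sgd_fit(w, X, y, lr=0.05, epochs=200):
    """Plain gradient descent on mean squared error."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

base_w = np.array([1.0, -2.0, 0.5])  # structure shared across sources

# Phase 1: "pre-training" on several large, diverse datasets.
w = np.zeros(3)
for _ in range(4):
    task_w = base_w + 0.1 * rng.normal(size=3)  # related but distinct tasks
    X, y = make_dataset(2000, task_w)
    w = sgd_fit(w, X, y)

# Phase 2: "fine-tuning" on a small, high-quality task-specific dataset.
target_w = base_w + 0.1 * rng.normal(size=3)
X_small, y_small = make_dataset(20, target_w)
w_finetuned = sgd_fit(w.copy(), X_small, y_small, epochs=50)

print("pre-trained loss on task data:", mse(w, X_small, y_small))
print("fine-tuned loss on task data:", mse(w_finetuned, X_small, y_small))
```

The point of the sketch is the shape of the workflow, not the model: pre-training starts fine-tuning from an informed initialization, which is why a handful of task-specific examples suffices in phase 2.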

What are the merits of a foundation model approach for modelling and predicting the Earth System?<\/strong><\/p>\n\n\n\n

The foundation model approach offers several key advantages for modelling the Earth System:<\/p>\n\n\n\n

    \n
  1. Leveraging diverse data: By training on vast amounts of varied weather and climate data during pre-training, foundation models like Aurora can extract rich, generalizable representations of atmospheric dynamics that traditional models cannot capture.<\/li>\n\n\n\n
  2. Adaptability to new tasks: The fine-tuning phase allows foundation models to quickly adapt to new prediction tasks, even with limited task-specific data, by building upon the knowledge gained during pre-training.<\/li>\n\n\n\n
  3. Computational efficiency: Once pretrained and fine-tuned, foundation models can generate forecasts much faster than physics-based simulations while maintaining high accuracy.<\/li>\n\n\n\n
  4. Potential for unifying Earth System modelling: By extending the foundation model approach to other Earth subsystems like oceans and land, we could move towards building a comprehensive model of the entire Earth System.<\/li>\n<\/ol>\n\n\n\n
    \"diagram<\/figure>\n\n\n\n
    <\/div>\n\n\n\n\n\n

    How is Aurora different from existing AI models like GraphCast?<\/strong><\/p>\n\n\n\n

    Aurora differs from existing AI weather models like GraphCast, PanguWeather, and FourcastNet in a few key ways:<\/p>\n\n\n\n

      \n
    1. Generality: While models like GraphCast are designed for a specific task (10-day global weather forecasting at 0.25\u00b0 resolution) using a single dataset (ERA5), Aurora is a more general system that can learn from many diverse datasets and adapt to various prediction tasks.<\/li>\n\n\n\n
    2. Scale of training data: Aurora was pre-trained on a much larger and more diverse set of weather and climate simulation data compared to models like GraphCast, allowing it to build more comprehensive general knowledge.<\/li>\n\n\n\n
    3. Architecture: Aurora employs a flexible encoder-decoder architecture with Perceiver modules that can handle datasets with varying resolutions, variables, and pressure levels, unlike task-specific architectures used in other AI weather models.<\/li>\n\n\n\n
    4. Performance: Aurora’s strong results across benchmarks demonstrate the advantages of its foundation model approach in terms of accuracy, computational efficiency, and ability to adapt to more granular resolutions and new tasks with less data.<\/li>\n<\/ol>\n\n\n\n
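The flexibility described in point 3 rests on a simple idea, sketched below in a minimal, hypothetical form (this is not Aurora's actual architecture, and all names and sizes are made up): a fixed set of learned latent vectors cross-attends to the input tokens, so inputs of any length, and hence grids of any resolution, map to a representation of the same fixed size.

```python
# Minimal sketch of Perceiver-style cross-attention (illustrative only):
# a fixed set of latent vectors attends to inputs of any length, so
# datasets with different resolutions yield same-size representations.
import numpy as np

rng = np.random.default_rng(0)
d = 16          # feature dimension (hypothetical)
n_latents = 8   # fixed number of learned latent vectors (hypothetical)

latents = rng.normal(size=(n_latents, d))  # learned queries
W_q, W_k, W_v = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(latents, tokens):
    """Map (n_tokens, d) inputs to a fixed (n_latents, d) output."""
    Q = latents @ W_q
    K = tokens @ W_k
    V = tokens @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d))  # (n_latents, n_tokens)
    return attn @ V                       # (n_latents, d)

coarse = rng.normal(size=(100, d))   # e.g. tokens from a coarse grid
fine = rng.normal(size=(1000, d))    # e.g. tokens from a finer grid
print(cross_attend(latents, coarse).shape,
      cross_attend(latents, fine).shape)  # → (8, 16) (8, 16)
```

Because the output size depends only on the number of latents, the same downstream network can process datasets with differing resolutions, variables, and pressure levels.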

      What data was Aurora trained on?<\/strong><\/p>\n\n\n\n

      Aurora was trained on a diverse set of weather and climate simulation data from various sources, including but not limited to:<\/p>\n\n\n\n

        \n
      1. ERA5: A high-quality global reanalysis dataset that combines model predictions with observational data.<\/li>\n\n\n\n
      2. CMIP6: Climate model simulations from the Coupled Model Intercomparison Project.<\/li>\n\n\n\n
      3. IFS forecasts: Predictions from the European Centre for Medium-Range Weather Forecasts’ Integrated Forecasting System at different resolutions.<\/li>\n\n\n\n
      4. GFS data: Analysis and forecast data from the National Oceanic and Atmospheric Administration’s Global Forecast System.<\/li>\n<\/ol>\n\n\n\n

        During pre-training, Aurora learned from over a million hours of this simulation data. For fine-tuning, Aurora used smaller, high-quality datasets specific to each prediction task, such as IFS-HRES data for weather forecasting and CAMS analysis data for air pollution prediction.<\/p>\n\n\n\n

        Which prediction tasks can Aurora currently tackle?<\/strong><\/p>\n\n\n\n

        Currently, Aurora has demonstrated strong performance on several key atmospheric prediction tasks:<\/p>\n\n\n\n

          \n
        1. Medium-range global weather forecasting: Aurora can produce skillful 10-day global weather forecasts at both 0.25\u00b0 and 0.1\u00b0 resolution, outperforming the state-of-the-art IFS-HRES model and other AI models like GraphCast.<\/li>\n\n\n\n
        2. Global air pollution forecasting: Aurora can generate 5-day global forecasts of atmospheric chemistry and air pollutants at 0.4\u00b0 resolution, matching or surpassing the accuracy of the CAMS operational system.<\/li>\n\n\n\n
        3. Extreme weather event prediction: Aurora has shown improved ability to predict extreme weather events like Storm Ciar\u00e1n compared to other AI models, capturing sudden intensification that other models missed.<\/li>\n<\/ol>\n\n\n\n

          The Aurora team is working to expand Aurora's capabilities to additional tasks such as regional nowcasting, seasonal predictions, and probabilistic forecasting.<\/p>\n\n\n\n

          How much computing power does Aurora use?<\/strong><\/p>\n\n\n\n

          While Aurora can generate forecasts very efficiently once pre-trained and fine-tuned, the training process itself is computationally intensive. Pre-training Aurora on the diverse dataset of over a million hours of simulation data took about 2.5 weeks using 32 NVIDIA A100 GPUs. Fine-tuning is less demanding but still significant, taking around 5 days on 8 A100 GPUs.<\/p>\n\n\n\n

          However, this upfront computational investment pays off in the operational efficiency of the trained model. Aurora can produce a 10-day global weather forecast or a 5-day global air pollution forecast in just seconds on a single GPU, approximately 5,000 times faster than traditional numerical weather prediction systems like IFS, which require hours on large supercomputers.<\/p>\n\n\n\n

          Does this technology use Azure?<\/strong><\/p>\n\n\n\n

          Yes, Aurora\u2019s training pipeline has been optimized to leverage the cutting-edge capabilities of Azure cloud computing for training deep learning models at scale.<\/p>\n\n\n\n

          Is Aurora open-sourced?<\/strong><\/p>\n\n\n\n

          While Aurora is currently used only for internal research purposes, our team is planning to open-source the model’s code and make it publicly available in the future. This would enable the broader research community to contribute to Aurora’s ongoing development and improvement.<\/p>\n\n\n\n

          Can I license this technology for use in my product\/service?<\/strong><\/p>\n\n\n\n

          Yes, please reach out to us to discuss more details. We are excited to see how people and organizations can build on Aurora to tackle the pressing challenges posed by climate change.<\/p>\n\n\n\n

          Can I contribute to Aurora\u2019s future development?<\/strong><\/p>\n\n\n\n

          Our team is open to collaborating with domain experts to further enhance and expand Aurora\u2019s capabilities. Please get in touch!<\/p>\n\n\n\n

          What are the next steps for the Aurora project?<\/strong><\/p>\n\n\n\n

          The Aurora team has several key next steps planned:<\/p>\n\n\n\n

            \n
          1. Open-sourcing: The team is working on making Aurora’s code and models publicly available to enable the broader research community to build upon and extend this work.<\/li>\n\n\n\n
          2. Expert evaluation: Aurora will undergo peer review and evaluation by domain experts to validate its performance and identify areas for improvement.<\/li>\n\n\n\n
          3. Collaboration with weather agencies: The team is in discussions with major weather prediction organizations like ECMWF, NOAA, and the UK Met Office to assess Aurora’s potential for integration into operational forecasting systems.<\/li>\n\n\n\n
          4. Enhancing capabilities: Ongoing work aims to further increase Aurora’s resolution and accuracy, as well as expand its range of prediction tasks to include regional nowcasting, seasonal predictions, and probabilistic forecasting.<\/li>\n\n\n\n
          5. Towards an Earth System foundation model: The success of Aurora in atmospheric modelling sets the stage for extending the foundation model approach to other Earth subsystems like oceans and land, moving closer to a comprehensive, unified model of the entire Earth System.<\/li>\n<\/ol>\n\n\n","protected":false},"excerpt":{"rendered":"

            A flexible 3D foundation model of the atmosphere. Aurora, developed by a team of Microsoft researchers, is a cutting-edge AI foundation model that can extract valuable insights from vast amounts of atmospheric data. This 1.3 billion parameter model excels at a wide range of prediction tasks, even in data-sparse regions or extreme weather scenarios. Aurora […]<\/p>\n","protected":false},"featured_media":1040676,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"footnotes":""},"research-area":[13556],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-1045410","msr-project","type-msr-project","status-publish","has-post-thumbnail","hentry","msr-research-area-artificial-intelligence","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"","related-publications":[1039116],"related-downloads":[],"related-videos":[],"related-groups":[],"related-events":[],"related-opportunities":[],"related-posts":[],"related-articles":[],"tab-content":[],"slides":[],"related-researchers":[{"type":"user_nicename","display_name":"Wessel Bruinsma","user_id":42339,"people_section":"Section name 0","alias":"wbruinsma"},{"type":"user_nicename","display_name":"Ana Lucic","user_id":42480,"people_section":"Section name 0","alias":"t-analucic"},{"type":"user_nicename","display_name":"Paris Perdikaris","user_id":43110,"people_section":"Section name 0","alias":"paperdikaris"},{"type":"user_nicename","display_name":"Megan Stanley","user_id":41482,"people_section":"Section name 0","alias":"meganstanley"},{"type":"user_nicename","display_name":"Richard Turner","user_id":42687,"people_section":"Section name 
0","alias":"t-rturner"}],"msr_research_lab":[851467],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/1045410"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":10,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/1045410\/revisions"}],"predecessor-version":[{"id":1076298,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/1045410\/revisions\/1076298"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/1040676"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1045410"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=1045410"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=1045410"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1045410"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=1045410"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}