Downscaling Reanalysis Data

Aims

To explore our capabilities with "dynamical downscaling", a simple test project has been created that downscales reanalysis data.

  1. To use transfer learning to downscale reanalysis data (via an encoder-like structure).

Methodology

Reanalysis

Downscaling with machine learning is performed on hourly ERA5 reanalysis data selected over the New Zealand domain. The reanalysis data is located at /data/nz_reanalysis_data_hourly.nc.

The dataset contains 2600 hourly snapshots of New Zealand 2-metre temperature spanning a 3-month period in 2020. The first 2 months are used as training data and the last month as testing data. Note that with such a limited record the accuracy will be significantly lower than it would be with more data; this is, however, a test case to explore the methodology.
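A minimal loading-and-splitting sketch with xarray is shown below; the variable name t2m is an assumption, since the actual name inside the NetCDF file is not documented here, and the split is done on the time index rather than on named months.

```python
import xarray as xr

# Load the hourly ERA5 snapshots over the New Zealand domain.
ds = xr.open_dataset("/data/nz_reanalysis_data_hourly.nc")
t2m = ds["t2m"]  # assumed variable name for 2 m temperature

# First two of the three months are used for training, the last for testing.
n_train = int(len(t2m.time) * 2 / 3)
train = t2m.isel(time=slice(0, n_train))
test = t2m.isel(time=slice(n_train, None))
```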

Processing

  1. The predictor, X, is interpolated to a coarse resolution, approximately 1/8th that of the original reanalysis. The reanalysis is at roughly 25 km, so the predictor is interpolated to roughly 200 km. Since the resolution is reduced by a factor of 8 in each horizontal dimension, the coarse field carries roughly 1/64th of the original information, but the array dimensionality is kept constant. An example of this processing is illustrated below (Figure 1).
  2. The target variable, y, is kept at the 25 km resolution (Figure 1).
  3. To exploit transfer learning, a linear transformation is applied to each image, scaling it to the range [0, 255] and replicating it across three channels; see the sketch after this list.
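A minimal sketch of steps 1-3, assuming xarray with ERA5-style latitude/longitude coordinates; the block-mean coarsening, the dimension names, and the helper names are illustrative choices rather than the project's exact implementation.

```python
import numpy as np
import xarray as xr

def make_predictor(field: xr.DataArray) -> xr.DataArray:
    """Coarsen a ~25 km field by a factor of 8 (to ~200 km), then
    interpolate back onto the original grid so dimensionality is constant."""
    coarse = field.coarsen(latitude=8, longitude=8, boundary="trim").mean()
    # NB: points outside the coarse-grid hull come back as NaN;
    # edge handling is omitted in this sketch.
    return coarse.interp(latitude=field.latitude, longitude=field.longitude)

def to_three_channels(img: np.ndarray, vmin: float, vmax: float) -> np.ndarray:
    """Linearly rescale to [0, 255] and replicate across three channels,
    matching the RGB input the ImageNet-pretrained encoder expects."""
    scaled = 255.0 * (img - vmin) / (vmax - vmin)
    return np.repeat(scaled[..., np.newaxis], 3, axis=-1)
```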

Machine Learning

To train our model rapidly, we use a pretrained U-Net architecture with a VGG backbone from https://github.com/qubvel/segmentation_models. Note that this repository was created for computer vision (image segmentation), but we have adapted it for our own purpose.

The encoder part of the model, i.e. the feature-extraction bottleneck (like PCA on steroids), is not trained; its weights are taken from ImageNet pretraining (see the U-Net schematic below).
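Constructing such a model with the segmentation_models package looks roughly like this; the specific backbone variant ("vgg16"), the linear output activation, and the single output class are assumptions consistent with the description above.

```python
import segmentation_models as sm

sm.set_framework("tf.keras")

# U-Net with a VGG backbone; the encoder is frozen and keeps its
# ImageNet weights, so only the decoder is trained.
model = sm.Unet(
    backbone_name="vgg16",        # assumed VGG variant
    input_shape=(None, None, 3),  # three channels, as prepared above
    classes=1,                    # single output field (2 m temperature)
    activation="linear",          # regression rather than segmentation
    encoder_weights="imagenet",
    encoder_freeze=True,
)
```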

An illustration of the model architecture is also shown below (model parameters figure).

Training

The model was trained for a total of 330 epochs on a CPU, which took about 3 hours.
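A hedged sketch of the training step; only the 330 epochs come from the text above, while the optimizer, learning rate, loss, batch size, and the X_train/y_train array names are assumptions for illustration.

```python
import tensorflow as tf

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # assumed
    loss="mse",  # assumed regression loss
)

history = model.fit(
    X_train, y_train,  # hypothetical arrays: 3-channel inputs, 25 km targets
    validation_data=(X_test, y_test),
    epochs=330,        # as stated above
    batch_size=16,     # assumed
)
```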

Results

The results are remarkable; some examples are shown below (Examples 1-4). With more data, the results would likely improve further.
