
Signal recovery from nonlinear distortion in optical communications

The final project of Deep Learning 2021 course at Skoltech, Russia.

❗ Attention ❗ We kindly ask the evaluating committee to consider PRESENTATION.pdf and REPORT.pdf from this repository rather than from Canvas. They were significantly improved the night after the deadline and are very different from the versions submitted to Canvas.

Team members:

  • Ilya Kuk
  • Razan Dibo
  • Mohammed Deifallah
  • Sergei Gostilovich
  • Alexey Larionov
  • Stanislav Krikunov
  • Alexander Blagodarnyi

Inspired by the papers:

  1. Advancing theoretical understanding and practical performance of signal processing for nonlinear optical communications through machine learning.
  2. Fundamentals of Coherent Optical Fiber Communications

Brief repository overview

  • train.py - entry point for training models (see the Reproduce training and inference section)
  • notebooks/training.ipynb - a quickstart Jupyter notebook for training or loading from a checkpoint
  • configs/ - YAML files that define each experiment's parameters
  • data/ - definitions of datasets (either preloaded or generated)
  • materials/ - supplementary materials like reports, plots
  • models/ - definitions of models and their training process (optimizers, learning rate schedulers)
  • auxiliary/ - supporting files with utility functions
  • 👉 PRESENTATION.pdf - final presentation
  • 👉 REPORT.pdf - project final report
  • 👉 VIDEO_PRESENTATION.txt - link to the video of the project presentation

Requirements

A GPU is recommended for running the experiments. You can use Google Colab with the Jupyter notebooks provided in the notebooks/ folder.

Main prerequisites are:

Optional:

  • google-colab - if you want to mount Google Drive in your Jupyter notebook to store training artifacts
  • gdown - if you want to download checkpoints from Google Drive by ID (a usage sketch follows this list)
  • tensorboard - if you want to view training/validation metrics of different experiments
  • torchvision - only if you want to debug the workflow with a trivial MNIST classifier example
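
For example, assuming a checkpoint is shared on Google Drive, it could be fetched with gdown roughly as follows (the file ID and output name below are placeholders, not actual project artifacts):

import gdown

# Placeholder: substitute the Google Drive ID of a shared checkpoint
file_id = "FILE_ID"
gdown.download(f"https://drive.google.com/uc?id={file_id}", "checkpoint.ckpt", quiet=False)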

Reproduce training and inference

The easiest way to start training one of the experiments listed in configs/ is to run

python train.py --config configs/your_chosen_experiment.yaml

After that you'll find two new folders: downloads/, with externally downloaded files (such as datasets), and logs/, which contains a folder for each distinct experiment. Under each experiment folder you'll find the results of every run of that experiment, in folders named version_0/, version_1/, etc., each of which contains:

  • config.yaml with the parameters of the experiment for reproducibility (same parameters as in your_chosen_experiment.yaml)
  • events.out.tfevents... file with logs of TensorBoard, ready to be visualized in it
  • checkpoints/ directory with the best-epoch checkpoint and the latest-epoch checkpoint (you can use these to resume training, or load them for inference)
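
If you want to inspect one of these checkpoints before resuming or loading it, here is a minimal sketch (the path below is a placeholder; the exact keys depend on the pytorch_lightning version):

import torch

# Placeholder path: substitute your experiment name, version and checkpoint file
ckpt = torch.load("logs/your_chosen_experiment/version_0/checkpoints/last.ckpt", map_location="cpu")
# pytorch_lightning checkpoints typically contain 'state_dict', 'epoch' and optimizer state
print(ckpt.keys())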

A more convenient way to start training (or resume it, or load a model for inference) is the notebooks/training.ipynb Jupyter notebook. In its first section you can set the parameters of the run; the other sections usually don't need any adjustments. After you "Run All" the notebook, either training will start (new or resumed) or only the model weights will be loaded (if you've chosen 'load_model'; see the notebook).

Once the notebook has run completely, you will have a model variable of type pytorch_lightning.LightningModule. You can run inference with it using model.forward(x).
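
A minimal end-to-end sketch of this inference step (the module class name, checkpoint path, and input shape below are placeholders; the actual class lives in models/ and the input shape depends on the dataset):

import torch
# Hypothetical class name: replace with the LightningModule subclass used by your experiment
from models import ExperimentModel

# Placeholder path to a checkpoint produced by a previous run
model = ExperimentModel.load_from_checkpoint("logs/your_chosen_experiment/version_0/checkpoints/last.ckpt")
model.eval()

x = torch.randn(1, 1024)  # placeholder input; use a real distorted signal with the shape the model expects
with torch.no_grad():
    y = model(x)  # equivalent to model.forward(x)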
