Deep-Learning-for-Solar-Panel-Recognition

Recognition of photovoltaic cells in aerial images with Convolutional Neural Networks (CNNs). Object detection with YOLOv5 models and image segmentation with Unet++, FPN, DeepLabV3+ and PSPNet.

💽 Installation + PyTorch CUDA 11.3

Create a Python 3.8 virtual environment and run the following command:

pip install -r requirements.txt && pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113

With Anaconda:

pip install -r requirements.txt && conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch

💻 How to start?


OBJECT DETECTION

  1. Specify the location of the data in sp_dataset.yaml.
  2. Preprocess and generate annotations with yolo_preprocess_data.py and create_yolo_annotations.py respectively.
  3. Run yolo_train.py for training.
  4. Run yolo_detect.py for inference.
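The annotations generated in step 2 follow the standard YOLO format: one line per object, with the class id and a box given as normalized center coordinates plus width and height. A minimal sketch of the conversion from pixel-space corners (the function name is illustrative, not taken from the repo's scripts):

```python
def to_yolo_bbox(x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a pixel-space corner box to YOLO's normalized center format."""
    x_center = (x_min + x_max) / 2 / img_w
    y_center = (y_min + y_max) / 2 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return x_center, y_center, width, height

# A 50x50 panel in the top-left quadrant of a 100x100 tile:
print(to_yolo_bbox(0, 0, 50, 50, 100, 100))  # (0.25, 0.25, 0.5, 0.5)
```

Each annotation line is then written as `class x_center y_center width height`, e.g. `0 0.25 0.25 0.5 0.5`.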

SEGMENTATION

  1. Specify the structure of the data in segmentation/datasets.py.
  2. The code to train and run segmentation models can be found in the notebooks section.
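Segmentation datasets like this one pair each image with a same-named mask file. A minimal stdlib sketch of that pairing logic (the layout and function name are assumptions for illustration, not the repo's actual datasets.py code):

```python
from pathlib import Path

def pair_images_with_masks(image_dir, mask_dir, exts=(".png", ".jpg", ".tif")):
    """Pair each image with the same-named mask file; skip unmatched images."""
    pairs = []
    for img in sorted(Path(image_dir).iterdir()):
        if img.suffix.lower() not in exts:
            continue
        mask = Path(mask_dir) / img.name
        if mask.exists():
            pairs.append((img, mask))
    return pairs
```

A dataset class would then load each `(image, mask)` pair and apply the same geometric augmentations to both.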

๐Ÿ” Data sources


  • ☀ Solar Panels Dataset

    Multi-resolution dataset for photovoltaic panel segmentation from satellite and aerial imagery (https://zenodo.org/record/5171712)
  • ๐ŸŒ Google Maps Aerial Images

    • GoogleMapsAPI: src/data/wrappers.GoogleMapsAPIDownloader
    • Web Scraping: src/data/wrappers.GoogleMapsWebDownloader
  • 📡 Sentinel-2 Data (unused)

    Sentinel-2 Satellite data from Copernicus. src/data/wrappers.Sentinel2Downloader
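Downloaders like these typically rely on coordinate utilities that convert latitude/longitude to Web Mercator (slippy-map) tile indices. A sketch of the standard formula (an illustration of the general technique, not the repo's exact utils.py code):

```python
import math

def latlon_to_tile(lat, lon, zoom):
    """Standard WGS84 lat/lon -> Web Mercator tile indices at a given zoom."""
    n = 2 ** zoom  # number of tiles per axis at this zoom level
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y
```

The inverse transform gives each tile's bounding box, which is what lets downloaded tiles be georeferenced and stitched into larger aerial mosaics.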

🛠 Processing pipeline


(Processing pipeline diagram)

🧪 Models


  • Object Detection

    • YOLOv5-S: 7.2 M parameters
    • YOLOv5-M: 21.2 M parameters
    • YOLOv5-L: 46.5 M parameters
    • YOLOv5-X: 86.7 M parameters

    Architectures are based on the YOLOv5 repository.

    Download all the models here.

  • Image Segmentation

    • Unet++: ~20 M parameters
    • FPN: ~20 M parameters
    • DeepLabV3+: ~20 M parameters
    • PSPNet: ~20 M parameters

    Architectures are based on the segmentation_models.pytorch repository.

    Download all the models here.

📈 Results


  • Metrics

Object Detection vs Image Segmentation

  • Dataset and Google Maps images

Object Detection vs Image Segmentation
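Segmentation quality in comparisons like the ones above is commonly scored with Intersection-over-Union (IoU). A minimal sketch on flat binary masks (an illustration of the metric, not the project's evaluation code):

```python
def iou(pred, target):
    """IoU for two flat binary masks given as 0/1 sequences."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    # Two empty masks agree perfectly by convention
    return inter / union if union else 1.0

print(iou([1, 1, 0, 0], [1, 0, 1, 0]))  # 1/3 = 0.333...
```

In practice the same computation runs over 2-D mask arrays, averaged across the validation set.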

๐ŸŒ Project Organization

├── LICENSE
├── README.md          <- The top-level README for developers using this project.
├── data               <- Data for the project (omitted)
├── docs               <- A default Sphinx project; see sphinx-doc.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks.
│        ├── segmentation_pytorch_lightning.ipynb     <- Segmentation modeling with PyTorch Lightning.
│        └── segmentation_pytorch.ipynb               <- Segmentation modeling with vanilla PyTorch.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│        ├── figures        <- Generated graphics and figures to be used in reporting
│        ├── Solar-Panels-Project-Report-UC3M         <- Main report
│        └── Solar-Panels-Presentation-UC3M.pdf       <- Presentation slides for the project.
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- Makes the project pip-installable (`pip install -e .`) so src can be imported
├── src                <- Source code for use in this project.
│       ├── __init__.py    <- Makes src a Python module
│       │
│       ├── data           <- Scripts to download or generate data
│       │       ├── download.py   <- Main scripts to download Google Maps and Sentinel-2 data.
│       │       ├── wrappers.py   <- Wrappers for the Google Maps and Sentinel-2 APIs.
│       │       └── utils.py      <- Utility functions for coordinate operations.
│       │
│       ├── features       <- Scripts to turn raw data into features for modeling
│       │       ├── create_yolo_annotations.py   <- Experimental script to create YOLO annotations.
│       │       └── yolo_preprocess_data.py      <- Script to process YOLO annotations.
│       │
│       ├── models         <- Scripts to train models and then use trained models to make predictions
│       │       ├── segmentation  <- Image segmentation scripts to train Unet++, FPN, DeepLabV3+ and PSPNet models.
│       │       └── yolo          <- Object detection scripts to train YOLO models.
│       │
│       └── visualization  <- Scripts to create exploratory and results-oriented visualizations
│            └── visualize.py
│
└── tox.ini            <- tox file with settings for running tox; see tox.readthedocs.io

deep-learning-for-solar-panel-recognition's People

Contributors

cedarsnow · dependabot[bot] · saizk


deep-learning-for-solar-panel-recognition's Issues

Project Instruction

Hi,

Great work — I enjoyed reading the presentation PDF in the reports directory. But the code does not seem to work, and there are no instructions on how to access the data and train the models. Are the commands in the Makefile (e.g. "make data") corrupted, or is the data synced from S3? How can I run this project? I also noticed that the dataset consists of PV images at different resolutions. How is the resolution difference handled during training?

Thanks so much!

Specify supported Python versions & operating systems

I have tried installing the dependencies with Python 3.9, 3.10 and 3.11 (brew) on macOS. I wonder whether macOS is supported at all (which would be understandable, as CUDA/torch are hard to work with on macOS) and which Python version the project expects.

It would be good to have a .python-version file with the supported Python versions, and to specify in the README which operating systems are supported / were tested.
