
Multiclass_Metasurface_InverseDesign

Introduction

Welcome to the Raman Lab GitHub! This repo will walk you through the code used in the following publication: https://onlinelibrary.wiley.com/doi/10.1002/adom.202100548

Here, we use a conditional deep convolutional generative adversarial network (cDCGAN) to inverse design across multiple classes of metasurfaces.

Requirements

The following software is required to run the provided scripts. As of this writing, the versions below have been tested and verified. Training on GPU is recommended due to lengthy training times with GANs.

- Python 3.7

- Pytorch 1.9.0

- CUDA 10.2 (recommended for training on GPU)

- OpenCV 3.4.2 (requires Python 3.7; Python 3.8 is not supported as of this writing)

- Scipy 1.6.2

- Matplotlib

- ffmpeg

- Pandas

- Spyder

Installation instructions for Pytorch (with CUDA) are at: https://pytorch.org/. For convenience, here are installation commands for the Conda distribution (after installing Anaconda: https://www.anaconda.com/products/individual).

conda create -n myenv python=3.7
conda activate myenv
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
conda install -c anaconda opencv
conda install -c anaconda scipy
conda install matplotlib
conda install -c conda-forge ffmpeg
conda install pandas
conda install spyder
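After installation, a quick sanity check can confirm the packages are importable (a minimal sketch using only the standard library; the package list mirrors the install commands above):

```python
# Check that the packages installed above are importable (standard library only).
import importlib.util

required = ["torch", "cv2", "scipy", "matplotlib", "pandas"]
missing = [name for name in required if importlib.util.find_spec(name) is None]
print("Missing packages:", missing or "none")
```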

Steps

0) Set up 'ffmpeg':

Go to the 'Utilities/SaveAnimation.py' file and update the following line to set up 'ffmpeg' (Linux):

plt.rcParams['animation.ffmpeg_path'] = '/home/ramanlab/anaconda3/pkgs/ffmpeg-3.1.3-0/bin/ffmpeg'

Refer to this thread for more information (or for working on Windows): https://stackoverflow.com/questions/23856990/cant-save-matplotlib-animation. Alternatively, comment out the 'save_video' line in 'DCGAN_Train.py'.
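If 'ffmpeg' is already on your PATH, you can also locate it automatically instead of hard-coding the machine-specific path above (a sketch using the standard library's `shutil.which`):

```python
import shutil

# Returns the ffmpeg executable's full path, or None if it is not on PATH.
ffmpeg_path = shutil.which("ffmpeg")

# If found, point matplotlib at it instead of the hard-coded path:
# plt.rcParams['animation.ffmpeg_path'] = ffmpeg_path
```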

1) Train the cDCGAN (DCGAN_Train.py)

Download the files in the 'Training Data' and 'Results' folders and update the following lines in the 'DCGAN_Train.py' file:

#Location of Training Data
spectra_path = 'C:/.../absorptionData_HybridGAN.csv'

#Location to Save Models (Generators and Discriminators)
save_dir = 'C:/.../'

#Root directory for dataset (images must be in a subdirectory within this folder)
img_path = 'C:/.../Images'

Running this file will train the cDCGAN and save the models to the specified location (every 50 epochs). Since model performance depends on the number of training epochs, multiple generators are saved in a single training session. In our tests with our training data, the optimal generator appeared at about 500 epochs (this may differ for other datasets). Depending on the available hardware, training can take up to a few hours. After training, the following files will also be produced:
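The every-50-epochs checkpointing described above can be sketched as follows (the file-naming scheme here is an illustrative assumption, not the repo's exact convention):

```python
import os

def checkpoint_path(save_dir, epoch, interval=50):
    """Return a generator save path when `epoch` hits the interval, else None."""
    if epoch > 0 and epoch % interval == 0:
        return os.path.join(save_dir, f"netG_epoch_{epoch}.pt")
    return None

# In the training loop, torch.save(netG.state_dict(), path) would be called
# whenever checkpoint_path(...) returns a path instead of None.
```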

1.1) Log file showing losses and total training time (training_log.txt):

Start Time = Thu Jul  1 11:02:47 2021
[0/500][0/1174]	Loss_D: 2.0491	Loss_G: 19.2079	D(x): 0.6574	D(G(z)): 0.6823 / 0.0000
[0/500][50/1174]	Loss_D: 4.1192	Loss_G: 6.7932	D(x): 0.6742	D(G(z)): 0.9405 / 0.0028
...

1.2) Video showing generator outputs per epoch (animation.mp4):

1.3) Plots of Generator and Discriminator losses (losses.png):

For more detailed interpretation of the losses, please refer to: https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html

2) Load cDCGAN & Predict by Inputting Target Spectrum (DCGAN_Predict.py)

Update the following lines in the 'DCGAN_Predict.py' file:

#Location of Saved Generator
netGDir='C:/.../*.netG__.pt'

#Location of Training Data
spectra_path = 'C:/.../absorptionData_HybridGAN.csv'

Running this file will pass several spectra into the GAN, producing multiple colored images. The colored images are converted to black and white, then binarized for import into Lumerical FDTD (a commercial EM solver). Material properties are saved in the 'properties.txt' file.
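The color → grayscale → binary conversion described above can be sketched with plain NumPy (the actual script uses OpenCV; the threshold of 128 and the unweighted grayscale average are illustrative assumptions):

```python
import numpy as np

def to_binary(rgb, threshold=128):
    """Convert an HxWx3 uint8 image to an HxW array of 0/1 values."""
    gray = rgb.mean(axis=2)              # simple unweighted grayscale
    return (gray >= threshold).astype(np.uint8)
```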

3) Generate Simulation Model - Lumerical FDTD (DCGAN_FDTD.lsf)

To validate the designs generated by the cDCGAN, this repo is integrated with Lumerical FDTD. From Lumerical's script editor, run the 'DCGAN_FDTD.lsf' file and ensure that the binary and 'Master.fsp' files are in the same folder (default: '.../Results'). If done correctly, Lumerical models will be generated that reflect the GAN outputs.

4) Notes

4.1) How to Address Potential Errors

  1. If you get the following error:
BrokenPipeError: [Errno 32] Broken pipe

you are probably running on Windows and need to set 'workers = 0'. More details are described in the script comments.
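A minimal sketch of this workaround, choosing the DataLoader worker count by platform (the variable name `workers` follows the script; the non-Windows default of 2 is an assumption):

```python
import sys

def pick_workers(default=2):
    # Multiprocessing DataLoader workers can raise BrokenPipeError on Windows,
    # so fall back to single-process loading there.
    return 0 if sys.platform.startswith("win") else default

workers = pick_workers()
```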

4.2) How to Generalize the Code

As stated in the publication, we believe our approach can be applied to other material design problems. However, several changes must be made, which may not be obvious at first glance if you are not familiar with Python/Pytorch. Here are several recommendations for adapting the code to different design problems:

• Use a column/row definition of training data, where the columns are number of design parameters and rows are design instances.

• If grayscale images are preferred, a grayscale transformation is needed when defining the dataset.

• Related to the above point, changes to image dimensions or channels must be accompanied by corresponding changes to the 'nc' field.

• Most of the 'DCGAN_Predict.py' script (lines 93 and beyond) is not needed if you only want to generate images with the DCGAN. The rest of the code is for custom Lumerical support, but pay close attention to lines 70-91, which load the generator and pass inputs into it.

• Changes to Generator/Discriminator hyperparameters (in 'DCGAN_Train.py') must be mirrored in 'DCGAN_Predict.py', since Pytorch requires the model class to be redefined before saved weights can be loaded.
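The last point can be sketched as follows: the model class (with identical hyperparameters) must be defined before `load_state_dict` will accept saved weights. The toy `Generator` below is an illustrative stand-in, not the repo's actual architecture:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy stand-in; in practice this must match DCGAN_Train.py exactly."""
    def __init__(self, nz=100, nc=3):
        super().__init__()
        self.main = nn.Sequential(
            nn.ConvTranspose2d(nz, nc, 4, 1, 0, bias=False),  # 1x1 -> 4x4
            nn.Tanh(),
        )

    def forward(self, z):
        return self.main(z)

# Redefine the class, then load the saved weights into a fresh instance.
state = Generator().state_dict()
netG = Generator()
netG.load_state_dict(state)
netG.eval()
```

In 'DCGAN_Predict.py' the state dict would instead come from disk via `torch.load(netGDir)`; any mismatch between the two class definitions makes `load_state_dict` fail.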

Citation

If you find this repo helpful, or use any of the code you find here, please cite our work using the following:

C. Yeung, et al. Global Inverse Design across Multiple Photonic Structure Classes Using Generative Deep Learning. Advanced Optical Materials, 2021. 


