yuval-alaluf / hyperstyle

Official Implementation for "HyperStyle: StyleGAN Inversion with HyperNetworks for Real Image Editing" (CVPR 2022) https://arxiv.org/abs/2111.15666

Home Page: https://yuval-alaluf.github.io/hyperstyle/

License: MIT License

Languages: Python 78.14%, C++ 0.56%, Cuda 3.68%, Jupyter Notebook 17.61%
Topics: generative-adversarial-network, stylegan, stylegan-encoder, hypernetworks, cvpr2022

hyperstyle's Introduction

HyperStyle: StyleGAN Inversion with HyperNetworks for Real Image Editing (CVPR 2022)

Yuval Alaluf*, Omer Tov*, Ron Mokady, Rinon Gal, Amit H. Bermano
*Denotes equal contribution

The inversion of real images into StyleGAN's latent space is a well-studied problem. Nevertheless, applying existing approaches to real-world scenarios remains an open challenge, due to an inherent trade-off between reconstruction and editability: latent space regions which can accurately represent real images typically suffer from degraded semantic control. Recent work proposes to mitigate this trade-off by fine-tuning the generator to add the target image to well-behaved, editable regions of the latent space. While promising, this fine-tuning scheme is impractical for prevalent use as it requires a lengthy training phase for each new image. In this work, we introduce this approach into the realm of encoder-based inversion. We propose HyperStyle, a hypernetwork that learns to modulate StyleGAN's weights to faithfully express a given image in editable regions of the latent space. A naive modulation approach would require training a hypernetwork with over three billion parameters. Through careful network design, we reduce this to be in line with existing encoders. HyperStyle yields reconstructions comparable to those of optimization techniques with the near real-time inference capabilities of encoders. Lastly, we demonstrate HyperStyle's effectiveness on several applications beyond the inversion task, including the editing of out-of-domain images which were never seen during training.

Notebooks: Inference Notebook | Animation Notebook | Domain Adaptation Notebook


Given a desired input image, our hypernetworks learn to modulate a pre-trained StyleGAN network to achieve accurate image reconstructions in editable regions of the latent space. Doing so enables one to effectively apply techniques such as StyleCLIP and InterFaceGAN for editing real images.

Description

Official Implementation of our HyperStyle paper for both training and evaluation. HyperStyle introduces a new approach for learning to efficiently modify a pretrained StyleGAN generator based on a given target image through the use of hypernetworks.


Getting Started

Prerequisites

  • Linux or macOS
  • NVIDIA GPU + CUDA CuDNN (CPU may be possible with some modifications, but is not inherently supported)
  • Python 3

Installation

  • Dependencies: We recommend running this repository using Anaconda.
    All dependencies for defining the environment are provided in environment/hyperstyle_env.yaml.
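For example, a typical setup might look as follows. This is only a sketch: it assumes a standard Anaconda installation and that the environment defined in the yaml is named hyperstyle_env (check the name field in the yaml if activation fails).

conda env create -f environment/hyperstyle_env.yaml
conda activate hyperstyle_env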

Pretrained HyperStyle Models

In this repository, we provide pretrained HyperStyle models for various domains.
All models make use of a modified, pretrained e4e encoder for obtaining an initial inversion into the W latent space.

Please download the pretrained models from the following links.

Path | Description
Human Faces | HyperStyle trained on the FFHQ dataset.
Cars | HyperStyle trained on the Stanford Cars dataset.
Wild | HyperStyle trained on the AFHQ Wild dataset.

Auxiliary Models

In addition, we provide various auxiliary models needed for training your own HyperStyle models from scratch.
These include the pretrained e4e encoders into W, pretrained StyleGAN2 generators, and models used for loss computation.


Pretrained W-Encoders

Path | Description
Faces W-Encoder | Pretrained e4e encoder trained on FFHQ into the W latent space.
Cars W-Encoder | Pretrained e4e encoder trained on Stanford Cars into the W latent space.
Wild W-Encoder | Pretrained e4e encoder trained on AFHQ Wild into the W latent space.

StyleGAN2 Generators

Path | Description
FFHQ StyleGAN | StyleGAN2 model trained on FFHQ with 1024x1024 output resolution.
LSUN Car StyleGAN | StyleGAN2 model trained on LSUN Car with 512x384 output resolution.
AFHQ Wild StyleGAN | StyleGAN-ADA model trained on AFHQ Wild with 512x512 output resolution.
Toonify | Toonify generator from Doron Adler and Justin Pinkney, converted to PyTorch using rosinality's conversion script; used in domain adaptation.
Pixar | Pixar generator from StyleGAN-NADA, used in domain adaptation.

Note: all StyleGAN models are converted from the official TensorFlow models to PyTorch using the conversion script from rosinality.


Other Utility Models

Path | Description
IR-SE50 Model | Pretrained IR-SE50 model taken from TreB1eN for use in our ID loss and as the encoder backbone on the human facial domain.
ResNet-34 Model | ResNet-34 model trained on ImageNet, taken from torchvision, for initializing our encoder backbone.
MoCov2 Model | Pretrained ResNet-50 model trained using MoCo v2 for computing the MoCo-based loss on non-facial domains. The model is taken from the official implementation.
CurricularFace Backbone | Pretrained CurricularFace model taken from HuangYG123 for use in ID similarity metric computation.
MTCNN | Weights for the MTCNN model taken from TreB1eN for use in ID similarity metric computation. (Unpack the tar.gz to extract the 3 model weights.)

By default, we assume that all auxiliary models are downloaded and saved to the directory pretrained_models. However, you may use your own paths by changing the necessary values in configs/paths_config.py.
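As a reference, overriding these paths could look roughly like the sketch below. The dictionary and key names here are illustrative assumptions; consult configs/paths_config.py for the actual entries.

# Illustrative sketch only -- the real dictionary and key names are defined in configs/paths_config.py.
model_paths = {
    'stylegan_ffhq': '/my/models/stylegan2-ffhq-config-f.pt',  # hypothetical key and file name
    'ir_se50': '/my/models/model_ir_se50.pth',                 # hypothetical key and file name
    'w_encoder': '/my/models/faces_w_encoder.pt',              # hypothetical key and file name
}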



Training

Preparing your Data

In order to train HyperStyle on your own data, you should perform the following steps:

  1. Update configs/paths_config.py with the necessary data paths and model paths for training and inference.
dataset_paths = {
    'train_data': '/path/to/train/data',
    'test_data': '/path/to/test/data',
}
  2. Configure a new dataset under the DATASETS variable defined in configs/data_configs.py. There, you should define the source/target data paths for the train and test sets as well as the transforms to be used for training and inference.
DATASETS = {
	'my_hypernet': {
		'transforms': transforms_config.EncodeTransforms,   # can define a custom transform, if desired
		'train_source_root': dataset_paths['train_data'],
		'train_target_root': dataset_paths['train_data'],
		'test_source_root': dataset_paths['test_data'],
		'test_target_root': dataset_paths['test_data'],
	}
}
  3. To train with your newly defined dataset, simply use the flag --dataset_type my_hypernet.

Preparing your Generator

In this work, we use rosinality's StyleGAN2 implementation. If you wish to use your own generator trained using NVIDIA's implementation, there are a few options we recommend:

  1. Using NVIDIA's StyleGAN2 / StyleGAN-ADA TensorFlow implementation.
    You can then convert the TensorFlow .pkl checkpoints to the supported format using the conversion script found in rosinality's implementation.
  2. Using NVIDIA's StyleGAN-ADA PyTorch implementation.
    You can then convert the PyTorch .pkl checkpoints to the supported format using the conversion script created by Justin Pinkney found in dvschultz's fork.

Once you have the converted .pt files, you should be ready to use them in this repository.
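As a quick sanity check on a converted checkpoint, you can inspect its top-level keys. This is just a sketch with a hypothetical file name; the exact keys depend on the conversion script, but rosinality-format checkpoints typically contain a 'g_ema' entry (for example, the StyleCLIP editing script in this repository loads checkpoint['g_ema']).

import torch

# Load the converted checkpoint on the CPU and list its top-level keys.
ckpt = torch.load('pretrained_models/my_converted_generator.pt', map_location='cpu')
print(list(ckpt.keys()))  # typically includes 'g_ema'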


Training HyperStyle

The main training script can be found in scripts/train.py.
Intermediate training results are saved to opts.exp_dir. This includes checkpoints, train outputs, and test outputs.
Additionally, if you have tensorboard installed, you can visualize tensorboard logs in opts.exp_dir/logs.
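For example, with the experiment directory used in the training command below, the logs can be viewed by running:

tensorboard --logdir=experiments/hyperstyle/logs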

Training HyperStyle with the settings used in the paper can be done by running the following command. Here, we provide an example for training on the human faces domain:

python scripts/train.py \
--dataset_type=ffhq_hypernet \
--encoder_type=SharedWeightsHyperNetResNet \
--exp_dir=experiments/hyperstyle \
--workers=8 \
--batch_size=8 \
--test_batch_size=8 \
--test_workers=8 \
--val_interval=5000 \
--save_interval=10000 \
--lpips_lambda=0.8 \
--l2_lambda=1 \
--id_lambda=0.1 \
--n_iters_per_batch=5 \
--max_val_batches=150 \
--output_size=1024 \
--load_w_encoder \
--w_encoder_checkpoint_path pretrained_models/faces_w_encoder \
--layers_to_tune=0,2,3,5,6,8,9,11,12,14,15,17,18,20,21,23,24

Additional Notes

  • To select which generator layers to tune with the hypernetwork, you can use the --layers_to_tune flag.
    • By default, we will alter all non-toRGB convolutional layers.
  • ID/similarity losses:
    • For the human facial domain we use a specialized ID loss based on a pretrained ArcFace network. This is set using the flag --id_lambda=0.1.
    • For all other domains, please set --id_lambda=0 and --moco_lambda=0.5 to use the MoCo-based similarity loss from Tov et al.
      • Note, you cannot set both id_lambda and moco_lambda to be active simultaneously.
  • You should also adjust the --output_size and --stylegan_weights flags according to your StyleGAN generator.
  • To use HyperStyle with Refinement Blocks based on separable convolutions (see the ablation study in the paper), you can set --encoder_type to SharedWeightsHyperNetResNetSeparable.
  • See options/train_options.py for all training-specific flags.

Pre-Extracting Initial Inversions

To provide a small speed-up and slightly reduce memory consumption, you can pre-extract all the latents and inversions from our W-encoder rather than inverting on the fly during training.
We provide an example for how to do this in configs/data_configs.py under the ffhq_hypernet_pre_extract dataset.
Here, we must define:

  • train_source_root: the directory holding all the initial inversions
  • train_target_root: the directory holding all target images (i.e., original images)
  • train_latents_path: the .npy file holding the latents for the inversions of the form
    latents = { "0.jpg": latent, "1.jpg": latent, ... }.

And similarly for the test dataset.
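As a minimal sketch, writing and reading a latents file in this dictionary format could look as follows; the file name and latent shapes here are placeholders, not values prescribed by the repository.

import numpy as np
import torch

# Save: map each image file name to its latent code (random tensors stand in for real latents here).
latents = {'0.jpg': torch.randn(18, 512).numpy(), '1.jpg': torch.randn(18, 512).numpy()}
np.save('train_latents.npy', latents)

# Load: the file stores a pickled dict, so allow_pickle=True is required; .item() recovers the dict.
loaded = np.load('train_latents.npy', allow_pickle=True).item()
print(loaded['0.jpg'].shape)  # (18, 512) for this placeholder example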

Pre-extracting the latents and inversions in this manner also allows you to train HyperStyle using latents from various encoders that invert into W+, such as pSp, e4e, and ReStyle, rather than using our pretrained encoder into W.

During training, we will use the LatentsImagesDataset for loading the inversion, latent code, and target image.


Inference

Inference Notebooks

To help visualize the results of HyperStyle, we provide a Jupyter notebook found in notebooks/inference_playground.ipynb.
The notebook will download the pretrained models and run inference on the images found in notebooks/images or on images of your choosing. It is recommended to run this in Google Colab.

We have also provided a notebook for generating interpolation videos such as those found in the project page. This notebook can be run using Google Colab here.


Inference Script

You can use scripts/inference.py to apply a trained HyperStyle model on a set of images:

python scripts/inference.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--n_iters_per_batch=5 \
--load_w_encoder \
--w_encoder_checkpoint_path /path/to/w_encoder.pt

This script will save each step's outputs in a separate sub-directory (e.g., the outputs of step i will be saved in /path/to/experiment/inference_results/i). In addition, side-by-side reconstruction results will be saved to /path/to/experiment/inference_coupled.

Notes:

  • By default, the images will be saved at their original output resolutions (e.g., 1024x1024 for faces, 512x384 for cars).
    • If you wish to save outputs resized to resolutions of 256x256 (or 256x192 for cars), you can do so by adding the flag --resize_outputs.
  • This script will also save all the latents as an .npy file in a dictionary format as follows:
    • latents = { "0.jpg": latent, "1.jpg": latent, ... }
  • In addition, by setting the flag --save_weight_deltas, we will save the final predicted weight deltas for each image.
    • These will be saved as .npy files in the sub-directory weight_deltas.
    • Setting this flag is important if you would like to apply the deltas in a downstream task, for example, editing using StyleCLIP (see below). A minimal loading sketch follows these notes.
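The sketch below assumes each saved .npy file holds a per-layer sequence of deltas in which untuned layers are stored as None; the file name is hypothetical.

import numpy as np
import torch

# Load the pickled object array holding the per-layer weight deltas for one image.
deltas = np.load('weight_deltas/0.npy', allow_pickle=True)

# Convert non-None entries to tensors; None entries correspond to layers that were not tuned.
weights_deltas = [torch.as_tensor(d) if d is not None else None for d in deltas]
print(sum(d is not None for d in weights_deltas), 'tuned layers')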

Computing Metrics

Given a trained model and generated outputs, we can compute the loss metrics on a given dataset.
These scripts receive the inference output directory and ground truth directory.

python scripts/calc_losses_on_images.py \
--metrics lpips,l2,msssim \
--output_path=/path/to/experiment/inference_results \
--gt_path=/path/to/test_images

Here, we can compute multiple metrics using a comma-separated list with the flag --metrics.

Similarly, to compute the ID similarity:

python scripts/calc_losses_on_images.py \
--output_path=/path/to/experiment/inference_results \
--gt_path=/path/to/test_images

These scripts will traverse through each sub-directory of output_path to compute the metrics on each step's output images.


Editing


Editing results obtained via HyperStyle using StyleCLIP, InterFaceGAN, and GANSpace, respectively.

For performing inference and editing using InterFaceGAN (for faces) and GANSpace (for cars), you can run editing/inference_face_editing.py and editing/inference_cars_editing.py.


Editing Faces with InterFaceGAN

python editing/inference_face_editing.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--n_iters_per_batch=3 \
--edit_directions=age,pose,smile \
--factor_range=5 \
--load_w_encoder

For InterFaceGAN we currently support edits of age, pose, and smile.


Editing Cars with GANSpace

python editing/inference_cars_editing.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--n_iters_per_batch=3

For GANSpace we currently support edits of pose, cube, color, and grass.

These scripts will perform the inversion immediately followed by the latent space edit.
For each image, we save the original image followed by the inversion and the resulting edits.


Editing Faces with StyleCLIP

In addition, we support editing with StyleCLIP's global directions approach on the human faces domain. Editing can be performed by running editing/styleclip/edit.py. For example,

python editing/styleclip/edit.py \
--exp_dir /path/to/experiment \
--weight_deltas_path /path/to/experiment/weight_deltas \
--neutral_text "a face" \
--target_text "a face with a beard"

Note: before running the above script, you need to install the official CLIP package:

pip install git+https://github.com/openai/CLIP.git

Note: we assume that latents.npy and the directory weight_deltas, obtained by running inference.py, are both saved in the given exp_dir.
For each input image we save a grid of results with different values of alpha and beta as defined in StyleCLIP.


Domain Adaptation


Domain adaptation results obtained via HyperStyle by applying the learned weight offsets to various fine-tuned generators.

In scripts/run_domain_adaptation.py, we provide a script for performing domain adaptation from the FFHQ domain to another (e.g., toons or sketches). Specifically, using a HyperStyle network trained on FFHQ, we can predict the weight offsets for a given input image. We can then apply the predicted weight offsets to a fine-tuned generator to obtain a translated image that better preserves the input image.

An example command is provided below:

python scripts/run_domain_adaptation.py \
--exp_dir /path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--load_w_encoder \
--w_encoder_checkpoint_path=pretrained_models/faces_w_encoder.pt \
--restyle_checkpoint_path=pretrained_models/restyle_e4e_ffhq_encode.pt \
--finetuned_generator_checkpoint_path=pretrained_models/pixar.pt \
--n_iters_per_batch=2 \
--restyle_n_iterations=2

Here, since we are performing a translation to a new domain, we recommend setting the number of iterations to a small number (e.g., 2-3).

Below we provide links to the pre-trained ReStyle-e4e network and various fine-tuned generators.

Path | Description
FFHQ ReStyle e4e | ReStyle-e4e trained on FFHQ with 1024x1024 output resolution.
Toonify | Toonify generator from Doron Adler and Justin Pinkney, converted to PyTorch using rosinality's conversion script; used in domain adaptation.
Pixar | Pixar generator from StyleGAN-NADA, used in domain adaptation.
Sketch | Sketch generator from StyleGAN-NADA, used in domain adaptation.
Disney Princess | Disney princess generator from StyleGAN-NADA, used in domain adaptation.

Repository structure

Path Description
hyperstyle Repository root folder
├  configs Folder containing configs defining model/data paths and data transforms
├  criteria Folder containing various loss criteria for training
├  datasets Folder with various dataset objects
├  docs Folder containing images displayed in the README
├  environment Folder containing Anaconda environment used in our experiments
├  editing Folder containing scripts for applying various editing techniques
├  licenses Folder containing licenses of the open source projects used in this repository
├ models Folder containing all the models and training objects
│  ├  encoders Folder containing various encoder architecture implementations such as the W-encoder, pSp, and e4e
│  ├  hypernetworks Implementations of our hypernetworks and Refinement Blocks
│  ├  mtcnn MTCNN implementation from TreB1eN
│  ├  stylegan2 StyleGAN2 model from rosinality
│  ├  hyperstyle.py Main class for our HyperStyle network
├  notebooks Folder with jupyter notebooks containing HyperStyle inference playgrounds
├  options Folder with training and test command-line options
├  scripts Folder with running scripts for training, inference, and metric computations
├  training Folder with main training logic and Ranger implementation from lessw2020
├  utils Folder with various utility functions

Related Works

Many GAN inversion techniques focus on finding a latent code that most accurately reconstructs a given image using a fixed, pre-trained generator. These works include encoder-based approaches such as pSp, e4e and ReStyle, and optimization techniques such as those from Abdal et al. and Zhu et al., among many others.

In contrast, HyperStyle learns to modulate the weights of a pre-trained StyleGAN using a hypernetwork to achieve more accurate reconstructions. Previous generator-tuning approaches performed a per-image optimization for fine-tuning the generator weights (Roich et al.) or feature activations (Bau et al.).

Given our inversions we can apply off-the-shelf editing techniques such as StyleCLIP, InterFaceGAN, and GANSpace, even on the modified generator.

Finally, we can apply the weight offsets learned by a HyperStyle network trained on FFHQ to fine-tuned generators such as those obtained from StyleGAN-NADA, resulting in more faithful translations.

Credits

StyleGAN2 model and implementation:
https://github.com/rosinality/stylegan2-pytorch
Copyright (c) 2019 Kim Seonghyeon
License (MIT) https://github.com/rosinality/stylegan2-pytorch/blob/master/LICENSE

IR-SE50 model and implementations:
https://github.com/TreB1eN/InsightFace_Pytorch
Copyright (c) 2018 TreB1eN
License (MIT) https://github.com/TreB1eN/InsightFace_Pytorch/blob/master/LICENSE

Ranger optimizer implementation:
https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer
License (Apache License 2.0) https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer/blob/master/LICENSE

LPIPS model and implementation:
https://github.com/S-aiueo32/lpips-pytorch
Copyright (c) 2020, Sou Uchida
License (BSD 2-Clause) https://github.com/S-aiueo32/lpips-pytorch/blob/master/LICENSE

pSp model and implementation:
https://github.com/eladrich/pixel2style2pixel
Copyright (c) 2020 Elad Richardson, Yuval Alaluf
License (MIT) https://github.com/eladrich/pixel2style2pixel/blob/master/LICENSE

e4e model and implementation:
https://github.com/omertov/encoder4editing
Copyright (c) 2021 omertov
License (MIT) https://github.com/omertov/encoder4editing/blob/main/LICENSE

ReStyle model and implementation:
https://github.com/yuval-alaluf/restyle-encoder
Copyright (c) 2021 Yuval Alaluf
License (MIT) https://github.com/yuval-alaluf/restyle-encoder/blob/main/LICENSE

StyleCLIP implementation:
https://github.com/orpatashnik/StyleCLIP
Copyright (c) 2021 Or Patashnik, Zongze Wu
https://github.com/orpatashnik/StyleCLIP/blob/main/LICENSE

StyleGAN-NADA models:
https://github.com/rinongal/StyleGAN-nada
Copyright (c) 2021 rinongal
https://github.com/rinongal/StyleGAN-nada/blob/main/LICENSE

Please Note: The CUDA files under the StyleGAN2 ops directory are made available under the Nvidia Source Code License-NC

Acknowledgments

This code borrows from pixel2style2pixel, encoder4editing, and ReStyle.

Citation

If you use this code for your research, please cite the following work:

@misc{alaluf2021hyperstyle,
      title={HyperStyle: StyleGAN Inversion with HyperNetworks for Real Image Editing}, 
      author={Yuval Alaluf and Omer Tov and Ron Mokady and Rinon Gal and Amit H. Bermano},
      year={2021},
      eprint={2111.15666},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

hyperstyle's People

Contributors

omertov, yuval-alaluf

hyperstyle's Issues

KeyError: 'opts'

Trying to run inference

python scripts/inference.py \
--exp_dir=test_data \
--checkpoint_path=pt/disney_princess.pt \
--data_path=test_data \
--test_batch_size=4 \
--test_workers=4 \
--n_iters_per_batch=5 \
--load_w_encoder \
--w_encoder_checkpoint_path pt/faces_w_encoder.pt

I got error:

Traceback (most recent call last):
  File "scripts/inference.py", line 108, in <module>
    run()
  File "scripts/inference.py", line 32, in run
    net, opts = load_model(test_opts.checkpoint_path, update_opts=test_opts)
  File "./utils/model_utils.py", line 14, in load_model
    opts = ckpt['opts']
KeyError: 'opts'

I tried to run every script and all of them failed. Could you please provide an actual example of how to apply any style?

Is style mixing between 2 people possible with HyperStyle?

Love your StyleGAN inversion work! With pSp and e4e (and ReStyle too I think), it is possible to mix styles between two people since they use the same generator network. But for HyperStyle, the weights of the generator depend on the person. Is style mixing still possible for HyperStyle?

Trouble with StyleCLIP

Hello,

When this line of code executes in edit.py:

stylegan_model.load_state_dict(checkpoint['g_ema'])

I get the following errors:

  Message=Error(s) in loading state_dict for Generator:
	Missing key(s) in state_dict: "conv1.conv.blur.kernel", "convs.1.conv.blur.kernel", "convs.3.conv.blur.kernel", "convs.5.conv.blur.kernel", "convs.7.conv.blur.kernel", "convs.9.conv.blur.kernel", "convs.11.conv.blur.kernel", "convs.13.conv.blur.kernel", "convs.15.conv.blur.kernel". 
  Source=P:\HyperStyle\editing\styleclip\edit.py
  StackTrace:
  File "P:\HyperStyle\editing\styleclip\edit.py", line 61, in load_stylegan_generator
    stylegan_model.load_state_dict(checkpoint['g_ema'])
  File "P:\HyperStyle\editing\styleclip\edit.py", line 68, in run
    stylegan_model = load_stylegan_generator(args)
  File "P:\HyperStyle\editing\styleclip\edit.py", line 105, in <module> (Current frame)
    run()

I am using the stylegan2-ffhq-config-f.pt file you link on your page here. Any idea what might be wrong?

Thanks!

Training a custom generator for domain adaptation

Dear authors, you provide a great SOTA approach, but it is not very clear from the repo how to train a custom generator for use with the run_domain_adaptation.py script.
I tried to fine-tune the pre-trained Sketch generator on my data using https://github.com/rosinality/stylegan2-pytorch, but the weights are not compatible.
Could you tell me what I need to do to train a generator for the run_domain_adaptation.py script on my custom data?

StyleCLIP editing on HyperStyle + PTI output using global directions

Hey, HyperStyle saves the weight deltas when executed; afterward I tune the inversion using PTI and then use StyleGAN for the output. The main issue I am facing is that StyleGAN loads the saved weights from HyperStyle, so the editing is done on the HyperStyle inversion and not on the HyperStyle + PTI tuned inversion. Is there a way to use global directions only and save the weights after the tuning through PTI has been performed?

weights_delta in GAN Inversion

Hi, thanks for your great work!
to edit a real image, I'd like to save the latent code files and weights_delta files, so that I don't need to run gan inversion next time.
I save them as .npy files; however, when I try to load them, something goes wrong with "weights_delta" (screenshot omitted):
ValueError: only one element tensors can be converted to Python scalars

Any hints would be helpful!

Huggingface Spaces

Hi, would you be interested in sharing a web demo on Huggingface Spaces for hyperstyle?

It would make this model more accessible as it would allow people to try out the model directly from the browser. Some other recent machine learning model repos have set up Spaces for easy access:

github: https://github.com/salesforce/BLIP
Spaces: https://huggingface.co/spaces/akhaliq/BLIP

github: https://github.com/facebookresearch/omnivore
Spaces: https://huggingface.co/spaces/akhaliq/omnivore

Spaces is completely free, and I can help set up a Gradio Space. Here are some getting-started instructions if you'd prefer to do it yourself: https://huggingface.co/blog/gradio-spaces

The results of inference are weird

Hi, I ran inference.py and got weird results.

The original images are below; they have already been aligned. (Input images omitted.)

My cmd line is:

exp_dir=outputs
checkpoint_path=checkpoints/hyperstyle_ffhq.pt
w_encoder_checkpoint_path=pretrained_models/faces_w_encoder.pt
data_path=outputs/align/aligned
test_batch_size=2
test_workers=2
n_iters_per_batch=5


python3 -u scripts/inference.py \
--exp_dir=$exp_dir \
--checkpoint_path=$checkpoint_path \
--load_w_encoder \
--w_encoder_checkpoint_path=$w_encoder_checkpoint_path \
--data_path=$data_path \
--test_batch_size=$test_batch_size \
--test_workers=$test_workers \
--n_iters_per_batch=$n_iters_per_batch \
--save_weight_deltas

The output results are weird. (Output images omitted.)

Can you help me get the correct result?

Training hangs forever!

This is my environment:

_libgcc_mutex 0.1 0.1
absl-py 1.0.0 0.15.0
ca-certificates 2020.4.5.1 2022.4.26
cachetools 4.2.4 4.2.2
certifi 2020.4.5.1 2021.10.8
charset-normalizer 2.0.12 2.0.4
clip 1.0  
cycler 0.11.0 0.11.0
dataclasses 0.8 0.8
ftfy 6.0.3 5.8
google-auth 1.35.0 2.6.0
google-auth-oauthlib 0.4.6 0.4.4
grpcio 1.46.1 1.42.0
idna 3.3 3.3
importlib-metadata 4.8.3 4.11.3
kiwisolver 1.3.1 1.3.2
libedit 3.1.20181209 3.1.20210910
libffi 3.2.1 3.4.2
libgcc-ng 9.1.0 11.2.0
libstdcxx-ng 9.1.0 11.2.0
markdown 3.3.7 3.3.4
matplotlib 3.2.1 3.5.1
ncurses 6.2 6.3
ninja 1.10.0 1.10.2
numpy 1.18.4 1.22.3
oauthlib 3.2.0 3.2.0
opencv-python 4.5.5.64  
openssl 1.1.1g 1.1.1o
pillow 7.1.2 9.0.1
pip 20.0.2 21.2.4
protobuf 3.19.4 3.20.1
pyasn1 0.4.8 0.4.8
pyasn1-modules 0.2.8 0.2.8
pyparsing 3.0.7 3.0.4
python 3.6.7 3.10.4
python-dateutil 2.8.2 2.8.2
python_abi 3.6  
readline 7.0 8.1.2
regex 2022.4.24 2022.3.15
requests 2.27.1 2.27.1
requests-oauthlib 1.3.1 1.3.0
rsa 4.8 4.7.2
scipy 1.4.1 1.7.3
setuptools 46.4.0 61.2.0
six 1.16.0 1.16.0
sqlite 3.31.1 3.38.3
tensorboard 2.2.1 2.6.0
tensorboard-plugin-wit 1.8.1 1.6.0
tk 8.6.8 8.6.11
torch 1.10.0+cu102  
torchvision 0.11.0+cu102 0.8.2
tqdm 4.46.0 4.64.0
typing-extensions 4.1.1 4.1.1
urllib3 1.26.9 1.26.9
wcwidth 0.2.5 0.2.5
werkzeug 2.0.3 2.0.3
wheel 0.34.2 0.37.1
xz 5.2.5 5.2.5
zipp 3.6.0 3.8.0
zlib 1.2.11 1.2.12
When I try to train on my own dataset, it hangs forever! I notice the hang occurs in models/stylegan2/op/upfirdn2d.py. This is my command:

python scripts/train.py
--dataset_type=test
--encoder_type=SharedWeightsHyperNetResNet
--exp_dir=experiments/test
--workers=1
--batch_size=1
--test_batch_size=1
--test_workers=1
--val_interval=5000
--save_interval=10000
--lpips_lambda=0.8
--l2_lambda=1
--moco_lambda=1
--n_iters_per_batch=1
--max_val_batches=150
--output_size=1024
--load_w_encoder
--w_encoder_checkpoint_path=/home/sd01/hyperstyle-main/pretrained_models/faces_w_encoder.pt
--layers_to_tune=0,2,3,5,6,8,9,11,12,14,15,17,18,20,21,23,24

More specifically, it is caused by the following code (screenshot omitted). Can you help me solve this problem? It has been bothering me for a long time.

FileNotFoundError: [Errno 2] No such file or directory: './pretrained_models/restlye_e4e.pt'

Error:

FileNotFoundError: [Errno 2] No such file or directory: './pretrained_models/restlye_e4e.pt'
when trying to run:

# load restyle-e4e model
restyle_e4e, restyle_e4e_opts = load_model(restyle_e4e_path, is_restyle_encoder=True)
print(f'ReStyle-e4e model successfully loaded!')

In this line:

# load ReStyle e4e:
RESTYLE_E4E_MODELS = {'id': '1e2oXVeBPXMQoUoC_4TNwAWpOPpSEhE_e', 'name': 'restlye_e4e.pt'}

The ID 1e2oXVeBPXMQoUoC_4TNwAWpOPpSEhE_e is for the "restyle_e4e_ffhq_encode.pt" file.
A file named "restlye_e4e.pt" is nowhere to be found.

inversion train issue: weird results

When the global step is below 15000, the losses are decreasing and the images in logs/images/train look natural, though they're a bit different from the input. Then the loss rises to about 1.0 and the images in logs/images/train get weird. I'm confused and hope to get your help.

About editing real images

Figure 1. Given a desired input image, our hypernetworks learn to modulate a pre-trained StyleGAN network to achieve accurate image reconstructions in editable regions of the latent space. Doing so enables one to effectively apply techniques such as StyleCLIP [54] and InterFaceGAN [64] for editing real images.

I have a question: if I have latent codes of real images (obtained with a learning-based GAN inversion method such as pSp), do I need to get the image reconstruction first and then use a StyleGAN encoder (like https://github.com/Puzer/stylegan-encoder) to get latent codes again, so that I can use InterFaceGAN to edit the image?
(real image -> learning-based GAN inversion -> reconstruction -> stylegan-encoder -> latent codes -> InterFaceGAN -> editing)

Why do I have this question? Because I get bad results when editing real images:
(real image -> StyleGAN2-based encoder network -> latent codes saved as .npy -> InterFaceGAN -> bad result)

I'd appreciate it if you can answer my question.

Image creation from latent code

Good paper, thanks for the code.

I want to try a few things in latent code editing, and I'm looking for a way to create an image directly from the latent code. Any help in this regard would be greatly appreciated.
Thank you.

Why use the ReStyle-e4e latent instead of the HyperStyle latent when running domain adaptation?

Hi, first of all, thank you very much for this awesome project!

I have a question about the domain adaptation task.
If my understanding is correct, you load the ReStyle-e4e network in the scripts/run_domain_adaptation.py script.
This model is then used to get the latent code, from which you generate the final image after domain adaptation.

My question is the following: after running some tests, the latent from HyperStyle seems to be more faithful to the original image than the latent from ReStyle-e4e, so why do you use the latent from ReStyle and not the HyperStyle one?
Is there a reason to prefer the latent from ReStyle-e4e for the domain adaptation task?

Thank you for your answer.

Can the project run under Windows? Has anyone tested it?

Thank you for your work. I didn't manage to run it successfully on Windows 10. Can you help me out?

I get the error:

...
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

...
File "E:\Project\CommonVenv\venv38\lib\site-packages\torch\utils\cpp_extension.py", line 1681, in _run_ninja_build
    message += f": {error.output.decode()}"  # type: ignore[union-attr]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa1 in position 1460: invalid start byte

randomize_noise=False

As far as I know, previous works (pSp, e4e, ReStyle, ...) used randomize_noise=True.

I wonder why you changed randomize_noise from True to False. Thank you.

face editing boundary

Hi, thanks for your brilliant work!
When I run face editing in HyperStyle, I find that the 3 boundaries have different shapes; specifically, the boundary shape of 'pose' is [1, 18, 512], while 'smile' and 'age' are [1, 512]. I just wonder why that is?
Besides, I would like to try other attributes, could you please provide other boundaries?
Thank you.

Questions about initial w dimension

UPDATED: I think I got something wrong. Both latent codes are 18*512. Am I right?

====================
Thank you for your great work.

From the paper, I thought you use e4e to get the initial w, so the dimension would be 18*512.
However, when I run ./editing/styleclip/edit.py, the loaded latent code is 1*512. Did you average them, or have I misunderstood something?

Looking forward to your reply.

StyleCLIP integration?

Hello,

I see you posted results with StyleCLIP, but the example is not integrated in your code. Can you post the code or the necessary steps to use StyleCLIP?

Thanks!

Export this model to ONNX

Hi, I was trying to export this model to ONNX, but when I searched the Colab notebook I could not find a class defined with nn.Module or the model definition, so I searched further and opened 'model.py', which contains many classes with nn.Module. Since I am a beginner, I have very little knowledge about PyTorch, ONNX, ML models, etc. I have just started working on ML and am working on a project that requires converting to ONNX. Please help. I am using this model just for fun, learning, and testing the viability of my project.

Backward problem

Hi, I have a question about the loss backward implementation.
I see in the coach that the training process is written as follows:

for _iter in range(n_iter):
    forward()
    loss = cal_loss()
    loss.backward()
opt.zero_grad()
opt.step()

Why is the process implemented as multiple backward passes with only one optimizer step, e.g., to save time?
Could we instead write it as one loss backward and one optimizer step per iteration of the for loop?

for _iter in range(n_iter):
    forward()
    loss = cal_loss()
    loss.backward()
    opt.zero_grad()
    opt.step()

Another question: within a single for loop, will the gradients from the earlier backward calls be accumulated correctly, and do they affect the gradients of the following backward calls?
Thanks in advance!
Thanks for providing such an impressive work!

mixing style latents

With the previous methods I could replace layers 8 to 18 of one latent with those of another latent to change the style of latent1:
layers 0 to 8 control the pose of the face, and layers 8 to 18 control the lighting and texture of the face.
latent1 = 1 x 18 x 512
latent2 = 1 x 18 x 512
mix_latent = latent1[0:8] + latent2[8:18]

But this generator also requires weights in addition to the latent, and I don't know how to "split" weights_deltas to mix styles. How can I mix styles?
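For reference, a minimal sketch of the layer-wise latent mixing described above, assuming W+ latent codes of shape [1, 18, 512]; this covers only the latent part, not the weight deltas.

import torch

# Two W+ latent codes of shape [batch, num_layers, channels] = [1, 18, 512].
latent1 = torch.randn(1, 18, 512)
latent2 = torch.randn(1, 18, 512)

# Coarse layers (0-7) from latent1 keep the pose/structure; fine layers (8-17)
# from latent2 contribute the lighting and texture, as described above.
mixed_latent = torch.cat([latent1[:, :8], latent2[:, 8:]], dim=1)
print(mixed_latent.shape)  # torch.Size([1, 18, 512])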

Hyperstyle vs Restyle

Hi Yuval, firstly, fantastic job on your contributions across multiple GAN problems, including ReStyle and HyperStyle. Can you advise whether there are clear benefits for real-image inversion with HyperStyle in comparison to ReStyle?

Thanks!

Is there a way to fine tune the inversion?

Hello,
I'm currently testing your implementation by running inference on FFHQ images. It works really well, but there is still a noticeable difference between the original image and the image from the inversion. I tried setting n_iters_per_batch to a higher value, but this leads to blurred images. Any hint would be really helpful!

the generated image is a little blurry

Thanks for your great work. It gives the best performance I know of for reconstruction of real images.
But I found the reconstructed image is a little blurry. The left of the uploaded image is the real image, and the right is the reconstructed one.
How can I get a sharper image? They are all 1024x1024, without resizing from 256 to 1024.

The left of the next image is aligned and resized to 1024x1024, and the right is the reconstructed one. (Images omitted.)

SeFa

Hello,
Very nice work! By the way, have you tried the editing effect with SeFa?

Training for Font Restyle

Thanks for sharing another amazing work of yours!

I am wondering if I can train it for another domain, such as font style.

I have followed some of the steps of the Facebook paper
TextStyleBrush: Transfer of Text Aesthetics from a Single Example
https://arxiv.org/pdf/2106.08385
by training StyleGAN2 with some additions for non-square images and the loss function.
I got good results, but of course far from what Facebook reports in their paper.
However, the biggest problem was the inversion of the image by projection to get the latent code; it was very slow.

Do you think this or pSp can work for another domain such as font restyling?

weird results

Hi, I am a total beginner and my question may be too simple for you.
When I run inference.py I get this weird result; could you please tell me the reason?
My command line is:

python scripts/inference.py --exp_dir=exp_0308_001/ --checkpoint_path=pretrained_models/hyperstyle_ffhq.pt --data_path=./exp_data/ --test_batch_size=8 --test_workers=8 --n_iters_per_batch=5 --load_w_encoder

Input: (image omitted)
Output: (image omitted)

Improve the editability.

First of all, I want to thank you for your wonderful project! Another step forward in the StyleGAN Inversion problem.

The reconstruction quality is very good and close to optimization approaches. However, I found that the editability is not very good. When I move the latents in the age direction, the kid still has a beard, and it doesn't work at all with the gender direction.
When I tried PTI, it worked fine in both directions.

So I was wondering, is there a way to improve the editability while sacrificing some of the reconstruction quality? Thank you.

Code does not match the paper regarding the layers chosen for refinement

Thanks for your great work; it gives amazing reconstruction results, so I read your paper and code carefully.
I found that the paper says
"we restrict ourselves to modifying only the non-toRGB convolutions"
but I actually found a modification of a toRGB convolution:

skip = self.to_rgb1(out, latent[:, 1], weights_delta=weights_deltas[1])

Can you give me some help?

Training time of HyperStyle

Hi,

I'm currently trying to train HyperStyle from scratch on FFHQ. I use the training command suggested in the "Training HyperStyle" section of the README and run it on one NVIDIA A100 GPU. From training, I see that 1k steps take approximately one hour. So, as I understand it, to finish the full training (500k steps) I need to wait about 500 hours (roughly 20 days). Is that right?

Could you please share how much time you spent training HyperStyle and which resources you used?

Thanks!

Hi! Input size

Hello!
I'm really impressed with your paper.
I have a question.
Do you feed in the 1024 image as is, or is it downsampled to 256?

Training Process Problem

During the training process, we first obtain a rough inversion result from e4e, which is a high-fidelity result. Then we use the hypernet to predict the weight deltas of the StyleGAN convolution kernels.
During the training process, will the results first be blurry and then gradually become clear, or are the image results kept in the StyleGAN domain with high fidelity throughout? I ask since the convolution weight deltas are hard to learn.

Thanks in advance!

Error when fine-tuning the hypernet with hyperstyle_ffhq.pt

I follow the default settings from the README while trying to add --checkpoint_path=pretrained-models/hyperstyle_ffhq.pt to the training command, but it returns:

Missing key(s) in state_dict: "refinement_blocks.0.convs.0.weight", "refinement_blocks.0.convs.0.bias", "refinement_blocks.0.convs.2.weight", "refinement_blocks.0.convs.2.bias", "refinement_blocks.0.convs.4.weight", "refinement_blocks.0.convs.4.bias", "refinement_blocks.0.convs.6.weight", "refinement_blocks.0.convs.6.bias", "refinement_blocks.0.convs.8.weight", "refinement_blocks.0.convs.8.bias", "refinement_blocks.0.linear.weight", "refinement_blocks.0.linear.bias", "refinement_blocks.0.hypernet.w1", "refinement_blocks.0.hypernet.b1", "refinement_blocks.0.hypernet.w2", "refinement_blocks.0.hypernet.b2", "refinement_blocks.2.convs.0.weight", "refinement_blocks.2.convs.0.bias", "refinement_blocks.2.convs.2.weight", "refinement_blocks.2.convs.2.bias", "refinement_blocks.2.convs.4.weight", "refinement_blocks.2.convs.4.bias", "refinement_blocks.2.convs.6.weight", "refinement_blocks.2.convs.6.bias", "refinement_blocks.2.convs.8.weight", "refinement_blocks.2.convs.8.bias", "refinement_blocks.2.linear.weight", "refinement_blocks.2.linear.bias", "refinement_blocks.2.hypernet.w1", "refinement_blocks.2.hypernet.b1", "refinement_blocks.2.hypernet.w2", "refinement_blocks.2.hypernet.b2", "refinement_blocks.3.convs.0.weight", "refinement_blocks.3.convs.0.bias", "refinement_blocks.3.convs.2.weight", "refinement_blocks.3.convs.2.bias", "refinement_blocks.3.convs.4.weight", "refinement_blocks.3.convs.4.bias", "refinement_blocks.3.convs.6.weight", "refinement_blocks.3.convs.6.bias", "refinement_blocks.3.convs.8.weight", "refinement_blocks.3.convs.8.bias", "refinement_blocks.3.linear.weight", "refinement_blocks.3.linear.bias", "refinement_blocks.3.hypernet.w1", "refinement_blocks.3.hypernet.b1", "refinement_blocks.3.hypernet.w2", "refinement_blocks.3.hypernet.b2".

Is there any specific configuration used when producing the hyperstyle_ffhq.pt you provided?

FileNotFoundError: [Errno 2] No such file or directory: './pretrained_models/hyperstyle_ffhq.pt'

FileNotFoundError: [Errno 2] No such file or directory: './pretrained_models/hyperstyle_ffhq.pt'

when trying to run

if not os.path.exists(EXPERIMENT_ARGS['model_path']) or os.path.getsize(EXPERIMENT_ARGS['model_path']) < 1000000:
    print(f'Downloading HyperStyle model for {experiment_type}...')
    os.system(hyperstyle_download_command)
    # if google drive receives too many requests, we'll reach the quota limit and be unable to download the model
    if os.path.getsize(EXPERIMENT_ARGS['model_path']) < 1000000:
        raise ValueError("Pretrained model was unable to be downloaded correctly!")
    else:
        print('Done.')
else:
    print(f'HyperStyle model for {experiment_type} already exists!')

Variable Height Width Input Support

First of all, love your work on GAN inversion and editing.

I was experimenting with this codebase and had a question.
We were trying to train with a variable-aspect-ratio StyleGAN, e.g. (1024, 512), but this was throwing shape-mismatch errors; the codebase seems to expect square inputs like (1024, 1024).

Is there a way to alleviate this with small changes to the code? I am posting the error below.
Thanks

""" self.class.name, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Generator:
size mismatch for input.input: copying a param with shape torch.Size([1, 512, 4, 2]) from checkpoint, the shape in current model is torch.Size([1, 512, 4, 4]).
size mismatch for noises.noise_0: copying a param with shape torch.Size([1, 1, 4, 2]) from checkpoint, the shape in current model is torch.Size([1, 1, 4, 4]).
size mismatch for noises.noise_1: copying a param with shape torch.Size([1, 1, 8, 4]) from checkpoint, the shape in current model is torch.Size([1, 1, 8, 8]).
size mismatch for noises.noise_2: copying a param with shape torch.Size([1, 1, 8, 4]) from checkpoint, the shape in current model is torch.Size([1, 1, 8, 8]).
size mismatch for noises.noise_3: copying a param with shape torch.Size([1, 1, 16, 8]) from checkpoint, the shape in current model is torch.Size([1, 1, 16, 16]).
size mismatch for noises.noise_4: copying a param with shape torch.Size([1, 1, 16, 8]) from checkpoint, the shape in current model is torch.Size([1, 1, 16, 16]).
size mismatch for noises.noise_5: copying a param with shape torch.Size([1, 1, 32, 16]) from checkpoint, the shape in current model is torch.Size([1, 1, 32, 32]).
size mismatch for noises.noise_6: copying a param with shape torch.Size([1, 1, 32, 16]) from checkpoint, the shape in current model is torch.Size([1, 1, 32, 32]).
size mismatch for noises.noise_7: copying a param with shape torch.Size([1, 1, 64, 32]) from checkpoint, the shape in current model is torch.Size([1, 1, 64, 64]).
size mismatch for noises.noise_8: copying a param with shape torch.Size([1, 1, 64, 32]) from checkpoint, the shape in current model is torch.Size([1, 1, 64, 64]).
size mismatch for noises.noise_9: copying a param with shape torch.Size([1, 1, 128, 64]) from checkpoint, the shape in current model is torch.Size([1, 1, 128, 128]).
size mismatch for noises.noise_10: copying a param with shape torch.Size([1, 1, 128, 64]) from checkpoint, the shape in current model is torch.Size([1, 1, 128, 128]).
size mismatch for noises.noise_11: copying a param with shape torch.Size([1, 1, 256, 128]) from checkpoint, the shape in current model is torch.Size([1, 1, 256, 256]).
size mismatch for noises.noise_12: copying a param with shape torch.Size([1, 1, 256, 128]) from checkpoint, the shape in current model is torch.Size([1, 1, 256, 256]).
size mismatch for noises.noise_13: copying a param with shape torch.Size([1, 1, 512, 256]) from checkpoint, the shape in current model is torch.Size([1, 1, 512, 512]).
size mismatch for noises.noise_14: copying a param with shape torch.Size([1, 1, 512, 256]) from checkpoint, the shape in current model is torch.Size([1, 1, 512, 512]).
size mismatch for noises.noise_15: copying a param with shape torch.Size([1, 1, 1024, 512]) from checkpoint, the shape in current model is torch.Size([1, 1, 1024, 1024]).
size mismatch for noises.noise_16: copying a param with shape torch.Size([1, 1, 1024, 512]) from checkpoint, the shape in current model is torch.Size([1, 1, 1024, 1024]).
"""

query in the inference process

Hi, it gets stuck when I run the following command in the terminal:

python scripts/inference.py --exp_dir=exp_0308_001/ --checkpoint_path=pretrained_models/hyperstyle_ffhq.pt --data_path=./exp_data/ --load_w_encoder ./pretrained_models/faces_w_encoder.pt --save_weight_deltas

I have already downloaded and stored all the models mentioned in the README in pretrained_models/; exp_0308_001/ is empty, and my test images are in ./exp_data/.

When I run the command above, nothing happens and it gets stuck (screenshot omitted).

Could you please tell me if I am running this correctly?

Thanks in advance.

Problem while editing afhq-wild via StyleCLIP

Hello, this is really a great repo! I'm trying to reproduce your interesting demo. I used the afhq-wild weights provided in this repo, but failed to get edited results: the output image doesn't seem to change when the prompt text is changed. My testing options are as follows. I sincerely hope you can help me find where the problem is. Thank you so much!

{
    "alpha": 4.1,
    "beta": 0.18,
    "delta_i_c": "editing/styleclip/global_directions/ffhq/fs3.npy",
    "exp_dir": "results/",
    "n_images": null,
    "neutral_text": "a tiger",
    "num_alphas": 1,
    "num_betas": 1,
    "s_statistics": "editing/styleclip/global_directions/ffhq/S_mean_std",
    "stylegan_size": 512,
    "stylegan_truncation": 1.0,
    "stylegan_truncation_mean": 4096,
    "stylegan_weights": "pretrained_models/afhqwild.pt",
    "target_text": "a lion",
    "text_prompt_templates": "editing/styleclip/global_directions/templates.txt",
    "weight_deltas_path": "results/"
}

The images are as follows:
Input: (image omitted)
Output: (image omitted)

AssertionError: Cannot infer latent code when e4e isn't loaded


python editing/inference_face_editing.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=pt/hyperstyle_ffhq.pt \
--data_path=test_data \
--test_batch_size=4 \
--test_workers=4 \
--n_iters_per_batch=3 \
--edit_directions=age,pose,smile \

Loading HyperStyle from checkpoint: pt/hyperstyle_ffhq.pt
Loading dataset for ffhq_hypernet
  0%|                                                                                                                                   | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "editing/inference_face_editing.py", line 109, in <module>
    run()
  File "editing/inference_face_editing.py", line 58, in run
    result_batch = run_on_batch(input_cuda, net, latent_editor, opts)
  File "editing/inference_face_editing.py", line 93, in run_on_batch
    y_hat, _, weights_deltas, codes = run_inversion(inputs, net, opts)
  File "./utils/inference_utils.py", line 22, in run_inversion
    return_weight_deltas_and_codes=True)
  File "./models/hyperstyle.py", line 70, in forward
    assert self.opts.load_w_encoder, "Cannot infer latent code when e4e isn't loaded."
AssertionError: Cannot infer latent code when e4e isn't loaded.

Please help me run it.
