
sjc's Introduction

Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation

Haochen Wang*, Xiaodan Du*, Jiahao Li*, Raymond A. Yeh†, Greg Shakhnarovich (* indicates equal contribution)

TTI-Chicago, †Purdue University

Abstract: A diffusion model learns to predict a vector field of gradients. We propose to apply the chain rule on the learned gradients, and back-propagate the score of a diffusion model through the Jacobian of a differentiable renderer, which we instantiate to be a voxel radiance field. This setup aggregates 2D scores at multiple camera viewpoints into a 3D score, and repurposes a pretrained 2D model for 3D data generation. We identify a technical challenge of distribution mismatch that arises in this application, and propose a novel estimation mechanism to resolve it. We run our algorithm on several off-the-shelf diffusion image generative models, including the recently released Stable Diffusion trained on the large-scale LAION dataset.

Many thanks to dvschultz for the Colab, and AmanKishore for the Hugging Face demo.

SJC is now integrated in threestudio as well.

Updates

  • We have added a subpixel rendering script for the final high-quality visualizations. The jittery videos you might have seen should look significantly smoother now. After training completes, run python /path/to/sjc/highres_final_vis.py in the exp folder (see the example after this list). There are a few toggles in the script you can play with, but the defaults are fine. It takes about 5 minutes / 11GB on an A5000; the extra time is mainly due to the SD decoder.
  • If you are running SJC with a DreamBooth fine-tuned model: the model's output distribution is already significantly narrowed, so it may help to lower the guidance scale, e.g. --sd.scale 50.0. Intense mode-seeking is one cause of the multi-face problem. We have internally tried DreamBooth with view-dependent prompt fine-tuning, but by and large DreamBooth integration is not ready.
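
For example, run the subpixel script from the experiment folder used during training (the script path is the same placeholder used throughout this README):

cd exp
python /path/to/sjc/highres_final_vis.py

For a DreamBooth-tuned model, the run_sjc.py commands below apply unchanged; just lower the guidance flag, e.g. --sd.scale 50.0 instead of --sd.scale 100.0.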

TODOs

  • Make seeds configurable; so far all seeds are hardcoded to 0.
  • Add a script to reproduce the 2D experiments in Fig. 4. The figure might need to change once it is tied to seeds. Note that for a simple, aligned domain like faces, a simple schedule such as a single σ = 1.5 can already generate some nice images; not so for bedrooms, which are too diverse, so annealing still seems needed.
  • The main-paper figures did not use subpixel rendering, while the appendix figures did; replace the main-paper figures to make them consistent.

License

Since we use Stable Diffusion, we release under their OpenRAIL license. Otherwise, we do not identify any components or upstream code that carry restrictive licensing requirements.

Structure

In addition to SJC, the repo contains an implementation of the Karras sampler and a customized, simple voxel NeRF. We provide an abstract parent class based on Karras et al. and include adapters for a few types of diffusion models; see adapt.py.
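
As a rough illustration of that structure, here is a hedged sketch of such an adapter hierarchy; the class and method names are illustrative, not the repo's actual API (see adapt.py for that). The score conversion follows the standard denoiser-to-score relation of Karras et al.

from abc import ABC, abstractmethod

class DiffusionAdapterSketch(ABC):
    """Hypothetical abstract parent wrapping a pretrained diffusion model."""

    @abstractmethod
    def denoise(self, xs, sigma):
        """Return the denoised estimate D(xs; sigma)."""

    def score(self, xs, sigma):
        # Karras et al. denoiser-to-score relation:
        # grad log p_sigma(x) = (D(x; sigma) - x) / sigma**2
        return (self.denoise(xs, sigma) - xs) / sigma ** 2

class StableDiffusionAdapterSketch(DiffusionAdapterSketch):
    def denoise(self, xs, sigma):
        # a concrete subclass would delegate to the pretrained model here
        raise NotImplementedError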

Installation

Install PyTorch according to your CUDA version, for example:

pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116

Install the other dependencies with pip install -r requirements.txt.

Install taming-transformers manually:

git clone --depth 1 git@github.com:CompVis/taming-transformers.git && pip install -e taming-transformers

Downloading checkpoints

We have bundled a minimal set of the files you need to download (the SD v1.5 ckpt, and gddpm ckpts for LSUN and FFHQ) into a tar file, available at our download server here. It is a single 12GB file; you can fetch it with wget or curl.

Remember to update env.json to point at the new checkpoint root where you have uncompressed the files.
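
For example, a hedged sketch of the steps; the file name below is a placeholder for whatever the download link above serves, and the env.json edit should use whichever key the repo's env.json already defines:

wget <download-url> -O sjc-release.tar   # or: curl -L <download-url> -o sjc-release.tar
tar -xf sjc-release.tar -C /path/to/ckpt_root
# then point the checkpoint-root entry in env.json at /path/to/ckpt_root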

Usage

Make a new directory in which to run your experiments (the script generates many logging files; do not run from the root of the code repo, or you risk contaminating it):

mkdir exp
cd exp

Run the following command to generate a new 3D asset. It takes about 25 minutes / 10GB of GPU memory on a single A5000 for 10000 steps of optimization.

python /path/to/sjc/run_sjc.py \
--sd.prompt "A zoomed out high quality photo of Temple of Heaven" \
--n_steps 10000 \
--lr 0.05 \
--sd.scale 100.0 \
--emptiness_weight 10000 \
--emptiness_step 0.5 \
--emptiness_multiplier 20.0 \
--depth_weight 0 \
--var_red False

  • sd.prompt: the prompt given to the Stable Diffusion model.
  • n_steps: the number of gradient steps.
  • lr: the base learning rate of the optimizer.
  • sd.scale: the guidance scale for Stable Diffusion.
  • emptiness_weight: the weighting factor of the emptiness loss.
  • emptiness_step: after emptiness_step * n_steps update steps, emptiness_weight is multiplied by emptiness_multiplier (see the sketch after this list).
  • emptiness_multiplier: see above.
  • depth_weight: the weighting factor of the center depth loss.
  • var_red: whether to use Eq. 16 vs. Eq. 15. For some prompts, such as Obama, we actually see better results with Eq. 15.
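
For instance, a minimal sketch of the emptiness-weight schedule described above, using the default values from the command; the variable names are illustrative, not the repo's internals:

n_steps, emptiness_step = 10000, 0.5
emptiness_weight, emptiness_multiplier = 10000.0, 20.0

for step in range(n_steps):
    w = emptiness_weight
    if step >= emptiness_step * n_steps:  # from step 5000 onward here
        w *= emptiness_multiplier         # 10000 * 20 = 200000
    # ... weight the emptiness loss by w at this update step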

Visualization results are stored in the current directory. Directories named test_* contain images (under view) and videos (under view_seq) rendered at different iterations.
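
A sketch of the resulting layout (everything other than the view and view_seq names is illustrative):

exp/
└── test_*/         # one directory per run
    ├── view/       # images rendered at different iterations
    └── view_seq/   # videos rendered at different iterations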

To Reproduce the Results in the Paper

First create a clean directory for your experiment, then run one of the following scripts from that folder:

Trump

python /path/to/sjc/run_sjc.py --sd.prompt "Trump figure" --n_steps 30000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 0

Obama

python /path/to/sjc/run_sjc.py --sd.prompt "Obama figure" --n_steps 30000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 0 --var_red False

Biden

python /path/to/sjc/run_sjc.py --sd.prompt "Biden figure" --n_steps 10000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 0

Temple of Heaven

python /path/to/sjc/run_sjc.py --sd.prompt "A zoomed out high quality photo of Temple of Heaven" --n_steps 10000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 0

Burger

python /path/to/sjc/run_sjc.py --sd.prompt "A high quality photo of a delicious burger" --n_steps 10000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 0

Icecream

python /path/to/sjc/run_sjc.py --sd.prompt "A high quality photo of a chocolate icecream cone" --n_steps 10000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 10

Ficus

python /path/to/sjc/run_sjc.py --sd.prompt "A ficus planted in a pot" --n_steps 10000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 100

Castle

python /path/to/sjc/run_sjc.py --sd.prompt "A zoomed out photo a small castle" --n_steps 10000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 50

Sydney Opera House

python /path/to/sjc/run_sjc.py --sd.prompt "A zoomed out high quality photo of Sydney Opera House" --n_steps 10000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 0

Rose

python /path/to/sjc/run_sjc.py --sd.prompt "a DSLR photo of a rose" --n_steps 10000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 50

School Bus

python /path/to/sjc/run_sjc.py --sd.prompt "A high quality photo of a yellow school bus" --n_steps 30000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 0 --var_red False

Rocket

python /path/to/sjc/run_sjc.py --sd.prompt "A wide angle zoomed out photo of Saturn V rocket from distance" --n_steps 30000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 0  --var_red False

French Fries

python /path/to/sjc/run_sjc.py --sd.prompt "A high quality photo of french fries from McDonald's" --n_steps 10000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 10

Motorcycle

python /path/to/sjc/run_sjc.py --sd.prompt "A high quality photo of a toy motorcycle" --n_steps 10000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 0

Car

python /path/to/sjc/run_sjc.py --sd.prompt "A high quality photo of a classic silver muscle car" --n_steps 10000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 0

Tank

python /path/to/sjc/run_sjc.py --sd.prompt "A product photo of a toy tank" --n_steps 20000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 0

Chair

python /path/to/sjc/run_sjc.py --sd.prompt "A high quality photo of a Victorian style wooden chair with velvet upholstery" --n_steps 50000 --lr 0.01 --sd.scale 100.0 --emptiness_weight 7000

Duck

python /path/to/sjc/run_sjc.py --sd.prompt "a DSLR photo of a yellow duck" --n_steps 10000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 10

Horse

python /path/to/sjc/run_sjc.py --sd.prompt "A photo of a horse walking" --n_steps 10000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 0

Giraffe

python /path/to/sjc/run_sjc.py --sd.prompt "A wide angle zoomed out photo of a giraffe" --n_steps 10000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 50

Zebra

python /path/to/sjc/run_sjc.py --sd.prompt "A photo of a zebra walking" --n_steps 10000 --lr 0.02 --sd.scale 100.0 --emptiness_weight 30000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 0 --var_red False

Printer

python /path/to/sjc/run_sjc.py --sd.prompt "A product photo of a Canon home printer" --n_steps 10000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 0 --var_red False

Zelda Link

python /path/to/sjc/run_sjc.py --sd.prompt "Zelda Link" --n_steps 10000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 0 --var_red False

Pig

python /path/to/sjc/run_sjc.py --sd.prompt "A pig" --n_steps 10000 --lr 0.05 --sd.scale 100.0 --emptiness_weight 10000 --emptiness_step 0.5 --emptiness_multiplier 20.0 --depth_weight 0

To Test the Voxel NeRF

python /path/to/sjc/run_nerf.py

Our bundle contains a tarball of the Lego bulldozer dataset. Untar it and the script will work.
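
For example (the tarball name is a placeholder; use the actual file from the bundle):

tar -xf <lego_bulldozer>.tar
python /path/to/sjc/run_nerf.py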

To Sample 2D images with the Karras Sampler

python /path/to/sjc/run_img_sampling.py

Use the -h flag to see the available options. We will expand the details later.
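
For example:

python /path/to/sjc/run_img_sampling.py -h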

Bib

@article{sjc,
      title={Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation},
      author={Wang, Haochen and Du, Xiaodan and Li, Jiahao and Yeh, Raymond A. and Shakhnarovich, Greg},
      journal={arXiv preprint arXiv:2212.00774},
      year={2022},
}


sjc's Issues

Multi-face Janus issue

Hi,
This is great work; I have read your paper and the results looked good.
I tried to reproduce the Obama prompt as suggested in the README, but the output ended up with the multi-face Janus issue. Is that expected? Any pointers on how to fix it?

Fantastic Work

Just wanted to say:

This is incredibly interesting work, and it seems to have a robustness to Janus issues (the front identity applied to many angles) and a quality of color that we've not yet seen from open-source implementations of DreamFusion.

Colab ModuleNotFoundError: No module named 'torchtext.legacy'

When running the Colab, it reports this error:

Loading model from /content/drive/MyDrive/sjc/release/diffusion_ckpts/stable_diffusion/sd-v1-5.ckpt
Traceback (most recent call last):
  File "/content/drive/MyDrive/sjc/run_sjc.py", line 297, in <module>
    dispatch(SJC)
  File "/content/drive/MyDrive/sjc/my/config.py", line 76, in dispatch
    mod.run()
  File "/content/drive/MyDrive/sjc/run_sjc.py", line 77, in run
    model = getattr(self, family).make()
  File "/content/drive/MyDrive/sjc/run_img_sampling.py", line 39, in make
    model = StableDiffusion(**args)
  File "/content/drive/MyDrive/sjc/adapt_sd.py", line 89, in __init__
    self.model, H, W = load_sd1_model(self.checkpoint_root())
  File "/content/drive/MyDrive/sjc/adapt_sd.py", line 58, in load_sd1_model
    model = load_model_from_config(config, str(ckpt_fname))
  File "/content/drive/MyDrive/sjc/adapt_sd.py", line 34, in load_model_from_config
    pl_sd = torch.load(ckpt, map_location="cpu")
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 712, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 1049, in _load
    result = unpickler.load()
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 1042, in find_class
    return super().find_class(mod_name, name)
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/__init__.py", line 20, in <module>
    from pytorch_lightning import metrics  # noqa: E402
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/metrics/__init__.py", line 15, in <module>
    from pytorch_lightning.metrics.classification import (  # noqa: F401
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/metrics/classification/__init__.py", line 14, in <module>
    from pytorch_lightning.metrics.classification.accuracy import Accuracy  # noqa: F401
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/metrics/classification/accuracy.py", line 18, in <module>
    from pytorch_lightning.metrics.utils import deprecated_metrics, void
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/metrics/utils.py", line 29, in <module>
    from pytorch_lightning.utilities import rank_zero_deprecation
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/utilities/__init__.py", line 18, in <module>
    from pytorch_lightning.utilities.apply_func import move_data_to_device  # noqa: F401
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/utilities/apply_func.py", line 31, in <module>
    from torchtext.legacy.data import Batch
ModuleNotFoundError: No module named 'torchtext.legacy'

Different angles in SD

I'm curious how you get the same object at each angle. If I write "chair front view", "chair side view", "chair back view", etc. in SD, it gives me an entirely different chair in each generated image. So how does this system generate a chair that looks the same in each reference image from different angles?

Question about the formula

Hello, I have a question about the code for the score calculation.
In the paper: grad = (Ds - y) / chosen_σs**2
But in the code: grad = (Ds - y) / chosen_σs
Looking forward to your reply, thank you!

Question about train_eye_with_prompts

It might be a dumb question, but I'm new to 3D. I'm trying to understand the purpose of φ = np.arccos(1 - 2 * (vs / π)) in pose.py. As far as I can tell, φ is an increased version of vs when vs is less than π/2, and a decreased one when vs is greater than π/2. I would really appreciate it if you could explain the purpose of that line!
Great work, by the way!

Confused about sampled poses

Hi, I fetched some poses generated by Poser(); here are some examples:

n_steps = 10
poser = Poser(H=128, W=256, fov=60., R=1.5)
Ks, poses, prompt_prefixes = poser.sample_train(n_steps)
print(Ks)
print(poses)

# Ks
[array([[ 287.60829222,    0.        , -127.5       ],
       [   0.        , -287.60829222,  -63.5       ],
       [   0.        ,    0.        ,   -1.        ]]), ...]
# poses
[[[-0.37924771 -0.74374956  0.55046142  0.82569212]
  [ 0.          0.59490358  0.80379707  1.20569561]
  [-0.92529518  0.3048382  -0.22561582 -0.33842373]
  [ 0.          0.          0.          1.        ]], ...]

So I was wondering why the intrinsic matrix has negative values for fy, cx, and cy. I assume it is just due to the particular coordinate-system convention you define? And if I replace the generation with given poses (e.g. real Ks and poses captured in the real world), it should also work.

Question about paper

In the paper, page 5, Eq. (26), what does it mean that it "imposes severe penalties at small weights"?

If the weights are small, which means transparent, the curve of log(1 + x) passes through (0, 0), so I would think the penalty is small.

Do you have a more detailed description?

Drop the duplicate running process in the queue on Hugging Face

Thanks for the amazing job.

When starting to train a model, the original running process can no longer be found after refreshing the page. How can I find it?
If I run again, multiple processes (including the original one) run simultaneously, leading to extremely slow training.

Do you have any suggestions?

Colab

Hi, is it possible to run this code on Colab Pro?
Are you planning to release a Colab? Very nice work.

About OOD issues.

In your paper, you say that the rendered image x is out of distribution for the 2D Stable Diffusion model.

To address this issue, you add Gaussian noise to x to obtain the set of x̄, and take x̄ as the diffusion input. However, since x is out of distribution, isn't x̄ out of distribution as well?

In my opinion, x̄ = x + σ · n is also out of distribution, so the result of denoise(x̄, σ) would also be incorrect.
I am very confused about this issue; could you help me resolve it?

Details about Eq. 19

Thanks for your work.
I am curious about Eq. (19) in the paper; can you provide a more detailed derivation or some reference material? Thanks!

How to train the model with a new dataset?

Dear authors, I want to explore fine-tuning your model, but I did not find training instructions in the README. Could you provide them? Thank you very much.

How to get obj files

Hey, this is a wonderful repo.

I wonder how I can export the output as a 3D model in OBJ or FBX format.

Where is the Jacobian implementation for the backpropagation?

I could not find the Jacobian implementation for the backpropagation. I need the Jacobian loss function described in the paper, but I cannot find it. Could anyone please help me find where that loss function is implemented?
[screenshot of the chain-rule equation from the paper omitted]
Above is a screenshot taken from the paper. I am searching for the implementation of the chain rule in code form. Could you please help me find that specific function?
Thank you for your time.

Question about the proof of formula 17

As far as I understand, formula 17 comes from formula 18 by differentiation. The inequality in formula 18 holds, but that does not mean the inequality still holds after differentiation. Would you please explain this?

Another viewpoint to interpret the score

Thanks so much for your excellent work!!

Recently I realized that there is a seemingly better way to interpret the score.

Noticing that most pre-trained diffusion models are VP-SDE diffusion models like DDPM, and that the relationship between the score and the noise prediction is $$\boldsymbol{s}_\theta(\boldsymbol{x}_t,t) \approx \nabla_{\boldsymbol{x}_t} \log p(\boldsymbol{x}_t|\boldsymbol{x}_0) = -\frac{\boldsymbol{x}_t - \sqrt{\bar{\alpha}_t}\,\boldsymbol{x}_0}{1-\bar{\alpha}_t} = -\frac{\boldsymbol{\varepsilon}}{\sqrt{1-\bar{\alpha}_t}} \approx -\frac{\boldsymbol{\varepsilon}_\theta(\boldsymbol{x}_t,t)}{\sqrt{1-\bar{\alpha}_t}},$$ there is no need to interpret the score from a denoiser point of view.

Hence, a more intuitive and simpler implementation would be https://github.com/yuanzhi-zhu/sjc/blob/main/adapt_sd.py#L137

About bs=1 and the use of the Monte-Carlo estimate

Hello! Thank you for sharing your great work.

In the paper, it says "An additional contribution of ours beyond DreamFusion [41] is our analysis of the effect that the OOD problem has when using a denoiser on rendered images (Claim 1), and the PAAS method to address it. For the variance reduction technique, namely the use of the Monte-Carlo estimate ˆ on Eq. (16), or − (in DreamFusion), vs. on Eq. (15), we observe comparable performance between the two methods empirically for 3D generation."

But in the code, it seems that only one noise sample is created (bs=1). If I understand correctly, to perturb with multiple random noises and average them, the variable bs should be greater than one. I'm curious whether I've understood this correctly. Thanks!

Replacing the Stable Diffusion model for a user-trained model

Thank you for making this available to all.

I don't have an issue per se, as your code runs successfully on my Windows 10 / RTX 3060 12GB machine, but is there a way to run the 3D generation from a user-provided, trained Stable Diffusion model?

I tried simply swapping in what I believe to be a working DreamBooth-trained SD model (yes, I just renamed my model to sd-v1-5.ckpt), but received the error below.

Is there a way to modify the adapt_sd.py script to run custom models?

Loading model from ..\release\diffusion_ckpts\stable_diffusion\sd-v1-5.ckpt
Traceback (most recent call last):
  File "D:\3dconversion\sjc\run_sjc.py", line 297, in <module>
    dispatch(SJC)
  File "D:\3dconversion\sjc\my\config.py", line 76, in dispatch
    mod.run()
  File "D:\3dconversion\sjc\run_sjc.py", line 77, in run
    model = getattr(self, family).make()
  File "D:\3dconversion\sjc\run_img_sampling.py", line 39, in make
    model = StableDiffusion(**args)
  File "D:\3dconversion\sjc\adapt_sd.py", line 90, in __init__
    self.model, H, W = load_sd1_model(self.checkpoint_root())
  File "D:\3dconversion\sjc\adapt_sd.py", line 59, in load_sd1_model
    model = load_model_from_config(config, str(ckpt_fname))
  File "D:\3dconversion\sjc\adapt_sd.py", line 38, in load_model_from_config
    sd = pl_sd["state_dict"]
KeyError: 'state_dict'

Query: 4 channels in NeRF output

Hi, I have a query about the voxnerf implementation: the generated features have 4 channels, while the density feature is computed separately. Does the last channel correspond to alpha?

3D Reconstruction?

Hi, this is great work!
Since SparseFusion's code is not out yet, what do you recommend for 3D reconstruction? PixelNeRF?

Thanks!

Questions about center depth loss in the paper

The center depth loss equation from the paper gives NaN when

$$ \frac 1 {|\mathcal{B}|}\sum_{p\in\mathcal{B}}D(p) < \frac 1 {|\mathcal{B}^\complement|} \sum_{q\notin\mathcal{B}} D(q). $$

Does that mean the loss is only applied when

$$ \frac 1 {|\mathcal{B}|}\sum_{p\in\mathcal{B}}D(p) > \frac 1 {|\mathcal{B}^\complement|} \sum_{q\notin\mathcal{B}} D(q)? $$

But the loss encourages the average center depth to be large, which means you want the object to be away from the scene center. Where did I go wrong?
