
collaborative-diffusion's Introduction

Collaborative Diffusion (CVPR 2023)


This repository contains the implementation of the following paper:

Collaborative Diffusion for Multi-Modal Face Generation and Editing
Ziqi Huang, Kelvin C.K. Chan, Yuming Jiang, Ziwei Liu
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023

From MMLab@NTU affiliated with S-Lab, Nanyang Technological University

📖 Overview

We propose Collaborative Diffusion, where users can use multiple modalities to control face generation and editing. (a) Face Generation. Given multi-modal controls, our framework synthesizes high-quality images consistent with the input conditions. (b) Face Editing. Collaborative Diffusion also supports multi-modal editing of real images with promising identity preservation capability.


We use pre-trained uni-modal diffusion models to perform multi-modal guided face generation and editing. At each step of the reverse process (i.e., from timestep t to t − 1), the dynamic diffusers predict spatially-varying and temporally-varying influence functions that selectively enhance or suppress the contribution of each modality.
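
As a rough illustration (not the repository's actual API), one collaborative denoising step could look like the sketch below: each dynamic diffuser predicts an influence map, and the uni-modal noise predictions are blended with softmax-normalized weights. All function and variable names here are hypothetical.

    import torch

    def collaborative_step(z_t, t, unimodal_models, dynamic_diffusers, conditions):
        # Each pre-trained uni-modal diffusion model predicts noise from its own condition.
        eps_preds = torch.stack(
            [model(z_t, t, cond) for model, cond in zip(unimodal_models, conditions)], dim=0)

        # Each dynamic diffuser predicts a spatially- and temporally-varying
        # influence map for its modality (broadcastable to z_t's shape).
        influences = torch.stack(
            [diffuser(z_t, t, cond) for diffuser, cond in zip(dynamic_diffusers, conditions)], dim=0)

        # Normalize influence across modalities at every location, then blend
        # the uni-modal noise predictions pixel-wise.
        weights = influences.softmax(dim=0)
        eps_collab = (weights * eps_preds).sum(dim=0)
        return eps_collab  # fed into the usual DDIM/DDPM update from t to t-1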

✔️ Updates

  • [10/2023] Collaborative Diffusion now supports FreeU. See here for how to run Collaborative Diffusion + FreeU.
  • [09/2023] We provide the inference script for face generation driven by a single modality, as well as the scripts and checkpoints for 256x256 resolution.
  • [09/2023] Editing code is released.
  • [06/2023] We provide the preprocessed multi-modal annotations here.
  • [05/2023] Training code for Collaborative Diffusion (512x512) released.
  • [04/2023] Project page and video available.
  • [04/2023] arXiv paper available.
  • [04/2023] Checkpoints for multi-modal face generation (512x512) released.
  • [04/2023] Inference code for multi-modal face generation (512x512) released.

🔨 Installation

  1. Clone repo

    git clone https://github.com/ziqihuangg/Collaborative-Diffusion
    cd Collaborative-Diffusion
  2. Create conda environment.
    If you already have an ldm environment installed according to LDM, you do not need to go through this step (i.e., step 2). You can simply conda activate ldm and jump to step 3.

     conda env create -f environment.yaml
     conda activate codiff
  3. Install dependencies

     pip install transformers==4.19.2 scann kornia==0.6.4 torchmetrics==0.6.0
     conda install -c anaconda git
     pip install git+https://github.com/arogozhnikov/einops.git
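
As a quick, optional sanity check (not part of the repository), you can verify that the key packages import with the expected versions and that CUDA is visible:

    import torch, transformers, kornia, torchmetrics, einops

    print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
    print("transformers:", transformers.__version__)  # expected 4.19.2
    print("kornia:", kornia.__version__)              # expected 0.6.4
    print("torchmetrics:", torchmetrics.__version__)  # expected 0.6.0
    print("einops:", einops.__version__)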

⬇️ Download

Download Checkpoints

  1. Download the pre-trained models from Google Drive or OneDrive.

  2. Put the models under pretrained as follows:

    Collaborative-Diffusion
    └── pretrained
        ├── 256_codiff_mask_text.ckpt
        ├── 256_mask.ckpt
        ├── 256_text.ckpt
        ├── 256_vae.ckpt
        ├── 512_codiff_mask_text.ckpt
        ├── 512_mask.ckpt
        ├── 512_text.ckpt
        └── 512_vae.ckpt
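
Optionally, you can verify that a downloaded checkpoint loads correctly before running inference. This is a hypothetical check, not a repository script; it assumes the PyTorch Lightning convention of storing weights under a "state_dict" key:

    import torch

    ckpt = torch.load("pretrained/512_codiff_mask_text.ckpt", map_location="cpu")
    state = ckpt.get("state_dict", ckpt)  # Lightning checkpoints nest weights here
    print(len(state), "tensors; first keys:", list(state)[:3])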
    

Download Datasets

We provide the preprocessed data used in this project (see Acknowledgement for the data source). You only need to download it if you want to reproduce the training of Collaborative Diffusion; you can skip this step if you simply want to use our pre-trained models for inference.

  1. Download the preprocessed training data from here.

  2. Put the datasets under dataset as follows:

    Collaborative-Diffusion
    └── dataset
        ├── image
        |   └──image_512_downsampled_from_hq_1024
        ├── text
        |   └──captions_hq_beard_and_age_2022-08-19.json
        ├── mask
        |   └──CelebAMask-HQ-mask-color-palette_32_nearest_downsampled_from_hq_512_one_hot_2d_tensor
        └── sketch
            └──sketch_1x1024_tensor
    

For more details about the annotations, please refer to CelebA-Dialog.
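
If you want to confirm the downloads, a minimal inspection sketch is given below. It assumes per-sample files such as 0.pt for masks (as mentioned in the issues further down) and standard image files; only the directory layout above is taken from the repository, so exact filenames may differ.

    import json
    import torch
    from PIL import Image

    root = "dataset"

    # Text: JSON file with the caption annotations.
    with open(f"{root}/text/captions_hq_beard_and_age_2022-08-19.json") as f:
        captions = json.load(f)
    print("caption entries:", len(captions))

    # Mask: one-hot tensor over 19 face-parsing classes at 32x32 (flattened to 1024).
    mask_dir = (f"{root}/mask/CelebAMask-HQ-mask-color-palette_32_nearest_"
                "downsampled_from_hq_512_one_hot_2d_tensor")
    mask = torch.load(f"{mask_dir}/0.pt")  # filename assumed
    print("mask tensor shape:", tuple(mask.shape))

    # Image: 512x512 faces downsampled from the 1024x1024 CelebA-HQ originals.
    img = Image.open(f"{root}/image/image_512_downsampled_from_hq_1024/0.jpg")  # extension assumed
    print("image size:", img.size)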

🖼️ Generation

Multi-Modal-Driven Generation

You can control face generation using both text and a segmentation mask. A small batch-driver sketch follows the steps below.

  1. mask_path is the path to the segmentation mask, and input_text is the text condition.

    python generate.py \
    --mask_path test_data/512_masks/27007.png \
    --input_text "This man has beard of medium length. He is in his thirties."
    python generate.py \
    --mask_path test_data/512_masks/29980.png \
    --input_text "This woman is in her forties."
  2. You can view different types of intermediate outputs by setting the corresponding flags to 1. For example, to view the influence functions, set return_influence_function to 1.

    python generate.py \
    --mask_path test_data/512_masks/27007.png \
    --input_text "This man has beard of medium length. He is in his thirties." \
    --ddim_steps 10 \
    --batch_size 1 \
    --save_z 1 \
    --return_influence_function 1 \
    --display_x_inter 1 \
    --save_mixed 1

    Note that producing intermediate results might consume a lot of GPU memory, so we suggest setting batch_size to 1, and setting ddim_steps to a smaller value (e.g., 10) to save memory and computation time.

  3. Our script synthesizes images at 512x512 resolution by default. You can generate 256x256 images by changing the config and checkpoint:

    python generate.py \
    --mask_path test_data/256_masks/29980.png \
    --input_text "This woman is in her forties." \
    --config_path "configs/256_codiff_mask_text.yaml" \
    --ckpt_path "pretrained/256_codiff_mask_text.ckpt" \
    --save_folder "outputs/inference_256_codiff_mask_text"
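
If you want to run generation over several mask/text pairs, a tiny driver script can loop over the flags documented above. This is a hypothetical convenience wrapper, not part of the repository:

    import subprocess

    pairs = [
        ("test_data/512_masks/27007.png",
         "This man has beard of medium length. He is in his thirties."),
        ("test_data/512_masks/29980.png",
         "This woman is in her forties."),
    ]

    for mask_path, text in pairs:
        subprocess.run(
            ["python", "generate.py", "--mask_path", mask_path, "--input_text", text],
            check=True)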

Text-to-Face Generation

  1. Give the text prompt and generate the face image:

    python text2image.py \
    --input_text "This man has beard of medium length. He is in his thirties."
    python text2image.py \
    --input_text "This woman is in her forties."
  2. Our script synthesizes images at 512x512 resolution by default. You can generate 256x256 images by changing the config and checkpoint:

    python text2image.py \
    --input_text "This man has beard of medium length. He is in his thirties." \
    --config_path "configs/256_text.yaml" \
    --ckpt_path "pretrained/256_text.ckpt" \
    --save_folder "outputs/256_text2image"

Mask-to-Face Generation

  1. Give the face segmentation mask and generate the face image:

    python mask2image.py \
    --mask_path "test_data/512_masks/29980.png"
    python mask2image.py \
    --mask_path "test_data/512_masks/27007.png"
  2. Our script synthesizes images at 512x512 resolution by default. You can generate 256x256 images by changing the config and checkpoint:

    python mask2image.py \
    --mask_path "test_data/256_masks/29980.png" \
    --config_path "configs/256_mask.yaml" \
    --ckpt_path "pretrained/256_mask.ckpt" \
    --save_folder "outputs/256_mask2image"

🎨 Editing

You can edit a face image according to a target mask and target text. We achieve this by collaborating multiple uni-modal edits, using Imagic to perform each uni-modal edit. A conceptual sketch of the Imagic-style procedure follows the steps below.

  1. Perform text-based editing.

    python editing/imagic_edit_text.py
  2. Perform mask-based editing. Note that we adapted Imagic (the text-based method) to mask-based editing.

    python editing/imagic_edit_mask.py
  3. Collaborate the text-based edit and the mask-based edit using Collaborative Diffusion.

    python editing/collaborative_edit.py
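
For intuition, here is a heavily simplified, hypothetical sketch of the Imagic-style procedure each uni-modal edit follows (embedding optimization, model fine-tuning, then alpha interpolation). The helper names (reconstruction_loss, finetune, sample) are placeholders and do not match the repository's scripts.

    import torch

    def imagic_edit(model, image, target_cond, alphas=(0.8, 1.0, 1.2)):
        """Edit `image` toward `target_cond` (a text or mask embedding)."""
        # 1) Optimize the condition embedding so the frozen model reconstructs `image`.
        opt_cond = target_cond.clone().requires_grad_(True)
        optimizer = torch.optim.Adam([opt_cond], lr=1e-3)
        for _ in range(100):
            loss = model.reconstruction_loss(image, opt_cond)  # placeholder helper
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        # 2) Fine-tune the diffusion model around the optimized embedding.
        model.finetune(image, opt_cond.detach())  # placeholder helper

        # 3) Interpolate between the optimized and target embeddings and sample;
        #    larger alpha pushes the result further toward the target condition.
        edits = []
        for alpha in alphas:
            cond = alpha * target_cond + (1 - alpha) * opt_cond.detach()
            edits.append(model.sample(cond))  # placeholder helper
        return edits

Collaborative editing then blends such a text-based edit and a mask-based edit through the dynamic diffusers' influence functions (editing/collaborative_edit.py).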

🏃 Training

We provide the entire training pipeline, including training the VAE, uni-modal diffusion models, and our proposed dynamic diffusers.

If you are only interested in training the dynamic diffusers, you can use our provided checkpoints for the VAE and uni-modal diffusion models. Simply skip steps 1 and 2 and go directly to step 3.

  1. Train VAE.

    LDM compresses images into VAE latents to save computational cost, and later trains UNet diffusion models on those latents. This step reproduces pretrained/512_vae.ckpt (a minimal encode/decode sketch is given after the training steps below).

    python main.py \
    --logdir 'outputs/512_vae' \
    --base 'configs/512_vae.yaml' \
    -t  --gpus 0,1,2,3,
  2. Train the uni-modal diffusion models.

    (1) Train the text-to-image model. This step reproduces pretrained/512_text.ckpt.

    python main.py \
    --logdir 'outputs/512_text' \
    --base 'configs/512_text.yaml' \
    -t  --gpus 0,1,2,3,

    (2) Train the mask-to-image model. This step reproduces pretrained/512_mask.ckpt.

    python main.py \
    --logdir 'outputs/512_mask' \
    --base 'configs/512_mask.yaml' \
    -t  --gpus 0,1,2,3,
  3. Train the dynamic diffusers.

    The dynamic diffusers are the meta-networks that determine how the uni-modal diffusion models collaborate. This step reproduces pretrained/512_codiff_mask_text.ckpt.

    python main.py \
    --logdir 'outputs/512_codiff_mask_text' \
    --base 'configs/512_codiff_mask_text.yaml' \
    -t  --gpus 0,1,2,3,
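
As a sanity check for step 1, the sketch below illustrates what the trained VAE is used for: encoding images into compact latents and decoding them back. It assumes the LDM-style AutoencoderKL interface (encode() returning a Gaussian posterior with .sample(), and decode()); exact config keys and latent shapes may differ.

    import torch
    from omegaconf import OmegaConf
    from ldm.util import instantiate_from_config

    config = OmegaConf.load("configs/512_vae.yaml")
    vae = instantiate_from_config(config.model)
    sd = torch.load("pretrained/512_vae.ckpt", map_location="cpu")["state_dict"]
    vae.load_state_dict(sd, strict=False)
    vae.eval()

    x = torch.randn(1, 3, 512, 512)      # stand-in for a normalized face image
    with torch.no_grad():
        z = vae.encode(x).sample()        # compact latent the diffusion UNets operate on
        x_rec = vae.decode(z)             # back to image space
    print(tuple(z.shape), tuple(x_rec.shape))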

🖋️ Citation

If you find our repo useful for your research, please consider citing our paper:

 @InProceedings{huang2023collaborative,
     author = {Huang, Ziqi and Chan, Kelvin C.K. and Jiang, Yuming and Liu, Ziwei},
     title = {Collaborative Diffusion for Multi-Modal Face Generation and Editing},
     booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
     year = {2023},
 }

💜 Acknowledgement

The codebase is maintained by Ziqi Huang.

This project is built on top of LDM. We trained on data provided by CelebA-HQ, CelebA-Dialog, CelebAMask-HQ, and MM-CelebA-HQ-Dataset. We also make use of the Imagic implementation.


collaborative-diffusion's Issues

Inquiry about Training Time and RuntimeError in Diffuser Code

Hello,

Thank you for your nice work. I recently encountered an issue while running the training code for the diffuser from GitHub, and I would appreciate your guidance.

During training, I encountered the following error:

Diffusion/ldm/models/diffusion/ddpm_compose.py", line 1237, in p_losses
logvar_t = self.logvar[t].to(self.device)
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (CPU)

I managed to resolve the issue by moving 't' to the CPU. However, I noticed that the training time for a single epoch is quite long, nearly an hour. I am unsure if this training time is normal or if my actions, such as training on the CPU, are causing the slowdown.

Could you please share your typical training time for a single epoch, so I can better understand if my situation is unusual? Additionally, if you suspect that there may be issues with my setup, I would greatly appreciate any suggestions or solutions you can offer.

Thank you very much for your assistance.

About training time

Hi,

Thank you for your nice work.

I would like to know the time required for training each component, including the VAE, the uni-modal models, and the dynamic diffuser.

I trained the VAE model for about 3 hours using 4 GPUs, but I still find the sampled images are poor.

[sampled image]

The reconstructed images look OK.

[reconstruction image]

Collaborative_edit

Excellent work! I am very interested in Multi-Modal Collaborative Editing. I have a question: why do the results of Mask_edit and Text_edit show a significant difference in skin tone compared to the input image, while the result of Collaborative_edit has a skin tone very similar to the input image? I look forward to your response, thank you!
[comparison images]

NameError: name 'trainer' is not defined

(codiff) ubuntu@ubun:~/lixiaoyi/Collaborative-Diffusion$ python main.py --logdir 'outputs/512_vae' --base 'configs/512_vae.yaml' -t --gpus 0,1,2,3,
2023-06-04 20:28:56.401572: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-06-04 20:28:56.460492: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-06-04 20:28:56.711893: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/ubuntu/plumed-2.8.2:/usr/local/cuda-11.3/lib64:
2023-06-04 20:28:56.711925: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/ubuntu/plumed-2.8.2:/usr/local/cuda-11.3/lib64:
2023-06-04 20:28:56.711928: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Global seed set to 23
Running on GPUs 0,1,2,3,
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 3, 64, 64) = 12288 dimensions.
making attention of type 'vanilla' with 512 in_channels
loaded pretrained LPIPS loss from taming/modules/autoencoder/lpips/vgg.pth
Monitoring val/rec_loss as checkpoint metric.
Merged modelckpt-cfg:
{'target': 'pytorch_lightning.callbacks.ModelCheckpoint', 'params': {'dirpath': 'outputs/512_vae/2023-06-04T20-28-57_512_vae/checkpoints', 'filename': '{epoch:06}', 'verbose': True, 'save_last': True, 'monitor': 'val/rec_loss', 'save_top_k': 10}}
Traceback (most recent call last):
File "main.py", line 672, in
trainer = Trainer.from_argparse_args(trainer_opt, **trainer_kwargs)
File "/home/ubuntu/anaconda3/envs/codiff/lib/python3.8/site-packages/pytorch_lightning/trainer/properties.py", line 421, in from_argparse_args
return from_argparse_args(cls, args, **kwargs)
File "/home/ubuntu/anaconda3/envs/codiff/lib/python3.8/site-packages/pytorch_lightning/utilities/argparse.py", line 52, in from_argparse_args
return cls(**trainer_kwargs)
File "/home/ubuntu/anaconda3/envs/codiff/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/env_vars_connector.py", line 40, in insert_env_defaults
return fn(self, **kwargs)
File "/home/ubuntu/anaconda3/envs/codiff/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 346, in init
gpu_ids, tpu_cores = self._parse_devices(gpus, auto_select_gpus, tpu_cores)
File "/home/ubuntu/anaconda3/envs/codiff/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1251, in _parse_devices
gpu_ids = device_parser.parse_gpu_ids(gpus)
File "/home/ubuntu/anaconda3/envs/codiff/lib/python3.8/site-packages/pytorch_lightning/utilities/device_parser.py", line 91, in parse_gpu_ids
return _sanitize_gpu_ids(gpus)
File "/home/ubuntu/anaconda3/envs/codiff/lib/python3.8/site-packages/pytorch_lightning/utilities/device_parser.py", line 163, in _sanitize_gpu_ids
raise MisconfigurationException(
pytorch_lightning.utilities.exceptions.MisconfigurationException: You requested GPUs: [0, 1, 2, 3]
But your machine only has: [0]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "main.py", line 753, in
if trainer.global_rank == 0:
NameError: name 'trainer' is not defined

Text-to-Face.

Hi, great work. We find this code seems to only work on your given text "This man has beard of medium length. He is in his thirties.". When I use "He doesn't have any bangs and has an extremely mild smile, no beard at all, and no glasses. this person is in the thirties.", the results are far from the given text; the results are almost all female. So is there any bug here?

about the cross attention in Dynamic Diffuser

Hi, you have done nice work. I'm interested in your work, but I wonder why we can do cross-attention between the features of x_t and the context.

context: mask -(resize)-> [bz, 32, 32] -(one hot)-> [bz, 19, 32, 32] -(flatten)-> [bz, 19, 1024] -(linear layer)-> [bz, 19, 640]

def forward(self, x, context=None, mask=None):
        h = self.heads   # x.shape: [bz, 1024, 384], context.shape: [4, 19, 640]

        q = self.to_q(x)         # q.shape: [bz, 1024, 384]
        context = default(context, x)
        k = self.to_k(context)   # k.shape: [bz, 19, 384]
        v = self.to_v(context)   # v.shape: [bz, 19, 384]

        q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v)) # q.shape: [128, 1024, 12], k[128, 19, 12], v[128, 19, 12]

        sim = einsum('b i d, b j d -> b i j', q, k) * self.scale    # sim: [248, 1024, 19]

        if exists(mask):
            mask = rearrange(mask, 'b ... -> b (...)')
            max_neg_value = -torch.finfo(sim.dtype).max
            mask = repeat(mask, 'b j -> (b h) () j', h=h)
            sim.masked_fill_(~mask, max_neg_value)

        # attention, what we cannot get enough of
        attn = sim.softmax(dim=-1)         # attn: [128, 1024, 19]

        out = einsum('b i j, b j d -> b i d', attn, v)   # out.shape: [128, 1024, 12]
        out = rearrange(out, '(b h) n d -> b n (h d)', h=h)   # out.shape: [4, 1024, 384]
        return self.to_out(out)

So why can we do cross-attention between x_t (more precisely, z_t) and the features of the mask, and what does it mean?
Thank you very much if you could solve my problem.

env error

I followed the tutorial to create the environment, but when I run the code, the following error occurs.
Traceback (most recent call last):
File "main.py", line 6, in
import pytorch_lightning as pl
File "/home/wpx/miniconda3/envs/codiff/lib/python3.8/site-packages/pytorch_lightning/init.py", line 20, in
from pytorch_lightning import metrics # noqa: E402
File "/home/wpx/miniconda3/envs/codiff/lib/python3.8/site-packages/pytorch_lightning/metrics/init.py", line 15, in
from pytorch_lightning.metrics.classification import ( # noqa: F401
File "/home/wpx/miniconda3/envs/codiff/lib/python3.8/site-packages/pytorch_lightning/metrics/classification/init.py", line 14, in
from pytorch_lightning.metrics.classification.accuracy import Accuracy # noqa: F401
File "/home/wpx/miniconda3/envs/codiff/lib/python3.8/site-packages/pytorch_lightning/metrics/classification/accuracy.py", line 16, in
from torchmetrics import Accuracy as _Accuracy
File "/home/wpx/miniconda3/envs/codiff/lib/python3.8/site-packages/torchmetrics/init.py", line 14, in
from torchmetrics import functional # noqa: E402
File "/home/wpx/miniconda3/envs/codiff/lib/python3.8/site-packages/torchmetrics/functional/init.py", line 68, in
from torchmetrics.functional.text.bert import bert_score
File "/home/wpx/miniconda3/envs/codiff/lib/python3.8/site-packages/torchmetrics/functional/text/bert.py", line 28, in
from transformers import AutoModel, AutoTokenizer
File "/home/wpx/miniconda3/envs/codiff/lib/python3.8/site-packages/transformers/init.py", line 43, in
from . import dependency_versions_check
File "/home/wpx/miniconda3/envs/codiff/lib/python3.8/site-packages/transformers/dependency_versions_check.py", line 41, in
require_version_core(deps[pkg])
File "/home/wpx/miniconda3/envs/codiff/lib/python3.8/site-packages/transformers/utils/versions.py", line 94, in require_version_core
return require_version(requirement, hint)
File "/home/wpx/miniconda3/envs/codiff/lib/python3.8/site-packages/transformers/utils/versions.py", line 85, in require_version
if want_ver is not None and not ops[op](version.parse(got_ver), version.parse(want_ver)):
File "/home/wpx/miniconda3/envs/codiff/lib/python3.8/site-packages/packaging/version.py", line 54, in parse
return Version(version)
File "/home/wpx/miniconda3/envs/codiff/lib/python3.8/site-packages/packaging/version.py", line 200, in init
raise InvalidVersion(f"Invalid version: '{version}'")
packaging.version.InvalidVersion: Invalid version: '0.10.1,<0.11'

About GPU

Hi, how much GPU memory is required? Can I run it on an RTX 3090?

about segmentation masks

Hello, I think your work is excellent and inspiring. May I ask if you have the code to convert facial images into segmentation masks? Thank you very much!

FID

Hi, I want to ask how you calculate the FID. Do you generate 3000 images for the 3000 testing samples and compute the FID between these 3000 images and the training dataset, or the whole dataset?

Questions about differences between Multi-ControlNet

Hi, thanks for your excellent work!
I understand that ControlNet was released after the CVPR 2023 deadline, but I'm curious about the differences between your work and Multi-ControlNet and any additional advantages in your work. It appears that Multi-ControlNet can also handle multi-modal generation.

condition resolution

Nice work! It seems the resolution of the condition input is just 32x32, i.e., [19, 1024] for the mask and [1, 1024] for the sketch? Is that right?

preprocessing code for sketch and mask

I am very interested in your work, thank you for sharing the codes of Collaborative Diffusion.

I want to generate images from my own sketch and mask. Could you share the preprocessing code for sketch and mask?

I noticed that the sketches and masks you used are 19x1024 tensors, but the raw sketches and masks are 512x512 px images.

How can I convert my own mask or sketch from a 512x512 px image to a 19x1024 tensor? Can you give me some advice?

Thank you very much!
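
Until an official preprocessing script is released, here is a hedged sketch of one plausible conversion path, inferred from the mask directory name ("..._32_nearest_downsampled_from_hq_512_one_hot_2d_tensor") and the shapes discussed in this thread. The repository's actual preprocessing may differ, and a color-coded mask would first need mapping to integer class labels.

import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image

NUM_CLASSES = 19  # CelebAMask-HQ face-parsing classes

mask = Image.open("my_mask_512.png")                  # 512x512 label map (single channel / palette)
mask = mask.resize((32, 32), resample=Image.NEAREST)  # nearest keeps labels hard
labels = torch.from_numpy(np.array(mask)).long()      # [32, 32] integer class labels

one_hot = F.one_hot(labels, NUM_CLASSES)              # [32, 32, 19]
one_hot = one_hot.permute(2, 0, 1).reshape(NUM_CLASSES, -1)  # [19, 1024]
torch.save(one_hot.float(), "my_mask.pt")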

Which image should I use in image editing

[image]
Hello Ziqi Huang, I am very interested in your work, thank you for sharing the code of Collaborative Diffusion.
I am a little confused about the image editing results. I think 0C_interText_optDM_?_alpha is the generated result, but there are results for different alpha values. In this case, I think the image named "0C_interText_optDM_0_alpha=-1.0" should be the best one. Do you automatically select the image whose alpha is likely the best?

Empty condition for training uni-modal image generation

I am very interested in your work, thank you for sharing the codes of Collaborative Diffusion.
I have two questions: what is the empty condition when you train text/mask-to-image generation? If I understand correctly, for text-to-image generation, the empty condition is [""], as your code shows "uc = model.get_learned_conditioning(n_samples * [""])". However, what is the empty condition for training mask-to-image generation? Is it a zero mask? Thank you very much for your attention.

about Training Time

It seems that max_epochs defaults to 1000 for training a single modality, and it costs about 25 min for one epoch using 8 GPUs. Does that mean it takes about 1000 * 25 / 60 / 24 ≈ 17 days to train the text2img model?

About the training epochs of the VAE model and the uni-modal text-to-face model

Hello! Based on the instructions you provided, I am trying to retrain the VAE model and the uni-modal text-to-face model on an RTX 3090. May I ask how many epochs you trained these two models for, respectively? Or do you decide when to end training based on the visualization results in reconstructions_gs-xxxxxx_e-xxxxxx_b-xxxxxxx.png and samples_gs-xxxxxx_e-xxxxxx_b-xxxxxxx.png?
Looking forward to your answer.

Image editing

Could you release the instruction for image editing?

Pre-trained model download from Google Drive always fails

Fantastic work, very interested in it.

  1. The pre-trained models shared via Google Drive are too unstable to download in China. I've been downloading all day, and the download always gets interrupted and fails. Could you share them on another network disk, such as Baidu Netdisk?

pre-processed data

Hi,
could you provide the code for generating the pre-processed data, i.e., the mask files (0.pt, 1.pt, ...)?
Thank you very much!
