unicontrol's Introduction

This repository is for the paper:

UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild
Can Qin 1,2, Shu Zhang 1, Ning Yu 1, Yihao Feng 1, Xinyi Yang 1, Yingbo Zhou 1, Huan Wang 1, Juan Carlos Niebles 1, Caiming Xiong 1, Silvio Savarese 1, Stefano Ermon 3, Yun Fu 2, Ran Xu 1
1 Salesforce AI 2 Northeastern University 3 Stanford University
Work done when Can Qin was an intern at Salesforce AI Research.

Introduction

We introduce UniControl, a new generative foundation model that consolidates a wide array of controllable condition-to-image (C2I) tasks within a singular framework, while still allowing for arbitrary language prompts. UniControl enables pixel-level-precise image generation, where visual conditions primarily influence the generated structures and language prompts guide the style and context. To equip UniControl with the capacity to handle diverse visual conditions, we augment pretrained text-to-image diffusion models and introduce a task-aware HyperNet to modulate the diffusion models, enabling the adaptation to different C2I tasks simultaneously. Experimental results show that UniControl often surpasses the performance of single-task-controlled methods of comparable model sizes. This control versatility positions UniControl as a significant advancement in the realm of controllable visual generation.
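
To make the task-aware modulation concrete, here is a minimal, illustrative PyTorch sketch of the idea: a learned task embedding drives a small hypernetwork that produces per-channel scale and shift for a control-branch feature map. All names here are hypothetical and this is not the repository's actual implementation.

import torch
import torch.nn as nn

class TaskAwareHyperNet(nn.Module):
    """Toy sketch: a task embedding yields per-channel scale/shift that
    modulates a control-branch feature map. Illustrative only."""
    def __init__(self, num_tasks: int, emb_dim: int, feat_channels: int):
        super().__init__()
        self.task_emb = nn.Embedding(num_tasks, emb_dim)
        self.to_scale_shift = nn.Linear(emb_dim, 2 * feat_channels)

    def forward(self, feat: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        scale, shift = self.to_scale_shift(self.task_emb(task_id)).chunk(2, dim=-1)
        # broadcast the per-channel modulation over the spatial dimensions
        return feat * (1 + scale[:, :, None, None]) + shift[:, :, None, None]

# Example: modulate a (B, C, H, W) feature for two samples of task id 3.
hyper = TaskAwareHyperNet(num_tasks=12, emb_dim=64, feat_channels=320)
out = hyper(torch.randn(2, 320, 64, 64), torch.tensor([3, 3]))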

Updates

  • 05/18/23: UniControl paper uploaded to arXiv.
  • 05/26/23: UniControl inference code and checkpoint released to the public.
  • 05/28/23: Latest UniControl model checkpoint (1.4B parameters, 5.78 GB) updated.
  • 06/08/23: Latest UniControl model checkpoint updated; it now supports 12 tasks (Canny, HED, Sketch, Depth, Normal, Skeleton, Bbox, Seg, Outpainting, Inpainting, Deblurring, and Colorization)!
  • 06/08/23: Training dataset (MultiGen-20M) is fully released.
  • 06/08/23: Training code is public. 😊
  • 07/06/23: UniControl v1.1 model checkpoint updated; it also supports all 12 tasks (Canny, HED, Sketch, Depth, Normal, Skeleton, Bbox, Seg, Outpainting, Inpainting, Deblurring, and Colorization)!
  • 07/25/23: Hugging Face Demo API is available! (HuggingFace Space)
  • 07/25/23: Safetensors model is available! (checkpoint)
  • 09/21/23: UniControl is accepted to NeurIPS 2023. 😊

MultiGen-20M Datasets

The dataset contains more than 20M image-prompt-condition triplets, with a total size of over 2 TB. It covers all 12 tasks (Canny, HED, Sketch, Depth, Normal, Skeleton, Bbox, Seg, Outpainting, Inpainting, Deblurring, Colorization), all of which are fully released.
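
For orientation, below is a minimal sketch of a triplet loader. The annotation filename and the JSON field names ('source', 'target', 'prompt') are assumptions for illustration only; check the released training code for the actual data format.

import json
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset

class MultiGenTriplets(Dataset):
    """Sketch of an image-prompt-condition triplet loader (hypothetical layout)."""
    def __init__(self, root: str, task: str):
        self.root = Path(root)
        ann_file = self.root / task / "annotations.json"  # hypothetical path
        self.items = json.loads(ann_file.read_text())

    def __len__(self):
        return len(self.items)

    def __getitem__(self, i):
        item = self.items[i]
        condition = Image.open(self.root / item["source"]).convert("RGB")  # condition map
        target = Image.open(self.root / item["target"]).convert("RGB")     # ground-truth image
        return {"condition": condition, "image": target, "prompt": item["prompt"]}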

Instruction

Environment Preparation

Set up the environment first (this may take a few minutes).

conda env create -f environment.yaml
conda activate unicontrol

Checkpoint Preparation

The pre-trained UniControl checkpoint should be saved at ./ckpts/unicontrol.ckpt.

cd ckpts
wget https://storage.googleapis.com/sfr-unicontrol-data-research/unicontrol.ckpt 

You can also use the latest trained model, v1.1 (ckpt and safetensors):

wget https://storage.googleapis.com/sfr-unicontrol-data-research/unicontrol_v1.1.ckpt
wget https://storage.googleapis.com/sfr-unicontrol-data-research/unicontrol_v1.1.st
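
A quick, optional way to confirm that a download is intact (a sketch that assumes the usual PyTorch .ckpt layout with a state_dict entry):

import torch

state = torch.load("./ckpts/unicontrol_v1.1.ckpt", map_location="cpu")
sd = state.get("state_dict", state)  # some checkpoints nest weights under 'state_dict'
print(f"{len(sd)} tensors")
for name in list(sd)[:5]:
    print(name, tuple(sd[name].shape))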

If you want to train from scratch, please follow ControlNet to prepare the initialization checkpoint. ControlNet provides a simple script to do this. If your SD checkpoint is ./ckpts/v1-5-pruned.ckpt and you want the script to save the processed model (SD + ControlNet) at ./ckpts/control_sd15_ini.ckpt, just run:

python tool_add_control.py ./ckpts/v1-5-pruned.ckpt ./ckpts/control_sd15_ini.ckpt
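
Conceptually (and greatly simplified), the ControlNet script initializes the control branch from the matching Stable Diffusion UNet weights. The sketch below conveys that idea only; it does not build the model from its config, so use tool_add_control.py itself for real runs.

import torch

sd = torch.load("./ckpts/v1-5-pruned.ckpt", map_location="cpu")["state_dict"]

merged = dict(sd)  # start from the plain SD weights
src, dst = "model.diffusion_model.", "control_model."
for name, tensor in sd.items():
    if name.startswith(src):
        # mirror UNet weights into the control branch where the names line up
        merged[dst + name[len(src):]] = tensor.clone()

torch.save({"state_dict": merged}, "./ckpts/control_sd15_ini_sketch.ckpt")  # illustrative output name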

Data Preparation

Download the training dataset (MultiGen-20M) to ./multigen20m:

cd multigen20m
gsutil -m cp -r gs://sfr-unicontrol-data-research/dataset ./

Then unzip all the files.
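
If you prefer to do the extraction in Python, a small sketch that unzips every archive under ./multigen20m:

import zipfile
from pathlib import Path

for zp in Path("./multigen20m").rglob("*.zip"):
    with zipfile.ZipFile(zp) as zf:
        zf.extractall(zp.with_suffix(""))  # extract next to the archive
    print("extracted", zp)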

Model Training (tested with CUDA 11.0 and Conda 4.12.0)

Training from Scratch:

python train_unicontrol.py --ckpt ./ckpts/control_sd15_ini.ckpt --config ./models/cldm_v15_unicontrol_v11.yaml --lr 1e-5

Model Finetuning:

python train_unicontrol.py --ckpt ./ckpts/unicontrol.ckpt  --config ./models/cldm_v15_unicontrol.yaml --lr 1e-7

Model Inference (tested with CUDA 11.0 and Conda 4.12.0)

For the different tasks, please run the code as follows. If you hit an out-of-memory (OOM) error, decrease --num_samples.

If you use the safetensors model, you can load it following ./load_model/load_safetensors_model.py.
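
For reference, a minimal sketch of what such loading might look like with the safetensors library. It assumes the create_model helper from cldm/model.py (as in ControlNet); prefer the provided script if anything differs.

from safetensors.torch import load_file

from cldm.model import create_model  # assumed to exist here, as in ControlNet

model = create_model("./models/cldm_v15_unicontrol_v11.yaml").cpu()
model.load_state_dict(load_file("./ckpts/unicontrol_v1.1.st"), strict=False)
model = model.cuda()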

Canny to Image Generation:

python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task canny

HED Edge to Image Generation:

python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task hed 

HED-like Sketch to Image Generation:

python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task hedsketch

Depth Map to Image Generation:

python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task depth 

Surface Normal Map to Image Generation:

python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task normal

Segmentation Map to Image Generation:

python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task seg

Human Skeleton to Image Generation:

python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task openpose

Object Bounding Boxes to Image Generation:

python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task bbox

Image Outpainting:

python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task outpainting

Image Inpainting:

python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task inpainting

Image Deblurring:

python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task blur

Image Colorization:

python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task grayscale

Gradio Demo (App Demo Video; tested with CUDA 11.0 and Conda 4.12.0)

We provide Gradio demos for the different tasks. The example images are saved at ./test_imgs.

For all the tasks (Canny, HED, Sketch, Depth, Normal, Human Pose, Seg, Bbox, Outpainting, Colorization, Deblurring, Inpainting) please run the following code:

python app/gradio_all_tasks.py

We support direct condition-to-image generation (as shown above). Uncheck "Condition Extraction" in the UI if you want to upload a condition image directly.


Alternatively, we provide task-specific Gradio demos:

Canny to Image Generation:

python app/gradio_canny2image.py

HED Edge to Image Generation:

python app/gradio_hed2image.py

HED-like Sketch to Image Generation:

python app/gradio_hedsketch2image.py

Depth Map to Image Generation:

python app/gradio_depth2image.py

Surface Normal Map to Image Generation:

python app/gradio_normal2image.py

Segmentation Map to Image Generation:

python app/gradio_seg2image.py

Human Skeleton to Image Generation:

python app/gradio_pose2image.py

Object Bounding Boxes to Image Generation:

python app/gradio_bbox2image.py

Image Outpainting:

python app/gradio_outpainting.py

Image Colorization:

python app/gradio_colorization.py

Image Deblurring:

python app/gradio_deblur.py

Image Inpainting:

python app/gradio_inpainting.py

To Do

  • Data Preparation
  • Pre-training Tasks Inference
  • Gradio Demo
  • Model Training
  • HF Space
  • Colab

Tips

  • Negative prompts are sometimes very useful, e.g.: monochrome, lowres, bad anatomy, worst quality, low quality.

  • UniControl works well on some tasks (e.g., Colorization and Deblurring) without ANY text prompt.

  • If you hit OOM, setting --num_samples 1 may help (see the sketch below).
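
The sketch below shows how a negative prompt and num_samples typically enter the ControlNet-style sampling call used by these demos. It assumes the surrounding context of an app/gradio_*.py script (model, prompt, control, ddim_sampler, shape, ddim_steps, eta, and scale already defined) and is illustrative, not a verbatim excerpt.

# Assumes the context of an app/gradio_*.py demo script.
num_samples = 1  # lower this first if you run out of GPU memory
a_prompt = "best quality, extremely detailed"
n_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality"

cond = {"c_concat": [control],
        "c_crossattn": [model.get_learned_conditioning([prompt + ", " + a_prompt] * num_samples)]}
un_cond = {"c_concat": [control],
           "c_crossattn": [model.get_learned_conditioning([n_prompt] * num_samples)]}

samples, _ = ddim_sampler.sample(ddim_steps, num_samples, shape, cond,
                                 verbose=False, eta=eta,
                                 unconditional_guidance_scale=scale,
                                 unconditional_conditioning=un_cond)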

Citation

If you find this project useful for your research, please kindly cite our paper:

@article{qin2023unicontrol,
  title={UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild},
  author={Qin, Can and Zhang, Shu and Yu, Ning and Feng, Yihao and Yang, Xinyi and Zhou, Yingbo and Wang, Huan and Niebles, Juan Carlos and Xiong, Caiming and Savarese, Silvio and others},
  journal={arXiv preprint arXiv:2305.11147},
  year={2023}
}

Acknowledgement

This project is built upon the giant shoulders of ControlNet and Stable Diffusion. Many thanks to them!

Stable Diffusion https://github.com/CompVis/stable-diffusion

ControlNet https://github.com/lllyasviel/ControlNet

StyleGAN3 https://github.com/NVlabs/stylegan3

unicontrol's Issues

About trained model

I have thoroughly reviewed both your paper and the associated code. In the paper, it is mentioned that the model was trained on nine tasks. However, upon examining the code, it appears that the model is capable of being trained on twelve tasks, including a zero-shot task which is currently commented out.

Could you please clarify if the unicontrol.ckpt and unicontrol_v1.1.ckpt models were trained using the nine datasets specified in your paper, or if there were additional datasets involved?

How is it different from multi-controlnet?

The architecture might be different, but I am having trouble understanding whether this differs from multi-ControlNet for application purposes. Does it enable any new functionality, or does it improve the resulting image quality compared to multi-ControlNet?

About multi-gpus training

That's definitely impressive work!

I'm trying to reproduce some results on the inpainting task and have a concern about the data-parallel mode.
Referring to the code, batch_size is 4 for a single GPU and there are about 2.8M pairs of inpainting data, so the total logged steps are about 700k.
When I train on 8 GPUs, the total is still logged as 700k steps, and I've checked GPU memory usage: all GPUs are nearly fully utilized.
So I'm wondering whether the effective training batch size on 8 GPUs is 4*8 or not, or whether there is some misalignment in the logging (see the quick arithmetic sketch after this message).

Thanks for your time.
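
A quick back-of-the-envelope check of the numbers in this question, assuming DDP-style data parallelism where each process draws batch_size samples per step:

pairs = 2_800_000          # approximate number of inpainting pairs mentioned above
batch_size_per_gpu = 4
num_gpus = 8

steps_1_gpu = pairs // batch_size_per_gpu                 # 700,000 steps per epoch
steps_8_gpus = pairs // (batch_size_per_gpu * num_gpus)   # 87,500 steps per epoch
print(steps_1_gpu, steps_8_gpus)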

we need UniControl to work with automatic1111

Peace, and thanks a lot for this great project and very great work. We need UniControl to work with automatic1111, and we need separate models instead of one big model, so that we can have small task-specific models (Deblurring, Colorization, etc.) like ControlNet in automatic1111. Adding UniControl to automatic1111 would let a lot of people try it, because many people already have automatic1111 installed. This would be very great! 🤞

test dataset

Hi, the paper claims: "We have additionally collected a testing dataset for evaluation with 100-300 image condition-prompt triplets for each task. The source data is collected from Laion and COCO."
Is this test dataset available to the public?
Looking forward to your reply!

Can't draw a mask on Inpainting tab of Gradio Demo

Hello!

Thanks for this great work. I got some good results.

By the way, I would like to try inpainting with your Gradio demo on HF, but there seems to be no visual tool for drawing a mask on the original image. How can I try inpainting in the Gradio HF demo?

How can I incorporate textual inversion into UniControl? Will UniControl support textual inversion?

Hello, I'm aiming to incorporate textual inversion into your impressive project. To start, my initial objective is to extract the stable diffusion weights from the unicontrol_v1.1.ckpt model and then load these weights to the Hugging Face diffusers pipeline for training textual inversion. Subsequently, I'll need to introduce the requisite code to enable the model to employ the trained embedding.

Does this approach seem viable? If it does, could you guide me on the precise steps to successfully extract the stable diffusion weights from the unicontrol model?
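
A hedged sketch of the first step described above: splitting a UniControl checkpoint into its Stable Diffusion part and its control branch by key prefix. The "control_model." prefix follows the ControlNet convention this repository builds on; verify the actual key names before relying on this.

import torch

state = torch.load("./ckpts/unicontrol_v1.1.ckpt", map_location="cpu")
sd = state.get("state_dict", state)

# ControlNet-style checkpoints keep the control branch under "control_model."
# and the Stable Diffusion weights under the usual SD keys.
sd_only = {k: v for k, v in sd.items() if not k.startswith("control_model.")}
control_only = {k: v for k, v in sd.items() if k.startswith("control_model.")}

torch.save({"state_dict": sd_only}, "./ckpts/sd_from_unicontrol_sketch.ckpt")
print(len(sd_only), "SD tensors /", len(control_only), "control tensors")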

Difference between cldm_v15_unicontrol_v11.yaml and cldm_v15_unicontrol.yaml

Thanks for your great work! I notice there are two YAMLs and two ckpts. The YAMLs seem to correspond to cldm_unicontrol and cldm_unicontrol_v11. The main differences are: (1) whether zero_conv is used for self.input_hint_block_zeroconv_0 and self.input_hint_block_zeroconv_1; (2) whether self.input_hint_block_zeroconv_0[1] and self.input_hint_block_zeroconv_1[1] are applied to guided_hint; (3) whether task_id_layernet and task_id_hypernet are used. cldm_unicontrol_v11 uses the zero_convs and self.input_hint_block_zeroconv_x[1], and enables the task_id network structures.

I have a few questions: (1) does that mean unicontrol.ckpt corresponds to cldm_v15_unicontrol.yaml and unicontrol_v1.1.ckpt corresponds to cldm_v15_unicontrol_v11.yaml? (2) I am adapting the deblur task to another dataset; which YAML and ckpt should I use? (3) What is the purpose of the design differences above? (4) Using the released ckpt for deblurring, I got the following results, which seem to collapse; is that reasonable?

input image:
image

prompt:
""

output:
image

Error when running the demo

Hi, thanks for your great work!
When I run python gradio_all_tasks.py, it launches successfully, but when I upload an image and click Run, an error occurs (screenshot attached).
I wonder whether this is a CUDA version problem; I installed cudatoolkit 11.3 as specified in environment.yaml.
Thanks!

GPU memory for inference

Hey guys,
I encountered OOM during inference on a V100 with 16 GB of memory. The memory requirement seems very large. Can you provide some suggestions for lowering memory usage?
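
Two low-effort levers, following the ControlNet-style demos this repository builds on (a suggestion, not a guarantee): lower --num_samples, and enable the save_memory flag so the demos move inactive sub-models off the GPU via model.low_vram_shift(...), as visible in the traceback quoted in a later issue.

# In config.py at the repository root (ControlNet convention; verify the flag
# exists in this fork). When True, the demo scripts call
# model.low_vram_shift(is_diffusing=...) around encoding and sampling so that
# only the active sub-model sits on the GPU. Combine with --num_samples 1.
save_memory = True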

Multiple conditions / multiple tasks in one inference

How can I combine multiple task conditions in a single inference pass?

For example:

# condition 1 canny
        img = resize_image(HWC3(input_image), image_resolution)
        H, W, C = img.shape
        if condition_mode == True:
            detected_map = apply_canny(img, low_threshold, high_threshold)
            detected_map = HWC3(detected_map)
        else:
            detected_map = 255 - img

        control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
        control = torch.stack([control for _ in range(num_samples)], dim=0)
        control1 = einops.rearrange(control, 'b h w c -> b c h w').clone()

# condition 2  depth 
        img = resize_image(input_image, image_resolution)
        H, W, C = img.shape
        if condition_mode == True:
            detected_map = apply_hed(resize_image(input_image, detect_resolution))
            detected_map = HWC3(detected_map)
        else:
            detected_map = img
            
        detected_map = cv2.resize(detected_map, (W, H), interpolation=cv2.INTER_LINEAR)

        control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
        control = torch.stack([control for _ in range(num_samples)], dim=0)
        control2 = einops.rearrange(control, 'b h w c -> b c h w').clone()

      cond = {"c_concat": [control1, control2], "c_crossattn": [model.get_learned_conditioning([prompt + ', ' + a_prompt] * num_samples)], "task": task_dic}

DDIM Sampler:   0%|          | 0/31 [00:00<?, ?it/s]
Traceback (most recent call last):
  in <module>:53
    samples, intermediates = ddim_sampler.sample(ddim_steps, num_samples,
        shape, cond, verbose=False, eta=eta,
        unconditional_guidance_scale=scale,
        unconditional_conditioning=un_cond)
  /home/dell/.conda/envs/sd/lib/python3.8/site-packages/torch/utils/_contextlib.py:115 in decorate_context
    return func(*args, **kwargs)
  /home/dell/workspace/UniControl/cldm/ddim_unicontrol_hacked.py:113 in sample
    samples, intermediates = self.ddim_sampling(conditioning, size,
        callback=callback, img_callback=img_callback, quantize_denoised=quantize_x0, ...
  /home/dell/.conda/envs/sd/lib/python3.8/site-packages/torch/utils/_contextlib.py:115 in decorate_context
    return func(*args, **kwargs)
  /home/dell/workspace/UniControl/cldm/ddim_unicontrol_hacked.py:173 in ddim_sampling
    outs = self.p_sample_ddim(img, cond, ts, index=index, ...
        quantize_denoised=quantize_denoised, ...
        noise_dropout=noise_dropout, ...
        corrector_kwargs=corrector_kwargs, ...
  /home/dell/.conda/envs/sd/lib/python3.8/site-packages/torch/utils/_contextlib.py:115 in decorate_context
    return func(*args, **kwargs)
  /home/dell/workspace/UniControl/cldm/ddim_unicontrol_hacked.py:211 in p_sample_ddim
    if isinstance(c[k], list):
        c_in[k] = [torch.cat([
            unconditional_conditioning[k][i],
            c[k][i]]) for i in range(len(c[k]))]
TypeError: 'NoneType' object is not subscriptable
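
A hedged reading of the frames above: p_sample_ddim indexes unconditional_conditioning[k][i] in lockstep with c[k][i], so the unconditional dict must mirror cond's structure (same keys, same list lengths). A sketch of a matching un_cond for the two-condition example, reusing the variables from the snippet (n_prompt is a placeholder negative prompt):

un_cond = {
    "c_concat": [control1, control2],  # passing None here is what raises the TypeError above
    "c_crossattn": [model.get_learned_conditioning([n_prompt] * num_samples)],
    "task": task_dic,                  # skipped by the 'task' branch in p_sample_ddim
}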
