
clipasso's People

Contributors

spookyuser, yael-vinker

clipasso's Issues

freeze_support() Error on Mac M1

Hi,
I'm trying to get CLIPasso to draw on my MacBook M1.

RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

The full error is here: https://pastebin.com/eTQE7YSZ
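For reference, a minimal sketch of the guard the error message is asking for, applied to whichever script is the entry point (the main() name here is a placeholder). On macOS and Windows child processes are spawned rather than forked, so the entry point must not run at import time. Setting the DataLoader's num_workers to 0 is another common workaround, assuming the spawn is triggered by a multi-worker DataLoader:

    import multiprocessing

    def main():
        # ... put the top-level sketching code here ...
        pass

    if __name__ == "__main__":
        multiprocessing.freeze_support()  # only needed when freezing to an executable
        main()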

python: command not found

I pulled the Docker image and ran the demo, but I get a 'python: command not found' error.
It seems the Python interpreter in the Docker image is not set up correctly.
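If the image also ships a Python 3 interpreter alongside an older system python, invoking it explicitly may be enough; this is an assumption about how the image is set up, e.g.:

python3 run_object_sketching.py --target_file "camel.png"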

How to mount files when running Docker?

Using the recommended command
docker run --name clipsketch -it yaelvinker/clipasso_docker /bin/bash
does not mount files from a local path into the container, but I need a mounted path in order to update my files.

In other words, how do I do the equivalent of docker run -v path1:path2 xxx?
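For reference, the bind-mount form of the same command; the host and container paths below are placeholders, to be adjusted to wherever the repo lives inside the image:

docker run --name clipsketch -v /path/on/host:/path/in/container -it yaelvinker/clipasso_docker /bin/bash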

SystemExit: 1 Error

When testing out the default settings in the Colab notebook I am getting the following error in the 'Start Sketching' stage:

Results will be saved to
[/content/CLIPasso/output_sketches/rose/] ...

GPU: True
An exception has occurred, use %tb to see the full traceback.
SystemExit: 1

I've run the notebook in the past successfully so this seems to be a new development.

(screenshot: Screen Shot 2022-05-07 at 2.31.41 PM)

Running Colab as-is raises an exception

Running the base settings in the Colab notebook causes the following exception: torch.nn.modules.module.ModuleAttributeError: 'ModifiedResNet' object has no attribute 'relu'. This can be solved by replacing m.relu with nn.ReLU(inplace=True), but the resulting images are much lower quality than the ones published in the paper.

(screenshot: Screen Shot 2022-05-03 at 10.04.27 PM)

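For reference, a hedged sketch of the replacement described above: swapping whatever ReLU attribute the ModifiedResNet modules expose for a fresh nn.ReLU. The relu1/relu2/relu3 names come from newer OpenAI CLIP releases, and the RN101 model name is an assumption; the exact line patched in CLIPasso's loss.py may differ:

    import torch.nn as nn
    import clip

    # Load a CLIP ResNet backbone (model name is an assumption).
    model, _ = clip.load("RN101", device="cpu", jit=False)

    # Replace whichever ReLU attribute exists on each module of the visual
    # encoder, mirroring the m.relu -> nn.ReLU(inplace=True) workaround above.
    for m in model.visual.modules():
        for attr in ("relu", "relu1", "relu2", "relu3"):
            if hasattr(m, attr) and isinstance(getattr(m, attr), nn.ReLU):
                setattr(m, attr, nn.ReLU(inplace=True))

If loss.py expects the older single relu attribute, pinning the OpenAI CLIP dependency to the commit the authors used is likely the cleaner fix than patching the module.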

Diffvg not compiled with GPU

(error screenshot attached)

Hi, I would like to ask how to solve this problem. I followed the installation guide in your GitHub repo, but I get the error shown above. Do you know how to fix it? Thanks!
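A small diagnostic sketch that may help narrow this down; set_use_gpu/get_use_gpu are part of pydiffvg's Python API, but whether a given build was compiled with CUDA depends on what was visible at install time, so treat the rebuild suggestion as an assumption:

    import torch
    import pydiffvg

    # If this prints False, PyTorch itself cannot see a GPU, and diffvg was
    # almost certainly built without CUDA support; fix the CUDA/PyTorch setup
    # first, then rebuild diffvg (python setup.py install) in that environment.
    print("torch sees CUDA:", torch.cuda.is_available())

    # Ask diffvg to use the GPU and check what it actually reports back.
    pydiffvg.set_use_gpu(torch.cuda.is_available())
    print("pydiffvg using GPU:", pydiffvg.get_use_gpu())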

Problems with pulling the image

Hello, when I tried to upload the Docker image to my server I found that the image needs to be smaller than 5 GB. May I ask why this image is 20 GB? Is it because there are datasets in it? Is it possible to make it smaller?
Thanks!

Demo on Windows - RuntimeError with freeze_support

Hi,

I created an environment following the README, and I get the following error on Windows 10 with a GPU:

raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

   This probably means that you are not using fork to start your
  child processes and you have forgotten to use the proper idiom
   in the main module:

      if __name__ == '__main__':
           freeze_support()
           ...

   The "freeze_support()" line can be omitted if the program
   is not going to be frozen to produce an executable.

Thanks in advance!

render_warp does not return

Thanks for your interesting work. We ran the demo, but the code blocks at line 157 of painter_params.py:

    img = _render(self.canvas_width,   # width
                  self.canvas_height,  # height
                  2,    # num_samples_x
                  2,    # num_samples_y
                  0,    # seed
                  None,
                  *scene_args)
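To check whether the hang is in diffvg itself rather than in CLIPasso, a minimal render adapted from diffvg's single-circle example can be run in the same environment; the shape and sizes are arbitrary, and the positional render arguments mirror the call above:

    import torch
    import pydiffvg

    canvas_width, canvas_height = 224, 224
    circle = pydiffvg.Circle(radius=torch.tensor(40.0),
                             center=torch.tensor([112.0, 112.0]))
    group = pydiffvg.ShapeGroup(shape_ids=torch.tensor([0]),
                                fill_color=torch.tensor([0.0, 0.0, 0.0, 1.0]))
    scene_args = pydiffvg.RenderFunction.serialize_scene(
        canvas_width, canvas_height, [circle], [group])

    render = pydiffvg.RenderFunction.apply
    img = render(canvas_width, canvas_height,
                 2,     # num_samples_x
                 2,     # num_samples_y
                 0,     # seed
                 None,  # background image
                 *scene_args)
    print(img.shape)  # if this also blocks, the problem is in the diffvg build, not CLIPasso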

Colab FileNotFoundError

Hey, Yael-Vinker!
I hit a FileNotFoundError when running the Colab:
FileNotFoundError: [Errno 2] No such file or directory: '/content/CLIPasso/output_sketches/camel//camel_32strokes_seed0/config.npy'
I wonder what is supposed to be there?

Huggingface Hub

Hi, would you be interested in adding CLIPasso to the Hugging Face Hub? The Hub offers free hosting, and it would make your work more accessible and visible to the rest of the ML community. We can set up an organization or a user under which CLIPasso can be added, similar to GitHub.

Example from other organizations:
Keras: https://huggingface.co/keras-io
Microsoft: https://huggingface.co/microsoft
Facebook: https://huggingface.co/facebook

Example spaces with repos:
github: https://github.com/salesforce/BLIP
Spaces: https://huggingface.co/spaces/akhaliq/BLIP

github: https://github.com/facebookresearch/omnivore
Spaces: https://huggingface.co/spaces/akhaliq/omnivore

And here are guides for adding Spaces, models, and datasets to your org:

How to add a Space: https://huggingface.co/blog/gradio-spaces
How to add models: https://huggingface.co/docs/hub/adding-a-model
How to upload a dataset: https://huggingface.co/docs/datasets/upload_dataset.html

Please let us know if you would be interested. If you have any questions, we can also help with the technical implementation.

Also, when trying out the model in Colab I get this error:

Access denied with the following error:

Cannot retrieve the public link of the file. You may need to change
the permission to 'Anyone with the link', or have had many accesses. 

You may still be able to access the file from the browser:

 https://drive.google.com/uc?id=1ao1ovG1Qtx4b7EoskHXmi2E9rp5CHLcZ 

Another benefit of the Hugging Face Hub is that you don't need to host the models on Google Drive, which avoids these issues.

I have added the U2Net model here: https://huggingface.co/akhaliq/CLIPasso/blob/main/u2net.pth. It can be used in Spaces and downloaded for the Colab notebook.

Thanks

Problem with cmake

subprocess.CalledProcessError: Command '['cmake', 'E:\CLIPasso\diffvg', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=E:\CLIPasso\diffvg\build\lib.win-amd64-3.7', '-DPYTHON_INCLUDE_PATH=C:\APPS\Anaconda3\envs\CLIPASSO\Include', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=E:\CLIPasso\diffvg\build\lib.win-amd64-3.7', '-DCMAKE_RUNTIME_OUTPUT_DIRECTORY_RELEASE=E:\CLIPasso\diffvg\build\lib.win-amd64-3.7', '-A', 'x64', '-DDIFFVG_CUDA=1']' returned non-zero exit status 1.

Does anyone have the same problem?

ModuleAttributeError: 'ModifiedResNet' object has no attribute 'relu'

I found that the output results in Docker and on my local computer are very different with the same environment and code.

When I run it in Docker, it runs successfully without any errors. But when I run exactly the same code, unchanged, on my own computer, it raises an error:

torch.nn.modules.module.ModuleAttributeError: 'ModifiedResNet' object has no attribute 'relu'

It comes from line 455 in loss.py:

(screenshot of the relevant loss.py lines)

If I change m.relu to m.relu1 or nn.ReLU(), the error goes away. However, the result looks very weird; it seems the network does not keep updating.

(image: best_iter output sketch)

I don't know why this happens or how to solve this kind of problem. The versions of the Python packages (torch, etc.) are exactly the same. The only difference may be that CUDA is 11.6 on the local computer and 10.3 in Docker. Could that affect the result?

Could you give me some advice?

Many thanks!

Build failing on Colab

/content/CLIPasso
Collecting git+https://github.com/openai/CLIP.git
Cloning https://github.com/openai/CLIP.git to /tmp/pip-req-build-elu69obq
Running command git clone -q https://github.com/openai/CLIP.git /tmp/pip-req-build-elu69obq
Resolved https://github.com/openai/CLIP.git to commit 40f5484c1c74edd83cb9cf687c6ab92b28d8b656
Requirement already satisfied: ftfy in /usr/local/lib/python3.7/dist-packages (from clip==1.0) (6.0.3)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from clip==1.0) (2021.11.10)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from clip==1.0) (4.62.1)
Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (from clip==1.0) (1.7.1+cu101)
Requirement already satisfied: torchvision in /usr/local/lib/python3.7/dist-packages (from clip==1.0) (0.8.2+cu101)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from ftfy->clip==1.0) (0.2.5)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch->clip==1.0) (3.10.0.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torch->clip==1.0) (1.20.3)
Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.7/dist-packages (from torchvision->clip==1.0) (8.2.0)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
fatal: destination path 'diffvg' already exists and is not an empty directory.
/content/CLIPasso/diffvg
running install
running bdist_egg
running egg_info
writing diffvg.egg-info/PKG-INFO
writing dependency_links to diffvg.egg-info/dependency_links.txt
writing requirements to diffvg.egg-info/requires.txt
writing top-level names to diffvg.egg-info/top_level.txt
adding license file 'LICENSE'
writing manifest file 'diffvg.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
running build_ext
-- pybind11 v2.6.0 dev
-- Using pybind11: (version "2.6.0" dev)
-- Build with CUDA support
-- Configuring done
-- Generating done
-- Build files have been written to: /content/CLIPasso/diffvg/build/temp.linux-x86_64-3.7
[ 9%] Building CXX object pydiffvg_tensorflow/custom_ops/CMakeFiles/diffvg_tf_data_ptr_cxx11_abi.dir/data_ptr.cc.o
[ 18%] Building CXX object pydiffvg_tensorflow/custom_ops/CMakeFiles/diffvg_tf_data_ptr_no_cxx11_abi.dir/data_ptr.cc.o
[ 27%] Linking CXX shared module ../lib.linux-x86_64-3.7/diffvg.so
[ 81%] Built target diffvg
In file included from /usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/tensor.h:25:0,
from /usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/attr_value_util.h:24,
from /usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/node_def_util.h:23,
from /usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/full_type_util.h:24,
from /usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/op.h:25,
from /content/CLIPasso/diffvg/pydiffvg_tensorflow/custom_ops/data_ptr.cc:8:
/usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/tensor_types.h: In member function ‘void tensorflow::internal::MaybeWith32BitIndexingImplEigen::GpuDevice::operator()(Func, Args&& ...) const’:
/usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/tensor_types.h:176:25: error: use of ‘auto’ in lambda parameter declaration only available with -std=c++14 or -std=gnu++14
auto all = [](const auto&... bool_vals) {
^~~~
/usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/tensor_types.h:176:34: error: expansion pattern ‘const int&’ contains no argument packs
auto all = [](const auto&... bool_vals) {
^~~~~~~~~
/usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/tensor_types.h: In lambda function:
/usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/tensor_types.h:177:22: error: ‘bool_vals’ was not declared in this scope
for (bool b : {bool_vals...}) {
^~~~~~~~~
In file included from /usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/tensor.h:25:0,
from /usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/attr_value_util.h:24,
from /usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/node_def_util.h:23,
from /usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/full_type_util.h:24,
from /usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/op.h:25,
from /content/CLIPasso/diffvg/pydiffvg_tensorflow/custom_ops/data_ptr.cc:8:
/usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/tensor_types.h: In member function ‘void tensorflow::internal::MaybeWith32BitIndexingImplEigen::GpuDevice::operator()(Func, Args&& ...) const’:
/usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/tensor_types.h:176:25: error: use of ‘auto’ in lambda parameter declaration only available with -std=c++14 or -std=gnu++14
auto all = [](const auto&... bool_vals) {
^~~~
/usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/tensor_types.h:176:34: error: expansion pattern ‘const int&’ contains no argument packs
auto all = [](const auto&... bool_vals) {
^~~~~~~~~
/usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/tensor_types.h: In lambda function:
/usr/local/lib/python3.7/dist-packages/tensorflow/include/tensorflow/core/framework/tensor_types.h:177:22: error: ‘bool_vals’ was not declared in this scope
for (bool b : {bool_vals...}) {
^~~~~~~~~
pydiffvg_tensorflow/custom_ops/CMakeFiles/diffvg_tf_data_ptr_cxx11_abi.dir/build.make:62: recipe for target 'pydiffvg_tensorflow/custom_ops/CMakeFiles/diffvg_tf_data_ptr_cxx11_abi.dir/data_ptr.cc.o' failed
make[2]: *** [pydiffvg_tensorflow/custom_ops/CMakeFiles/diffvg_tf_data_ptr_cxx11_abi.dir/data_ptr.cc.o] Error 1
CMakeFiles/Makefile2:184: recipe for target 'pydiffvg_tensorflow/custom_ops/CMakeFiles/diffvg_tf_data_ptr_cxx11_abi.dir/all' failed
make[1]: *** [pydiffvg_tensorflow/custom_ops/CMakeFiles/diffvg_tf_data_ptr_cxx11_abi.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
pydiffvg_tensorflow/custom_ops/CMakeFiles/diffvg_tf_data_ptr_no_cxx11_abi.dir/build.make:62: recipe for target 'pydiffvg_tensorflow/custom_ops/CMakeFiles/diffvg_tf_data_ptr_no_cxx11_abi.dir/data_ptr.cc.o' failed
make[2]: *** [pydiffvg_tensorflow/custom_ops/CMakeFiles/diffvg_tf_data_ptr_no_cxx11_abi.dir/data_ptr.cc.o] Error 1
CMakeFiles/Makefile2:147: recipe for target 'pydiffvg_tensorflow/custom_ops/CMakeFiles/diffvg_tf_data_ptr_no_cxx11_abi.dir/all' failed
make[1]: *** [pydiffvg_tensorflow/custom_ops/CMakeFiles/diffvg_tf_data_ptr_no_cxx11_abi.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
Traceback (most recent call last):
File "setup.py", line 98, in
zip_safe = False)
File "/usr/local/lib/python3.7/dist-packages/setuptools/init.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.7/dist-packages/setuptools/command/install.py", line 67, in run
self.do_egg_install()
File "/usr/local/lib/python3.7/dist-packages/setuptools/command/install.py", line 109, in do_egg_install
self.run_command('bdist_egg')
File "/usr/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.7/dist-packages/setuptools/command/bdist_egg.py", line 164, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "/usr/local/lib/python3.7/dist-packages/setuptools/command/bdist_egg.py", line 150, in call_command
self.run_command(cmdname)
File "/usr/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.7/dist-packages/setuptools/command/install_lib.py", line 11, in run
self.build()
File "/usr/lib/python3.7/distutils/command/install_lib.py", line 109, in build
self.run_command('build_ext')
File "/usr/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "setup.py", line 31, in run
super().run()
File "/usr/local/lib/python3.7/dist-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/usr/local/lib/python3.7/dist-packages/Cython/Distutils/old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "/usr/lib/python3.7/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/usr/local/lib/python3.7/dist-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
_build_ext.build_ext.build_extensions(self)
File "/usr/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
self._build_extensions_serial()
File "/usr/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
self.build_extension(ext)
File "setup.py", line 65, in build_extension
subprocess.check_call(['cmake', '--build', '.'] + build_args, cwd=self.build_temp)
File "/usr/lib/python3.7/subprocess.py", line 363, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'Release', '--', '-j8']' returned non-zero exit status 2.

Some errors on Google Colab.

Hi yael-vinker, nice work on CLIPasso! Thank you.
I tried to run it on Google Colab but I got some errors, so I will share the changes I made to get it working.

  1. Downgrade TensorFlow to version 1.x.
    %tensorflow_version 1.x
    import tensorflow as tf
    print(tf.__version__)

  2. Add "pydiffvg" path to c compiled file?
    import sys
    sys.path.append("/content/CLIPasso/diffvg/build/lib.linux-x86_64-3.7")

  3. Upgrade Pillow.
    !pip3 install --upgrade Pillow

RuntimeError: radix_sort: failed on 1st step: cudaErrorInvalidDevice: invalid device ordinal

I get the following error when running the code:
Unexpected error occurred:
radix_sort: failed on 1st step: cudaErrorInvalidDevice: invalid device ordinal
Traceback (most recent call last):
File "painterly_rendering.py", line 208, in
configs_to_save = main(args)
File "painterly_rendering.py", line 130, in main
loss.backward()
File "/home/wyf/anaconda3/envs/py37/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/wyf/anaconda3/envs/py37/lib/python3.7/site-packages/torch/autograd/init.py", line 132, in backward
allow_unreachable=True) # allow_unreachable flag
File "/home/wyf/anaconda3/envs/py37/lib/python3.7/site-packages/torch/autograd/function.py", line 89, in apply
return self._forward_cls.backward(self, *args) # type: ignore
File "/home/wyf/anaconda3/envs/py37/lib/python3.7/site-packages/diffvg-0.0.1-py3.7-linux-x86_64.egg/pydiffvg/render_pytorch.py", line 709, in backward
eval_positions.shape[0])

I installed all the libraries as required and trained on a 2080 Ti. I tried the fix from BachiLi/diffvg#29, but it did not work.
Can anyone help me?
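A hedged sketch of one thing to try before rebuilding diffvg: make the process see a single GPU and keep diffvg and PyTorch pointed at the same device (pydiffvg.set_device and set_use_gpu are part of diffvg's Python API; whether this resolves this particular radix_sort failure is an assumption):

    import os
    # Expose only the GPU you intend to use, before torch/diffvg initialise CUDA
    # ("0" here is an example index).
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"

    import torch
    import pydiffvg

    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    pydiffvg.set_use_gpu(torch.cuda.is_available())
    pydiffvg.set_device(device)  # keep diffvg's device in sync with the tensors it receives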

No such file or directory: '..../camel_16strokes_seed0/config.npy' (Docker installation)

I encountered the same issue. I did the Docker installation as per the instructions, but when running python run_object_sketching.py ... the following happens:

Processing [camel.png] ...
Results will be saved to
[/home/vinker/CLIPSketch/output_sketches/camel/] ...

GPU: True
100%|████████████████████████████████████████| 278M/278M [04:57<00:00, 982kiB/s]
Traceback (most recent call last):
File "run_object_sketching.py", line 99, in
run(seed, wandb_name)
File "run_object_sketching.py", line 83, in run
allow_pickle=True)[()]
File "/home/vinker/miniconda/envs/habitat/lib/python3.7/site-packages/numpy/lib/npyio.py", line 417, in load
fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: '/home/vinker/CLIPSketch/output_sketches/camel//camel_16strokes_seed0/config.npy'

Originally posted by @jthteo in #1 (comment)

The Python version in the Docker container of this project is too low

I used the Docker image to install the dependencies and successfully started the project container, then ran:

python run_object_sketching.py --target_file "flamingo.png" --num_strokes 8

(sketching the flamingo with a higher level of abstraction, using 8 strokes). This fails with an error at line 46 of run_object_sketching.py:

target = f"{abs_path}/target_images/{args.target_file}"

Checking the Python version in the Docker container gives 2.7.5, so I infer that the wrong Python interpreter is being used; the correct version should be 3.6 or higher.

(error screenshot attached)

ClipLoss - RuntimeError: cannot register a hook on a tensor that doesn't require gradient

Hi, thanks for the nice work and great repo!
I changed the config to train_with_clip=1 to include ClipLoss.

Then, I am getting the following error in the eval step:
(screenshot attached)

File "/app/CLIP_/clip/model.py", line 347, in encode_image
return self.visual(image.type(self.dtype))
File "/home/miniconda/envs/habitat/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in call_impl
result = self.forward(*input, **kwargs)
File "/app/CLIP
/clip/model.py", line 238, in forward
x = self.transformer(x)
File "/home/miniconda/envs/habitat/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in call_impl
result = self.forward(*input, **kwargs)
File "/app/CLIP
/clip/model.py", line 209, in forward
return self.resblocks(x)
File "/home/miniconda/envs/habitat/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in call_impl
result = self.forward(*input, **kwargs)
File "/home/miniconda/envs/habitat/lib/python3.7/site-packages/torch/nn/modules/container.py", line 117, in forward
input = module(input)
File "/home/miniconda/envs/habitat/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in call_impl
result = self.forward(*input, **kwargs)
File "/app/CLIP
/clip/model.py", line 196, in forward
x = x + self.attention(self.ln_1(x))
File "/app/CLIP
/clip/model.py", line 193, in attention
attention_probs_backwards_hook=self.set_attn_grad)[0]
File "/home/miniconda/envs/habitat/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in call_impl
result = self.forward(*input, **kwargs)
File "/app/CLIP
/clip/auxilary.py", line 422, in forward
attention_probs_backwards_hook=attention_probs_backwards_hook)
File "/app/CLIP_/clip/auxilary.py", line 250, in multi_head_attention_forward
attn_output_weights.register_hook(attention_probs_backwards_hook)
File "/home/miniconda/envs/habitat/lib/python3.7/site-packages/torch/tensor.py", line 257, in register_hook
raise RuntimeError("cannot register a hook on a tensor that "
RuntimeError: cannot register a hook on a tensor that doesn't require gradient

Am I missing something? Thanks!
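For what it's worth, the usual way around this error is to only register the backward hook when the tensor is actually tracked by autograd, e.g. skipping it during an eval pass run under torch.no_grad(). A self-contained sketch of that guard (the helper name is made up; where exactly it would belong in CLIP_/clip/auxilary.py is an assumption based on the traceback):

    import torch

    def register_hook_if_possible(tensor, hook):
        # Hooks can only be attached to tensors that participate in autograd;
        # under torch.no_grad() (a typical eval step) requires_grad is False.
        if hook is not None and tensor.requires_grad:
            tensor.register_hook(hook)

    x = torch.randn(3, requires_grad=True)
    with torch.no_grad():
        y = x * 2                                    # y.requires_grad is False here
        register_hook_if_possible(y, lambda g: g)    # silently skipped, no RuntimeError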

Error with Google Colab

When I run part 1, "Install Dependencies and Clone the Repo", I get these errors:

(error screenshots attached)

I ignored them and got another error in part 3:

(screenshot attached)

And ultimately, when I try to run Start Sketching, it gets stuck on this:

(screenshot attached)

u2net.pth cannot be downloaded automatically due to a permission issue

When I run this command from the provided Docker image:
python run_object_sketching.py --target_file "camel.png"

It shows this error:

Access denied with the following error:

        Cannot retrieve the public link of the file. You may need to change
        the permission to 'Anyone with the link', or have had many accesses. 

You may still be able to access the file from the browser:

         https://drive.google.com/uc?id=1ao1ovG1Qtx4b7EoskHXmi2E9rp5CHLcZ 
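A hedged manual workaround, assuming the Hugging Face mirror mentioned in another issue (https://huggingface.co/akhaliq/CLIPasso/blob/main/u2net.pth) is still available and that CLIPasso looks for the checkpoint under U2Net_/saved_models/ in the repo root; verify that path against the download code in your checkout:

    import pathlib
    import urllib.request

    # Standard "resolve" form of the Hugging Face file URL above.
    url = "https://huggingface.co/akhaliq/CLIPasso/resolve/main/u2net.pth"
    dest = pathlib.Path("U2Net_/saved_models/u2net.pth")  # assumed destination
    dest.parent.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(url, str(dest))
    print("saved to", dest.resolve())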

Code question

    class CLIPVisualEncoder(nn.Module):
        def __init__(self, clip_model):
            super().__init__()
            self.clip_model = clip_model
            self.featuremaps = None

            for i in range(12):  # 12 resblocks in VIT visual transformer
                self.clip_model.visual.transformer.resblocks[i].register_forward_hook(
                    self.make_hook(i))

        def make_hook(self, name):
            def hook(module, input, output):
                if len(output.shape) == 3:
                    self.featuremaps[name] = output.permute(
                        1, 0, 2)  # LND -> NLD bs, smth, 768
                else:
                    self.featuremaps[name] = output

            return hook

This function cannot obtain the feature maps.
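For context, a minimal, self-contained variant of this hook pattern that does collect feature maps: the key differences are that featuremaps must be a dict, (re)initialised before each forward pass, and that a forward() must actually run the image through CLIP so the hooks fire. The ViT-B/32 model name and the 12-block count mirror the comment in the snippet above; treat them as assumptions:

    import torch
    import torch.nn as nn
    import clip

    class CLIPVisualEncoder(nn.Module):
        def __init__(self, clip_model):
            super().__init__()
            self.clip_model = clip_model
            self.featuremaps = {}
            for i in range(12):  # 12 resblocks in the ViT visual transformer
                self.clip_model.visual.transformer.resblocks[i].register_forward_hook(
                    self.make_hook(i))

        def make_hook(self, name):
            def hook(module, input, output):
                if len(output.shape) == 3:
                    self.featuremaps[name] = output.permute(1, 0, 2)  # LND -> NLD
                else:
                    self.featuremaps[name] = output
            return hook

        def forward(self, x):
            self.featuremaps = {}                # reset before every pass
            _ = self.clip_model.encode_image(x)  # running the model fires the hooks
            return [self.featuremaps[i] for i in range(12)]

    model, _ = clip.load("ViT-B/32", device="cpu", jit=False)
    encoder = CLIPVisualEncoder(model)
    feats = encoder(torch.randn(1, 3, 224, 224))
    print(len(feats), feats[0].shape)  # 12 feature maps, each (1, 50, 768) for ViT-B/32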

Editing the Brush Style on SVGs

Congratulations, CLIPasso is very impressive work. I have a few questions: how can I change the definition of the primitives, or reproduce the brush effects displayed on your project home page? Thank you.
