
paints-undo's People

Contributors

lllyasviel, mohamedalirashad


paints-undo's Issues

Generate video error

Load to GPU: UNet3DModel
Unload to CPU: UNet3DModel
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 528, in process_events
response = await route_utils.call_process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 270, in call_process_api
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1908, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1485, in call_function
prediction = await anyio.to_thread.run_sync(
File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 808, in wrapper
response = f(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/app/Paints-UNDO/gradio_app.py", line 222, in process_video
frames, im1, im2 = process_video_inner(
File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/app/Paints-UNDO/gradio_app.py", line 196, in process_video_inner
latents = video_pipe(
File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/app/Paints-UNDO/diffusers_vdm/pipeline.py", line 183, in call
results = dynamic_tsnr_model(latent_shape, steps, extra_args=sampler_kwargs, progress_tqdm=progress_tqdm)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/app/Paints-UNDO/diffusers_vdm/dynamic_tsnr_sampler.py", line 140, in forward
model_output = self.model_apply(x, t * s_in, **extra_args)
File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/app/Paints-UNDO/diffusers_vdm/dynamic_tsnr_sampler.py", line 173, in model_apply
p = self.unet(x, t, **extra_args['positive'])
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/app/Paints-UNDO/diffusers_vdm/unet.py", line 595, in forward
emb = self.time_embed(t_emb)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/container.py", line 217, in forward
input = module(input)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 116, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
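
This RuntimeError means one side of the matmul was still on the CPU while the other was on cuda:0; the memory-management log above shows UNet3DModel being swapped between devices right before the failure. A minimal sketch that reproduces the mismatch and the generic fix, independent of this repo (assuming a CUDA build of PyTorch and a visible GPU):

import torch

layer = torch.nn.Linear(4, 4).cuda()       # weights on cuda:0
x = torch.zeros(1, 4)                       # input accidentally left on cpu
try:
    layer(x)                                # raises the same 'mat1' RuntimeError
except RuntimeError as e:
    print(e)
x = x.to(next(layer.parameters()).device)   # move the input to the module's device
print(layer(x).device)                      # cuda:0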

This is my experience of deploying and running it successfully:

Open Anaconda Powershell Prompt (miniconda3)

Create and activate the Conda environment

conda create -n paints_undo python=3.10

Expected result: After successfully creating the environment, you will see output similar to the following:

...

done

# To activate this environment, use

# $ conda activate paints_undo

# To deactivate an active environment, use

# $ conda deactivate

conda activate paints_undo

Expected result: After successfully activating the environment, the command prompt will start with (paints_undo).

Install PyTorch and related libraries

conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia

Expected result: After successful installation, you will see output similar to the following:

...

Proceed ([y]/n)? y

...

done

Clone and enter the project directory

git clone https://github.com/lllyasviel/Paints-UNDO.git
cd Paints-UNDO

Expected result: After successfully cloning, you will see output similar to the following:

Cloning into 'Paints-UNDO'...

...

Install project dependencies

pip install xformers

Expected result: After successful installation, you will see output similar to the following:

...

Successfully installed xformers-

pip install -r requirements.txt

Expected result: After successful installation, you will see output similar to the following:

...

Successfully installed

Resolve dependency conflicts

pip uninstall huggingface_hub gradio gradio-client transformers diffusers peft tokenizers httpx httpcore

Expected result: After successful uninstallation, you will see output similar to the following:

...

Successfully uninstalled

pip install huggingface_hub==0.23.2
pip install gradio==3.1.4
pip install gradio_client==0.1.3
pip install transformers==4.23.1
pip install diffusers==0.3.0
pip install peft==0.2.0
pip install tokenizers==0.13.2
pip install httpx==0.23.0
pip install httpcore==0.15.0

Expected result: After successful installation, you will see output similar to the following:

...

Successfully installed

Verify if CUDA is available

python -c "import torch; print(torch.cuda.is_available())"

Expected result: If CUDA is available, you will see the output True:

True

Run the application

python gradio_app.py

Expected result: If everything is correct, the Gradio application will start, and a local server address such as http://127.0.0.1:7860 will be displayed in the command line, which you can open in a browser to access the application.
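
Optionally, before launching, you can confirm that the pinned versions actually took effect (all three packages expose __version__):

python -c "import gradio, transformers, diffusers; print(gradio.__version__, transformers.__version__, diffusers.__version__)"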

Unable to run step 1 (generate prompt)

I am located in mainland China and cannot directly access Hugging Face, so I used a mirror to download the 19 required files. The download completed and the files are read locally, but I still cannot run step 1.

(sd) linjl@bme-server:~/Paints-UNDO$ python gradio_app.py
/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
The config attributes {'shift_factor': None, 'use_post_quant_conv': True, 'use_quant_conv': True} were passed to AutoencoderKL, but are not expected and will be ignored. Please verify your config.json configuration file.
Fetching 19 files: 100%|██████████| 19/19 [00:00<00:00, 90972.35it/s]
Loading weights from local directory
Loading weights from local directory
Loading weights from local directory
Unload to CPU: AutoencoderKL
Unload to CPU: UNet3DModel
Unload to CPU: CLIPTextModel
Unload to CPU: ModifiedUNet
Unload to CPU: ImprovedCLIPVisionModelWithProjection
Unload to CPU: VideoAutoencoderKL
Unload to CPU: CLIPTextModel
Unload to CPU: Resampler
/home/linjl/Paints-UNDO/diffusers_helper/k_diffusion.py:43: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  alphas_cumprod = torch.tensor(np.cumprod(alphas, axis=0), dtype=torch.float32)
Running on local URL:  http://0.0.0.0:7861

To create a public link, set `share=True` in `launch()`.
IMPORTANT: You are using gradio version 4.27.0, however version 4.29.0 is available, please upgrade.
--------
Traceback (most recent call last):
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/urllib/request.py", line 1348, in do_open
    h.request(req.get_method(), req.selector, req.data, headers,
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/http/client.py", line 1283, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/http/client.py", line 1329, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/http/client.py", line 1278, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/http/client.py", line 1038, in _send_output
    self.send(msg)
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/http/client.py", line 976, in send
    self.connect()
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/http/client.py", line 1448, in connect
    super().connect()
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/http/client.py", line 942, in connect
    self.sock = self._create_connection(
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/socket.py", line 845, in create_connection
    raise err
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/socket.py", line 833, in create_connection
    sock.connect(sa)
OSError: [Errno 101] Network is unreachable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/gradio/queueing.py", line 527, in process_events
    response = await route_utils.call_process_api(
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/gradio/route_utils.py", line 261, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/gradio/blocks.py", line 1788, in process_api
    result = await self.call_function(
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/gradio/blocks.py", line 1340, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/gradio/utils.py", line 759, in wrapper
    response = f(*args, **kwargs)
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/linjl/Paints-UNDO/gradio_app.py", line 115, in interrogator_process
    return wd14tagger.default_interrogator(x)
  File "/home/linjl/Paints-UNDO/wd14tagger.py", line 33, in default_interrogator
    model_onnx_filename = download_model(
  File "/home/linjl/Paints-UNDO/wd14tagger.py", line 23, in download_model
    download_url_to_file(url=url, dst=temp_path)
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/torch/hub.py", line 622, in download_url_to_file
    u = urlopen(req)
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/urllib/request.py", line 216, in urlopen
    return opener.open(url, data, timeout)
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/urllib/request.py", line 519, in open
    response = self._open(req, data)
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/urllib/request.py", line 536, in _open
    result = self._call_chain(self.handle_open, protocol, protocol +
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/urllib/request.py", line 496, in _call_chain
    result = func(*args)
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/urllib/request.py", line 1391, in https_open
    return self.do_open(http.client.HTTPSConnection, req,
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/urllib/request.py", line 1351, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 101] Network is unreachable>
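
The traceback shows wd14tagger.py downloading its ONNX tagger over the network at run time, which is exactly what fails here. A hedged workaround is to fetch the file through a mirror beforehand; the source repo id below is an assumption, and whether download_model then skips an existing ./wd-v1-4-moat-tagger-v2.onnx (the filename reported in another issue further down) depends on its implementation:

import os
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'  # assumed mirror endpoint; set before importing huggingface_hub

from huggingface_hub import hf_hub_download

# Assumed source repo for the tagger weights; copy the downloaded file next to gradio_app.py.
path = hf_hub_download('SmilingWolf/wd-v1-4-moat-tagger-v2', 'model.onnx')
print('downloaded to', path)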

RTX 2070S 8Gb running OK

I'd like to add that it takes about 12 minutes for the process to finish using the default settings on an RTX 2070S GPU with 8 GB of VRAM. OOM does not happen, but for about 5-10% of the run time VRAM spills over into shared RAM, with a total load of ~12.6 GB.

Dataset and Training details ?

I would love to know how the training data for this model was created. It looks and works like a T2V model, but I'm curious how the layers/timesteps were curated for training.

Thanks for destroying art

Are we now going to fuck artists over even more?

If anything, no piece of art can be trusted to not be AI-generated anymore. And if you try to prove you've made it, well, nope. This now allows anyone to fake anything. Nobody will trust you anymore. This is fucked up beyond recognition.

You killed art, forever. You don't understand the craft, yet you try to mimic it.

I wish for justice. May your own creation destroy you.

Running step 1, I get an error message

I successfully reached the Gradio screen, but when running step 1, I get the following error message:

Traceback (most recent call last):
File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\gradio\queueing.py", line 528, in process_events
response = await route_utils.call_process_api(
File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\gradio\route_utils.py", line 270, in call_process_api
output = await app.get_blocks().process_api(
File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\gradio\blocks.py", line 1908, in process_api
result = await self.call_function(
File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\gradio\blocks.py", line 1485, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\anyio_backends_asyncio.py", line 2177, in run_sync_in_worker_thread
return await future
File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\anyio_backends_asyncio.py", line 859, in run
result = context.run(func, *args)
File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\gradio\utils.py", line 808, in wrapper
response = f(*args, **kwargs)
File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\paints-undo\gradio_app.py", line 115, in interrogator_process
return wd14tagger.default_interrogator(x)
File "D:\paints-undo\wd14tagger.py", line 48, in default_interrogator
model = InferenceSession(model_onnx_filename, providers=['CPUExecutionProvider'])
File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in init
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 472, in _create_inference_session
sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from ./wd-v1-4-moat-tagger-v2.onnx failed:Protobuf parsing failed.

Strangely, steps 2 and 3 work fine. How can I solve this?
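
For what it's worth, INVALID_PROTOBUF from onnxruntime usually means the .onnx file on disk is truncated or is an HTML error page saved by a failed download, which would also explain why only step 1 (the tagger) breaks while steps 2 and 3 work. A small check-and-reset sketch, assuming the app re-downloads a missing file on the next launch:

import os

f = './wd-v1-4-moat-tagger-v2.onnx'
if os.path.exists(f):
    print('size on disk:', os.path.getsize(f))  # a few KB means a broken download
    os.remove(f)  # force a fresh download on the next launch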

unknown problem

model.safetensors: 100% 2.88G/2.88G [02:16<00:00, 21.0MB/s]
Fetching 19 files: 100% 19/19 [02:18<00:00, 7.27s/it]
Loading weights from local directory
Loading weights from local directory
^C

Some Issues From Use

  1. Bug: When loading an image, the preview does not always fully populate. (The prompt populates properly during the error state.)
  2. UX: The download-video button is very small and nondescript. (I clicked Generate instead of Download many times.)
  3. Feature: The select-image button needs a select-folder option for batching files. (Singleton generation is death.)
  4. Please flesh out your API/documentation for automation users' workflows. (I could batch files on my own, but...)

Great program, especially out-of-the-box brand new tech. Congrats on the fruits of your labor!

Thank you for choosing Gradio for the UI; it provides a degree of familiarity for Automatic1111 users.

Error

Traceback (most recent call last):
File "D:\miniconda\Lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 399, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 70, in call
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda\Lib\site-packages\fastapi\applications.py", line 1054, in call
await super().call(scope, receive, send)
File "D:\miniconda\Lib\site-packages\starlette\applications.py", line 123, in call
await self.middleware_stack(scope, receive, send)
File "D:\miniconda\Lib\site-packages\starlette\middleware\errors.py", line 186, in call
raise exc
File "D:\miniconda\Lib\site-packages\starlette\middleware\errors.py", line 164, in call
await self.app(scope, receive, _send)
File "D:\miniconda\Lib\site-packages\gradio\route_utils.py", line 713, in call
await self.simple_response(scope, receive, send, request_headers=headers)
File "D:\miniconda\Lib\site-packages\gradio\route_utils.py", line 729, in simple_response
await self.app(scope, receive, send)
File "D:\miniconda\Lib\site-packages\starlette\middleware\exceptions.py", line 65, in call
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "D:\miniconda\Lib\site-packages\starlette_exception_handler.py", line 64, in wrapped_app
raise exc
File "D:\miniconda\Lib\site-packages\starlette_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "D:\miniconda\Lib\site-packages\starlette\routing.py", line 756, in call
await self.middleware_stack(scope, receive, send)
File "D:\miniconda\Lib\site-packages\starlette\routing.py", line 776, in app
await route.handle(scope, receive, send)
File "D:\miniconda\Lib\site-packages\starlette\routing.py", line 297, in handle
await self.app(scope, receive, send)
File "D:\miniconda\Lib\site-packages\starlette\routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "D:\miniconda\Lib\site-packages\starlette_exception_handler.py", line 64, in wrapped_app
raise exc
File "D:\miniconda\Lib\site-packages\starlette_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "D:\miniconda\Lib\site-packages\starlette\routing.py", line 72, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "D:\miniconda\Lib\site-packages\fastapi\routing.py", line 278, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda\Lib\site-packages\fastapi\routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda\Lib\site-packages\gradio\routes.py", line 795, in queue_join
return await queue_join_helper(body, request, username)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda\Lib\site-packages\gradio\routes.py", line 813, in queue_join_helper
success, event_id = await blocks._queue.push(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda\Lib\site-packages\gradio\queueing.py", line 225, in push
raise KeyError(
KeyError: 'Event not found in queue. If you are deploying this Gradio app with multiple replicas, please enable stickiness to ensure that all requests from the same user are routed to the same instance.'

Error generating Video

raise NotImplementedError(msg)

NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
query : shape=(2, 2688, 1, 512) (torch.float16)
key : shape=(2, 2688, 1, 512) (torch.float16)
value : shape=(2, 2688, 1, 512) (torch.float16)
attn_bias : <class 'NoneType'>
p : 0.0
decoderF is not supported because:
max(query.shape[-1] != value.shape[-1]) > 128
xFormers wasn't build with CUDA support
requires device with capability > (7, 0) but your GPU has capability (6, 1) (too old)
attn_bias type is <class 'NoneType'>
operator wasn't built - see python -m xformers.info for more info
[email protected] is not supported because:
max(query.shape[-1] != value.shape[-1]) > 256
xFormers wasn't build with CUDA support
requires device with capability > (8, 0) but your GPU has capability (6, 1) (too old)
operator wasn't built - see python -m xformers.info for more info
cutlassF is not supported because:
xFormers wasn't build with CUDA support
operator wasn't built - see python -m xformers.info for more info
smallkF is not supported because:
max(query.shape[-1] != value.shape[-1]) > 32
xFormers wasn't build with CUDA support
dtype=torch.float16 (supported: {torch.float32})
operator wasn't built - see python -m xformers.info for more info
unsupported embed per head: 512
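
Every xFormers kernel was rejected here, both because the installed wheel was built without CUDA and because a capability (6, 1) GPU (GTX 10-series class) is below what the flash kernels require. One hedged workaround is to replace the memory_efficient_attention call (in diffusers_vdm/vae.py's chunked_attention, per the fuller traceback in a later report) with PyTorch's built-in attention; whether it drops in cleanly there is an assumption:

import torch.nn.functional as F

def attention_fallback(q, k, v):
    # xformers.ops.memory_efficient_attention takes (batch, seq, heads, dim);
    # scaled_dot_product_attention expects (batch, heads, seq, dim).
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    return out.transpose(1, 2)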

Error generating Video

I get this error when trying to generate a video.
I had an error with CUDA before, but fixed it by running this line of code:
pip install torch==2.3.0 torchvision torchaudio xformers --index-url https://download.pytorch.org/whl/cu121

Unload to CPU: AutoencoderKL
Load to GPU: CLIPTextModel
Unload to CPU: CLIPTextModel
Load to GPU: ModifiedUNet
Unload to CPU: ModifiedUNet
Load to GPU: AutoencoderKL
Unload to CPU: AutoencoderKL
Load to GPU: CLIPTextModel
Unload to CPU: CLIPTextModel
Load to GPU: ImprovedCLIPVisionModelWithProjection
Load to GPU: Resampler
Traceback (most recent call last):
  File "C:\Users\scool\.conda\envs\paints_undo\lib\site-packages\gradio\queueing.py", line 528, in process_events
    response = await route_utils.call_process_api(
  File "C:\Users\scool\.conda\envs\paints_undo\lib\site-packages\gradio\route_utils.py", line 270, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\Users\scool\.conda\envs\paints_undo\lib\site-packages\gradio\blocks.py", line 1908, in process_api
    result = await self.call_function(
  File "C:\Users\scool\.conda\envs\paints_undo\lib\site-packages\gradio\blocks.py", line 1485, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\scool\AppData\Roaming\Python\Python310\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\Users\scool\AppData\Roaming\Python\Python310\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "C:\Users\scool\AppData\Roaming\Python\Python310\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "C:\Users\scool\.conda\envs\paints_undo\lib\site-packages\gradio\utils.py", line 808, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\scool\.conda\envs\paints_undo\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "I:\Paints-UNDO\gradio_app.py", line 222, in process_video
    frames, im1, im2 = process_video_inner(
  File "C:\Users\scool\.conda\envs\paints_undo\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "I:\Paints-UNDO\gradio_app.py", line 183, in process_video_inner
    positive_image_cond = video_pipe.encode_clip_vision(input_frames)
  File "C:\Users\scool\.conda\envs\paints_undo\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "I:\Paints-UNDO\diffusers_vdm\pipeline.py", line 93, in encode_clip_vision
    frames = einops.rearrange(frames, 'b c t h w -> (b t) c h w')
  File "C:\Users\scool\.conda\envs\paints_undo\lib\site-packages\einops\einops.py", line 591, in rearrange
    return reduce(tensor, pattern, reduction="rearrange", **axes_lengths)
  File "C:\Users\scool\.conda\envs\paints_undo\lib\site-packages\einops\einops.py", line 518, in reduce
    backend = get_backend(tensor)
  File "C:\Users\scool\.conda\envs\paints_undo\lib\site-packages\einops\_backends.py", line 55, in get_backend
    if backend.is_appropriate_type(tensor):
  File "C:\Users\scool\.conda\envs\paints_undo\lib\site-packages\einops\_backends.py", line 408, in is_appropriate_type
    return isinstance(tensor, (self.tf.Tensor, self.tf.Variable))
AttributeError: module 'tensorflow' has no attribute 'Tensor'
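
einops probes every installed backend when classifying a tensor, so a broken TensorFlow installation in the same environment can crash an otherwise pure-PyTorch call like this one. To confirm, and to remove TensorFlow if nothing else in the environment needs it:

python -c "import tensorflow as tf; print(tf.__version__, hasattr(tf, 'Tensor'))"
pip uninstall tensorflow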

Upload Keyframes!

Thanks for the great work as always!

How about allowing users to upload their own keyframes?

I'm expecting to be able to create some pretty cool animations.

(Attached example videos: a2433e31-b9bd-4927-a7bd-8caeed1fbcb0.mp4, f5a5df05-aa8f-440b-b71d-fc086ecac1eb.mp4)

Are they idiots? This is evil

Really, these asshole programmers want to disguise this as a way to support artists when it will simply be used to counterfeit art. They try to absolve themselves of all blame by saying that they are not responsible for how people use this shitty tool, which was created specifically to counterfeit.

I do blame you, and I hope other people do too.

Torch not compiled with CUDA enabled?

when running python gradio_app.py, I encountered this error message:

Traceback (most recent call last):
File "D:\Paints-UNDO\gradio_app.py", line 15, in
import memory_management
File "D:\Paints-UNDO\memory_management.py", line 9, in
torch.zeros((1, 1)).to(gpu, torch.float32)
File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\torch\cuda_init_.py", line 284, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
(paints_undo) PS D:\Paints-UNDO> import torch

How to solve it?
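
This assertion usually means a CPU-only PyTorch wheel ended up in the environment (pip installs from requirements.txt can silently replace the conda CUDA build). A hedged fix is to reinstall the CUDA wheels and re-check, e.g. for CUDA 12.1:

pip uninstall torch torchvision torchaudio
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"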

src is not a numpy array, neither a scalar

Traceback (most recent call last):
File "C:\Users\josep.conda\envs\paints_undo\lib\site-packages\gradio\queueing.py", line 528, in process_events
response = await route_utils.call_process_api(
File "C:\Users\josep.conda\envs\paints_undo\lib\site-packages\gradio\route_utils.py", line 270, in call_process_api
output = await app.get_blocks().process_api(
File "C:\Users\josep.conda\envs\paints_undo\lib\site-packages\gradio\blocks.py", line 1908, in process_api
result = await self.call_function(
File "C:\Users\josep.conda\envs\paints_undo\lib\site-packages\gradio\blocks.py", line 1485, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\josep.conda\envs\paints_undo\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "C:\Users\josep.conda\envs\paints_undo\lib\site-packages\anyio_backends_asyncio.py", line 2177, in run_sync_in_worker_thread
return await future
File "C:\Users\josep.conda\envs\paints_undo\lib\site-packages\anyio_backends_asyncio.py", line 859, in run
result = context.run(func, *args)
File "C:\Users\josep.conda\envs\paints_undo\lib\site-packages\gradio\utils.py", line 808, in wrapper
response = f(*args, **kwargs)
File "C:\Users\josep.conda\envs\paints_undo\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\Paints-UNDO\gradio_app.py", line 124, in process
fg = resize_and_center_crop(input_fg, image_width, image_height)
File "G:\Paints-UNDO\diffusers_vdm\utils.py", line 13, in resize_and_center_crop
resized_image = cv2.resize(image, (new_width, new_height), interpolation=interpolation)
cv2.error: OpenCV(4.10.0) ๐Ÿ‘Ž error: (-5:Bad argument) in function 'resize'

Overload resolution failed:

  • src is not a numpy array, neither a scalar
  • Expected Ptr<cv::UMat> for argument 'src'
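
cv2.resize received something that is not a NumPy array, which typically means input_fg reached resize_and_center_crop as None (no image uploaded) or as a PIL image. A hedged guard to run before the resize, with names taken from the traceback:

import numpy as np

def ensure_array(image):
    # Reject missing uploads and coerce PIL images before cv2.resize sees them.
    if image is None:
        raise ValueError('Please upload an image first.')
    return np.asarray(image)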

Generate video error

Unload to CPU: VideoAutoencoderKL
Load to GPU: AutoencoderKL
Unload to CPU: AutoencoderKL
Load to GPU: CLIPTextModel
Unload to CPU: CLIPTextModel
Load to GPU: ModifiedUNet
Unload to CPU: ModifiedUNet
Load to GPU: AutoencoderKL
Unload to CPU: AutoencoderKL
Load to GPU: CLIPTextModel
Unload to CPU: CLIPTextModel
Load to GPU: Resampler
Load to GPU: ImprovedCLIPVisionModelWithProjection
Unload to CPU: Resampler
Unload to CPU: ImprovedCLIPVisionModelWithProjection
Load to GPU: VideoAutoencoderKL
Traceback (most recent call last):
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\gradio\queueing.py", line 528, in process_events
response = await route_utils.call_process_api(
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\gradio\route_utils.py", line 270, in call_process_api
output = await app.get_blocks().process_api(
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\gradio\blocks.py", line 1908, in process_api
result = await self.call_function(
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\gradio\blocks.py", line 1485, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\anyio_backends_asyncio.py", line 2177, in run_sync_in_worker_thread
return await future
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\anyio_backends_asyncio.py", line 859, in run
result = context.run(func, *args)
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\gradio\utils.py", line 808, in wrapper
response = f(*args, **kwargs)
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\UNDO\Paints-UNDO\gradio_app.py", line 222, in process_video
frames, im1, im2 = process_video_inner(
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\UNDO\Paints-UNDO\gradio_app.py", line 190, in process_video_inner
input_frame_latents, vae_hidden_states = video_pipe.encode_latents(input_frames, return_hidden_states=True)
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\UNDO\Paints-UNDO\diffusers_vdm\pipeline.py", line 102, in encode_latents
encoder_posterior, hidden_states = self.vae.encode(x, return_hidden_states=return_hidden_states)
File "E:\UNDO\Paints-UNDO\diffusers_vdm\vae.py", line 804, in encode
h, hidden = self.encoder(x, return_hidden_states)
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "E:\UNDO\Paints-UNDO\diffusers_vdm\vae.py", line 249, in forward
h = self.mid.attn_1(h)
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\torch\nn\modules\module.py", line 1532, in wrapped_call_impl
return self.call_impl(*args, **kwargs)
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\torch\nn\modules\module.py", line 1541, in call_impl
return forward_call(*args, **kwargs)
File "E:\UNDO\Paints-UNDO\diffusers_vdm\vae.py", line 411, in forward
h
= self.attention(h
)
File "E:\UNDO\Paints-UNDO\diffusers_vdm\vae.py", line 397, in attention
out = chunked_attention(
File "E:\UNDO\Paints-UNDO\diffusers_vdm\vae.py", line 36, in chunked_attention
out = xformers.ops.memory_efficient_attention(q, k, v)
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\xformers\ops\fmha_init
.py", line 276, in memory_efficient_attention
return memory_efficient_attention(
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\xformers\ops\fmha_init
.py", line 395, in _memory_efficient_attention
return memory_efficient_attention_forward(
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\xformers\ops\fmha_init
.py", line 414, in _memory_efficient_attention_forward
op = _dispatch_fw(inp, False)
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\xformers\ops\fmha\dispatch.py", line 119, in _dispatch_fw
return _run_priority_list(
File "C:\Users\DM.conda\envs\paints_undo\lib\site-packages\xformers\ops\fmha\dispatch.py", line 55, in _run_priority_list
raise NotImplementedError(msg)
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
query : shape=(2, 2688, 1, 512) (torch.float16)
key : shape=(2, 2688, 1, 512) (torch.float16)
value : shape=(2, 2688, 1, 512) (torch.float16)
attn_bias : <class 'NoneType'>
p : 0.0
decoderF is not supported because:
max(query.shape[-1] != value.shape[-1]) > 128
xFormers wasn't build with CUDA support
attn_bias type is <class 'NoneType'>
operator wasn't built - see python -m xformers.info for more info
[email protected] is not supported because:
max(query.shape[-1] != value.shape[-1]) > 256
xFormers wasn't build with CUDA support
operator wasn't built - see python -m xformers.info for more info
cutlassF is not supported because:
xFormers wasn't build with CUDA support
operator wasn't built - see python -m xformers.info for more info
smallkF is not supported because:
max(query.shape[-1] != value.shape[-1]) > 32
xFormers wasn't build with CUDA support
dtype=torch.float16 (supported: {torch.float32})
operator wasn't built - see python -m xformers.info for more info
unsupported embed per head: 512
(A second generation attempt produced the same load/unload sequence and an identical traceback.)

Windows 11 

13th Gen Intel(R) Core(TM) i7-13700K 3.40 GHz
RTX 4090
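
On a 4090, the repeated "xFormers wasn't build with CUDA support" lines point at a CPU-only xFormers wheel rather than old hardware. A hedged fix is to reinstall xFormers from the index matching the installed CUDA build of PyTorch, then inspect the result with the command the log itself suggests:

pip uninstall xformers
pip install xformers --index-url https://download.pytorch.org/whl/cu121
python -m xformers.info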

Pipeline of the framework

I'm wondering whether PaintsUndo has a corresponding paper or technical write-up of how it works, with details of training/inference.

About huggingface path and local path

I have changed the Hugging Face path to the local path, but an error was reported (screenshot attached).

model_name = 'lllyasviel/paints_undo_single_frame' runs after being modified to the local path, but
video_pipe = LatentVideoDiffusionPipeline.from_pretrained( 'lllyasviel/paints_undo_multi_frame', fp16=True )
does not. I checked the code, including from_pretrained and snapshot_download. In theory it should support local loading, with the same usage as a normal from_pretrained, but in practice the program reports an error.
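
For reference, a minimal local-loading sketch built from the call quoted above; the directory name is hypothetical, and it is an assumption that from_pretrained accepts a filesystem path whose layout mirrors the hub snapshot:

from diffusers_vdm.pipeline import LatentVideoDiffusionPipeline

video_pipe = LatentVideoDiffusionPipeline.from_pretrained(
    'D:/models/paints_undo_multi_frame',  # hypothetical local snapshot directory
    fp16=True,
)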

What's the actual use-case scenario of this?

It seems like the only use case is scamming people for the sake of driving artists out of the industry.

However, if I'm mistaken, then let me know. Maybe I'm just wrong, and this will be the tool that finally makes generative AI a real tool for creating art.

Also, the supposed point of AI image generation was to eliminate artists from the process of making art. If I really wanted to employ an idea guy to type his ideas into the prompt instead of me, I already have a friend like that, who has dumb video game ideas like "GTA, but set in Budapest, and with VR support", "hack some popular FPS to have VR support, then sell it to the original publisher", and other stuff like that.

UnboundLocalError: local variable 'block_in' referenced before assignment

After fetching all the files,

A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
The config attributes {'latents_mean': None, 'latents_std': None, 'shift_factor': None, 'use_post_quant_conv': True, 'use_quant_conv': True} were passed to AutoencoderKL, but are not expected and will be ignored. Please verify your config.json configuration file.
Traceback (most recent call last):
  File "D:\diffusionai\Paints-UNDO\gradio_app.py", line 47, in <module>
    video_pipe = LatentVideoDiffusionPipeline.from_pretrained(
  File "D:\diffusionai\Paints-UNDO\diffusers_vdm\pipeline.py", line 73, in from_pretrained
    vae=VideoAutoencoderKL.from_pretrained(os.path.join(local_folder, "vae")),
  File "D:\diffusionai\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "D:\diffusionai\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\hub_mixin.py", line 277, in from_pretrained
    instance = cls._from_pretrained(
  File "D:\diffusionai\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\hub_mixin.py", line 485, in _from_pretrained
    model = cls(**model_kwargs)
  File "D:\diffusionai\Paints-UNDO\diffusers_vdm\vae.py", line 792, in __init__
    self.encoder = Encoder(double_z=double_z, z_channels=z_channels, resolution=resolution, in_channels=in_channels,
  File "D:\diffusionai\Paints-UNDO\diffusers_vdm\vae.py", line 200, in __init__
    self.mid.block_1 = ResnetBlock(in_channels=block_in,
UnboundLocalError: local variable 'block_in' referenced before assignment

Black video output

The multi-frame model generated a fully black video. The single-frame model is working fine.

I am using a P40 to generate. This may be related, as these cards cannot work with fp16 precision.
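
Fully black frames on pre-Volta cards are often fp16 NaNs surviving into the decoder. A hedged experiment is to load the multi-frame pipeline in fp32, assuming the fp16 flag quoted in another issue above controls the compute dtype (expect much slower generation on a P40):

from diffusers_vdm.pipeline import LatentVideoDiffusionPipeline

video_pipe = LatentVideoDiffusionPipeline.from_pretrained(
    'lllyasviel/paints_undo_multi_frame',
    fp16=False,  # assumption: loads float32 weights instead of half precision
)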

Can't start the UI. Error message

Hello all, for some reason I can't use the user interface: when I enter the command "python gradio_app.py" at the command line, I get this error message:

Traceback (most recent call last):
File "E:\AI\Paints-UNDO\gradio_app.py", line 15, in
import memory_management
File "E:\AI\Paints-UNDO\memory_management.py", line 9, in
torch.zeros((1, 1)).to(gpu, torch.float32)
File "C:\Users\user.conda\envs\paints_undo\lib\site-packages\torch\cuda_init_.py", line 284, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Does anyone know how to fix this problem?

My GPU is powerful enough to run this thing...


It's not opening at http://0.0.0.0:7860; instead I'm getting this:

C:\Users\droly\Paints-UNDO>python gradio_app.py
A matching Triton is not available, some optimizations will not be enabled
Traceback (most recent call last):
File "C:\Users\droly\miniconda3\envs\paints_undo\lib\site-packages\xformers_init_.py", line 55, in _is_triton_available
from xformers.triton.softmax import softmax as triton_softmax # noqa
File "C:\Users\droly\miniconda3\envs\paints_undo\lib\site-packages\xformers\triton\softmax.py", line 14, in
from xformers.triton.k_softmax import _softmax, softmax_backward
File "C:\Users\droly\miniconda3\envs\paints_undo\lib\site-packages\xformers\triton\k_softmax.py", line 8, in
import triton.language as tl
ModuleNotFoundError: No module named 'triton.language'
The config attributes {'shift_factor': None, 'use_post_quant_conv': True, 'use_quant_conv': True} were passed to AutoencoderKL, but are not expected and will be ignored. Please verify your config.json configuration file.
Fetching 19 files: 100%|██████████| 19/19 [00:00<?, ?it/s]
Loading weights from local directory
Loading weights from local directory
Loading weights from local directory
Unload to CPU: ModifiedUNet
Unload to CPU: CLIPTextModel
Unload to CPU: CLIPTextModel
Unload to CPU: UNet3DModel
Unload to CPU: VideoAutoencoderKL
Unload to CPU: ImprovedCLIPVisionModelWithProjection
Unload to CPU: Resampler
Unload to CPU: AutoencoderKL
C:\Users\droly\Paints-UNDO\diffusers_helper\k_diffusion.py:43: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
alphas_cumprod = torch.tensor(np.cumprod(alphas, axis=0), dtype=torch.float32)
Running on local URL: http://0.0.0.0:7860

To create a public link, set share=True in launch().
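
Judging by the log, the server did start; 0.0.0.0 is a bind address, not a URL a browser can open, and the Triton warning is non-fatal on Windows. Try the loopback address instead (assuming the default port 7860 shown in the log):

start http://127.0.0.1:7860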

Clarifications regarding the training data

Hello! I'm a big fan of the work you guys do.

I looked around a little, but I can't find any info about the data that was used to train the models.
I'm not sure where one could find enough data on the internet for this, and I can't imagine you paid real artists for it, because that would simply cost too much money for an open-source project.

Can we have more details about the nature of the dataset?

Why would you pretend this is not AIGC if you are proud of AIGC?

It's unacceptable and irresponsible to develop and distribute tools that are designed to deceive. Such technology not only undermines the trust in digital content but also disrespects genuine artists and their skills. I urge you to reconsider the ethical implications of your project.

I demand an explanation: Why would you choose to hide that this is AIGC if you truly believe in the technologyโ€™s merits?

Torch not compiled with CUDA enabled

Paint by numbers on steroids!!!

I'm a noob with a 3090 and need help. I followed the installation instructions, but I'm getting "Torch not compiled with CUDA enabled".

git clone https://github.com/lllyasviel/Paints-UNDO.git
cd Paints-UNDO
conda create -n paints_undo python=3.10
conda activate paints_undo
pip install xformers
pip install -r requirements.txt
python gradio_app.py

Can someone provide the command line to replace Torch with a proper CUDA build, or explain how to edit requirements.txt?
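
One hedged option, matching the command another user in this thread reported success with: install the CUDA 12.1 wheels over the CPU-only ones, then verify:

pip install torch==2.3.0 torchvision torchaudio xformers --index-url https://download.pytorch.org/whl/cu121
python -c "import torch; print(torch.cuda.is_available())"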

Mega Thread for Non-Technical Discussions

Hi Users,

We have been informed that GitHub has recently deleted several comments and posts due to violations of their Hate Speech and Discrimination Policy.

To foster a healthy environment for project development and ensure that diverse voices can share ideas, all future discussions unrelated to the technical aspects of the code must take place only in this thread.

Please note that non-technical comments posted elsewhere will be moved here or removed by the admin or GitHub if they violate GitHub's Terms of Service.

We value your feedback and are committed to listening to different perspectives to develop better tools and strengthen a healthy relationship between artists and AI techniques.

At the same time, please ensure that your comments adhere to GitHub's Terms of Service, particularly the relevant community guidelines.

Thank you for your continued support!

OSError: Can't load tokenizer for 'lllyasviel/paints_undo_single_frame'.

(paints_undo) G:\Paints-UNDO>python gradio_app.py
Traceback (most recent call last):
File "G:\Paints-UNDO\gradio_app.py", line 39, in
tokenizer = CLIPTokenizer.from_pretrained(model_name, subfolder="tokenizer")
File "G:\miniconda3\envs\paints_undo\lib\site-packages\transformers\tokenization_utils_base.py", line 2094, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'lllyasviel/paints_undo_single_frame'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'lllyasviel/paints_undo_single_frame' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.

What should I do...?
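
The error text itself lists the two usual causes: huggingface.co is unreachable, or a local directory literally named lllyasviel is shadowing the repo id. A hedged isolation test for the first cause, using a mirror endpoint (the mirror URL is an assumption):

set HF_ENDPOINT=https://hf-mirror.com
python -c "from transformers import CLIPTokenizer; print(CLIPTokenizer.from_pretrained('lllyasviel/paints_undo_single_frame', subfolder='tokenizer'))"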

Step 2 error: black pictures

I've installed everything, but for some reason I don't see anything in step 2, just a few black images.
