painebenjamin / app.enfugue.ai

ENFUGUE is an open-source web app for making studio-grade images and video using generative AI.

License: GNU General Public License v3.0

Python 70.24% Makefile 0.28% Shell 0.56% CSS 2.50% Jinja 0.17% JavaScript 25.83% Batchfile 0.41%
ai generative-art stable-diffusion ai-image-generation docker-image linux macos mps nvidia portable-executable

app.enfugue.ai's People

Contributors

alicevie, boljoro, jmichael7, mvnowak, painebenjamin, ropedro


app.enfugue.ai's Issues

Create Advanced Configuration Menu for ControlNet Frontend ←→ Backend Mappings

Right now, constants.py contains variables describing which specific models to load from the HuggingFace hub when requested. For example, when you pass "canny" to the API or select "Canny Edge Detection" in the UI, it will load lllyasviel/sd-controlnet-canny for SD 1.5 pipelines and diffusers/controlnet-canny-sdxl-1.0 for SDXL pipelines.

As more ControlNet models are released for SDXL, all Enfugue should need in order to use them, in theory, is to download the weights and configuration from the HF hub and load them into the usual ControlNetModel. It would be ideal if users did not have to download an entire distribution upgrade just to pick up what amounts to a new URI, so the idea is to create an advanced configuration menu that lets the user enter which model should be loaded for each of the 12 supported ControlNets.

There should be a button to revert to default settings, regardless of what the user has entered and saved, so the user can always get back to a known functioning configuration.

Additionally, there should be two separate mappings: one for SD 1.5 and one for SDXL.
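For illustration, a minimal sketch of what the backend side of this could look like; the constant and function names here are hypothetical and not the actual contents of constants.py:

# Hypothetical default HuggingFace repository IDs per ControlNet type,
# keyed separately for SD 1.5 and SDXL pipelines.
CONTROLNET_DEFAULTS = {
    "sd15": {
        "canny": "lllyasviel/sd-controlnet-canny",
    },
    "sdxl": {
        "canny": "diffusers/controlnet-canny-sdxl-1.0",
    },
}

def resolve_controlnet_repo(name: str, architecture: str, user_overrides: dict) -> str:
    """Return the repository to load, preferring a user-saved override.

    `user_overrides` would be whatever the advanced configuration menu saved,
    e.g. {"sdxl": {"canny": "someuser/some-new-controlnet"}}. Falling back to
    CONTROLNET_DEFAULTS is what the "revert to default" button would do.
    """
    return user_overrides.get(architecture, {}).get(
        name, CONTROLNET_DEFAULTS[architecture][name]
    )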

Error in image generation: PytorchStreamReader failed reading zip archive: failed finding central directory

Issue

I get the following error when trying to generate a picture: RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

General Information

enfugue-server-0.2.0-manylinux-cuda-x86_64.tar.gz.*
Archlinux - linux 6.4.3-zen1-2-zen
AMD RX 5700

Log

tail -f ~/.cache/enfugue.log 
2023-08-07 19:38:26,515 [cherrypy.error] INFO (_cplogging.py:213) [07/Aug/2023:19:38:26] ENGINE Bus STARTING
2023-08-07 19:38:26,622 [cherrypy.error] INFO (_cplogging.py:213) [07/Aug/2023:19:38:26] ENGINE Serving on https://0.0.0.0:45554
2023-08-07 19:38:26,622 [cherrypy.error] INFO (_cplogging.py:213) [07/Aug/2023:19:38:26] ENGINE Bus STARTED
2023-08-07 19:38:28,690 [enfugue] ERROR (gpu.py:145) Couldn't execute nvidia-smi (binary `nvidia-smi`): [Errno 2] No such file or directory: 'nvidia-smi'

2023-08-07 19:38:28,696 [pibble] ERROR (__init__.py:232) Error handler raised exception DetachedInstanceError(Instance <AuthenticationTokenDeclarative at 0x7f20137d6d10> is not bound to a Session; attribute refresh operation cannot proceed (Background on this error at: https://sqlalche.me/e/14/bhk3))
2023-08-07 19:38:38,589 [enfugue] ERROR (gpu.py:145) Couldn't execute nvidia-smi (binary `nvidia-smi`): [Errno 2] No such file or directory: 'nvidia-smi'

2023-08-07 19:38:48,592 [enfugue] ERROR (gpu.py:145) Couldn't execute nvidia-smi (binary `nvidia-smi`): [Errno 2] No such file or directory: 'nvidia-smi'

2023-08-07 19:38:59,184 [enfugue] ERROR (gpu.py:145) Couldn't execute nvidia-smi (binary `nvidia-smi`): [Errno 2] No such file or directory: 'nvidia-smi'

2023-08-07 19:39:09,137 [enfugue] ERROR (gpu.py:145) Couldn't execute nvidia-smi (binary `nvidia-smi`): [Errno 2] No such file or directory: 'nvidia-smi'

2023-08-07 19:39:19,195 [enfugue] ERROR (gpu.py:145) Couldn't execute nvidia-smi (binary `nvidia-smi`): [Errno 2] No such file or directory: 'nvidia-smi'

2023-08-07 19:39:28,479 [enfugue] ERROR (gpu.py:145) Couldn't execute nvidia-smi (binary `nvidia-smi`): [Errno 2] No such file or directory: 'nvidia-smi'

2023-08-07 19:39:38,526 [enfugue] ERROR (gpu.py:145) Couldn't execute nvidia-smi (binary `nvidia-smi`): [Errno 2] No such file or directory: 'nvidia-smi'

2023-08-07 19:39:49,552 [enfugue] ERROR (gpu.py:145) Couldn't execute nvidia-smi (binary `nvidia-smi`): [Errno 2] No such file or directory: 'nvidia-smi'

2023-08-07 19:39:52,055 [enfugue] ERROR (engine.py:259) Traceback (most recent call last):
  File "enfugue/diffusion/process.py", line 360, in run
  File "enfugue/diffusion/process.py", line 112, in execute_diffusion_plan
  File "enfugue/diffusion/plan.py", line 698, in execute
  File "enfugue/diffusion/plan.py", line 911, in execute_nodes
  File "enfugue/diffusion/plan.py", line 542, in execute
  File "enfugue/diffusion/plan.py", line 443, in execute
  File "enfugue/diffusion/manager.py", line 2819, in __call__
  File "enfugue/diffusion/manager.py", line 2181, in pipeline
  File "enfugue/diffusion/pipeline.py", line 204, in from_ckpt
  File "torch/serialization.py", line 995, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "torch/serialization.py", line 449, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

2023-08-07 19:39:59,601 [enfugue] ERROR (gpu.py:145) Couldn't execute nvidia-smi (binary `nvidia-smi`): [Errno 2] No such file or directory: 'nvidia-smi'
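This error from torch.load almost always means the checkpoint file on disk is truncated or corrupt (for example, an interrupted download) rather than a code problem. A quick integrity check before re-downloading, assuming a checkpoint path like the default SD 1.5 location seen in other logs, might look like this:

import os
import zipfile

# Adjust to wherever the failing checkpoint actually lives.
ckpt = os.path.expanduser("~/.cache/enfugue/checkpoint/v1-5-pruned.ckpt")

print("size on disk:", os.path.getsize(ckpt), "bytes")
# Modern PyTorch checkpoints are zip archives; a truncated download fails this check.
print("valid zip archive:", zipfile.is_zipfile(ckpt))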

Allow Override for Upscale Pipeline

Right now, the choice of pipeline to use when re-diffusing upscaled samples follows a simple rule: if there is a refiner, use it; otherwise, use the main pipeline.

Add an option for the user to designate that the main pipeline should always be used, regardless of the presence of a refiner.
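A minimal sketch of the proposed selection logic; the option name is hypothetical:

def choose_upscale_pipeline(main_pipeline, refiner_pipeline=None, always_use_main=False):
    """Select the pipeline used when re-diffusing upscaled samples.

    Current rule: use the refiner when present, otherwise the main pipeline.
    The proposed option lets the user force the main pipeline regardless.
    """
    if always_use_main or refiner_pipeline is None:
        return main_pipeline
    return refiner_pipeline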

Add Options for Optimized Inpainting

Most of the time, optimized inpainting is the right way to go.

Sometimes, though, especially when using multiple models, this can produce an inpainted region that looks subtly out of place against the rest of the image, particularly in things like color temperature.

Add a couple of options to tweak how it works (a feathering sketch follows the list):

  1. Enable/Disable - allow the user to completely disable optimized inpainting, reverting to inpainting the whole image.
  2. Feather Size - how many pixels to feather around the inpainted area. The default is currently 16 px.
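For illustration, a rough sketch of the kind of feathering being described, using Pillow to blur the inpaint mask edge by a configurable radius; this is not the actual Enfugue implementation:

from PIL import Image, ImageFilter

def feather_mask(mask: Image.Image, feather_px: int = 16) -> Image.Image:
    """Soften the edges of a binary inpaint mask.

    feather_px = 0 corresponds to disabling feathering entirely; larger values
    blend the inpainted region further into its surroundings.
    """
    if feather_px <= 0:
        return mask
    return mask.convert("L").filter(ImageFilter.GaussianBlur(radius=feather_px))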

"Could not initialize NNPACK! Reason: Unsupported hardware."

After about 21 minutes and 30 seconds, the server outputs
[W NNPACK.cpp:53] Could not initialize NNPACK! Reason: Unsupported hardware.
The web interface then stays stuck at the 21m 30s count-up.

enfugue.log: https://pastebin.com/z17dbdG6

Specs:

cpu:                                                            
                       Intel(R) Core(TM) i5-2500 CPU @ 3.30GHz, 1866 MHz
                       Intel(R) Core(TM) i5-2500 CPU @ 3.30GHz, 2450 MHz
                       Intel(R) Core(TM) i5-2500 CPU @ 3.30GHz, 2772 MHz
                       Intel(R) Core(TM) i5-2500 CPU @ 3.30GHz, 2733 MHz
monitor:
                       IIYAMA PL2730H
                       A19-2A
graphics card:
                       nVidia GP107 [GeForce GTX 1050]
                       Intel 2nd Generation Core Processor Family Integrated Graphics Controller

Make Node Header Items and Forms Sticky

Right now, when zooming or scrolling a node item along the canvas, it's easy to scroll the buttons (top right), name (top left), and form (when expanded) out of view. This is even more annoying when the node is large, which can require scrolling a long way to either side to reach buttons that are essential to various features.

Make these items "sticky", such that they align themselves relative to the edge of the node when it is in view, and relative to the edge of the frame when it is out of view.

Permit Users to Select Cache Directory on Initialization

Users such as Oracle on Reddit report that they already have large quantities of model data.

During the initialization dialog, add an option for the user to select any directory on their filesystem for each of the relevant directories (a configuration sketch follows the list). They are:

  1. checkpoints
  2. lora
  3. inversion
  4. cache (huggingface cache for controlnet)
  5. models (diffusers-formatted model directory + tensorRT engine storage)
  6. other (edge-detection models, upscale models)
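A rough sketch of how those selections could be represented and applied; the keys and default root are illustrative, not Enfugue's actual configuration schema:

import os

DEFAULT_ROOT = os.path.expanduser("~/.cache/enfugue")

# Hypothetical mapping from the six categories above to default paths.
directories = {
    "checkpoints": os.path.join(DEFAULT_ROOT, "checkpoint"),
    "lora": os.path.join(DEFAULT_ROOT, "lora"),
    "inversion": os.path.join(DEFAULT_ROOT, "inversion"),
    "cache": os.path.join(DEFAULT_ROOT, "cache"),
    "models": os.path.join(DEFAULT_ROOT, "models"),
    "other": os.path.join(DEFAULT_ROOT, "other"),
}

def apply_user_directories(overrides: dict) -> dict:
    """Merge user-selected paths over the defaults and make sure they exist."""
    merged = {**directories, **overrides}
    for path in merged.values():
        os.makedirs(path, exist_ok=True)
    return merged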

Add "Cancel Download" Button

Currently there is no way to cancel an active download without restarting the server. Add the ability to cancel an in-progress download, deleting any partial downloads.

Also improve the display of the downloads view in general; that was a bit of a rush job.
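One possible shape for a cancellable download, sketched with requests and a threading.Event; cancelling discards the partial file. This is illustrative only, not the existing download manager:

import os
import threading
import requests

def download(url: str, destination: str, cancel_event: threading.Event, chunk_size: int = 1 << 20) -> bool:
    """Stream a file to disk, aborting and deleting the partial file if cancelled."""
    cancelled = False
    with requests.get(url, stream=True, timeout=30) as response:
        response.raise_for_status()
        with open(destination, "wb") as handle:
            for chunk in response.iter_content(chunk_size=chunk_size):
                if cancel_event.is_set():
                    cancelled = True
                    break
                handle.write(chunk)
    if cancelled:
        os.remove(destination)  # discard the partial download
        return False
    return True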

Add NSFW filter for CivitAi browsing

Browsing CivitAI from Enfugue shows NSFW models, LoRAs, etc., and their images by default. Directly on the CivitAI site, such things are blurred or removed entirely from what is viewable without logging in first.
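For reference, the public CivitAI REST API appears to accept an nsfw query parameter on its models endpoint, so a filtered browse request could look roughly like the sketch below; the parameter and response fields are assumptions to verify against the CivitAI API documentation:

import requests

def browse_civitai(query: str, allow_nsfw: bool = False) -> list:
    """Search CivitAI models, optionally excluding NSFW results.

    The `nsfw` parameter is assumed from the public CivitAI API; if it is not
    honoured, results can still be filtered client-side using the `nsfw` field
    on each returned model.
    """
    params = {"query": query, "nsfw": str(allow_nsfw).lower()}
    response = requests.get("https://civitai.com/api/v1/models", params=params, timeout=30)
    response.raise_for_status()
    models = response.json().get("items", [])
    if not allow_nsfw:
        models = [model for model in models if not model.get("nsfw", False)]
    return models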

Create Patch Delta Revision Distribution

It's a pain to have to re-download several gigabytes of support libraries every time a user needs to update.

Starting from 0.1.3, additionally distribute an 'update' package that contains only the modified content.

Run containerised / docker-compose support?

Just wondering if there are any plans to have this bundled up into a container image and perhaps docker-compose?

This would make it really easy for people both to try it out and to run it on their home servers, etc.

Blank Page

When I open the site, I see the banner and such, but the rest of the page is blank. The console also shows the error captured in the attached screenshot (Screenshot 2023-06-28 131316).

LayerNormKernelImpl Error When Enabling then Disabling TensorRT Within a Plan

When forced to disable TensorRT due to the dimensions of an individual execution within a plan (i.e. when a node has an X or Y dimension smaller than the engine size), we can receive an error saying LayerNormKernelImpl not implemented for bias type struct.Half. The message is a bit of a red herring: it suggests the data types are wrong, which is true, but the underlying reason is that we are trying to run inference on the CPU, which should only ever happen when there is no GPU to use. Somewhere along the way, one of the models is not getting loaded onto the GPU.
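For context, that specific error is what PyTorch raises when a float16 LayerNorm executes on the CPU; a small reproduction and the usual guard look roughly like this:

import torch

# LayerNorm in float16 is not implemented on CPU, which reproduces the error above.
layer = torch.nn.LayerNorm(8).half()
try:
    layer(torch.randn(1, 8).half())
except RuntimeError as error:
    print(error)  # "LayerNormKernelImpl" not implemented for 'Half'

# The usual guard: only use half precision when a GPU is actually available,
# and make sure every sub-model is moved to that device.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
dtype = torch.float16 if device.type == "cuda" else torch.float32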

Fix release v0.1.0 linux one-liner installation instruction

The installation instructions recommend the following for extracting the multi-part archive:

cat enfugue-server-0.1.0*.part | tar -xvf

This seems incorrect, as tar throws two different errors:

  • tar: option requires an argument -- 'f' (This is because it is piped)
  • tar: Archive is compressed. Use -z option

So the correct command would be:

cat enfugue-server-0.1.0*.part | tar -xvz

Add Menu Item Keyboard Shortcuts

A common pattern in user interface design is to allow a user to hold a modifier key (usually Alt) and then type a letter that corresponds to an item in the top menu bar. This will either expand a parent menu or perform the action of the corresponding menu item.

Add a means for each menu item to declare which character in its name should be the trigger character. Then listen for global (unfocused) keypress events to trigger the appropriate action.

  • The character should be case-insensitive, and should be part of the text in the menu item (ideally the first letter.)
  • The character should be underlined, the typical indicator of this functionality.

Add "Other" option to VAE

Right now, there are four configured VAEs the user can choose from when overriding the default. They are:

  1. EMA (stabilityai/sd-vae-ft-ema)
  2. MSE (stabilityai/sd-vae-ft-mse)
  3. XL (stabilityai/sdxl-vae)
  4. XL16 (madebyollin/sdxl-vae-fp16-fix)

Add an "Other" option that allows the user their own repo/name input, to their own risk.

Include a checkbox to indicate whether or not the VAE must operate in full precision (à la SDXL).
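A sketch of what loading a user-supplied VAE might look like with diffusers, including the proposed full-precision checkbox; the function and argument names are illustrative:

import torch
from diffusers import AutoencoderKL

def load_vae(repo_id: str, force_full_precision: bool = False, device: str = "cuda") -> AutoencoderKL:
    """Load an arbitrary user-specified VAE repository, at the user's own risk.

    `force_full_precision` mirrors the proposed checkbox: some VAEs (notably the
    original SDXL VAE) misbehave in float16 and must run in float32.
    """
    dtype = torch.float32 if force_full_precision else torch.float16
    return AutoencoderKL.from_pretrained(repo_id, torch_dtype=dtype).to(device)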

ROCm Support Needed

Issue

On clicking Generate, it loads for a while and then times out.

Expected behaviour

It generates a picture using the prompt

Details

Enfugue v0.1.2 (Linux)
Installed using archive method

Engine Logs

$ tail -f ../.cache/enfugue-engine.log 
2023-06-30 12:46:43,276 [enfugue] DEBUG (process.py:315) Received instruction 1, action plan
2023-06-30 12:46:44,440 [urllib3.connectionpool] DEBUG (connectionpool.py:1003) Starting new HTTPS connection (1): huggingface.co:443
2023-06-30 12:46:44,499 [http.client] DEBUG (log.py:118) send: b'HEAD /runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt HTTP/1.1\r\nHost: huggingface.co\r\nUser-Agent: python-requests/2.30.0\r\nAccept-Encoding: gzip, deflate, br\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
2023-06-30 12:46:44,823 [http.client] DEBUG (log.py:118) reply: 'HTTP/1.1 302 Found\r\n'
2023-06-30 12:46:44,823 [http.client] DEBUG (log.py:118) header: Content-Type: text/plain; charset=utf-8
2023-06-30 12:46:44,823 [http.client] DEBUG (log.py:118) header: Content-Length: 1129
2023-06-30 12:46:44,823 [http.client] DEBUG (log.py:118) header: Connection: keep-alive
2023-06-30 12:46:44,823 [http.client] DEBUG (log.py:118) header: Date: Fri, 30 Jun 2023 10:46:44 GMT
2023-06-30 12:46:44,823 [http.client] DEBUG (log.py:118) header: X-Powered-By: huggingface-moon
2023-06-30 12:46:44,823 [http.client] DEBUG (log.py:118) header: X-Request-Id: Root=1-649eb294-2707d14c7d5d6c591b57f687
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: Access-Control-Allow-Origin: https://huggingface.co
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: Vary: Origin, Accept
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: X-Repo-Commit: aa9ba505e1973ae5cd05f5aedd345178f52f8e6a
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: Accept-Ranges: bytes
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: X-Linked-Size: 7703807346
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: X-Linked-ETag: "e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053"
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: Location: https://cdn-lfs.huggingface.co/repos/6b/20/6b201da5f0f5c60524535ebb7deac2eef68605655d3bbacfee9cce0087f3b3f5/e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27v1-5-pruned.ckpt%3B+filename%3D%22v1-5-pruned.ckpt%22%3B&Expires=1688381205&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzZiLzIwLzZiMjAxZGE1ZjBmNWM2MDUyNDUzNWViYjdkZWFjMmVlZjY4NjA1NjU1ZDNiYmFjZmVlOWNjZTAwODdmM2IzZjUvZTE0NDE1ODlhNmYzYzVhNTNmNWY1NGQwOTc1YTE4YTdmZWI3Y2RmMGIwZGVlMjc2ZGZjMzMzMWFlMzc2YTA1Mz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2ODgzODEyMDV9fX1dfQ__&Signature=R700%7Ec4gBFjk6HhFAIxjwIUkFO0iVdNcwJH3EJAcYaNFW2f4VAGkOST-3Em2fAjd41hd1zz3PLI4L%7EDaXhcqQJCY15xQdkCouVGz7SsEmovXRJv9a-4Xclc58H2S1jkvd8IZiR69dX0MnRaJcuYmSFDrGa0mLSot8ESy1skNNq6DdZo295aMRvK134gdYNaLXxJEjv%7E1GuTo8ABg2jUPn73sWS9pkNHUqDOGcuqfPtbfOaJw3bmQUnYMa6jiWOPf1Tk95JZNpQwrs3NK06YPTMYCF5Syfp5YSFI07BpmIv%7EGqq-L0XHsZXpTLF2JYt9A%7E%7EIPlpWwuIMAjOfg9LnPdw__&Key-Pair-Id=KVTP0A1DKRTAX
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: X-Cache: Miss from cloudfront
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: Via: 1.1 cfd67353680316557643ad146b46d046.cloudfront.net (CloudFront)
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Pop: HAM50-C1
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Id: gNAd_wCBeGd4og3NlQaxdH9niQilqMdc6GeujUlXfJw2v097KyCO1A==
2023-06-30 12:46:44,824 [urllib3.connectionpool] DEBUG (connectionpool.py:456) https://huggingface.co:443 "HEAD /runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt HTTP/1.1" 302 0
2023-06-30 12:46:44,826 [urllib3.connectionpool] DEBUG (connectionpool.py:1003) Starting new HTTPS connection (1): cdn-lfs.huggingface.co:443
2023-06-30 12:46:44,869 [http.client] DEBUG (log.py:118) send: b'HEAD /repos/6b/20/6b201da5f0f5c60524535ebb7deac2eef68605655d3bbacfee9cce0087f3b3f5/e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27v1-5-pruned.ckpt%3B+filename%3D%22v1-5-pruned.ckpt%22%3B&Expires=1688381205&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzZiLzIwLzZiMjAxZGE1ZjBmNWM2MDUyNDUzNWViYjdkZWFjMmVlZjY4NjA1NjU1ZDNiYmFjZmVlOWNjZTAwODdmM2IzZjUvZTE0NDE1ODlhNmYzYzVhNTNmNWY1NGQwOTc1YTE4YTdmZWI3Y2RmMGIwZGVlMjc2ZGZjMzMzMWFlMzc2YTA1Mz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2ODgzODEyMDV9fX1dfQ__&Signature=R700~c4gBFjk6HhFAIxjwIUkFO0iVdNcwJH3EJAcYaNFW2f4VAGkOST-3Em2fAjd41hd1zz3PLI4L~DaXhcqQJCY15xQdkCouVGz7SsEmovXRJv9a-4Xclc58H2S1jkvd8IZiR69dX0MnRaJcuYmSFDrGa0mLSot8ESy1skNNq6DdZo295aMRvK134gdYNaLXxJEjv~1GuTo8ABg2jUPn73sWS9pkNHUqDOGcuqfPtbfOaJw3bmQUnYMa6jiWOPf1Tk95JZNpQwrs3NK06YPTMYCF5Syfp5YSFI07BpmIv~Gqq-L0XHsZXpTLF2JYt9A~~IPlpWwuIMAjOfg9LnPdw__&Key-Pair-Id=KVTP0A1DKRTAX HTTP/1.1\r\nHost: cdn-lfs.huggingface.co\r\nUser-Agent: python-requests/2.30.0\r\nAccept-Encoding: gzip, deflate, br\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
2023-06-30 12:46:44,883 [http.client] DEBUG (log.py:118) reply: 'HTTP/1.1 200 OK\r\n'
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Content-Type: binary/octet-stream
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Content-Length: 7703807346
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Connection: keep-alive
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Last-Modified: Thu, 20 Oct 2022 12:04:48 GMT
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: x-amz-storage-class: INTELLIGENT_TIERING
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: x-amz-server-side-encryption: AES256
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: x-amz-version-id: BFBjjeCwpKzphP69jHCsu0tXSVXyZiD0
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Content-Disposition: attachment; filename*=UTF-8''v1-5-pruned.ckpt; filename="v1-5-pruned.ckpt";
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Accept-Ranges: bytes
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Server: AmazonS3
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Date: Thu, 29 Jun 2023 15:06:04 GMT
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: ETag: "37c7380e5122b52e5a82912076eff236-2"
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: X-Cache: Hit from cloudfront
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Via: 1.1 1599881f4fb8a11206232254d6f4ccb6.cloudfront.net (CloudFront)
2023-06-30 12:46:44,885 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Pop: HAM50-P1
2023-06-30 12:46:44,885 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Id: 6UWW96thEzzke2v2ep8WHSu0eut4ecIL2Y5CCW0lKKNkSX63ysHuJw==
2023-06-30 12:46:44,885 [http.client] DEBUG (log.py:118) header: Age: 70841
2023-06-30 12:46:44,885 [http.client] DEBUG (log.py:118) header: Vary: Origin
2023-06-30 12:46:44,885 [urllib3.connectionpool] DEBUG (connectionpool.py:456) https://cdn-lfs.huggingface.co:443 "HEAD /repos/6b/20/6b201da5f0f5c60524535ebb7deac2eef68605655d3bbacfee9cce0087f3b3f5/e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27v1-5-pruned.ckpt%3B+filename%3D%22v1-5-pruned.ckpt%22%3B&Expires=1688381205&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzZiLzIwLzZiMjAxZGE1ZjBmNWM2MDUyNDUzNWViYjdkZWFjMmVlZjY4NjA1NjU1ZDNiYmFjZmVlOWNjZTAwODdmM2IzZjUvZTE0NDE1ODlhNmYzYzVhNTNmNWY1NGQwOTc1YTE4YTdmZWI3Y2RmMGIwZGVlMjc2ZGZjMzMzMWFlMzc2YTA1Mz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2ODgzODEyMDV9fX1dfQ__&Signature=R700~c4gBFjk6HhFAIxjwIUkFO0iVdNcwJH3EJAcYaNFW2f4VAGkOST-3Em2fAjd41hd1zz3PLI4L~DaXhcqQJCY15xQdkCouVGz7SsEmovXRJv9a-4Xclc58H2S1jkvd8IZiR69dX0MnRaJcuYmSFDrGa0mLSot8ESy1skNNq6DdZo295aMRvK134gdYNaLXxJEjv~1GuTo8ABg2jUPn73sWS9pkNHUqDOGcuqfPtbfOaJw3bmQUnYMa6jiWOPf1Tk95JZNpQwrs3NK06YPTMYCF5Syfp5YSFI07BpmIv~Gqq-L0XHsZXpTLF2JYt9A~~IPlpWwuIMAjOfg9LnPdw__&Key-Pair-Id=KVTP0A1DKRTAX HTTP/1.1" 200 0
2023-06-30 12:46:44,944 [enfugue] DEBUG (manager.py:1145) Calling pipeline with arguments {'latent_callback': <function DiffusionPlan.execute_nodes.<locals>.node_image_callback at 0x7f960477d2d0>, 'width': 512, 'height': 512, 'chunking_size': 64, 'chunking_blur': 64, 'num_images_per_prompt': 1, 'progress_callback': <function DiffusionEngineProcess.create_progress_callback.<locals>.callback at 0x7f9615f53a30>, 'latent_callback_steps': 10, 'latent_callback_type': 'pil', 'prompt': 'Cat', 'negative_prompt': '', 'image': None, 'control_image': None, 'conditioning_scale': 1.0, 'strength': 0.8, 'num_inference_steps': 50, 'guidance_scale': 7.5}
2023-06-30 12:46:44,944 [enfugue] DEBUG (manager.py:711) Inferencing on CPU, using BFloat
2023-06-30 12:46:45,211 [enfugue] DEBUG (manager.py:970) Initializing pipeline from checkpoint at /home/lennart/.cache/enfugue/checkpoint/v1-5-pruned.ckpt. Arguments are {'cache_dir': '/home/lennart/.cache/enfugue/cache', 'engine_size': 512, 'chunking_size': 64, 'requires_safety_checker': False, 'controlnet': None, 'torch_dtype': torch.float32, 'load_safety_checker': False}
2023-06-30 12:47:01,341 [torch.distributed.nn.jit.instantiator] INFO (instantiator.py:21) Created a temporary directory at /tmp/tmpy4z3612b
2023-06-30 12:47:01,343 [torch.distributed.nn.jit.instantiator] INFO (instantiator.py:76) Writing /tmp/tmpy4z3612b/_remote_module_non_scriptable.py
2023-06-30 12:47:14,043 [urllib3.connectionpool] DEBUG (connectionpool.py:1003) Starting new HTTPS connection (1): huggingface.co:443
2023-06-30 12:47:14,095 [http.client] DEBUG (log.py:118) send: b'HEAD /openai/clip-vit-large-patch14/resolve/main/config.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.14.1; python/3.10.9; torch/1.13.1+cu117\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
2023-06-30 12:47:14,411 [http.client] DEBUG (log.py:118) reply: 'HTTP/1.1 200 OK\r\n'
2023-06-30 12:47:14,412 [http.client] DEBUG (log.py:118) header: Content-Type: text/plain; charset=utf-8
2023-06-30 12:47:14,412 [http.client] DEBUG (log.py:118) header: Content-Length: 4519
2023-06-30 12:47:14,412 [http.client] DEBUG (log.py:118) header: Connection: keep-alive
2023-06-30 12:47:14,413 [http.client] DEBUG (log.py:118) header: Date: Fri, 30 Jun 2023 10:47:14 GMT
2023-06-30 12:47:14,413 [http.client] DEBUG (log.py:118) header: X-Powered-By: huggingface-moon
2023-06-30 12:47:14,413 [http.client] DEBUG (log.py:118) header: X-Request-Id: Root=1-649eb2b2-674a2f6c559f85cc44a0c399
2023-06-30 12:47:14,413 [http.client] DEBUG (log.py:118) header: Access-Control-Allow-Origin: https://huggingface.co
2023-06-30 12:47:14,413 [http.client] DEBUG (log.py:118) header: Vary: Origin
2023-06-30 12:47:14,413 [http.client] DEBUG (log.py:118) header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range
2023-06-30 12:47:14,414 [http.client] DEBUG (log.py:118) header: X-Repo-Commit: 8d052a0f05efbaefbc9e8786ba291cfdf93e5bff
2023-06-30 12:47:14,414 [http.client] DEBUG (log.py:118) header: Accept-Ranges: bytes
2023-06-30 12:47:14,414 [http.client] DEBUG (log.py:118) header: Content-Security-Policy: default-src none; sandbox
2023-06-30 12:47:14,414 [http.client] DEBUG (log.py:118) header: ETag: "2c19f6666e0e163c7954df66cb901353fcad088e"
2023-06-30 12:47:14,414 [http.client] DEBUG (log.py:118) header: X-Cache: Miss from cloudfront
2023-06-30 12:47:14,414 [http.client] DEBUG (log.py:118) header: Via: 1.1 376388af58845ad0897ba599cce4d92e.cloudfront.net (CloudFront)
2023-06-30 12:47:14,415 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Pop: HAM50-C1
2023-06-30 12:47:14,415 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Id: Jjx2SGGnraFH1M6xA_pcw_D-vH8ekygTlTRogAFSsc2ymNmcAduLNQ==
2023-06-30 12:47:14,415 [urllib3.connectionpool] DEBUG (connectionpool.py:456) https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/config.json HTTP/1.1" 200 0
2023-06-30 12:47:14,423 [http.client] DEBUG (log.py:118) send: b'HEAD /openai/clip-vit-large-patch14/resolve/main/model.safetensors HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.14.1; python/3.10.9; torch/1.13.1+cu117\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
2023-06-30 12:47:14,747 [http.client] DEBUG (log.py:118) reply: 'HTTP/1.1 404 Not Found\r\n'
2023-06-30 12:47:14,748 [http.client] DEBUG (log.py:118) header: Content-Type: text/plain; charset=utf-8
2023-06-30 12:47:14,748 [http.client] DEBUG (log.py:118) header: Content-Length: 15
2023-06-30 12:47:14,748 [http.client] DEBUG (log.py:118) header: Connection: keep-alive
2023-06-30 12:47:14,748 [http.client] DEBUG (log.py:118) header: Date: Fri, 30 Jun 2023 10:47:14 GMT
2023-06-30 12:47:14,748 [http.client] DEBUG (log.py:118) header: X-Powered-By: huggingface-moon
2023-06-30 12:47:14,748 [http.client] DEBUG (log.py:118) header: X-Request-Id: Root=1-649eb2b2-6a0c791648a636810c0340b2
2023-06-30 12:47:14,749 [http.client] DEBUG (log.py:118) header: Access-Control-Allow-Origin: https://huggingface.co
2023-06-30 12:47:14,749 [http.client] DEBUG (log.py:118) header: Vary: Origin
2023-06-30 12:47:14,749 [http.client] DEBUG (log.py:118) header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range
2023-06-30 12:47:14,749 [http.client] DEBUG (log.py:118) header: X-Repo-Commit: 8d052a0f05efbaefbc9e8786ba291cfdf93e5bff
2023-06-30 12:47:14,749 [http.client] DEBUG (log.py:118) header: Accept-Ranges: bytes
2023-06-30 12:47:14,749 [http.client] DEBUG (log.py:118) header: X-Error-Code: EntryNotFound
2023-06-30 12:47:14,750 [http.client] DEBUG (log.py:118) header: X-Error-Message: Entry not found
2023-06-30 12:47:14,750 [http.client] DEBUG (log.py:118) header: ETag: W/"f-mY2VvLxuxB7KhsoOdQTlMTccuAQ"
2023-06-30 12:47:14,750 [http.client] DEBUG (log.py:118) header: X-Cache: Error from cloudfront
2023-06-30 12:47:14,750 [http.client] DEBUG (log.py:118) header: Via: 1.1 376388af58845ad0897ba599cce4d92e.cloudfront.net (CloudFront)
2023-06-30 12:47:14,750 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Pop: HAM50-C1
2023-06-30 12:47:14,750 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Id: EFN_qP9Z1N6gqlxzSjBDuBi4KpxAoGAGDbMa_qCBHn67Ut4wfNwaGA==
2023-06-30 12:47:14,751 [urllib3.connectionpool] DEBUG (connectionpool.py:456) https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/model.safetensors HTTP/1.1" 404 0
2023-06-30 12:47:14,757 [http.client] DEBUG (log.py:118) send: b'HEAD /openai/clip-vit-large-patch14/resolve/main/model.safetensors.index.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.14.1; python/3.10.9; torch/1.13.1+cu117\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
2023-06-30 12:47:15,078 [http.client] DEBUG (log.py:118) reply: 'HTTP/1.1 404 Not Found\r\n'
2023-06-30 12:47:15,078 [http.client] DEBUG (log.py:118) header: Content-Type: text/plain; charset=utf-8
2023-06-30 12:47:15,079 [http.client] DEBUG (log.py:118) header: Content-Length: 15
2023-06-30 12:47:15,079 [http.client] DEBUG (log.py:118) header: Connection: keep-alive
2023-06-30 12:47:15,079 [http.client] DEBUG (log.py:118) header: Date: Fri, 30 Jun 2023 10:47:15 GMT
2023-06-30 12:47:15,079 [http.client] DEBUG (log.py:118) header: X-Powered-By: huggingface-moon
2023-06-30 12:47:15,079 [http.client] DEBUG (log.py:118) header: X-Request-Id: Root=1-649eb2b3-6ac2ed6609ea91d41dd708d1
2023-06-30 12:47:15,079 [http.client] DEBUG (log.py:118) header: Access-Control-Allow-Origin: https://huggingface.co
2023-06-30 12:47:15,080 [http.client] DEBUG (log.py:118) header: Vary: Origin
2023-06-30 12:47:15,080 [http.client] DEBUG (log.py:118) header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range
2023-06-30 12:47:15,080 [http.client] DEBUG (log.py:118) header: X-Repo-Commit: 8d052a0f05efbaefbc9e8786ba291cfdf93e5bff
2023-06-30 12:47:15,080 [http.client] DEBUG (log.py:118) header: Accept-Ranges: bytes
2023-06-30 12:47:15,080 [http.client] DEBUG (log.py:118) header: X-Error-Code: EntryNotFound
2023-06-30 12:47:15,080 [http.client] DEBUG (log.py:118) header: X-Error-Message: Entry not found
2023-06-30 12:47:15,080 [http.client] DEBUG (log.py:118) header: ETag: W/"f-mY2VvLxuxB7KhsoOdQTlMTccuAQ"
2023-06-30 12:47:15,081 [http.client] DEBUG (log.py:118) header: X-Cache: Error from cloudfront
2023-06-30 12:47:15,081 [http.client] DEBUG (log.py:118) header: Via: 1.1 376388af58845ad0897ba599cce4d92e.cloudfront.net (CloudFront)
2023-06-30 12:47:15,081 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Pop: HAM50-C1
2023-06-30 12:47:15,081 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Id: XIGtN45F1Eb-AVrqr8YKnZ0c-K-ZVVe8VtQgeXrDOEWT_Ns3Sa6zoQ==
2023-06-30 12:47:15,081 [urllib3.connectionpool] DEBUG (connectionpool.py:456) https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/model.safetensors.index.json HTTP/1.1" 404 0
2023-06-30 12:47:15,084 [http.client] DEBUG (log.py:118) send: b'HEAD /openai/clip-vit-large-patch14/resolve/main/pytorch_model.bin HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.14.1; python/3.10.9; torch/1.13.1+cu117\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
2023-06-30 12:47:15,423 [http.client] DEBUG (log.py:118) reply: 'HTTP/1.1 302 Found\r\n'
2023-06-30 12:47:15,424 [http.client] DEBUG (log.py:118) header: Content-Type: text/plain; charset=utf-8
2023-06-30 12:47:15,424 [http.client] DEBUG (log.py:118) header: Content-Length: 1103
2023-06-30 12:47:15,424 [http.client] DEBUG (log.py:118) header: Connection: keep-alive
2023-06-30 12:47:15,425 [http.client] DEBUG (log.py:118) header: Date: Fri, 30 Jun 2023 10:47:15 GMT
2023-06-30 12:47:15,425 [http.client] DEBUG (log.py:118) header: X-Powered-By: huggingface-moon
2023-06-30 12:47:15,425 [http.client] DEBUG (log.py:118) header: X-Request-Id: Root=1-649eb2b3-68fcdb9846f2b3bb020e5c7d
2023-06-30 12:47:15,425 [http.client] DEBUG (log.py:118) header: Access-Control-Allow-Origin: https://huggingface.co
2023-06-30 12:47:15,425 [http.client] DEBUG (log.py:118) header: Vary: Origin, Accept
2023-06-30 12:47:15,426 [http.client] DEBUG (log.py:118) header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range
2023-06-30 12:47:15,426 [http.client] DEBUG (log.py:118) header: X-Repo-Commit: 8d052a0f05efbaefbc9e8786ba291cfdf93e5bff
2023-06-30 12:47:15,426 [http.client] DEBUG (log.py:118) header: Accept-Ranges: bytes
2023-06-30 12:47:15,426 [http.client] DEBUG (log.py:118) header: X-Linked-Size: 1710671599
2023-06-30 12:47:15,426 [http.client] DEBUG (log.py:118) header: X-Linked-ETag: "f1a17cdbe0f36fec524f5cafb1c261ea3bbbc13e346e0f74fc9eb0460dedd0d3"
2023-06-30 12:47:15,427 [http.client] DEBUG (log.py:118) header: Location: https://cdn-lfs.huggingface.co/openai/clip-vit-large-patch14/f1a17cdbe0f36fec524f5cafb1c261ea3bbbc13e346e0f74fc9eb0460dedd0d3?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27pytorch_model.bin%3B+filename%3D%22pytorch_model.bin%22%3B&response-content-type=application%2Foctet-stream&Expires=1688380725&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL29wZW5haS9jbGlwLXZpdC1sYXJnZS1wYXRjaDE0L2YxYTE3Y2RiZTBmMzZmZWM1MjRmNWNhZmIxYzI2MWVhM2JiYmMxM2UzNDZlMGY3NGZjOWViMDQ2MGRlZGQwZDM%7EcmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qJnJlc3BvbnNlLWNvbnRlbnQtdHlwZT0qIiwiQ29uZGl0aW9uIjp7IkRhdGVMZXNzVGhhbiI6eyJBV1M6RXBvY2hUaW1lIjoxNjg4MzgwNzI1fX19XX0_&Signature=fQno3FVgyh7TBdorASWaHxHtW9wUCtxJIXVmtUqU%7E0Nlg1iNIPW9yfYWLj72m8hFgsxKSx9dO5xYCZd2gm9CeHbfYZVHh%7EpiSTaEu%7EJ2Vi55y47q86Vk5Nw4VF08q7lRZrixrKfht5o%7Eo14njjfYWBUMoExE482kW36fnoyCM%7E2-yu18kQg9injli8DWi8Svlo5jWCIofrwVDrzuKeBHDkFaWR1mshP6seFm2le%7Ezb-aNKBaijnanEglAsc6kzuLZDjAKD7tpSS6y5itM5PLw11lJTIbZMPuwWh3SMGX4SlvDLPJql0LuSaDY2B97Wo3Etihqdn1fEr8ATOUfNCkZA__&Key-Pair-Id=KVTP0A1DKRTAX
2023-06-30 12:47:15,427 [http.client] DEBUG (log.py:118) header: X-Cache: Miss from cloudfront
2023-06-30 12:47:15,427 [http.client] DEBUG (log.py:118) header: Via: 1.1 376388af58845ad0897ba599cce4d92e.cloudfront.net (CloudFront)
2023-06-30 12:47:15,427 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Pop: HAM50-C1
2023-06-30 12:47:15,427 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Id: w4ZGwf6qugQqqYvyuCkdRdP1cUVv9JdMZdBDiQNKy7dEZo0RuvKuCQ==
2023-06-30 12:47:15,427 [urllib3.connectionpool] DEBUG (connectionpool.py:456) https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/pytorch_model.bin HTTP/1.1" 302 0
2023-06-30 12:47:22,091 [http.client] DEBUG (log.py:118) send: b'HEAD /openai/clip-vit-large-patch14/resolve/main/vocab.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.14.1; python/3.10.9; torch/1.13.1+cu117\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
2023-06-30 12:47:22,206 [http.client] DEBUG (log.py:118) reply: 'HTTP/1.1 200 OK\r\n'
2023-06-30 12:47:22,207 [http.client] DEBUG (log.py:118) header: Content-Type: text/plain; charset=utf-8
2023-06-30 12:47:22,207 [http.client] DEBUG (log.py:118) header: Content-Length: 961143
2023-06-30 12:47:22,207 [http.client] DEBUG (log.py:118) header: Connection: keep-alive
2023-06-30 12:47:22,207 [http.client] DEBUG (log.py:118) header: Date: Fri, 30 Jun 2023 10:47:22 GMT
2023-06-30 12:47:22,207 [http.client] DEBUG (log.py:118) header: X-Powered-By: huggingface-moon
2023-06-30 12:47:22,207 [http.client] DEBUG (log.py:118) header: X-Request-Id: Root=1-649eb2ba-73b0294c693514851dd9b4e0
2023-06-30 12:47:22,208 [http.client] DEBUG (log.py:118) header: Access-Control-Allow-Origin: https://huggingface.co
2023-06-30 12:47:22,208 [http.client] DEBUG (log.py:118) header: Vary: Origin
2023-06-30 12:47:22,208 [http.client] DEBUG (log.py:118) header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range
2023-06-30 12:47:22,208 [http.client] DEBUG (log.py:118) header: X-Repo-Commit: 8d052a0f05efbaefbc9e8786ba291cfdf93e5bff
2023-06-30 12:47:22,208 [http.client] DEBUG (log.py:118) header: Accept-Ranges: bytes
2023-06-30 12:47:22,208 [http.client] DEBUG (log.py:118) header: Content-Security-Policy: default-src none; sandbox
2023-06-30 12:47:22,208 [http.client] DEBUG (log.py:118) header: ETag: "4297ea6a8d2bae1fea8f48b45e257814dcb11f69"
2023-06-30 12:47:22,209 [http.client] DEBUG (log.py:118) header: X-Cache: Miss from cloudfront
2023-06-30 12:47:22,209 [http.client] DEBUG (log.py:118) header: Via: 1.1 376388af58845ad0897ba599cce4d92e.cloudfront.net (CloudFront)
2023-06-30 12:47:22,209 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Pop: HAM50-C1
2023-06-30 12:47:22,209 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Id: zKFNmuRSrQ1Itme9PKxsfoQlU32MAcqZx8JuUAS4tXuhs27mjLs07w==
2023-06-30 12:47:22,209 [urllib3.connectionpool] DEBUG (connectionpool.py:456) https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/vocab.json HTTP/1.1" 200 0
2023-06-30 12:47:44,235 [enfugue] DEBUG (pipeline.py:505) Creating random latents of shape (1, 4, 64, 64) and type torch.float32
2023-06-30 12:47:44,236 [enfugue] DEBUG (pipeline.py:917) Denoising image in 50 steps (unchunked)
2023-06-30 12:47:44,420 [enfugue] DEBUG (process.py:368) stdout: In this conversion only the non-EMA weights are extracted. If you want to instead extract the EMA weights (usually better for inference), please make sure to add the `--extract_ema` flag.

2023-06-30 12:47:44,420 [enfugue] ERROR (process.py:370) stderr: torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: 
torch/_jit_internal.py:839: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x7f96048f1bd0>.
  warnings.warn(
torch/_jit_internal.py:839: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x7f96048f3e20>.
  warnings.warn(
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPTextModel: ['vision_model.encoder.layers.12.mlp.fc2.weight', 'vision_model.encoder.layers.17.layer_norm2.weight', 'vision_model.encoder.layers.14.self_attn.out_proj.bias', 'vision_model.encoder.layers.12.self_attn.out_proj.weight', 'vision_model.encoder.layers.7.mlp.fc2.weight', 'vision_model.encoder.layers.2.layer_norm1.weight', 'vision_model.encoder.layers.0.self_attn.out_proj.weight', 'vision_model.encoder.layers.6.self_attn.q_proj.bias', 'vision_model.encoder.layers.2.layer_norm1.bias', 'vision_model.encoder.layers.9.self_attn.q_proj.weight', 'vision_model.encoder.layers.14.self_attn.v_proj.weight', 'vision_model.encoder.layers.23.self_attn.q_proj.weight', 'vision_model.encoder.layers.0.layer_norm1.bias', 'vision_model.encoder.layers.20.self_attn.out_proj.weight', 'vision_model.encoder.layers.10.layer_norm2.weight', 'vision_model.encoder.layers.12.self_attn.q_proj.bias', 'vision_model.encoder.layers.12.self_attn.out_proj.bias', 'vision_model.encoder.layers.23.mlp.fc2.bias', 'vision_model.encoder.layers.21.self_attn.k_proj.weight', 'vision_model.encoder.layers.9.self_attn.q_proj.bias', 'vision_model.encoder.layers.17.self_attn.out_proj.bias', 'vision_model.encoder.layers.13.self_attn.q_proj.bias', 'vision_model.encoder.layers.16.mlp.fc1.bias', 'vision_model.encoder.layers.9.self_attn.out_proj.weight', 'vision_model.encoder.layers.11.self_attn.k_proj.bias', 'vision_model.encoder.layers.7.self_attn.out_proj.bias', 'vision_model.encoder.layers.20.mlp.fc1.weight', 'vision_model.encoder.layers.15.self_attn.q_proj.bias', 'vision_model.encoder.layers.15.layer_norm2.weight', 'vision_model.encoder.layers.21.mlp.fc1.bias', 'vision_model.encoder.layers.15.self_attn.out_proj.bias', 'vision_model.encoder.layers.18.self_attn.q_proj.weight', 'vision_model.encoder.layers.0.self_attn.out_proj.bias', 'vision_model.encoder.layers.5.self_attn.k_proj.weight', 'vision_model.encoder.layers.1.self_attn.k_proj.bias', 'vision_model.encoder.layers.3.self_attn.out_proj.bias', 'vision_model.encoder.layers.21.layer_norm2.bias', 'vision_model.encoder.layers.1.layer_norm2.bias', 'vision_model.encoder.layers.16.mlp.fc1.weight', 'vision_model.encoder.layers.1.self_attn.out_proj.weight', 'vision_model.encoder.layers.2.mlp.fc1.bias', 'visual_projection.weight', 'vision_model.encoder.layers.4.self_attn.out_proj.weight', 'vision_model.encoder.layers.16.layer_norm1.bias', 'vision_model.embeddings.position_embedding.weight', 'vision_model.encoder.layers.12.layer_norm1.bias', 'vision_model.encoder.layers.13.self_attn.k_proj.weight', 'vision_model.encoder.layers.5.mlp.fc1.weight', 'vision_model.encoder.layers.13.mlp.fc2.weight', 'vision_model.encoder.layers.14.self_attn.q_proj.bias', 'vision_model.encoder.layers.15.self_attn.v_proj.bias', 'vision_model.encoder.layers.8.layer_norm1.bias', 'vision_model.encoder.layers.7.mlp.fc1.weight', 'vision_model.encoder.layers.15.layer_norm2.bias', 'vision_model.encoder.layers.6.self_attn.q_proj.weight', 'vision_model.encoder.layers.11.mlp.fc1.bias', 'vision_model.encoder.layers.0.self_attn.q_proj.bias', 'vision_model.pre_layrnorm.weight', 'vision_model.encoder.layers.11.layer_norm2.weight', 'vision_model.encoder.layers.0.self_attn.v_proj.weight', 'vision_model.encoder.layers.6.self_attn.k_proj.weight', 'vision_model.encoder.layers.21.layer_norm2.weight', 'vision_model.encoder.layers.4.self_attn.out_proj.bias', 'vision_model.encoder.layers.5.mlp.fc2.bias', 
'vision_model.encoder.layers.8.self_attn.v_proj.bias', 'vision_model.encoder.layers.21.self_attn.q_proj.bias', 'vision_model.encoder.layers.7.self_attn.v_proj.weight', 'vision_model.encoder.layers.13.self_attn.out_proj.bias', 'vision_model.encoder.layers.21.self_attn.v_proj.weight', 'vision_model.encoder.layers.8.mlp.fc2.weight', 'vision_model.encoder.layers.14.layer_norm2.weight', 'vision_model.encoder.layers.0.layer_norm2.bias', 'vision_model.encoder.layers.23.layer_norm2.weight', 'vision_model.encoder.layers.8.self_attn.v_proj.weight', 'vision_model.encoder.layers.22.self_attn.q_proj.bias', 'vision_model.encoder.layers.18.self_attn.k_proj.bias', 'vision_model.encoder.layers.3.self_attn.q_proj.bias', 'vision_model.encoder.layers.13.layer_norm2.bias', 'vision_model.encoder.layers.11.self_attn.out_proj.weight', 'vision_model.encoder.layers.3.mlp.fc1.bias', 'vision_model.encoder.layers.1.layer_norm1.weight', 'vision_model.encoder.layers.11.self_attn.v_proj.bias', 'vision_model.encoder.layers.19.self_attn.out_proj.weight', 'vision_model.encoder.layers.9.layer_norm1.weight', 'vision_model.encoder.layers.19.mlp.fc2.weight', 'vision_model.encoder.layers.2.self_attn.out_proj.bias', 'vision_model.encoder.layers.21.layer_norm1.bias', 'vision_model.encoder.layers.15.self_attn.v_proj.weight', 'vision_model.encoder.layers.12.layer_norm2.bias', 'vision_model.encoder.layers.5.layer_norm1.bias', 'vision_model.post_layernorm.bias', 'vision_model.encoder.layers.11.mlp.fc2.weight', 'vision_model.encoder.layers.15.self_attn.out_proj.weight', 'vision_model.encoder.layers.14.mlp.fc1.bias', 'vision_model.encoder.layers.3.mlp.fc2.bias', 'vision_model.encoder.layers.1.self_attn.v_proj.weight', 'vision_model.encoder.layers.4.layer_norm1.weight', 'vision_model.encoder.layers.16.self_attn.q_proj.weight', 'vision_model.encoder.layers.10.self_attn.q_proj.bias', 'vision_model.encoder.layers.10.self_attn.k_proj.weight', 'vision_model.encoder.layers.21.mlp.fc2.bias', 'vision_model.encoder.layers.18.layer_norm1.weight', 'vision_model.embeddings.patch_embedding.weight', 'vision_model.encoder.layers.18.self_attn.v_proj.weight', 'vision_model.encoder.layers.8.self_attn.q_proj.weight', 'vision_model.encoder.layers.3.self_attn.k_proj.weight', 'vision_model.encoder.layers.17.mlp.fc1.weight', 'vision_model.encoder.layers.19.mlp.fc1.bias', 'vision_model.encoder.layers.3.self_attn.v_proj.bias', 'vision_model.encoder.layers.0.mlp.fc1.weight', 'vision_model.encoder.layers.19.self_attn.q_proj.bias', 'vision_model.encoder.layers.10.mlp.fc1.bias', 'vision_model.encoder.layers.21.self_attn.out_proj.weight', 'vision_model.encoder.layers.2.self_attn.k_proj.bias', 'vision_model.encoder.layers.23.self_attn.k_proj.weight', 'vision_model.encoder.layers.19.mlp.fc1.weight', 'vision_model.encoder.layers.20.self_attn.out_proj.bias', 'vision_model.encoder.layers.9.mlp.fc2.bias', 'vision_model.encoder.layers.20.mlp.fc1.bias', 'vision_model.encoder.layers.7.self_attn.q_proj.weight', 'vision_model.encoder.layers.10.self_attn.out_proj.weight', 'vision_model.encoder.layers.18.self_attn.q_proj.bias', 'vision_model.encoder.layers.7.mlp.fc1.bias', 'vision_model.encoder.layers.2.self_attn.q_proj.bias', 'vision_model.encoder.layers.3.self_attn.k_proj.bias', 'vision_model.encoder.layers.8.layer_norm1.weight', 'vision_model.encoder.layers.1.layer_norm1.bias', 'vision_model.encoder.layers.9.layer_norm2.weight', 'vision_model.encoder.layers.18.layer_norm1.bias', 'vision_model.encoder.layers.3.self_attn.v_proj.weight', 
'vision_model.encoder.layers.1.self_attn.out_proj.bias', 'vision_model.encoder.layers.23.self_attn.v_proj.bias', 'vision_model.encoder.layers.2.mlp.fc2.bias', 'vision_model.encoder.layers.4.mlp.fc1.bias', 'vision_model.encoder.layers.6.mlp.fc1.bias', 'vision_model.encoder.layers.17.mlp.fc1.bias', 'vision_model.encoder.layers.13.self_attn.q_proj.weight', 'vision_model.encoder.layers.23.self_attn.out_proj.bias', 'vision_model.encoder.layers.23.self_attn.q_proj.bias', 'vision_model.encoder.layers.14.mlp.fc2.weight', 'vision_model.encoder.layers.19.layer_norm1.weight', 'vision_model.encoder.layers.11.self_attn.q_proj.weight', 'vision_model.encoder.layers.15.layer_norm1.weight', 'vision_model.encoder.layers.18.layer_norm2.weight', 'vision_model.encoder.layers.10.self_attn.v_proj.weight', 'vision_model.encoder.layers.20.layer_norm2.weight', 'vision_model.encoder.layers.11.self_attn.k_proj.weight', 'vision_model.encoder.layers.12.self_attn.q_proj.weight', 'vision_model.encoder.layers.5.self_attn.out_proj.weight', 'vision_model.embeddings.position_ids', 'vision_model.encoder.layers.4.self_attn.q_proj.bias', 'vision_model.encoder.layers.5.self_attn.q_proj.weight', 'vision_model.encoder.layers.21.self_attn.out_proj.bias', 'vision_model.encoder.layers.4.self_attn.k_proj.weight', 'vision_model.encoder.layers.6.self_attn.k_proj.bias', 'vision_model.encoder.layers.6.layer_norm1.weight', 'vision_model.encoder.layers.7.self_attn.k_proj.weight', 'vision_model.encoder.layers.11.layer_norm1.weight', 'vision_model.encoder.layers.3.self_attn.q_proj.weight', 'text_projection.weight', 'vision_model.encoder.layers.22.self_attn.k_proj.bias', 'vision_model.encoder.layers.23.self_attn.v_proj.weight', 'vision_model.encoder.layers.6.self_attn.out_proj.weight', 'vision_model.encoder.layers.8.layer_norm2.bias', 'vision_model.encoder.layers.9.mlp.fc1.bias', 'vision_model.encoder.layers.5.mlp.fc1.bias', 'vision_model.encoder.layers.19.self_attn.v_proj.weight', 'vision_model.encoder.layers.23.mlp.fc1.bias', 'vision_model.encoder.layers.16.self_attn.out_proj.weight', 'vision_model.encoder.layers.7.mlp.fc2.bias', 'vision_model.encoder.layers.7.self_attn.out_proj.weight', 'vision_model.encoder.layers.13.mlp.fc2.bias', 'vision_model.encoder.layers.16.self_attn.v_proj.weight', 'vision_model.encoder.layers.8.self_attn.out_proj.weight', 'vision_model.encoder.layers.4.self_attn.v_proj.weight', 'vision_model.encoder.layers.15.mlp.fc2.weight', 'vision_model.encoder.layers.15.self_attn.q_proj.weight', 'vision_model.encoder.layers.15.mlp.fc2.bias', 'vision_model.encoder.layers.10.self_attn.out_proj.bias', 'vision_model.encoder.layers.5.self_attn.k_proj.bias', 'vision_model.encoder.layers.5.layer_norm2.bias', 'vision_model.encoder.layers.1.self_attn.k_proj.weight', 'vision_model.encoder.layers.12.self_attn.v_proj.weight', 'vision_model.encoder.layers.2.layer_norm2.bias', 'vision_model.encoder.layers.6.layer_norm2.bias', 'vision_model.encoder.layers.17.self_attn.k_proj.weight', 'vision_model.encoder.layers.9.mlp.fc1.weight', 'vision_model.encoder.layers.11.layer_norm1.bias', 'vision_model.encoder.layers.14.self_attn.k_proj.bias', 'vision_model.encoder.layers.7.layer_norm2.bias', 'vision_model.encoder.layers.12.mlp.fc1.weight', 'vision_model.encoder.layers.19.self_attn.k_proj.bias', 'vision_model.encoder.layers.15.self_attn.k_proj.bias', 'vision_model.encoder.layers.17.self_attn.k_proj.bias', 'vision_model.encoder.layers.1.mlp.fc2.weight', 'vision_model.encoder.layers.2.self_attn.v_proj.bias', 
'vision_model.encoder.layers.10.self_attn.v_proj.bias', 'vision_model.encoder.layers.8.mlp.fc1.bias', 'vision_model.encoder.layers.20.self_attn.v_proj.bias', 'vision_model.encoder.layers.13.self_attn.v_proj.bias', 'vision_model.encoder.layers.11.mlp.fc1.weight', 'vision_model.encoder.layers.4.self_attn.k_proj.bias', 'vision_model.encoder.layers.14.self_attn.out_proj.weight', 'vision_model.encoder.layers.11.self_attn.out_proj.bias', 'vision_model.encoder.layers.3.layer_norm2.bias', 'vision_model.encoder.layers.1.mlp.fc1.bias', 'vision_model.encoder.layers.22.mlp.fc1.weight', 'vision_model.encoder.layers.12.mlp.fc2.bias', 'vision_model.encoder.layers.11.layer_norm2.bias', 'vision_model.encoder.layers.2.mlp.fc1.weight', 'logit_scale', 'vision_model.encoder.layers.20.self_attn.q_proj.bias', 'vision_model.encoder.layers.22.mlp.fc2.bias', 'vision_model.encoder.layers.13.layer_norm1.bias', 'vision_model.encoder.layers.7.layer_norm1.weight', 'vision_model.encoder.layers.22.mlp.fc2.weight', 'vision_model.encoder.layers.18.mlp.fc1.bias', 'vision_model.encoder.layers.17.self_attn.v_proj.weight', 'vision_model.encoder.layers.1.self_attn.q_proj.weight', 'vision_model.encoder.layers.19.self_attn.k_proj.weight', 'vision_model.encoder.layers.3.layer_norm1.bias', 'vision_model.encoder.layers.10.self_attn.k_proj.bias', 'vision_model.encoder.layers.16.self_attn.v_proj.bias', 'vision_model.encoder.layers.23.layer_norm1.weight', 'vision_model.encoder.layers.3.self_attn.out_proj.weight', 'vision_model.encoder.layers.3.layer_norm1.weight', 'vision_model.encoder.layers.12.self_attn.v_proj.bias', 'vision_model.encoder.layers.17.self_attn.v_proj.bias', 'vision_model.encoder.layers.17.layer_norm2.bias', 'vision_model.encoder.layers.19.layer_norm1.bias', 'vision_model.encoder.layers.23.self_attn.k_proj.bias', 'vision_model.encoder.layers.18.layer_norm2.bias', 'vision_model.encoder.layers.19.mlp.fc2.bias', 'vision_model.encoder.layers.18.self_attn.out_proj.bias', 'vision_model.encoder.layers.17.layer_norm1.bias', 'vision_model.encoder.layers.7.self_attn.k_proj.bias', 'vision_model.encoder.layers.2.self_attn.k_proj.weight', 'vision_model.encoder.layers.5.layer_norm2.weight', 'vision_model.encoder.layers.2.layer_norm2.weight', 'vision_model.encoder.layers.1.self_attn.v_proj.bias', 'vision_model.encoder.layers.14.mlp.fc1.weight', 'vision_model.encoder.layers.19.self_attn.q_proj.weight', 'vision_model.encoder.layers.9.self_attn.k_proj.weight', 'vision_model.encoder.layers.10.self_attn.q_proj.weight', 'vision_model.encoder.layers.19.self_attn.v_proj.bias', 'vision_model.encoder.layers.3.layer_norm2.weight', 'vision_model.encoder.layers.10.layer_norm1.weight', 'vision_model.encoder.layers.13.layer_norm1.weight', 'vision_model.encoder.layers.10.mlp.fc2.bias', 'vision_model.encoder.layers.9.layer_norm1.bias', 'vision_model.encoder.layers.14.self_attn.k_proj.weight', 'vision_model.encoder.layers.9.self_attn.v_proj.weight', 'vision_model.encoder.layers.11.mlp.fc2.bias', 'vision_model.encoder.layers.4.mlp.fc2.bias', 'vision_model.encoder.layers.3.mlp.fc2.weight', 'vision_model.encoder.layers.22.self_attn.k_proj.weight', 'vision_model.encoder.layers.22.layer_norm2.weight', 'vision_model.embeddings.class_embedding', 'vision_model.encoder.layers.6.mlp.fc2.weight', 'vision_model.encoder.layers.13.mlp.fc1.weight', 'vision_model.encoder.layers.8.self_attn.q_proj.bias', 'vision_model.encoder.layers.12.self_attn.k_proj.weight', 'vision_model.encoder.layers.8.self_attn.out_proj.bias', 
'vision_model.encoder.layers.4.self_attn.v_proj.bias', 'vision_model.encoder.layers.3.mlp.fc1.weight', 'vision_model.encoder.layers.19.layer_norm2.bias', 'vision_model.encoder.layers.0.mlp.fc2.weight', 'vision_model.encoder.layers.0.self_attn.v_proj.bias', 'vision_model.encoder.layers.7.layer_norm2.weight', 'vision_model.encoder.layers.4.self_attn.q_proj.weight', 'vision_model.encoder.layers.22.self_attn.v_proj.bias', 'vision_model.encoder.layers.10.layer_norm2.bias', 'vision_model.encoder.layers.0.self_attn.k_proj.bias', 'vision_model.encoder.layers.10.layer_norm1.bias', 'vision_model.encoder.layers.22.self_attn.q_proj.weight', 'vision_model.encoder.layers.21.mlp.fc1.weight', 'vision_model.encoder.layers.16.self_attn.k_proj.weight', 'vision_model.encoder.layers.15.self_attn.k_proj.weight', 'vision_model.encoder.layers.9.self_attn.k_proj.bias', 'vision_model.encoder.layers.16.layer_norm2.bias', 'vision_model.encoder.layers.21.self_attn.q_proj.weight', 'vision_model.encoder.layers.1.self_attn.q_proj.bias', 'vision_model.encoder.layers.5.self_attn.v_proj.bias', 'vision_model.encoder.layers.11.self_attn.q_proj.bias', 'vision_model.encoder.layers.4.layer_norm2.weight', 'vision_model.encoder.layers.6.self_attn.v_proj.weight', 'vision_model.encoder.layers.12.mlp.fc1.bias', 'vision_model.encoder.layers.14.mlp.fc2.bias', 'vision_model.encoder.layers.12.layer_norm1.weight', 'vision_model.encoder.layers.15.mlp.fc1.bias', 'vision_model.encoder.layers.17.layer_norm1.weight', 'vision_model.post_layernorm.weight', 'vision_model.encoder.layers.17.self_attn.out_proj.weight', 'vision_model.encoder.layers.11.self_attn.v_proj.weight', 'vision_model.encoder.layers.15.mlp.fc1.weight', 'vision_model.encoder.layers.2.self_attn.v_proj.weight', 'vision_model.encoder.layers.1.layer_norm2.weight', 'vision_model.encoder.layers.14.layer_norm1.bias', 'vision_model.encoder.layers.16.self_attn.q_proj.bias', 'vision_model.encoder.layers.13.layer_norm2.weight', 'vision_model.encoder.layers.17.self_attn.q_proj.bias', 'vision_model.encoder.layers.14.self_attn.v_proj.bias', 'vision_model.encoder.layers.9.mlp.fc2.weight', 'vision_model.encoder.layers.18.mlp.fc2.bias', 'vision_model.encoder.layers.20.layer_norm1.weight', 'vision_model.encoder.layers.22.mlp.fc1.bias', 'vision_model.encoder.layers.18.self_attn.v_proj.bias', 'vision_model.encoder.layers.16.mlp.fc2.bias', 'vision_model.encoder.layers.16.self_attn.k_proj.bias', 'vision_model.encoder.layers.13.self_attn.v_proj.weight', 'vision_model.encoder.layers.6.mlp.fc1.weight', 'vision_model.encoder.layers.1.mlp.fc1.weight', 'vision_model.encoder.layers.22.self_attn.out_proj.weight', 'vision_model.encoder.layers.15.layer_norm1.bias', 'vision_model.encoder.layers.17.mlp.fc2.weight', 'vision_model.encoder.layers.13.self_attn.k_proj.bias', 'vision_model.encoder.layers.17.mlp.fc2.bias', 'vision_model.encoder.layers.19.self_attn.out_proj.bias', 'vision_model.encoder.layers.21.mlp.fc2.weight', 'vision_model.encoder.layers.6.self_attn.out_proj.bias', 'vision_model.encoder.layers.17.self_attn.q_proj.weight', 'vision_model.encoder.layers.18.self_attn.k_proj.weight', 'vision_model.encoder.layers.13.self_attn.out_proj.weight', 'vision_model.encoder.layers.5.self_attn.q_proj.bias', 'vision_model.encoder.layers.6.mlp.fc2.bias', 'vision_model.encoder.layers.20.self_attn.q_proj.weight', 'vision_model.encoder.layers.5.layer_norm1.weight', 'vision_model.encoder.layers.6.layer_norm1.bias', 'vision_model.encoder.layers.8.self_attn.k_proj.weight', 'vision_model.encoder.layers.0.mlp.fc2.bias', 
'vision_model.encoder.layers.23.mlp.fc2.weight', 'vision_model.encoder.layers.9.layer_norm2.bias', 'vision_model.encoder.layers.12.layer_norm2.weight', 'vision_model.encoder.layers.10.mlp.fc1.weight', 'vision_model.encoder.layers.20.mlp.fc2.weight', 'vision_model.encoder.layers.0.self_attn.q_proj.weight', 'vision_model.encoder.layers.0.layer_norm2.weight', 'vision_model.encoder.layers.2.self_attn.q_proj.weight', 'vision_model.encoder.layers.13.mlp.fc1.bias', 'vision_model.encoder.layers.6.layer_norm2.weight', 'vision_model.encoder.layers.22.layer_norm1.weight', 'vision_model.encoder.layers.4.layer_norm2.bias', 'vision_model.encoder.layers.22.self_attn.out_proj.bias', 'vision_model.encoder.layers.5.self_attn.v_proj.weight', 'vision_model.encoder.layers.2.mlp.fc2.weight', 'vision_model.encoder.layers.9.self_attn.out_proj.bias', 'vision_model.encoder.layers.7.self_attn.v_proj.bias', 'vision_model.encoder.layers.16.layer_norm1.weight', 'vision_model.encoder.layers.16.mlp.fc2.weight', 'vision_model.encoder.layers.23.layer_norm1.bias', 'vision_model.encoder.layers.20.mlp.fc2.bias', 'vision_model.encoder.layers.4.mlp.fc2.weight', 'vision_model.encoder.layers.16.layer_norm2.weight', 'vision_model.encoder.layers.18.self_attn.out_proj.weight', 'vision_model.encoder.layers.23.mlp.fc1.weight', 'vision_model.encoder.layers.5.self_attn.out_proj.bias', 'vision_model.encoder.layers.14.self_attn.q_proj.weight', 'vision_model.encoder.layers.0.layer_norm1.weight', 'vision_model.pre_layrnorm.bias', 'vision_model.encoder.layers.18.mlp.fc1.weight', 'vision_model.encoder.layers.22.self_attn.v_proj.weight', 'vision_model.encoder.layers.6.self_attn.v_proj.bias', 'vision_model.encoder.layers.10.mlp.fc2.weight', 'vision_model.encoder.layers.8.mlp.fc1.weight', 'vision_model.encoder.layers.20.layer_norm2.bias', 'vision_model.encoder.layers.14.layer_norm1.weight', 'vision_model.encoder.layers.20.self_attn.v_proj.weight', 'vision_model.encoder.layers.0.self_attn.k_proj.weight', 'vision_model.encoder.layers.22.layer_norm1.bias', 'vision_model.encoder.layers.0.mlp.fc1.bias', 'vision_model.encoder.layers.9.self_attn.v_proj.bias', 'vision_model.encoder.layers.21.self_attn.v_proj.bias', 'vision_model.encoder.layers.14.layer_norm2.bias', 'vision_model.encoder.layers.23.self_attn.out_proj.weight', 'vision_model.encoder.layers.8.layer_norm2.weight', 'vision_model.encoder.layers.5.mlp.fc2.weight', 'vision_model.encoder.layers.7.self_attn.q_proj.bias', 'vision_model.encoder.layers.20.self_attn.k_proj.weight', 'vision_model.encoder.layers.23.layer_norm2.bias', 'vision_model.encoder.layers.21.self_attn.k_proj.bias', 'vision_model.encoder.layers.7.layer_norm1.bias', 'vision_model.encoder.layers.12.self_attn.k_proj.bias', 'vision_model.encoder.layers.22.layer_norm2.bias', 'vision_model.encoder.layers.4.mlp.fc1.weight', 'vision_model.encoder.layers.16.self_attn.out_proj.bias', 'vision_model.encoder.layers.4.layer_norm1.bias', 'vision_model.encoder.layers.20.layer_norm1.bias', 'vision_model.encoder.layers.21.layer_norm1.weight', 'vision_model.encoder.layers.2.self_attn.out_proj.weight', 'vision_model.encoder.layers.8.mlp.fc2.bias', 'vision_model.encoder.layers.20.self_attn.k_proj.bias', 'vision_model.encoder.layers.8.self_attn.k_proj.bias', 'vision_model.encoder.layers.18.mlp.fc2.weight', 'vision_model.encoder.layers.1.mlp.fc2.bias', 'vision_model.encoder.layers.19.layer_norm2.weight']
- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

2023-06-30 12:47:59,439 [enfugue] INFO (process.py:292) Reached maximum idle time after 15.0 seconds, exiting engine process
^C

Add MacOS+M1 Support

  1. Add mps to torch devices
  2. Add GitHub Actions to build on MacOS
import torch

mps_device = torch.device("mps")

# Create a Tensor directly on the mps device
x = torch.ones(5, device=mps_device)
# Or
x = torch.ones(5, device="mps")
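
A minimal device-selection sketch for item 1, assuming a plain CPU fallback is acceptable when the MPS backend is not built into the torch distribution or not usable on the current machine:

import torch

def get_default_device() -> torch.device:
    # Prefer Apple's Metal Performance Shaders backend when it is both built
    # into this torch distribution and usable on the current machine.
    if torch.backends.mps.is_built() and torch.backends.mps.is_available():
        return torch.device("mps")
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = get_default_device()
x = torch.ones(5, device=device)  # tensor created directly on the chosen device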

Add SDXL Support

With SDXL 0.9 out, some slight modifications need to be made to add support.

  1. Add second text encoder
  2. Add second tokenizer
  3. Add ability to keep both base and refiner in memory
  4. Improve latent handoff between models (it is implemented but untested; see the sketch after this list)
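
For item 4, a rough sketch of the base-to-refiner latent handoff using the diffusers SDXL pipelines; the model IDs and the 0.8 split point are illustrative assumptions, not Enfugue's actual configuration:

import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Keeping both pipelines loaded corresponds to item 3; sharing the second
# text encoder between them relates to items 1 and 2.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a photograph of an astronaut riding a horse"

# Stop the base model's denoising early and hand the latents to the refiner.
latents = base(
    prompt=prompt,
    num_inference_steps=40,
    denoising_end=0.8,
    output_type="latent",
).images

image = refiner(
    prompt=prompt,
    num_inference_steps=40,
    denoising_start=0.8,
    image=latents,
).images[0]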

Feature Request: Runtime Merge Block Weights

Amazing app, and a great interface, really enjoying it!

I'm sure everyone has a few favourite extensions that they miss from other Stable Diffusion web UIs... for me it is block-merging models in real time, in memory:

https://github.com/ashen-sensored/sd-webui-runtime-block-merge

If you're not familiar with merge block weights, this is what it was built upon (and included screenshots):

https://github.com/bbc-mc/sdweb-merge-block-weighted-gui

Using the runtime version makes the workflow of using two models super fast: just move a slider, click generate, and you get instant results, with no need to save and load a model each iteration. The most popular photorealism models seem to be converging, losing the incredible diversity of other models. I just wanted to highlight this extension and ask whether you think it's compatible with your vision!
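
For context, a minimal sketch of what runtime block-weighted merging does under the hood: the two checkpoints' weights are interpolated in memory with a per-block ratio, so no merged file ever needs to be saved. The key prefixes here are assumptions about typical Stable Diffusion checkpoints, and real block matching is more granular:

import torch

def merge_block_weighted(state_dict_a, state_dict_b, block_ratios, default_ratio=0.5):
    """Linearly interpolate two model state dicts in memory.

    block_ratios maps a key prefix (e.g. "model.diffusion_model.input_blocks.0.")
    to the weight given to model B for tensors in that block.
    """
    merged = {}
    for key, tensor_a in state_dict_a.items():
        tensor_b = state_dict_b.get(key)
        if tensor_b is None or tensor_a.shape != tensor_b.shape:
            merged[key] = tensor_a  # keep A's weights when B has no matching tensor
            continue
        ratio = default_ratio
        for prefix, block_ratio in block_ratios.items():
            if key.startswith(prefix):
                ratio = block_ratio
                break
        merged[key] = (1.0 - ratio) * tensor_a + ratio * tensor_b
    return merged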

Fix pip dependency issues

When trying to install with pip ive run into the following issues:

Collecting enfugue[tensorrt]
  Using cached enfugue-0.1.0.tar.gz (1.1 MB)
  Preparing metadata (setup.py) ... done
ERROR: Cannot install enfugue[tensorrt]==0.1.0 and enfugue[tensorrt]==0.1.1 because these package versions have conflicting dependencies.

The conflict is caused by:
    enfugue[tensorrt] 0.1.1 depends on polygraphy<0.48 and >=0.47
    enfugue[tensorrt] 0.1.0 depends on polygraphy<0.48 and >=0.47

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

Which can be resolved as mentioned in this reddit comment
(pip install enfugue --extra-index-url https://pypi.ngc.nvidia.com)

Further on I've encountered another dependency issue:

ERROR: Cannot install enfugue==0.1.0 and enfugue==0.1.1 because these package versions have conflicting dependencies.

The conflict is caused by:
    enfugue 0.1.1 depends on latent-diffusion
    enfugue 0.1.0 depends on latent-diffusion

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

Network Issues (Hosted on remote server)

(Apologies if this is a duplicate, GitHub is acting up)

I really want to like this, but whatever overly complicated setup you put in place (the whole 'app.enfugue.ai' or 'my.enfugue.ai') is preventing this from working. I'd love to see you drop all of that and simply tell people the IP and port it's listening on.

In my environment, I have this running on a dedicated Linux server (not localhost), so nothing you've set up will ever work. I can reach the server on port 45554, but it just gives me a static page...

Enfugue

Enfugue developed by [Benjamin Paine](mailto:[email protected]) and licensed under the [GNU Affero General Public License (AGPL) v3.0](https://www.gnu.org/licenses/agpl-3.0.html).

Based on [High-Resolution Image Synthesis with Latent Diffusion Models (A.K.A. LDM & Stable Diffusion)](https://ommer-lab.com/research/latent-diffusion-models/) by the Computer Vision & Learning Group, [Stability AI](https://stability.ai/) and [Runway](http://runwayml.com/), licensed under The [BigScience (Creative ML) OpenRAIL-M](https://bigscience.huggingface.co/blog/bigscience-openrail-m) License.

I'm not sure what you're doing or what the actual URL I need to access is, and I can't find anything in your README. Any tips? I'd really love to check this out!

`File > Save` Does Not Work

At some point the blob saving seems to have broken. This does look a little odd, so it's also possible that a Chrome update broke it.

(screenshot)

Add Invert to Adjustments

Allow inverting colors in the adjustments panel. This can create some really cool alien effects, especially when re-diffusing the sample.
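
A one-line sketch of the requested adjustment using Pillow, assuming the canvas sample is available as a PIL image:

from PIL import Image, ImageOps

def invert_colors(image: Image.Image) -> Image.Image:
    # ImageOps.invert does not accept images with an alpha channel, so convert first.
    return ImageOps.invert(image.convert("RGB"))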

Default username/password does not work on fresh install in Windows

Installed and successfully ran the server according to directions. Did not download any models or attempt any generations yet (waiting on the ability to point Enfugue to my already-downloaded models). Browsed CivitAI from Enfugue (still did not download anything). Then checked the box under System > "Use Authentication".
Attempted to use the default "enfugue" / "password" as listed in the directions, but that failed. Had to reset the password using the text file in the cache to make it work.

Add Contextual Menu Item for Nodes on Canvas

As an additional way to provide input for nodes, add a menu item when a node is focused (clicked on), and remove it when a different node is focused.

  • The menu should have all the same buttons that the node has - show/hide options, mirror, rotate, etc.
  • The menu header should be obviously named so it's understood that it's contextual; use the node name as given or input.
  • Add an additional menu item that allows the user to rename the node. This can already be done directly in the header.

Permit Selecting Checkpoints Directly from Model Picker

The concept of preconfigured models is a new one for many existing Stable Diffusion users.

While they are necessary when it comes to organizing TensorRT engines, they're only a nice decoration for users who just want to use Enfugue like they would other SD Web UIs. Therefore, a compromise should be made that allows Enfugue to behave the way users expect, while still allowing a user to eventually migrate to using preconfigured models if they desire.

The idea is this:

  • When a user types in the model they're looking for in the model picker, Enfugue will search both its preconfigured model database and the configured checkpoint directory directly, with both kinds of options displayed to the user (see the sketch after this list).
    • The options will be directly notated and visibly distinct from one another so the user knows which kind they are selecting.
  • Once the user makes a selection, one of two things happens:
    • If the user selected a pre-configured model, pass an API request to determine the status of TensorRT support and engines. If supported, display the icon showing current engine status.
    • If the user selected a checkpoint, display additional inputs beneath the model picker for LoRA and Textual Inversions.
      • These inputs should be collapsible so as not to take up too much screen space.
      • When a user changes checkpoints again (or changes to a pre-configured model), these can be hidden but should not be changed, so a user can swap between checkpoints quickly while maintaining the same LoRA/TI.
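
A rough backend sketch of the combined search described above; the model_database interface and the result shape are hypothetical, not Enfugue's actual API:

import os

def search_models(query: str, model_database, checkpoint_directory: str):
    """Return preconfigured models and raw checkpoints matching the query,
    tagged so the UI can render the two kinds distinctly."""
    results = []
    for model in model_database.search(name_like=query):  # hypothetical DB call
        results.append({"type": "preconfigured", "name": model.name})
    for filename in os.listdir(checkpoint_directory):
        if filename.lower().endswith((".ckpt", ".safetensors")) and query.lower() in filename.lower():
            results.append({"type": "checkpoint", "name": filename})
    return results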

Potential security issue with your page

Hello! I haven't found your email or social network page, so I'll write here

Your project on PyPI named enfugue probably reveals a security problem:
enfugue/util/signature.py has a link to your private keys and certificates; they are encoded here: enfugue/util/security.py

(screenshot)

This developer-tools-like code is not so easy to deobfuscate, but we now have one private key and two certs.

I did not check what the private key relates to (your SSH, your GitHub, or something else).
The certs are:

Serial Number:
            36:b7:59:6f:e1:72:21:c6:9c:aa:d0:0c:5b:be:a5:bb
Signature Algorithm: sha256WithRSAEncryption
Issuer: C = US, ST = Texas, L = Houston, O = SSL Corporation, CN = SSL.com RSA SSL subCA
Validity
    Not Before: Jun 27 20:04:02 2023 GMT
    Not After : Jul 27 20:04:02 2024 GMT
Subject: C = US, ST = Texas, L = Houston, O = SSL Corporation, CN = SSL.com RSA SSL subCA
Serial Number: 691281723435976700 (0x997ed109d1f07fc)
Issuer: C = US, ST = Texas, L = Houston, O = SSL Corporation, CN = SSL.com Root Certification Authority RSA
Validity
            Not Before: Feb 12 18:48:52 2016 GMT
            Not After : Feb 12 18:48:52 2031 GMT
Subject: CN = app.enfugue.ai

You took some extra steps to encode it, so I suppose this private key and the certs are valuable and should not be obtainable by clients. If that is right, I suggest you take steps to change them and avoid leaking them to the client side.

The invocation was terminated prematurely

I'm testing Enfugue 0.1.2 on my laptop.
When I click ENFUGUE, the initialization never ends.
I found that some requests to the server are happening repeatedly, and this error appears in the Network tab of the browser console in the response from /api/invocation:

{
  "meta": { "params": {} },
  "data": [
    {
      "status": "error",
      "uuid": "850ea9228afa4598a7ab6b9f82c5549c",
      "message": "The invocation was terminated prematurely"
    },
    {
      "id": 3,
      "uuid": "33cf948fd65746dc86666b9d8c739e65",
      "status": "processing",
      "progress": null,
      "step": null,
      "duration": 53.501603,
      "total": null,
      "images": null,
      "rate": null
    }
  ]
}

This is my GPU info:

{
  "gpu": {
    "driver": "511.69",
    "name": "NVIDIA GeForce MX330",
    "load": 0.03,
    "temp": 51,
    "memory": {
      "free": 1983,
      "total": 2048,
      "used": 0,
      "util": 0
    }
  }
}

Add Current Task to Invocation Status

Right now, there is little transparency regarding what task is currently being performed. You would need to notice when the total number of steps changes or the current step reverts to zero in order to understand when another multi-step task is being performed.

Add another response from the invocation API regarding the current step being executed. Add a callback to send the step name back along the intermediates pipeline, as formed during plan assembly.
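
A rough sketch of the idea; the invocation fields, the task field name, and the report callback are hypothetical, not Enfugue's actual API:

def make_status_payload(invocation):
    # The new "task" field rides alongside the existing step/progress fields.
    return {
        "id": invocation.id,
        "status": invocation.status,
        "step": invocation.current_step,
        "total": invocation.total_steps,
        "task": invocation.current_task,  # e.g. "inference", "upscaling", "inpainting"
    }

def run_plan(plan, report):
    # Each node in the assembled plan reports its human-readable name through
    # the same callback used for intermediate images before it executes.
    for node in plan.nodes:
        report(task=node.name)
        node.execute(on_intermediate=lambda image: report(image=image))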

Improvement to logs

I saw you are opening issues and assigning them to releases. An improvement to the logs would be very beneficial for helping you troubleshoot other users' issues, and competent people might also try to solve problems themselves.

Currently the logs are filled with repeating messages, and errors seem to be caught and logged as INFO.

Can you filter out the "non-important" repeating messages and mark the errors as ERROR?

(screenshot)
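
For reference, one way this could look with Python's standard logging module (a sketch only, assuming Enfugue's logger is the stdlib one suggested by the enfugue.log excerpts): a filter that drops consecutive duplicate messages, plus reporting exceptions at ERROR instead of INFO.

import logging

class DeduplicateFilter(logging.Filter):
    """Drop log records whose message is identical to the previous one."""
    def __init__(self):
        super().__init__()
        self.last_message = None

    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        if message == self.last_message:
            return False
        self.last_message = message
        return True

handler = logging.StreamHandler()
handler.addFilter(DeduplicateFilter())
handler.setLevel(logging.INFO)

logger = logging.getLogger("enfugue")
logger.addHandler(handler)

# Exceptions are reported at ERROR with a traceback rather than swallowed as INFO:
try:
    raise RuntimeError("example failure")
except RuntimeError:
    logger.exception("Something went wrong during invocation")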

Enfugue autodownloads SD 1.5 without needing to when different model is specified

I found the System > Installation file management area and successfully added a checkpoint I already had. Then I went to the model manager and created a new model with my checkpoint set. Then I tried to generate my first image. The server took way too long for something that was already downloaded, and my PC was acting slow; I realized in Task Manager that disk usage was 100%, so I stopped everything and investigated. It turned out that Enfugue had gone and downloaded SD 1.5 to the cache area anyway, despite me never asking it to (nor should it have needed to, with my other model already specified first).

Help or Examples with Inpainting?

I seem to recall some sort of video or animated gif in your reddit post on this, but I can't find it. I'm trying to do some basic inpainting and it's not working. I've got no idea if I'm doing this right or not.

I started out just generating a cat with an apple. Then I clicked the Use for Inpainting option, which gives me the interface. I made a circle on the wall and changed the prompt to "a clock" or "Add a clock on the wall", and neither seems to do anything.

Do you have any documentation somewhere on this?
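
For reference, a minimal sketch of what inpainting does underneath, using the diffusers inpaint pipeline; the model ID and file names are illustrative, and Enfugue's own mask handling may differ. The white area of the mask is regenerated according to the prompt, and the black area is preserved:

import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("cat_with_apple.png").convert("RGB")
mask_image = Image.open("wall_circle_mask.png").convert("RGB")  # white = repaint

result = pipe(
    prompt="a clock on the wall",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("cat_with_clock.png")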

Add Context Menu Item for Image on Canvas

As an additional way to call out the various tools for manipulating an image on the canvas (either an invoked sample or after clicking on an image in the "results" view), add a contextual menu item that appears when viewing the image that calls out all of the same menu items again.

Add 'Popout Form' Button to Nodes

As an additional way to provide input for nodes, add a button that creates a copy of the node options form and creates a UI window for it.

  • This form should feed into the other form, such that when one is changed, the other is too (so long as the popped-out form is visible).
  • Add an additional form input item for the name of the node. When this is changed, set the name of the node on the canvas.

No output from starting up?

I'm not sure if it's working or not; firing this up on my Ubuntu server using enfugue run doesn't display anything. There's no indication that it's working, and there are no errors either.

FWIW, I installed this using your conda instructions from the .yaml file you provided.

installation problem

I tried to install but keep receiving this error.
What to do?

Preparing metadata (setup.py) ... done
ERROR: Cannot install enfugue==0.1.0 and enfugue==0.1.1 because these package versions have conflicting dependencies.

The conflict is caused by:
enfugue 0.1.1 depends on polygraphy<0.48 and >=0.47
enfugue 0.1.0 depends on polygraphy<0.48 and >=0.47

To fix this you could try to:

  1. loosen the range of package versions you've specified
  2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

Web Interface cannot be reached

Hi,
I'm trying release 0.1.0 on Windows 10, Chrome 114.x with the provided win64 binaries. The server is running fine and I can see it with netstat, but navigating to my.enfugue.ai forwards me to https://app.enfugue.ai:45554/, which cannot be reached. I've double-checked that there is nothing running on that port, but I still cannot reach the web UI.

Navigating to localhost:45554 will take me to the UI, but it obviously cannot load resources from app.enfugue.ai:45554

Any help would be appreciated, thank you!
Tappi

Installation compressed files broken

The file enfugue-server-0.1.0-win64.zip.002 isn't recognised by 7Z.

enfugue-server-0.1.0-win64.zip.001 works fine.

enfugue-server-0.1.0-win64.zip.002 is recognised as a .002 file, instead of a zip file, giving a Cannot open file as archive error when trying to unzip it.
