
mist-v2's People

Contributors

aeroxi, alice2o3, caradryanl, nicholas0228


mist-v2's Issues

Stable diffusion Repo id must be in the form 'repo_name' or 'namespace/repo_name'

Since my GPU isn't supported, I tried the CPU command and got an error when trying to download from the Stable Diffusion repo:

A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
12/17/2023 17:59:16 - INFO - __main__ - Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cpu
Mixed precision type: bf16

==precision: torch.float32==
Traceback (most recent call last):
  File "...\mist-v2\attacks\mist.py", line 1155, in <module>
    main(args)
  File "...\mist-v2\attacks\mist.py", line 898, in main
    pipeline = DiffusionPipeline.from_pretrained(
  File "...\miniconda3\envs\mist-v2\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 857, in from_pretrained
    cached_folder = cls.download(
  File "...\miniconda3\envs\mist-v2\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1178, in download
    config_file = hf_hub_download(
  File "...\miniconda3\envs\mist-v2\lib\site-packages\huggingface_hub\utils\_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "...\miniconda3\envs\mist-v2\lib\site-packages\huggingface_hub\utils\_validators.py", line 158, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './stable-diffusion/stable-diffusion-1-5'. Use `repo_type` argument if needed.
Traceback (most recent call last):
  File "...\miniconda3\envs\mist-v2\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "...\miniconda3\envs\mist-v2\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "...\miniconda3\envs\mist-v2\Scripts\accelerate.exe\__main__.py", line 7, in <module>
  File "...\miniconda3\envs\mist-v2\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
    args.func(args)
  File "...\miniconda3\envs\mist-v2\lib\site-packages\accelerate\commands\launch.py", line 986, in launch_command
    simple_launcher(args)
  File "...\miniconda3\envs\mist-v2\lib\site-packages\accelerate\commands\launch.py", line 628, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['...\\miniconda3\\envs\\mist-v2\\python.exe', 'attacks/mist.py', '--low_vram_mode', '--instance_data_dir', 'original', '--output_dir', 'output/', '--class_data_dir', 'data/class', '--instance_prompt', 'an illustration of a stone statue, high quality, masterpiece', '--class_prompt', 'an illustration of a stone statue, high quality, masterpiece', '--mixed_precision', 'bf16']' returned non-zero exit status 1.

I tried changing the --pretrained_model_name_or_path flag to runwayml/stable-diffusion-1-5, but got an authorization error:

requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/runwayml/stable-diffusion-1-5/resolve/main/model_index.json
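
For anyone hitting the same wall, here is my understanding of the path-vs-repo-id behavior, as a minimal sketch (the local path is from the command above; the repo id below, with the "v" in "v1-5", is the usual public one, which the 401 suggests was missing):

import os
from diffusers import DiffusionPipeline

# from_pretrained accepts either a local checkpoint directory or a Hub repo id.
# "./stable-diffusion/stable-diffusion-1-5" only works if that folder exists;
# otherwise diffusers treats the string as a repo id and the validator rejects it.
model = "./stable-diffusion/stable-diffusion-1-5"
if not os.path.isdir(model):
    model = "runwayml/stable-diffusion-v1-5"  # note the "v" in "v1-5"
pipeline = DiffusionPipeline.from_pretrained(model)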

GUI Feedback & Ending Process

Greetings!

I love the most recent update that supports a one-click setup. I believe the GUI for Mist v2 needs to be updated to give feedback so users know that their images are being Misted. (I am aware that the CMD prompt updates, but it doesn't come to the forefront after the 'Mist' button is pressed.)

I think the best setup would be a debug box within the GUI, underneath the Mist button, that echoes what the command line is saying in the background. That way there is feedback for the client running the program.

Secondly, I believe the command line should make it clear when it is ready to mist again. Currently, when the process is fully finished, it reports that the images have been exported... then it just hangs there. That leaves me unsure whether it is stuck running in the background or simply finished and idling. I am aware that one could close the window, but doing so would kill the entire process and require a re-run.

My suggestion is that once it's done (presuming it is idling and not consuming resources), it should echo a message telling the end user that the requested images have been misted and the program is ready to mist another batch if desired. Something like:

"X images successfully Misted!"

This could be implemented in the GUI itself by having the Mist button change colors and show a revolving circle while the process runs, then switch back to its normal state once done, with a small pop-up window declaring that the Misting completed successfully.
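
A rough sketch of the debug-box idea, assuming the webui stays on Gradio (run_mist and the flag set are illustrative, not the repo's actual entry point): a generator callback can stream the subprocess's output into a Textbox and append a completion message at the end.

import subprocess
import sys

import gradio as gr

def run_mist(image_dir):
    # Launch the existing CLI in a subprocess and relay its output line by
    # line into the GUI instead of only printing to the console window.
    proc = subprocess.Popen(
        [sys.executable, "attacks/mist.py", "--instance_data_dir", image_dir],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
    )
    log = ""
    for line in proc.stdout:
        log += line
        yield log  # each yield live-updates the Textbox
    proc.wait()
    yield log + "\nImages successfully Misted! Ready to mist another batch."

with gr.Blocks() as demo:
    inp = gr.Textbox(label="Image directory")
    btn = gr.Button("Mist")
    out = gr.Textbox(label="Progress log", lines=12)
    btn.click(run_mist, inputs=inp, outputs=out)

demo.launch()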

Thanks again for your amazing work.

LoRA does not work?

Dear authors,

I am applying the code to the wikiart dataset. I set the arguments as:

accelerate launch attacks/mist.py --cuda --low_vram_mode --instance_data_dir /data4/user/yan390/diffusion/Impress/processed_wikiart/$artist/clean/train --output_dir output_$artist --class_data_dir data/class --instance_prompt "a misted art work by artist $artist" --class_prompt "an art work by artist $artist" --mixed_precision bf16 -p "stable-diffusion-2-1-base"

and then:
accelerate launch eval/train_dreambooth_lora_15.py --instance_data_dir=output_$artist --output_dir=lora_$artist --class_data_dir=lora_class_dir --instance_prompt "a misted artwork by artist $artist" --class_prompt "an artwork by artist $artist" --resolution=512 --train_batch_size=1 --learning_rate=1e-4 --scale_lr --max_train_steps=2000 --pretrained_model_name_or_path stable-diffusion-2-1-base

So I got one LoRA trained on clean data and another trained on misted images.
However, I found that they both generate the exact same images with sample_lora_15.py. The misted images seem to be produced correctly, since an SD model fine-tuned on them generates new images with the adversarial pattern; the misted images just don't seem to make a difference when training a LoRA.

Any ideas about the issue? Is it because the arguments of my LoRA training command are wrong?
Looking forward to your reply, as we are preparing to submit a paper. Thanks so much!
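
For what it's worth, a hypothetical sanity check (model path, LoRA directory, prompt, and seed are all illustrative): if the LoRA weights never actually load, both runs silently fall back to the identical base model, which would explain identical outputs.

import torch
from diffusers import StableDiffusionPipeline

# Generate once from the base model and once after loading the LoRA, with the
# same seed; the two images should differ if the LoRA is actually applied.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

prompt = "an art work by artist X"
base_img = pipe(prompt, generator=torch.Generator("cuda").manual_seed(0)).images[0]

pipe.load_lora_weights("lora_X")  # directory written by the LoRA training script
lora_img = pipe(prompt, generator=torch.Generator("cuda").manual_seed(0)).images[0]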

.jpeg files are not supported, only .jpg

Symptom:
If the input image has a .jpeg extension, an error is raised:

File "/home/mist/mist-v2/attacks/mist.py", line 978, in main perturbed_data, prompts, data_sizes = load_data( File "/home/mist/mist-v2/attacks/mist.py", line 542, in load_data images = torch.stack(images) RuntimeError: stack expects a non-empty TensorList
A simple fix is for the tutorial to tell users to rename .jpeg files to .jpg.
Perhaps .jpeg support could be added? After all, both extensions name the same format.
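
A minimal sketch of such a fix, assuming load_data collects files by extension (collect_images and SUPPORTED_EXTS are illustrative names, not the repo's actual code):

from pathlib import Path
from PIL import Image

# Match extensions case-insensitively so .jpeg, .JPG, etc. are all picked up,
# which avoids the empty image list that triggers the torch.stack error above.
SUPPORTED_EXTS = {".jpg", ".jpeg", ".png"}

def collect_images(data_dir):
    paths = sorted(p for p in Path(data_dir).iterdir()
                   if p.suffix.lower() in SUPPORTED_EXTS)
    return [Image.open(p).convert("RGB") for p in paths]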

Dependency errors in the provided Colab notebook, with workarounds

Summary

Several dependency errors were found in the Colab notebook, which may cause a range of unexpected and unmentioned failures. To address this, I've come up with some workarounds.

These include:

  1. Downgrading several pip packages (torch and related NVIDIA packages)
  2. Installing a missing package (jax)

Workaround

Use this shared Notebook

Why not open a PR to change requirements.txt?

Well, I would like to, but since I only ran the notebook once in Google Colab rather than on my home lab machine, I suspect these problems are caused by some of Google's mysterious preinstalled packages and might not appear in a local deployment. To avoid unnecessary downgrades, which always deserve scrutiny given the performance and security concerns, I opened this issue instead, both to flag the problem and to propose a solution for those without GPUs who want to use Colab.

Another problem (might be my fault or temporary downtime)

The Gradio shared public link doesn't appear to work. Workaround: use Google Colab's built-in port forwarding.

Update: the Gradio share-link API is down right now.

Mist w/ Ability To Ignore Alpha Channel (Transparent Sections)

Greetings!

After multiple failed attempts, I have managed to get the program to work! However, a flaw has been uncovered... the Misting process currently converts the alpha channel / transparent background of any image into pure black.

For creatives to be able to use their misted images in various mediums, preserving any and all transparent elements is essential. Not only would the entire image still be protected, but the transparent sections would remain transparent, making misted images far easier to use for those working with transparency. (Filling the transparent layer with pure black could cause issues when attempting to remove those sections later, degrading the image and leaving it less clean.)

Is this possible? The program works great; it just needs a tweak to account for transparency so creatives can make smooth use of their own misted images. I guess a "Smart Mist / Selective Mist / Targeted Mist" mode, more or less?
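
A rough sketch of what I'm imagining, assuming the misting pipeline itself operates on RGB images and returns them at the same size (mist_fn stands in for the repo's actual perturbation routine): split off the alpha channel before misting and reattach it afterwards.

import numpy as np
from PIL import Image

def mist_preserving_alpha(path, mist_fn):
    # Mist only the RGB channels, then restore the original alpha channel so
    # transparent regions stay transparent instead of turning pure black.
    rgba = np.array(Image.open(path).convert("RGBA"))
    rgb, alpha = rgba[..., :3], rgba[..., 3]
    misted = mist_fn(Image.fromarray(rgb))      # existing pipeline, RGB only
    out = np.dstack([np.array(misted), alpha])  # (H, W, 3) + (H, W) -> (H, W, 4)
    return Image.fromarray(out, mode="RGBA")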

The following problem occurs when running the program. How can it be fixed?

A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Traceback (most recent call last):
  File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\gradio\queueing.py", line 459, in call_prediction
    output = await route_utils.call_process_api(
  File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api
    output = await app.get_blocks().process_api(
  File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\gradio\blocks.py", line 1533, in process_api
    result = await self.call_function(
  File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\gradio\blocks.py", line 1151, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\gradio\utils.py", line 678, in wrapper
    response = f(*args, **kwargs)
  File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\src\mist-webui.py", line 20, in process_image
    main(args)
  File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\src\attacks\mist.py", line 851, in main
    accelerator = Accelerator(
  File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\accelerate\accelerator.py", line 444, in __init__
    raise ValueError(err.format(mode="bf16", requirement="PyTorch >= 1.10 and a supported device."))
ValueError: bf16 mixed precision requires PyTorch >= 1.10 and a supported device.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\gradio\queueing.py", line 497, in process_events
    response = await self.call_prediction(awake_events, batch)
  File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\gradio\queueing.py", line 468, in call_prediction
    raise Exception(str(error) if show_error else None) from error
Exception: None
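
If it helps: this error means the installed PyTorch build or the device cannot run bf16. A hypothetical fallback for picking a supported mixed-precision mode instead of hard-coding "bf16" (the resulting string would feed whatever Accelerator call the app makes):

import torch

# Prefer bf16 where the GPU supports it, fall back to fp16 on other GPUs,
# and run in full precision on CPU.
if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    mixed_precision = "bf16"
elif torch.cuda.is_available():
    mixed_precision = "fp16"
else:
    mixed_precision = "no"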
