psyker-team / mist-v2
A watermarking tool to protect artworks from AIGC-driven style mimicry (e.g. LoRA)
Home Page: https://mist-project.github.io/
License: Apache License 2.0
Since my GPU isn't supported, I tried the CPU command and got an error when trying to download from the Stable Diffusion repo:
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
12/17/2023 17:59:16 - INFO - __main__ - Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cpu
Mixed precision type: bf16
==precision: torch.float32==
Traceback (most recent call last):
File "...\mist-v2\attacks\mist.py", line 1155, in <module>
main(args)
File "...\mist-v2\attacks\mist.py", line 898, in main
pipeline = DiffusionPipeline.from_pretrained(
File "...\miniconda3\envs\mist-v2\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 857, in from_pretrained
cached_folder = cls.download(
File "...\miniconda3\envs\mist-v2\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1178, in download
config_file = hf_hub_download(
File "...\miniconda3\envs\mist-v2\lib\site-packages\huggingface_hub\utils\_validators.py", line 110, in _inner_fn
validate_repo_id(arg_value)
File "...\miniconda3\envs\mist-v2\lib\site-packages\huggingface_hub\utils\_validators.py", line 158, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './stable-diffusion/stable-diffusion-1-5'. Use `repo_type` argument if needed.
Traceback (most recent call last):
File "...\miniconda3\envs\mist-v2\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "...\miniconda3\envs\mist-v2\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "...\miniconda3\envs\mist-v2\Scripts\accelerate.exe\__main__.py", line 7, in <module>
File "...\miniconda3\envs\mist-v2\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
args.func(args)
File "...\miniconda3\envs\mist-v2\lib\site-packages\accelerate\commands\launch.py", line 986, in launch_command
simple_launcher(args)
File "...\miniconda3\envs\mist-v2\lib\site-packages\accelerate\commands\launch.py", line 628, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['...\\miniconda3\\envs\\mist-v2\\python.exe', 'attacks/mist.py', '--low_vram_mode', '--instance_data_dir', 'original', '--output_dir', 'output/', '--class_data_dir', 'data/class', '--instance_prompt', 'an illustration of a stone statue, high quality, masterpiece', '--class_prompt', 'an illustration of a stone statue, high quality, masterpiece', '--mixed_precision', 'bf16']' returned non-zero exit status 1.
I tried changing the `--pretrained_model_name_or_path` flag to `runwayml/stable-diffusion-1-5`, but got an authorization error:
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/runwayml/stable-diffusion-1-5/resolve/main/model_index.json
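For context on the first error: diffusers only validates the argument as a Hub repo id when it is not an existing local directory, so a relative path like `./stable-diffusion/stable-diffusion-1-5` triggers `HFValidationError` whenever that folder is missing. A rough sketch of that decision (illustrative only, not the actual diffusers code):

```python
import os

def resolve_model_source(path_or_id):
    """Roughly mimic how DiffusionPipeline.from_pretrained decides between
    a local checkpoint folder and a Hugging Face Hub repo id.
    Illustrative sketch, not the real diffusers logic."""
    if os.path.isdir(path_or_id):
        # Existing local folder: loaded from disk (must contain model_index.json etc.)
        return ("local", os.path.abspath(path_or_id))
    # Anything else is validated as a Hub repo id, which must look like
    # "repo_name" or "namespace/repo_name" -- relative paths fail this
    # check when the directory does not exist.
    return ("hub", path_or_id)
```

So the original error usually means the model was never downloaded to the expected local folder; placing the checkpoint there (or pointing the flag at a valid Hub repo id) should resolve it.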
Greetings!
I love the most recent update that supports a one-click setup. I believe the GUI for Mist v2 needs to be updated to give feedback so users know that their images are being Misted. (I am aware that the CMD prompt updates, but it doesn't come to the forefront after the 'Mist' button is pressed.)
I think the best setup would be a debug box within the GUI, underneath the Mist button, that echoes what the command line is saying in the background. That way there is feedback for the person running the program.
Secondly, I believe the command line should make it clear it is ready to mist again. Currently, when the process is fully finished, it reports that the images have been exported... then it just hangs there. That leaves me unsure whether it is still running and stuck in the background, or has simply finished and is idling. I am aware that one could close the window, but doing so would kill the entire process and require a re-run.
My suggestion is that once it's done (presuming it is idling and not consuming resources), it echoes a message telling the end user that the requested images have been misted and the program is ready to mist another batch if desired. Something like:
"X images successfully Misted!"
This could be implemented in the GUI itself by having the Mist button change color and show a revolving circle while the process is running, then switch back to its normal state once done, with a small pop-up window declaring that the Misting completed successfully.
Thanks again for your amazing work.
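The debug-box idea above boils down to streaming the backend process's output back to the GUI and appending a completion message. A minimal sketch of such a helper (the function name, `image_count` parameter, and completion message are all hypothetical, not part of Mist's code):

```python
import subprocess
import sys

def run_and_echo(cmd, image_count=0):
    """Run the misting command and yield its output line by line, so a GUI
    textbox (or the console) can show live progress. When the process exits
    cleanly, emit a final "done" message so the user knows it is idle.
    `image_count` and the wording of the final message are illustrative."""
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    for line in proc.stdout:
        yield line.rstrip("\n")
    proc.wait()
    if proc.returncode == 0:
        yield f"{image_count} images successfully Misted!"
    else:
        yield f"Misting failed with exit code {proc.returncode}."
```

A GUI could consume this generator to update a status box after each line, which would cover both requests: live feedback during the run and an explicit "ready again" signal at the end.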
Dear authors,
I am applying the code to the WikiArt dataset. I set the arguments as:
accelerate launch attacks/mist.py --cuda --low_vram_mode --instance_data_dir /data4/user/yan390/diffusion/Impress/processed_wikiart/$artist/clean/train --output_dir output_$artist --class_data_dir data/class --instance_prompt "a misted art work by artist $artist" --class_prompt "an art work by artist $artist" --mixed_precision bf16 -p "stable-diffusion-2-1-base"
and followed by:
accelerate launch eval/train_dreambooth_lora_15.py --instance_data_dir=output_$artist --output_dir=lora_$artist --class_data_dir=lora_class_dir --instance_prompt "a misted artwork by artist $artist" --class_prompt "an artwork by artist $artist" --resolution=512 --train_batch_size=1 --learning_rate=1e-4 --scale_lr --max_train_steps=2000 --pretrained_model_name_or_path stable-diffusion-2-1-base
So I got one LoRA trained on clean data and another trained on misted images.
However, I found they both generate the exact same images using sample_lora_15.py. The misted images seem to be produced correctly, since an SD model fine-tuned on them generates new images with the adversarial pattern. However, they just do not seem to make a difference when training a LoRA.
Any ideas about the issue? Is it because the arguments of my LoRA training command are wrong?
Looking forward to your reply as we are preparing to submit a paper. Thanks so much!
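As a first sanity check for a situation like the one above, it is worth confirming that the two LoRA checkpoints actually differ on disk; identical hashes would mean the second run silently reused the same data or output directory. A small stdlib sketch (the checkpoint paths in the comment are hypothetical):

```python
import hashlib

def file_digest(path, chunk_size=1 << 20):
    """SHA-256 of a file, streamed in chunks so large checkpoints are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical checkpoint paths from the two training runs:
# same = (file_digest("lora_clean/pytorch_lora_weights.safetensors")
#         == file_digest("lora_misted/pytorch_lora_weights.safetensors"))
```

If the files do differ but the sampled images are still identical, the next thing to check is whether the sampling script is loading the LoRA weights at all (e.g. the same base model with LoRA scale effectively zero would also produce identical outputs).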
Hope to get a version deployed with Docker. Thank you~
File "/home/mist/mist-v2/attacks/mist.py", line 978, in main
perturbed_data, prompts, data_sizes = load_data(
File "/home/mist/mist-v2/attacks/mist.py", line 542, in load_data
images = torch.stack(images)
RuntimeError: stack expects a non-empty TensorList
A simple fix would be to have the tutorial tell users to rename .jpeg files to .jpg.
Or perhaps .jpeg support could be added? After all, they are the same format with different file extensions.
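The empty TensorList suggests the loader matched no files because it only looks for certain extensions. A hedged sketch of extension-agnostic image listing (illustrative; the repo's actual `load_data` may differ):

```python
from pathlib import Path

# Treat .jpeg and .jpg (in any case) as the same format when collecting inputs.
IMAGE_EXTS = {".png", ".jpg", ".jpeg"}

def list_images(data_dir):
    """Return image paths under data_dir, accepting .jpeg as well as .jpg.
    Sketch only -- the repo's load_data may use a different mechanism."""
    return sorted(
        p for p in Path(data_dir).iterdir()
        if p.is_file() and p.suffix.lower() in IMAGE_EXTS
    )
```

Matching on a lowercased suffix set like this would also cover uppercase extensions such as `.JPG`, which digital cameras commonly produce.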
I'd like to ask about the correspondence between the command line parameters --mode ['lunet', 'fused', 'anti-db'] and the attack algorithms in the paper. I assume 'lunet' corresponds to ITA or ITA+, 'anti-db' directly corresponds to ANTI-DB. However, what does 'fused' represent?
Several dependency errors were found in the Colab notebook, which may cause a range of unexpected and undocumented failures. To address this, I've come up with some workarounds.
This includes:
Use this shared Notebook
Well, I would like to, but since I only run the notebook once in Google Colab rather than in my home lab, I guess those problems are caused by some mysterious Google-preinstalled packages, which might not cause similar issues in a local deployment. To avoid unnecessary downgrades, which should always be weighed carefully given the performance and security concerns, I opened this issue to propose a solution for those who have no GPU but still want to use Colab.
The Gradio shared public link doesn't appear to work. Workaround: use Google Colab's built-in port forwarding.
Update: Gradio share link API is down now.
Greetings!
After multiple failed attempts, I have managed to get the program to work! However, a flaw has been uncovered... Currently, the Misting process converts any image's alpha channel / transparent background into pure black.
For creatives to be able to use their misted images in various mediums, preserving any and all transparent elements is essential for most misted images to be usable in projects. Not only would the entire image still be protected, but the transparent sections would remain transparent, enabling easier use for those whose images have transparent elements. (Filling in the transparent layer with pure black could cause issues when attempting to remove those sections later, degrading the image and leaving it less clean.)
Is this possible? The program works great; it just needs a tweak to account for transparency so people can smoothly use their own misted images. I guess a "Smart Mist / Selective Mist / Targeted Mist" mode, more or less?
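The core of the request can be sketched in pure Python: mist only the RGB channels and re-attach the original alpha channel afterwards, so fully transparent pixels stay transparent. This is illustrative only; a real implementation would operate on Pillow images (e.g. via `Image.split` and `putalpha`) or tensors rather than tuple lists:

```python
def reapply_alpha(misted_rgb, original_rgba):
    """Combine misted RGB pixels with the original alpha channel, so that
    transparent regions stay transparent instead of turning black.
    Pixels are (r, g, b) and (r, g, b, a) tuples; sketch only."""
    return [
        (r, g, b, a)
        for (r, g, b), (_, _, _, a) in zip(misted_rgb, original_rgba)
    ]
```

Whether the perturbation remains effective on partially transparent pixels is a separate question, but at minimum the alpha channel itself need not be destroyed.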
In the latest commit supporting bf16 (805a8dc), the function `update_args_with_config` imported by
from attacks.mist import update_args_with_config, main
no longer exists, and the later call to it has been commented out, so launching the web UI raises an error.
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Traceback (most recent call last):
File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\gradio\queueing.py", line 459, in call_prediction
output = await route_utils.call_process_api(
File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api
output = await app.get_blocks().process_api(
File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\gradio\blocks.py", line 1533, in process_api
result = await self.call_function(
File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\gradio\blocks.py", line 1151, in call_function
prediction = await anyio.to_thread.run_sync(
File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\gradio\utils.py", line 678, in wrapper
response = f(*args, **kwargs)
File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\src\mist-webui.py", line 20, in process_image
main(args)
File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\src\attacks\mist.py", line 851, in main
accelerator = Accelerator(
File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\accelerate\accelerator.py", line 444, in __init__
raise ValueError(err.format(mode="bf16", requirement="PyTorch >= 1.10 and a supported device."))
ValueError: bf16 mixed precision requires PyTorch >= 1.10 and a supported device.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\gradio\queueing.py", line 497, in process_events
response = await self.call_prediction(awake_events, batch)
File "E:\Mist_v2\Mist启动器\Mist启动器\mist-v2\venv\lib\site-packages\gradio\queueing.py", line 468, in call_prediction
raise Exception(str(error) if show_error else None) from error
Exception: None
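The `ValueError` above could be avoided by falling back to full precision when the device cannot do bf16. A hedged sketch of the selection logic: in practice the two capability flags would come from `torch.cuda.is_available()` and `torch.cuda.is_bf16_supported()`, and the return value would be passed to `Accelerator(mixed_precision=...)`:

```python
def pick_mixed_precision(has_cuda, bf16_supported):
    """Choose the value for Accelerator's mixed_precision argument.
    Illustrative sketch: the flags would come from
    torch.cuda.is_available() and torch.cuda.is_bf16_supported().
    "no" means full fp32, which works on any device (including CPU)."""
    if has_cuda and bf16_supported:
        return "bf16"
    return "no"
```

Wiring a check like this into the web UI before constructing the `Accelerator` would turn the crash into a graceful (if slower) fp32 run.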