Comments (14)
Did someone manage to make a tutorial for the cross-image region drag and merge? Really curious to try out this functionality.
from editanything.
Curious about the effect of "Cross-Image region drag and merge". I tried to run it in Google Colab and found that memory usage exceeded 12 GB, causing startup to fail, and there may be a problem with environment.yaml. It would be great if you could provide a Google Colab configuration!
Other extra modules like the BLIP model and SAM model may cost some GPU memory. You can extract this function (line 825 in 5722988) to use the cross-image drag in Google Colab. This would avoid the OOM problem, I suppose.
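To get a feel for why dropping the extra modules helps, here is a rough back-of-the-envelope sketch. The parameter counts below are approximate public figures, not measurements from this repo:

```python
# Rough VRAM estimate for model weights alone (activations, the CUDA context,
# and any fp32 fallbacks add several GB on top of this).
PARAMS = {
    "stable_diffusion_1_5": 1.07e9,  # UNet + text encoder + VAE, approx.
    "sam_vit_h": 0.64e9,             # SAM ViT-H image encoder, approx.
    "blip_captioning": 0.25e9,       # BLIP base captioner, approx.
}

def weight_gb(params: float, bytes_per_param: int = 2) -> float:
    """Memory for weights in GB, assuming fp16 (2 bytes per parameter)."""
    return params * bytes_per_param / 1024**3

total = sum(weight_gb(p) for p in PARAMS.values())
core = weight_gb(PARAMS["stable_diffusion_1_5"])
print(f"all models: {total:.1f} GB, diffusion only: {core:.1f} GB")
```

Weights alone stay well under 12 GB; it is the activations, CUDA context, and any fp32 paths that push a free Colab GPU over the limit, which is why skipping BLIP and SAM and keeping fp16 can make the difference.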
I am also getting issues. I have attached the [colab](https://colab.research.google.com/drive/1eLnlD8ACvzawbBX7vUPlHU6a9f_Tn0Vz?usp=sharing) here. Really appreciate your help!
You can try my https://github.com/ennnnny/sd_colab/blob/self/editanything.ipynb, but I haven't solved the OOM problem. Maybe a Colab Pro account can handle it.
To reproduce our results, you can launch editany_test.py. There is a reference tab in the gradio demo: upload the image in the reference tab, then select the region you want to drag. We will update the readme file, thanks.
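Conceptually, the drag step composites the selected reference region into the target image before the diffusion model blends it in. A minimal sketch of the masked paste, with illustrative names rather than the repo's actual implementation:

```python
def paste_region(target, reference, mask, dx=0, dy=0):
    """Copy masked pixels from `reference` into `target`, shifted by (dx, dy).

    `target` and `reference` are H x W lists of pixel values; `mask` is an
    H x W list of 0/1 flags selecting the reference region to drag.
    """
    h, w = len(target), len(target[0])
    out = [row[:] for row in target]  # copy so the input is not mutated
    for y in range(h):
        for x in range(w):
            ty, tx = y + dy, x + dx
            # Only paste selected pixels whose shifted position stays in bounds.
            if mask[y][x] and 0 <= ty < h and 0 <= tx < w:
                out[ty][tx] = reference[y][x]
    return out

# Drag a one-pixel "region" one step to the right.
target = [[0, 0, 0], [0, 0, 0]]
reference = [[9, 9, 9], [9, 9, 9]]
mask = [[0, 1, 0], [0, 0, 0]]
print(paste_region(target, reference, mask, dx=1))  # [[0, 0, 9], [0, 0, 0]]
```

This only shows the geometric move; in the actual pipeline the pasted region is presumably refined by the diffusion model around the mask boundary, which is one reason the text prompt matters so much.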
The environment required by editany_test.py is not consistent with environment.yaml (for example, running editany_test.py with the environment installed from environment.yaml reports multiple errors, such as missing xformers and mismatched diffusers versions). Please update the readme file as soon as possible. Thanks for your contribution!
Thanks for the feedback, I have updated the packages in environment.yaml. Please let me know if you still encounter errors.
[Screenshot 2023-06-30 at 10 08 11 AM — attachment link expired]
environment.yaml needs safetensors>=0.3.1.
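For reference, the dependency fixes discussed in this thread would look something like the following in a conda environment.yaml. Only the safetensors pin comes from this thread; the Python version and the other package entries are illustrative, so check the repo for the versions actually tested:

```yaml
# Illustrative fragment only -- not the repo's authoritative environment.yaml.
dependencies:
  - python=3.8
  - pip
  - pip:
      - diffusers        # version must match what editany_test.py imports
      - xformers         # reported missing when installing from the old yaml
      - safetensors>=0.3.1
```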
Thanks for the update! I tested it on a 3080 Ti graphics card in a machine with 32 GB RAM and was able to run it, but generation was very resource intensive. I tried several times and failed to achieve results similar to the demo. Hopefully a more detailed tutorial on how to do this will follow.
Thanks, will look into it!
As this solution is training-free, you need to adjust the parameters to get good results. Also, I find that the text prompt is important. If you cannot get a good description of your reference region, you can train on the reference region with textual inversion to get a good text embedding. I will upload a tutorial; thanks for your advice.
The results were not that great, but hopefully your tutorial can help us!
A tutorial for this would be great. Thanks, waiting for it :)
The "Cross-image region drag and merge" is great, but which files can I read to understand how it works?
How to use it with controlnet?
Related Issues (20)
- sam2image.py can launch the GUI, but when I click run the page keeps spinning and there is no log in the script HOT 1
- Filenotfound error HOT 1
- AttributeError: module 'keras.backend' has no attribute 'is_tensor' HOT 2
- serializer = serializing.COMPONENT_MAPPING[type]() KeyError: 'dataset' HOT 1
- Why are many lines of code in app.py and editany_lora reported as errors when I deploy? HOT 1
- Colors for SAM mask based ControlNet during training
- How to install this project in a1111 sd webui?
- App.py run error
- fix demo HOT 2
- why should generate the mask again? HOT 1
- Unable to reproduce the dog's head example when using the same example image
- Is replacing with pytorch 2.1+cu12.1 ok? The current version is 1.13, which is too low
- Are we going to support SDXL-Turbo? HOT 2
- Weights creation HOT 3
- Has the author of this repository given up? HOT 3
- Which script is for haircut editing? HOT 2
- ValueError at runtime HOT 2
- What is TEXT_ENCODER_TARGET_MODULES in utils/train_dreambooth_lora_inpaint.py HOT 1
- Why is there no strength parameter for StableDiffusionInpaintPipeline? HOT 3
- How to train text encoder for dreambooth inpaint lora?