SD RESOURCE GOLDMINE

->Original rentries: (news) https://rentry.org/sdupdates3 (non-news) https://rentry.org/sdgoldmine<- ->Old stuff here https://rentry.org/sdupdates2 and here https://rentry.org/sdupdates<-

!!! danger Warnings:

1. Ckpts/hypernetworks/embeddings are ==not== inherently safe as of right now. They can be pickled/contain malicious code. Use your common sense and protect yourself as you would with any random download link you see on the internet.

2. Monitor your GPU temps and increase cooling and/or undervolt them if you need to. There have been claims of GPU issues due to high temps.

3. Extensions can change code when they're run. Be careful. Check the news for more information.

!!! info There is now a github for this rentry: https://github.com/questianon/sdupdates. This should allow you to see changes across the different updates

!!! note Changelog: everything except discord and reddit

All rentry links here end with '.org' and can be changed to '.co'. Also, use incognito/private browsing when opening google links; otherwise you lose your anonymity / someone may dox you.

Contact

If you have information/files (e.g. embed) not on this list, have questions, or want to help, please contact me with details

Socials: Trip: questianon !!YbTGdICxQOw | Discord: malt#6065 | Reddit: u/questianon | Github: https://github.com/questianon | Twitter: https://twitter.com/questianon

!!! note Don't forget to git pull to get a lot of new optimizations + updates. If SD breaks, go backward in commits until it starts working again.

Instructions:

* If on Windows:
    1. Navigate to the webui directory through command prompt or git bash
        a. Git bash: right click > git bash here
        b. Command prompt: click the spot in the "url" bar between the folder name and the down arrow and type "command prompt"
        c. If you don't know how to do this, open command prompt and type "cd [path to stable-diffusion-webui]" (you can get the path by right-clicking the folder in the "url" bar, or by holding shift + right-clicking the stable-diffusion-webui folder)
    2. git pull
    3. pip install -r requirements.txt
* If on Linux:
    1. Go to the webui directory
    2. source ./venv/bin/activate
        a. If this doesn't work, run python -m venv venv beforehand
    3. git pull
    4. pip install -r requirements.txt

11/13+11/14

11/11+11/12

11/10

Prompting

Google Doc with a prompt list/ranking/general info for waifu creation: https://docs.google.com/document/d/1Vw-OCUKNJHKZi7chUtjpDEIus112XBVSYHIATKi1q7s/edit?usp=sharing
Ranked and classified danbooru tags, sorted by number of pictures and ranked by type and quality (WD): https://cdn.discordapp.com/attachments/1029235713989951578/1038585908934483999/Kopi_af_WAIFU_MASTER_PROMPT_DANBOORU_LIST.pdf
Anon's prompt collection: https://mega.nz/folder/VHwF1Yga#sJhxeTuPKODgpN5h1ALTQg
Tag effects on img: https://pastebin.com/GurXf9a4
Clothing comparison: https://files.catbox.moe/z3n66e.jpg

  • Anon says that "8k, 4k, (highres:1.1), best quality, (masterpiece:1.3)" leads to nice details

Chinese scroll collection: https://note.com/sa1p/
Scroll 1: https://docs.qq.com/doc/DWHl3am5Zb05QbGVs
Scroll 2: https://docs.qq.com/doc/DWGh4QnZBVlJYRkly
Scroll 3 (spooky): https://docs.qq.com/doc/DWEpNdERNbnBRZWNL
Tome: https://docs.qq.com/doc/DSHBGRmRUUURjVmNM
Tome 2 (missing link)
Japanese Scroll: https://p1atdev.notion.site/021f27001f37435aacf3c84f2bc093b5?p=f9d8c61c4ed8471a9ca0d701d80f9e28

Using emoticons and emojis can be really good: https://docs.google.com/spreadsheets/d/1aTYr4723NSPZul6AVYOX56CVA0YP3qPos8rg4RwVIzA/edit#gid=1453378351
🕊💥😱😲😶🙄 leads to https://files.catbox.moe/biy755.png
🌷🕊🗓👋😛👋 leads to https://files.catbox.moe/7khxe0.png
Spoken squiggle: https://twitter.com/AI_Illust_000/status/1588838369593032706
Anon: "The emoji performs well in terms of semantic accuracy because it is only one character."

Database of prompts: https://publicprompts.art/

Hololive prompts: https://rentry.org/3y56t Hololive 2: https://rentry.org/q8x5y

Big negative: https://pastes.io/x9crpin0pq Fat negative: https://www.reddit.com/r/WaifuDiffusion/comments/yrpovu/img2img_from_my_own_loose_sketch/

Krea AI prompt database: https://github.com/krea-ai/open-prompts
Prompt search: https://www.ptsearch.info/home/
Another search: http://novelai.io/
4chan prompt search: https://desuarchive.org/g/search/text/masterpiece%20high%20quality/
Prompt book: https://openart.ai/promptbook
Prompt word/phrase collection: https://huggingface.co/spaces/Gustavosta/MagicPrompt-Stable-Diffusion/raw/main/ideas.txt

Dynamic prompts: https://github.com/adieyal/sd-dynamic-prompts

Japanese prompt generator: https://magic-generator.herokuapp.com/
Build your prompt (Chinese): https://tags.novelai.dev/
NAI Prompts: https://seesaawiki.jp/nai_ch/d/%c8%c7%b8%a2%a5%ad%a5%e3%a5%e9%ba%c6%b8%bd/%a5%a2%a5%cb%a5%e1%b7%cf

Japanese wiki: https://seesaawiki.jp/nai_ch/

Korean wiki: https://arca.live/b/aiart/60392904 Korean wiki 2: https://arca.live/b/aiart/60466181

Multilingual study: https://jalonso.notion.site/Stable-Diffusion-Language-Comprehension-5209abc77a4f4f999ec6c9b4a48a9ca2

Aesthetic value (imgs used to train SD): https://laion-aesthetic.datasette.io/laion-aesthetic-6pls

NAI to webui translator (not 100% accurate): https://seesaawiki.jp/nai_ch/d/%a5%d7%a5%ed%a5%f3%a5%d7%a5%c8%ca%d1%b4%b9

Prompt editing parts of image but without using img2img/inpaint/prompt editing guide by anon: https://files.catbox.moe/fglywg.JPG

Tip dump: https://rentry.org/robs-novel-ai-tips
Tips: https://github.com/TravelingRobot/NAI_Community_Research/wiki/NAI-Diffusion:-Various-Tips-&-Tricks
Info dump of tips: https://rentry.org/Learnings
Outdated guide: https://rentry.co/8vaaa
Tip for more photorealism: https://www.reddit.com/r/StableDiffusion/comments/yhn6xx/comment/iuf1uxl/

  • TLDR: add noise to your img before img2img

NAI prompt tips: https://docs.novelai.net/image/promptmixing.html NAI tips 2: https://docs.novelai.net/image/uifunctionalities.html

Masterpiece vs no masterpiece: https://desuarchive.org/g/thread/89714899#89715160

SD 1.4 vs 1.5: https://postimg.cc/gallery/mhvWsnx
NAI vs Anything: https://www.bilibili.com/read/cv19603218
Model merge comparisons: https://files.catbox.moe/rcxqsi.png
Model merge: https://files.catbox.moe/vgv44j.jpg
Some sampler comparisons: https://www.reddit.com/r/StableDiffusion/comments/xmwcrx/a_comparison_between_8_samplers_for_5_different/
More comparisons: https://files.catbox.moe/csrjt5.jpg
More: https://i.redd.it/o440iq04ocy91.jpg (https://www.reddit.com/r/StableDiffusion/comments/ynt7ap/another_new_sampler_steps_comparison/)
More: https://i.redd.it/ck4ujoz2k6y91.jpg (https://www.reddit.com/r/StableDiffusion/comments/yn2yp2/automatic1111_added_more_samplers_so_heres_a/)
Every sampler comparison: https://files.catbox.moe/u2d6mf.png

Prompt: 1girl, pointy ears, white hair, medium hair, ahoge, hair between eyes, green eyes, medium:small breasts, cyberpunk, hair strand, dynamic angle, cute, wide hips, blush, sharp eyes, ear piercing, happy, hair highlights, multicoloured hair, cybersuit, cyber gas mask, spaceship computers, ai core, spaceship interior Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, animal ears, panties

Original image: Steps: 50, Sampler: DDIM, CFG scale: 11, Seed: 3563250880, Size: 1024x1024, Model hash: cc024d46, Denoising strength: 0.57, Clip skip: 2, ENSD: 31337, First pass size: 512x512. Model: NAI/SD mix at 0.25

New samplers: AUTOMATIC1111/stable-diffusion-webui#4363 New vs. DDIM: https://files.catbox.moe/5hfl9h.png

f222 comparisons: https://desuarchive.org/g/search/text/f222/filter/text/start/2022-11-01/

Deep Danbooru: https://github.com/KichangKim/DeepDanbooru Demo: https://huggingface.co/spaces/hysts/DeepDanbooru

Embedding tester: https://huggingface.co/spaces/sd-concepts-library/stable-diffusion-conceptualizer

Collection of Aesthetic Gradients: https://github.com/vicgalle/stable-diffusion-aesthetic-gradients/tree/main/aesthetic_embeddings

Euler vs. Euler A: AUTOMATIC1111/stable-diffusion-webui#2017 (comment)

According to anon: DPM++ should converge to the result much faster than Euler does. It should still converge to the same result, though.

Seed hunting:

  • By nai speedrun asuka imgur anon:

    Made something that might help the highres seed/prompt hunters out there. This mimics the "0x0" firstpass calculation and suggests lowres dimensions based on the target highres size. It also shows data about firstpass cropping. It's a single file, so you can download it and use it offline. Picrel. https://preyx.github.io/sd-scale-calc/ View the code and download it from https://files.catbox.moe/8ml5et.html For example, you can run "firstpass" lowres batches for seed/prompt hunting, then use them as the firstpass size to preserve composition when making the highres version.
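A minimal sketch of the "0x0" firstpass calculation that tool mimics, assuming the webui's behavior of scaling the target down to roughly 512x512 worth of pixels while keeping aspect ratio, then rounding each side up to a multiple of 64 (cropping then accounts for any aspect mismatch):

import math

def firstpass_size(target_w: int, target_h: int) -> tuple[int, int]:
    # Scale the target resolution down to ~512*512 total pixels,
    # keeping aspect ratio, then round each side up to a multiple of 64.
    scale = math.sqrt((512 * 512) / (target_w * target_h))
    w = math.ceil(scale * target_w / 64) * 64
    h = math.ceil(scale * target_h / 64) * 64
    return w, h

print(firstpass_size(1024, 1536))  # (448, 640)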

Script for tagging (like in NAI) in AUTOMATIC's webui: https://github.com/DominikDoom/a1111-sd-webui-tagcomplete
Danbooru Tag Exporter: https://sleazyfork.org/en/scripts/452976-danbooru-tags-select-to-export
Another: https://sleazyfork.org/en/scripts/453380-danbooru-tags-select-to-export-edited
Tags (latest vers): https://sleazyfork.org/en/scripts/453304-get-booru-tags-edited
Basic gelbooru scraper: https://pastebin.com/0yB9s338
UMI AI: https://www.patreon.com/klokinator

Random Prompts: https://rentry.org/randomprompts
Python script for generating random NSFW prompts: https://rentry.org/nsfw-random-prompt-gen
Prompt randomizer: https://github.com/adieyal/sd-dynamic-prompting
Prompt generator: https://github.com/h-a-te/prompt_generator

  • apparently UMI uses these?

http://dalle2-prompt-generator.s3-website-us-west-2.amazonaws.com/
https://randomwordgenerator.com/
Funny prompt gen that surprisingly works: https://www.grc.com/passwords.htm
Unprompted extension released: https://github.com/ThereforeGames/unprompted

  • HAS ADS

StylePile: https://github.com/some9000/StylePile
Script that pulls prompts from Krea.ai and Lexica.art based on search terms: https://github.com/Vetchems/sd-lexikrea
Randomize generation params for txt2img (works with other extensions): https://github.com/stysmmaker/stable-diffusion-webui-randomize

Ideas for when you have none: https://pentoprint.org/first-line-generator/ Colors: http://colorcode.is/search?q=pantone

I didn't check the safety of these plugins, but they're open source, so you can check them yourself.

Photoshop/Krita plugin (free): https://internationaltd.github.io/defuser/ (kinda new and currently only 2 stars on github)
Photoshop: https://github.com/Invary/IvyPhotoshopDiffusion
Photoshop plugin (paid, not open source): https://www.flyingdog.de/sd/
Krita plugins (free):

GIMP: https://github.com/blueturtleai/gimp-stable-diffusion

Blender: https://github.com/carson-katri/dream-textures https://github.com/benrugg/AI-Render

External masking: https://github.com/dfaker/stable-diffusion-webui-cv2-external-masking-script (anon: there's a command arg for adding basic painting, it's '--gradio-img2img-tool')

Script collection: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts
Prompt matrix tutorial: https://gigazine.net/gsc_news/en/20220909-automatic1111-stable-diffusion-webui-prompt-matrix/
Animation Script: https://github.com/amotile/stable-diffusion-studio
Animation script 2: https://github.com/Animator-Anon/Animator
Video Script: https://github.com/memes-forever/Stable-diffusion-webui-video
Masking Script: https://github.com/dfaker/stable-diffusion-webui-cv2-external-masking-script
XYZ Grid Script: https://github.com/xrpgame/xyz_plot_script
Vector Graphics: https://github.com/GeorgLegato/Txt2Vectorgraphics/blob/main/txt2vectorgfx.py
Txt2mask: https://github.com/ThereforeGames/txt2mask
Prompt changing scripts:

Interpolation script (img2img + txt2img mix): https://github.com/DiceOwl/StableDiffusionStuff

img2tiles script: https://github.com/arcanite24/img2tiles
Script for outpainting: https://github.com/TKoestlerx/sdexperiments
Img2img animation script: https://github.com/Animator-Anon/Animator/blob/main/animation_v6.py

Google's interpolation script: https://github.com/google-research/frame-interpolation

Animation Guide: https://rentry.org/AnimAnon#introduction
Rotoscope guide: https://rentry.org/AnimAnon-Rotoscope
Chroma key after SD (fully prompted?): https://files.catbox.moe/d27xdl.gif

More animation guide: https://www.reddit.com/r/StableDiffusion/comments/ymwk53/better_frame_consistency/
Animation guide + example for face: https://www.reddit.com/r/StableDiffusion/comments/ys434h/animating_generated_face_test/
Something for animation: https://github.com/nicolai256/Few-Shot-Patch-Based-Training

Animating faces by anon:

workflow looks like this:
>generate square portrait (i use 1024 for this example)
>create or find driving video
>crop driving video to square with ffmpeg, making sure to match the general distance from camera and face position (it does not do well with panning/zooming video or too much head movement)
>run thin-plate-spline-motion-model
>take result.mp4 and put it into Video2x (Waifu2x Caffe)
>put into flowframes for 60fps and webm

>if you don't care about upscaling it makes 256x256 pretty easily
>an extension for webui could probably be made by someone smarter than me, its a bit tedious right now with so many terminals

here is a pastebin of useful commands for my workflow
https://pastebin.com/6Y6ZK8PN

Another person who used it: https://www.reddit.com/r/StableDiffusion/comments/ynejta/stable_diffusion_animated_with_thinplate_spline/

Img2img megalist + implementations: AUTOMATIC1111/stable-diffusion-webui#2940

Runway inpaint model: https://huggingface.co/runwayml/stable-diffusion-inpainting

Inpainting Tips: https://www.pixiv.net/en/artworks/102083584 Rentry version: https://rentry.org/inpainting-guide-SD

Extensions:

Artist inspiration: https://github.com/yfszzx/stable-diffusion-webui-inspiration
History: https://github.com/yfszzx/stable-diffusion-webui-images-browser
Collection + Info: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Extensions
Deforum (video animation): https://github.com/deforum-art/deforum-for-automatic1111-webui

Auto-SD-Krita: https://github.com/Interpause/auto-sd-paint-ext

ddetailer (object detection and auto-mask, helpful for fixing faces without manually masking): https://github.com/dustysys/ddetailer
Aesthetic Gradients: https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients
Aesthetic Scorer: https://github.com/tsngo/stable-diffusion-webui-aesthetic-image-scorer
Autocomplete Tags: https://github.com/DominikDoom/a1111-sd-webui-tagcomplete
Prompt Randomizer: https://github.com/adieyal/sd-dynamic-prompting
Wildcards: https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards/
Wildcard script + collection of wildcards: https://app.radicle.xyz/seeds/pine.radicle.garden/rad:git:hnrkcfpnw9hd5jb45b6qsqbr97eqcffjm7sby
Symmetric image script (latent mirroring): https://github.com/dfaker/SD-latent-mirroring

macOS Finder right-click menu extension: https://github.com/anastasiuspernat/UnderPillow

Clip interrogator: https://colab.research.google.com/github/pharmapsychotic/clip-interrogator/blob/main/clip_interrogator.ipynb
Version 2 (apparently better than AUTO webui's interrogate): https://huggingface.co/spaces/pharma/CLIP-Interrogator, https://github.com/pharmapsychotic/clip-interrogator

Enhancement workflow by anon: https://pastebin.com/8WVyDxt9

Inpainting a face by anon:

1. Send the picture to inpaint
2. Modify the prompt to remove anything related to the background
3. Add (face) to the prompt
4. Slap a masking blob over the whole face
5. Set mask blur 10-16 (may have to adjust after), masked content: original, "inpaint at full resolution" checked, full resolution padding 0
6. Sampling steps ~40-50, sampling method DDIM, width and height set to your original picture's full resolution
7. Denoising strength .4-.5 if you want minor adjustments, .6-.7 if you want to really regenerate the entire masked area
8. Let it rip

  • AUTOMATIC1111 webui modification that "compensates for the natural heavy-headedness of SD by adding a line from 0 -> sqrt(2) over the 0 -> 74 token range" (anon). In other words, it evens out the token weights with a linear ramp, which helps with the weight reset at 75 tokens.
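A hypothetical illustration of the ramp described above (the actual modification isn't linked here, so the function name and the way the ramp is applied are assumptions):

import math

# Hypothetical sketch: a linear ramp from 0 at token 0 to sqrt(2) at
# token 74, added to each token's weight so later tokens in the 75-token
# window aren't drowned out by earlier ones.
def token_ramp(position: int, window: int = 75) -> float:
    return math.sqrt(2) * position / (window - 1)

weights = [1.0 + token_ramp(i) for i in range(75)]
print(round(weights[0], 3), round(weights[37], 3), round(weights[74], 3))
# 1.0 1.707 2.414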

VAEs

Tutorial + how to use on ALL models (applies for the NAI vae too): https://www.reddit.com/r/StableDiffusion/comments/yaknek/you_can_use_the_new_vae_on_old_models_as_well_for/

Booru tag scraping:

Wildcards:

Wildcard extension: https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards/

Someone's prompt using a lot of wildcards: Positive Prompt: (masterpiece:1.4), (best quality:1.4), [[nsfw]], highres, large breasts, 1girl, detailed clothing, skimpy clothing, __haircolor__, __haircut__, __hairlength__, __eyecolor__, cum, ((__fetish__)), lingerie, __lingeriestate__, ((__sexacts__)), __sexposition__,

Artist Comparisons (may or may not work with NAI):

Some comparisons of 421 different artists in different models.

Anon's list of comparisons:

  • Stable Diffusion v1.5, Waifu Diffusion v1.3, Trinart it4

https://imgur.com/a/ADPHh9q

  • Berry Mix, CLIP 2:

https://imgur.com/a/zzXqLPc

  • Berry Mix, CLIP 1:

https://imgur.com/a/TDGBAlc

  • Artist + Artist, WD v1.3 (incomplete):

https://mega.nz/file/ACtigCpD#f9zP9h1AU_0_4DPsBnvdhnUYdQmIJMb4pyc6PJ4J-FU

Creating fake animes:

Some observations by anon:

  1. Removing the spaces after the commas changed nothing.
  2. Using "best_quality" instead of "best quality" did change the image. (Prompt: masterpiece,best_quality,akai haato but she is a spider,blonde hair,blue eyes)
  3. Changing all of the spaces into underscores changed the image somewhat substantially.
  4. Replacing those commas with spaces changed the image again.

Reduce bias of dreambooth models: https://www.reddit.com/r/StableDiffusion/comments/ygyq2j/a_simple_method_explained_in_the_comments_to/?utm_source=share&utm_medium=web2x&context=3

Landscape tutorial: https://www.reddit.com/r/StableDiffusion/comments/yivokx/landscape_matte_painting_with_stable_diffusion/

Anon's process:

  • Start with a prompt to get the general scenario you have in mind, here I was just looking to seggs the rrat so I used the embed here >>36743515 and described some of her character features to help steer the AI (in this case hair details, sharp teeth, her mouse ears and tail) as well as making her be naked and having vaginal sex
  • Generate images at a default resolution size (512 by X pixels) at a relatively standard number of steps (30 in this case) and keep going until I find an image that's in a position I like (in this case seed 1920052602 gave me a very nice one to work with, as you can see here https://files.catbox.moe/8z2mua.png)
  • Copy the seed of the image and paste it into the Seed field on the Web UI, which will maintain the composition of the image. I then double the resolution I was working with (so here I went from 512 by 768 to 1024 by 1536) and checkmark the "Hires fix option" underneath the width and height sliders. Hires fix is the secret sauce on the Web UI that helps maintain the detail of the image when you are upscaling the resolution of the image, and combined with that Upscale latent space option I mentioned earlier it really enhances the detail. With that done you can generate the upscaled image.
  • Play around with the weights of the prompt tags and add things to the negatives to fix little things like hair being too red, tummy too chubby, etc. You have to be careful with adding new tags because that can drastically change the image

Anon's booba process: >you can generate a perfect barbie doll anatomy but more accurate chuba in curated >then switch to full, img2img it on the same seed after blotching nipples on it like a caveman, and hit generate

Boooba v2:

  1. Generate whatever NSFW proompt you were thinking of using the CURATED model, yes, I know that sounds ridiculous https://files.catbox.moe/b6k6i4.png
  2. Inpaint the naughty bits back in. You REALLY don't have to do a good job of this: https://files.catbox.moe/yegjrw.png
  3. Switch to Full after clicking "Save", set Strength to 0.69, Noise to 0.17, and make sure you copy/paste the same seed # back in. Hit Generate: https://files.catbox.moe/8dag88.png Compare that with what you'd get trying to generate the same exact proompt using the Full model purely txt2img on the same seed: https://files.catbox.moe/ytfdv3.png

Img2img rotoscoping tutorial by anon:

1. Extract the image sequence from the video
2. Test your prompt using the 1st photo from the batch
3. Find a suitable prompt; the pose/sexual acts should be the same as the original to prevent weirdness
4. CFG Scale and Denoising Strength are very important
> Low CFG Scale will make your image follow your prompt less and make it blurrier and messier (I use 9-13)
> Denoising Strength determines the mix between your prompt and your image: 0 = original input, 1 = only the prompt, with nothing resembling the input except the colors
> Interestingly, Denoising Strength is not linear; it behaves more exponentially (my speculation: 0-0.6 = still resembles the original, 0.61-0.76 = starting to change, 0.77-1 = changes a lot)
5. Sampler:
> Euler a is quite nice but lacks consistency between steps; adding/removing 1 step can change the entire photo
> Euler is better than Euler a in terms of consistency but requires more steps = longer generation time between each image
> DPM++ 2S a Karras is the best in quality (for me) but it is very slow; good for generating a single image
> DDIM is the fastest and very useful for this case; 20-30 steps can produce a nice quality anime image
6. Test the prompt on a batch of 4-6 to choose a seed
7. Batch img2img
8. Assemble the generated images into a video. I didn't want to use every frame, so I rendered every 2nd frame at half the frame rate
9. Use Flowframes to interpolate the in-between frames to match the original video frame rate.

Ex: https://files.catbox.moe/e30szo.mp4

Models, Embeddings, and Hypernetworks

!!! Downloads listed as "sus" or "might be pickled" generally mean there were 0 replies and not enough "information" (like training info), or the replies indicated they were suspicious. I don't think any of the embeds/hypernets have had their code checked, so they could all be malicious, but as far as I know no one has gotten pickled yet.

!!! All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious: https://docs.python.org/3/library/pickle.html, https://huggingface.co/docs/hub/security-pickle. Make sure to check them for pickles using a tool like https://github.com/zxix/stable-diffusion-pickle-scanner

Models

Collection of potentially dangerous models: https://bt4g.org/search/.ckpt/1
Collection?: https://civitai.com/
Huggingface collection: https://huggingface.co/models?pipeline_tag=text-to-image&sort=downloads

potential magnet that someone gave me

magnet:?xt=urn:btih:689c0fe075ab4c7b6c08a6f1e633491d41186860&dn=Anything-V3.0.ckpt&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce&tr=udp%3a%2f%2f9.rarbg.com%3a2810%2fannounce&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a6969%2fannounce&tr=udp%3a%2f%2fopentracker.i2p.rocks%3a6969%2fannounce&tr=https%3a%2f%2fopentracker.i2p.rocks%3a443%2fannounce&tr=udp%3a%2f%2ftracker.torrent.eu.org%3a451%2fannounce&tr=udp%3a%2f%2fopen.stealth.si%3a80%2fannounce&tr=http%3a%2f%2ftracker.openbittorrent.com%3a80%2fannounce&tr=udp%3a%2f%2fvibe.sleepyinternetfun.xyz%3a1738%2fannounce&tr=udp%3a%2f%2ftracker1.bt.moack.co.kr%3a80%2fannounce&tr=udp%3a%2f%2ftracker.zerobytes.xyz%3a1337%2fannounce&tr=udp%3a%2f%2ftracker.tiny-vps.com%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.theoks.net%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.swateam.org.uk%3a2710%2fannounce&tr=udp%3a%2f%2ftracker.publictracker.xyz%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.monitorit4.me%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.moeking.me%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.encrypted-data.xyz%3a1337%2fannounce&tr=udp%3a%2f%2ftracker.dler.org%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.army%3a6969%2fannounce&tr=http%3a%2f%2ftracker.bt4g.com%3a2095%2fannounce

Mag2

Little update, here's the link with all including VAE (second one)
magnet:?xt=urn:btih:689C0FE075AB4C7B6C08A6F1E633491D41186860&dn=Anything-V3.0.ckpt&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a80%2fannounce&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce

magnet:?xt=urn:btih:E87B1537A4B5B5F2E23236C55F2F2F0A0BB6EA4A&dn=NAI-Anything&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a80%2fannounce&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce

Mag3

magnet:?xt=urn:btih:689c0fe075ab4c7b6c08a6f1e633491d41186860&dn=Anything-V3.0.ckpt&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=https%3A%2F%2Fopentracker.i2p.rocks%3A443%2Fannounce&tr=udp%3A%2F%2Ftracker.torrent.eu.org%3A451%2Fannounce&tr=udp%3A%2F%2Fopen.stealth.si%3A80%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Fvibe.sleepyinternetfun.xyz%3A1738%2Fannounce&tr=udp%3A%2F%2Ftracker1.bt.moack.co.kr%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.zerobytes.xyz%3A1337%2Fannounce&tr=udp%3A%2F%2Ftracker.tiny-vps.com%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.theoks.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.swateam.org.uk%3A2710%2Fannounce&tr=udp%3A%2F%2Ftracker.publictracker.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.monitorit4.me%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.moeking.me%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.encrypted-data.xyz%3A1337%2Fannounce&tr=udp%3A%2F%2Ftracker.dler.org%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.army%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.altrosky.nl%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.bt4g.com%3A2095%2Fannounce

from: https://bt4g.org/magnet/689c0fe075ab4c7b6c08a6f1e633491d41186860

another magnet on https://rentry.org/sdmodels from the author

  • Mixed SFW/NSFW Pony/Furry V2 from AstraliteHeart: https://mega.nz/file/Va0Q0B4L#QAkbI2v0CnPkjMkK9IIJb2RZTegooQ8s6EpSm1S4CDk

  • Mega mixing guide (has a different berry mix): https://rentry.org/lftbl

  • Cafe Unofficial Instagram TEST Model Release

    • Trained on ~140k 640x640 Instagram images made up of primarily Japanese accounts (mix of cosplay, model, and personal accounts)
    • Note: While the model can create some realistic (Japanese) Instagram-esque images on its own, for full potential, it is recommended that it be merged with another model (such as berry or anything)
    • Note: Use CLIP 2 and resolutions greater than 640x640

Raspberry mix download by anon (not sure if safe): https://pixeldrain.com/u/F2mkQEYp Strawberry Mix (anon, safety caution): https://pixeldrain.com/u/z5vNbVYc

magnet:?xt=urn:btih:eb085b3e22310a338e6ea00172cb887c10c54cbc&dn=cafe-instagram-unofficial-test-epoch-9-140k-images-fp32.ckpt&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80&tr=udp%3A%2F%2Fopentor.org%3A2710&tr=udp%3A%2F%2Ftracker.ccc.de%3A80&tr=udp%3A%2F%2Ftracker.blackunicorn.xyz%3A6969&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969

ThisModel:

  1. (Weighted Sum 0.05) Anything3 + SD1.5 = Temp1
  2. (Add Difference 1.0) Temp1 + F222 + SD1.5 = Temp2
  3. (Weighted Sum 0.2) Temp2 + TrinArt2_115000 = ThisModel
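The recipe above uses the two modes from the webui's checkpoint merger. A minimal sketch of what they compute, assuming all models share the same state-dict keys (plain dicts of numbers stand in for real weight tensors):

# Weighted Sum: linear interpolation between two models.
def weighted_sum(a, b, alpha):
    return {k: (1 - alpha) * a[k] + alpha * b[k] for k in a}

# Add Difference: graft what B learned relative to C onto A.
def add_difference(a, b, c, alpha):
    return {k: a[k] + alpha * (b[k] - c[k]) for k in a}

a, b, c = {"w": 1.0}, {"w": 3.0}, {"w": 0.0}
print(weighted_sum(a, b, 0.2))       # {'w': 1.4}
print(add_difference(a, b, c, 1.0))  # {'w': 4.0}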

Anon's model for vampires(?):

My steps

Step 1:
>A : Anything-V3.0
>B : trinart2_step115000.ckpt [f1c7e952]
>C : stable-diffusion-v-1-4-original

A from https://huggingface.co/Linaqruf/anything-v3.0/blob/main/Anything-V3.0-pruned.ckpt
B from https://rentry.org/sdmodels#trinart2_step115000ckpt-f1c7e952
C from https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt

and I "Add Difference" at 0.45, and name as part1.ckpt

Step 2:
>A : part1.ckpt (What I made in Step 1)
>B: Cafe Unofficial Instagram TEST Model [50b987ae]

B is from https://rentry.org/sdmodels#cafe-unofficial-instagram-test-model-50b987ae

and I "Weighted Sum" at 0.5, and name it TrinArtMix.ckpt

EveryDream Trainer

!!! All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious: https://docs.python.org/3/library/pickle.html, https://huggingface.co/docs/hub/security-pickle. Make sure to check them for pickles using a tool like https://github.com/zxix/stable-diffusion-pickle-scanner

Download + info + prompt templates: https://github.com/victorchall/EveryDream-trainer

Dreambooth Models:

!!! All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious: https://docs.python.org/3/library/pickle.html, https://huggingface.co/docs/hub/security-pickle. Make sure to check them for pickles using a tool like https://github.com/zxix/stable-diffusion-pickle-scanner

Links:

Embeddings

!!! info If an embedding is >80mb, I mislabeled it and it's a hypernetwork

!!! info Use a download manager to download these. It saves a lot of time + good download managers will tell you if you have already downloaded one

!!! All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious: https://docs.python.org/3/library/pickle.html, https://huggingface.co/docs/hub/security-pickle. Make sure to check them for pickles using a tool like https://github.com/zxix/stable-diffusion-pickle-scanner

You can check .pts here for their training info using a text editor

Found on 4chan:

NOTE TO MYSELF: ADD THAT PONY EMBEDDING THAT I DOWNLOADED 2 WEEKS AGO

Found on Discord:

Found on Reddit:

Hypernetworks:

!!! info If a hypernetwork is <80mb, I mislabeled it and it's an embedding

!!! info Use a download manager to download these. It saves a lot of time + good download managers will tell you if you have already downloaded one

!!! All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious: https://docs.python.org/3/library/pickle.html, https://huggingface.co/docs/hub/security-pickle. Make sure to check them for pickles using a tool like https://github.com/zxix/stable-diffusion-pickle-scanner

Chinese telegram (uploaded by telegram anon): magnet:?xt=urn:btih:8cea1f404acfa11b5996d1f1a4af9e3ef2946be0&dn=ChatExport%5F2022-10-30&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce

I've made a full export of the Chinese Telegram channel.

It's 37 GB (~160 hypernetworks and a bunch of full models). If you don't want all that, I would recommend downloading everything but the 'files' folder first (like 26 MB), then opening the html file to decide what you want.

Found on 4chan:

Found on Discord:

Colored eyes:

>Hey everyone , this hypernetwork was released by me (IWillRemember) (IWillRemember#1912 on discord) if you have any questions you can find me on discord!
>
>Did the Hn as a commission for a friend 😄
>
>I'm releasing an Hn to do better animation like glowing eyes, and a more slender face/upper body.
>
>The tags are : 
>detailed eyes, 
>(color) eyes  = ex: white eyes, blue eyes, etc etc
>collarbone
>
>Trained for 12k steps on a 80 ish images dataset
>
>You can use the Hn with a str of 1 without any problem.
>
>Happy prompting!
>
>Example: https://media.discordapp.net/attachments/1023082871822503966/1038115846222008392/00162-3940698197-masterpiece_highest_quality_digital_art_1girl_on_back_detailed_eyes_perfect_face_detailed_face_breasts_white_hair_yell.png?width=648&height=702
>
>https://mega.nz/file/dHFwmaxS#NQhMPjT4TElPXX_YAZhTsFrQ36PDJhpWFm9BcHU_BO4 

Aesthetic Gradients

Collection of Aesthetic Gradients: https://github.com/vicgalle/stable-diffusion-aesthetic-gradients/tree/main/aesthetic_embeddings

Polar Resources

DEAD/MISSING

If you have one of these, please get it to me

Apparently there's a Google drive collection of downloads? (might be the korean site but mistyped)

Dreambooth:

Embed:

Hypernetworks:

Datasets:

Training dataset with aesthetic ratings: https://github.com/JD-P/simulacra-aesthetic-captions

Training

Use pics where:

  • Character doesn't blend with background and isn't overlapped by random stuff
  • Character is in different poses, angles, and backgrounds
  • Resolution is 512x512 (crop if it's not)

Train stable diffusion model with Diffusers, Hivemind and Pytorch Lightning: https://github.com/Mikubill/naifu-diffusion

Dreambooth colab with custom model (old, so might be outdated): https://desuarchive.org/g/thread/89140837/#89140895

GPU seems to determine training results (--low/med vram arg too)

Extension: https://github.com/d8ahazard/sd_dreambooth_extension

Image tagger helper: https://github.com/nub2927/image_tagger/

anything.ckpt comparisons:
Old final-pruned: https://files.catbox.moe/i2zu0b.png
v3-pruned-fp16: https://files.catbox.moe/k1tvgy.png
v3-pruned-fp32: https://files.catbox.moe/cfmpu3.png
v3 full or whatever: https://files.catbox.moe/t9jn7y.png

import math  # needed for the inverse sigmoid alternative below
from tqdm import tqdm

# Merge snippet: interpolate model B (theta_1) into model A (theta_0),
# where theta_0/theta_1 are loaded state dicts and alpha is the merge ratio.
# The alpha remap must happen once, before the loop; doing it inside the
# loop (as originally pasted) would compound the remap on every key.

# sigmoid (smoothstep) remap
alpha = alpha * alpha * (3 - (2 * alpha))

# inverse sigmoid remap (alternative)
#alpha = 0.5 - math.sin(math.asin(1.0 - 2.0 * alpha) / 3.0)

for key in tqdm(theta_0.keys(), desc="Stage 1/2"):
    if "model" in key and key in theta_1:
        theta_0[key] = theta_0[key] + ((theta_1[key] - theta_0[key]) * alpha)

        # Weighted sum (alternative, using raw alpha):
        #theta_0[key] = ((1 - alpha) * theta_0[key]) + (alpha * theta_1[key])

Supposedly how to append model data without merging, by anon:

x = (Final Dreambooth Model) - (Original Model)
filter x for x >= (Some Threshold)
out = (Model You Want To Merge It With) * (1 - M) + x * M
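A minimal sketch of that recipe in code, assuming checkpoints with matching state-dict keys of torch tensors; the function and parameter names are illustrative, and the threshold is applied to signed x exactly as the text states:

import torch

def graft(finetuned, original, target, threshold, m):
    out = {}
    for k in target:
        x = finetuned[k] - original[k]                            # x = dreambooth - original
        x = torch.where(x >= threshold, x, torch.zeros_like(x))   # filter x >= threshold
        out[k] = target[k] * (1 - m) + x * m                      # blend into target model
    return out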

Model merging method that preserves weights: https://github.com/samuela/git-re-basin

>2. unloads vae from VRAM during training. This is done in hypernetworks, and idk why it wasn't in the code for TI. It doesn't break anything and doesn't make anything worse.
>This saves around .2 GB VRAM
>
>After you apply this, turn on Move VAE and CLIP to RAM and Use cross attention optimizations while training
  • By anon:

No idea if someone else will have a use for this but I needed to make it for myself since I can't get a hypernetwork trained regardless of what I do.

https://mega.nz/file/LDwi1bab#xrGkqJ9m-IsqsTQNixVkeWrGw2HvmAr_fx9FxNhrrbY

That link above is a spreadsheet where you paste the hypernetwork_loss.csv data into the A1 cell (A2 is where the numbers should start). Then you can use M1 to set how many epochs of the most recent data you want to use for the red trendline (green is the same length but starts before red). Outlier % lets you filter out extreme points: 100% means all points are considered for the trendline, 95% filters out the top and bottom 5%, etc. Basically you can use this to see where the training started fucking up.

  • Anon's best:

Creation: 1,2,1 layer structure, normalized layers, dropout enabled, Swish activation, XavierNormal initialization (not sure yet on this one; Normal or XavierUniform might be better)

Training:

Rate: 5e-5:1000, 5e-6:5000, 5e-7:20000, 5e-8:100000
Max Steps: 100,000
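The rate string uses the webui's stepped schedule syntax: each "value:step" pair sets the learning rate until that step is reached. A minimal sketch of how such a string maps a step to a rate, assuming that format:

def lr_at_step(schedule: str, step: int) -> float:
    # Walk the "rate:until_step" pairs; a trailing bare rate covers the rest.
    for item in schedule.split(","):
        rate, _, until = item.strip().partition(":")
        if not until or step <= int(until):
            return float(rate)
    return float(rate)

print(lr_at_step("5e-5:1000, 5e-6:5000, 5e-7:20000, 5e-8:100000", 3000))  # 5e-06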

Vector guide by anon: https://rentry.org/dah4f

  • Another training guide: https://www.reddit.com/r/stablediffusion/comments/y91luo

  • Super simple embed guide by anon: Grab the high quality images, run them through the processor. Create an embedding called "art by {artist}". Then train that same embedding with your processed images and set the learning rate to the following: 0.1:500, 0.05:1000, 0.025:1500, 0.001:2000, 1e-5. Run it for 10k steps and you'll be good. No need for an entire hypernetwork.

  • Has training info and a tutorial for Asagi Igawa, Edjit, and Rouge the Bat embeds (RealYiffingFar#4510): https://mega.nz/folder/5nIAnJaA#YMClwO8r7tR1zdJJeTfegA

Datasets:

FAQ

Check out https://rentry.org/sdupdates and https://rentry.org/sdupdates2 for other questions, as well as https://rentry.org/sdg_FAQ

What's all the new stuff?

Check here to see if your question is answered:

How do I set this up?

Refer to https://rentry.org/nai-speedrun (has the "Asuka test")
Easy guide: https://rentry.org/3okso
Standard guide: https://rentry.org/voldy
Detailed guide: AUTOMATIC1111/stable-diffusion-webui#2017
Paperspace: https://rentry.org/865dy

AMD Guide: https://rentry.org/sdamd

What's the "Hello Asuka" test?

It's a basic test to see if you're able to get a 1:1 recreation with NAI and have everything set up properly. Coined after asuka anon and his efforts to recreate 1:1 NAI before all the updates.

Refer to

What is pickling/getting pickled?

ckpt files and python files can execute code. Getting pickled is when these files execute malicious code that infects your computer with malware. It's a memey/funny way of saying you got hacked.

I want to run this, but my computer is too bad. Is there any other way? Check out one of these (I have not used most of them, so they might be unsafe):

How do I directly check AUTOMATIC1111's webui updates?

For a complete list of updates, go here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/commits/master

What do I do if a new update bricks/breaks my AUTOMATIC1111 webui installation?

1. Go to https://github.com/AUTOMATIC1111/stable-diffusion-webui/commits/master
2. See when the change that broke your install happened
3. Copy the blue commit hash on the right from just before that change
4. Open a command line/git bash at the place you usually git pull from (the root of your install)
5. Run 'git checkout <commit hash>'
6. To reset your install to the latest version later, use 'git checkout master'

What is...?

What is a VAE?
Variational autoencoder, basically a "compressor" that can turn images into a smaller representation and then "decompress" them back to their original size. This is needed so you don't need tons of VRAM and processing power, since the "diffusion" part is done in the smaller representation (I think). The newer SD 1.5 VAEs have been trained more and can recreate some smaller details better.

What is pruning?
Removing unnecessary data (anything that isn't needed for image generation) from the model so that it takes less disk space and fits more easily into your VRAM (see the sketch after this block).

What is a pickle, not referring to the python file format? What is the meme surrounding this?
When the NAI model leaked, people were scared that it might contain malicious code that could be executed when the model is loaded. People started making pickle memes because of the file format.

Why is some stuff tagged as being 'dangerous', and why does the StableDiffusion WebUI have a 'safe-unpickle' flag? ("I'm stuck on pytorch 1.11 so I have to disable this.")
Safe unpickling checks the pickle's code library imports against an approved list. If it tries to import something that isn't on the list, it won't load it. This doesn't necessarily mean it's dangerous, but you should be cautious: some stuff might be able to slip through and execute arbitrary code on your computer.

Is the rentry stuff all written by one person or many?
There are many people maintaining different rentries.
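On the pruning question above: a minimal sketch of what pruning a .ckpt amounts to (illustrative, not any particular prune script; assumes the usual checkpoint layout with a "state_dict" key):

import torch

def prune_checkpoint(src, dst, half=False):
    ckpt = torch.load(src, map_location="cpu")
    # Keep only the weights; optimizer states and other training
    # metadata aren't needed for image generation.
    sd = ckpt.get("state_dict", ckpt)
    if half:  # optionally cast fp32 tensors to fp16 to halve the file size
        sd = {k: v.half() if torch.is_tensor(v) and v.dtype == torch.float32 else v
              for k, v in sd.items()}
    torch.save({"state_dict": sd}, dst)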

Why are some of my prompts outputting black images?

Add " --no-half-vae " (remove the quotations) to your commandline args in webui-user.bat

What's the difference between embeds, hypernetworks, and dreambooths? What should I train? Anon:

I've tested a lot of the model modifications and here are my thoughts on them:

embeds: these are tiny files which find the best representation of whatever you're training them on in the base model. By far the most flexible option, and they will have very good results if the goal is to group or emphasize things the model already understands

hypernetworks: these are like instructions that slightly modify the result of the base model after each sampling step. They are quite powerful and work decently for everything I've tried (subjects, styles, compositions). The cons are they can't be easily combined like embeds. They are also harder to train because good parameters seem to vary wildly, so a lot of experimentation is needed each time

dreambooth: modifies part of the model itself and is the only method which actually teaches it something new. Fast and accurate results, but the weights for generating adjacent stuff will get trashed. These are gigantic and have the same cons as embeds

Link Dump (will sort)

Info:

Boorus:

Upscalers:

Resizing: https://www.birme.net/?target_width=512&target_height=512&quality_jpeg=100&quality_webp=100

Simple png editor: https://entropymine.com/jason/tweakpng/

Install Stable Diffusion on an AMD GPU PC running Ubuntu 20.04: https://gist.github.com/geerlingguy/ff3c3cbcf4416be2c0c1e0f836a8183d

How to run https://huggingface.co/spaces/skytnt/moe-tts locally (read through the replies): https://desuarchive.org/g/thread/89714899#89715329

lol: https://desuarchive.org/g/thread/89719598#89719734

Twitter anons: https://twitter.com/AICoomer https://twitter.com/BluMeino https://twitter.com/ElfBreasts https://twitter.com/Elf_Anon https://twitter.com/ElfieAi https://twitter.com/EyeAI_ https://twitter.com/FEDERALOFFICER https://twitter.com/FizzleDorf https://twitter.com/Headstacker https://twitter.com/KLaknatullah https://twitter.com/Kw0337 https://twitter.com/Lisandra_brave https://twitter.com/Merkurial_Mika https://twitter.com/PorchedArt https://twitter.com/Rahmeljackson https://twitter.com/RaincoatWasted https://twitter.com/S37030315 https://twitter.com/SpiteAnon https://twitter.com/YoucefN30829772 https://twitter.com/ai_sneed https://twitter.com/dproompter https://twitter.com/epitaphtoadog https://twitter.com/mommyartfactory https://twitter.com/nadanainone https://twitter.com/spee321

Aggressively clear cache: https://desuarchive.org/g/thread/89718344/#q89722878

diff --git a/modules/sd_hijack_optimizations.py b/modules/sd_hijack_optimizations.py
index 98123fb..0f5f327 100644
--- a/modules/sd_hijack_optimizations.py
+++ b/modules/sd_hijack_optimizations.py
@@ -99,11 +100,14 @@ def split_cross_attention_forward(self, x, context=None, mask=None):
         raise RuntimeError(f'Not enough memory, use lower resolution (max approx. {max_res}x{max_res}). '
                            f'Need: {mem_required / 64 / gb:0.1f}GB free, Have:{mem_free_total / gb:0.1f}GB free')
 
+    torch.cuda.empty_cache()
     slice_size = q.shape[1] // steps if (q.shape[1] % steps) == 0 else q.shape[1]
     for i in range(0, q.shape[1], slice_size):
         end = i + slice_size
         s1 = einsum('b i d, b j d -> b i j', q[:, i:end], k)

Something about training? old: https://www.bdhammel.com/learning-rates/

Koikatsu game cards: https://illusioncards.booru.org/index.php?page=post&s=list&tags=card_frame&pid=0

Faunanon's pixiv: https://www.pixiv.net/en/users/87884328

Depickler?: https://github.com/trailofbits/fickling

watermark lol: AUTOMATIC1111/stable-diffusion-webui#2803, https://github.com/AUTOMATIC1111/stable-diffusion-webui/search?q=do_not_add_watermark

Fairseq dense 13B (text model for NSFW?): https://huggingface.co/KoboldAI/fairseq-dense-13B-Shinen?text=My+name+is+Julien+and+I+like+to

Link collection: https://rentry.org/p5pk2

Japanese discussion of images from 4chan: http://yaraon-blog.com/archives/225884 (according to anon: nothing new there; it's the infamous clickbait type, アフィカス, a site with no original content that only reprints from other sites for advertising revenue)
