
awesome-diffusion-categorized

Diffusion Models Inversion

⭐⭐⭐Null-text Inversion for Editing Real Images using Guided Diffusion Models
[CVPR 2023] [Project] [Code]
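
Most entries in this section build on deterministic DDIM inversion: run the DDIM update in reverse to map a real image to latent noise, then denoise to reconstruct or edit. A toy numpy sketch of the recurrence, with a state-independent stand-in for the noise predictor (an assumption; with a real UNet and classifier-free guidance the round trip is only approximate, which is the drift Null-text inversion corrects by optimizing the null-text embedding):

```python
import numpy as np

def ddim_step(x, eps, a_cur, a_next):
    # Deterministic DDIM update from alpha_bar = a_cur to a_next.
    # a_next < a_cur denoises; a_next > a_cur inverts.
    x0_pred = (x - np.sqrt(1.0 - a_cur) * eps) / np.sqrt(a_cur)
    return np.sqrt(a_next) * x0_pred + np.sqrt(1.0 - a_next) * eps

rng = np.random.default_rng(0)
eps_const = rng.standard_normal(4)     # stand-in for the UNet's prediction
alphas = np.linspace(0.9999, 0.5, 50)  # alpha_bar schedule, t = 0 .. T

x0 = rng.standard_normal(4)
x = x0.copy()
for a_cur, a_next in zip(alphas[:-1], alphas[1:]):              # invert
    x = ddim_step(x, eps_const, a_cur, a_next)
for a_cur, a_next in zip(alphas[::-1][:-1], alphas[::-1][1:]):  # reconstruct
    x = ddim_step(x, eps_const, a_cur, a_next)

print(float(np.max(np.abs(x - x0))))  # ~0: exact round trip in this toy
```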

⭐⭐Direct Inversion: Boosting Diffusion-based Editing with 3 Lines of Code
[Website] [Code]

Improving Negative-Prompt Inversion via Proximal Guidance
[Website] [Code]

Accelerating Diffusion Models for Inverse Problems through Shortcut Sampling
[Website] [Code]

Negative-prompt Inversion: Fast Image Inversion for Editing with Text-guided Diffusion Models
[Website]
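
Negative-prompt inversion hinges on the classifier-free guidance combination: instead of optimizing a per-timestep null-text embedding as Null-text inversion does, it shows the source prompt's embedding can serve as the negative prompt directly. The guidance combination itself, as a sketch:

```python
import numpy as np

def cfg(eps_uncond, eps_cond, scale):
    # Classifier-free guidance: extrapolate from the unconditional
    # prediction toward the conditional one by `scale`.
    return eps_uncond + scale * (eps_cond - eps_uncond)

eps_u = np.array([0.0, 0.0])
eps_c = np.array([1.0, -1.0])
print(cfg(eps_u, eps_c, 1.0))  # scale = 1 recovers the conditional prediction
```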

IterInv: Iterative Inversion for Pixel-Level T2I Models
[Website] [Neurips 2023 workshop] [Openreview] [NeuripsW on Diffusion Models] [Code]

Object-aware Inversion and Reassembly for Image Editing
[Website] [Code] [Project]

EDICT: Exact Diffusion Inversion via Coupled Transformations
[Website] [Code]

Inversion-Based Creativity Transfer with Diffusion Models
[CVPR 2023] [Code]

Score-Based Diffusion Models as Principled Priors for Inverse Imaging
[Website] [ICCV 2023]

Direct Inversion: Optimization-Free Text-Driven Real Image Editing with Diffusion Models
[Website]

Text Guided Image Editing

⭐⭐⭐Prompt-to-Prompt Image Editing with Cross Attention Control
[ICLR 2023] [Website] [Project] [Code] [Replicate Demo]
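
Prompt-to-Prompt edits by running two sampling passes and injecting the source pass's cross-attention maps into the edited pass during the early steps, which preserves the source layout while the word swap takes effect. A minimal sketch of that switch (the `inject_until` threshold is an illustrative parameter):

```python
import numpy as np

def p2p_attention(attn_src, attn_edit, step, inject_until):
    # Early steps: reuse the source prompt's cross-attention maps so the
    # edited image keeps the source layout; later steps: let the edited
    # prompt's own attention realize the change.
    return attn_src if step < inject_until else attn_edit

attn_src = np.array([[0.9, 0.1]])
attn_edit = np.array([[0.2, 0.8]])
maps = [p2p_attention(attn_src, attn_edit, s, inject_until=30) for s in range(50)]
print(sum(np.array_equal(m, attn_src) for m in maps))  # 30 of 50 steps injected
```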

⭐⭐⭐Zero-shot Image-to-Image Translation
[SIGGRAPH 2023] [Project] [Code] [Demo] [Replicate Demo] [Diffusers Doc] [Diffusers Code]

⭐⭐⭐Null-text Inversion for Editing Real Images using Guided Diffusion Models
[CVPR 2023] [Project] [Code]

⭐⭐Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation
[CVPR 2023] [Project] [Code] [Dataset] [Replicate Demo] [Demo]

Imagic: Text-Based Real Image Editing with Diffusion Models
[CVPR 2023] [Project] [Diffusers]

InstructPix2Pix: Learning to Follow Image Editing Instructions
[CVPR 2023 (Highlight)] [Project] [Diffusers Doc] [Diffusers Code] [Official Code] [Dataset]

SINE: SINgle Image Editing with Text-to-Image Diffusion Models
[CVPR 2023] [Project] [Code]

Inpaint Anything: Segment Anything Meets Image Inpainting
[Website] [Code 1] [Code 2]

SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations
[Website] [ICLR 2022] [Project] [Code]
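
SDEdit's mechanism is simple: diffuse a guide image (stroke painting or photo) to an intermediate timestep, then denoise from there, so the start timestep trades faithfulness to the guide against realism. The noising half, assuming a variance-preserving schedule:

```python
import numpy as np

def sdedit_start(guide, alpha_bar_t0, rng):
    # Diffuse the guide to timestep t0; a sampler then denoises from this
    # point instead of from pure noise. Smaller alpha_bar_t0 = more freedom.
    noise = rng.standard_normal(guide.shape)
    return np.sqrt(alpha_bar_t0) * guide + np.sqrt(1 - alpha_bar_t0) * noise

rng = np.random.default_rng(0)
guide = np.ones((2, 2))
start = sdedit_start(guide, alpha_bar_t0=1.0, rng=rng)
print(start)  # alpha_bar_t0 = 1 adds no noise: the guide itself
```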

DiffEdit: Diffusion-based semantic image editing with mask guidance
[ICLR 2023] [Website] [Unofficial Code] [Diffusers Doc] [Diffusers Code]

Direct Inversion: Boosting Diffusion-based Editing with 3 Lines of Code
[Website] [Code]

EditVal: Benchmarking Diffusion Based Text-Guided Image Editing Methods
[Website] [Code] [Project]

MasaCtrl: Tuning-Free Mutual Self-Attention Control for Consistent Image Synthesis and Editing
[ICCV 2023] [Project] [Code] [Demo]

An Edit Friendly DDPM Noise Space: Inversion and Manipulations
[Website] [Code] [Project] [Demo]

InstructEdit: Improving Automatic Masks for Diffusion-based Image Editing With User Instructions
[Website] [Code] [Project]

StyleDiffusion: Prompt-Embedding Inversion for Text-Based Editing
[Website] [Code]

Energy-Based Cross Attention for Bayesian Context Update in Text-to-Image Diffusion Models
[Neurips 2023] [Code]

Dynamic Prompt Learning: Addressing Cross-Attention Leakage for Text-Based Image Editing
[Neurips 2023] [Code]

PAIR-Diffusion: Object-Level Image Editing with Structure-and-Appearance Paired Diffusion Models
[Website] [Code] [Demo]

Collaborative Score Distillation for Consistent Visual Synthesis
[Neurips 2023] [Project] [Code]

Localizing Object-level Shape Variations with Text-to-Image Diffusion Models
[ICCV 2023] [Project] [Code]

ReGeneration Learning of Diffusion Models with Rich Prompts for Zero-Shot Image Translation
[Website] [Project]

Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance
[Website] [Code1] [Code2] [Diffusers Code]

Delta Denoising Score
[Website] [Project]

Visual Instruction Inversion: Image Editing via Visual Prompting
[Neurips 2023] [Project] [Code]

MDP: A Generalized Framework for Text-Guided Image Editing by Manipulating the Diffusion Path
[Website] [Code]

Differential Diffusion: Giving Each Pixel Its Strength
[Website] [Code]

Learning to Follow Object-Centric Image Editing Instructions Faithfully
[EMNLP 2023] [Code]

Iterative Multi-granular Image Editing using Diffusion Models
[Website]

Conditional Score Guidance for Text-Driven Image-to-Image Translation
[Website]

Custom-Edit: Text-Guided Image Editing with Customized Diffusion Models
[CVPR 2023 AI4CC Workshop]

HIVE: Harnessing Human Feedback for Instructional Visual Editing
[Website] [Code]

Region-Aware Diffusion for Zero-shot Text-driven Image Editing
[Website] [Code]

Forgedit: Text Guided Image Editing via Learning and Forgetting
[Website] [Code]

UniTune: Text-Driven Image Editing by Fine Tuning an Image Generation Model on a Single Image
[SIGGRAPH 2023] [Code]

Watch Your Steps: Local Image and Scene Editing by Text Instructions
[Website] [Project]

Effective Real Image Editing with Accelerated Iterative Diffusion Inversion
[ICCV 2023 Oral]

Face Aging via Diffusion-based Editing
[BMVC 2023]

FISEdit: Accelerating Text-to-image Editing via Cache-enabled Sparse Diffusion Inference
[Website]

LayerDiffusion: Layered Controlled Image Editing with Diffusion Models
[Website]

Text-to-image Editing by Image Information Removal
[Website]

iEdit: Localised Text-guided Image Editing with Weak Supervision
[Website]

Prompt Tuning Inversion for Text-Driven Image Editing Using Diffusion Models
[Website]

KV Inversion: KV Embeddings Learning for Text-Conditioned Real Image Action Editing
[Website]

User-friendly Image Editing with Minimal Text Input: Leveraging Captioning and Injection Techniques
[Website]

PFB-Diff: Progressive Feature Blending Diffusion for Text-driven Image Editing
[Website]

LEDITS: Real Image Editing with DDPM Inversion and Semantic Guidance
[Website]

PRedItOR: Text Guided Image Editing with Diffusion Prior
[Website]

InstructDiffusion: A Generalist Modeling Interface for Vision Tasks
[Website]

MoEController: Instruction-based Arbitrary Image Manipulation with Mixture-of-Expert Controllers
[Website]

FEC: Three Finetuning-free Methods to Enhance Consistency for Real Image Editing
[Website]

The Blessing of Randomness: SDE Beats ODE in General Diffusion-based Image Editing
[Website]

Continual Learning

Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA
[Website] [Project]

Create Your World: Lifelong Text-to-Image Diffusion
[Website]

Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models
[Website] [Code]

RGBD2: Generative Scene Synthesis via Incremental View Inpainting using RGBD Diffusion Models
[Website] [CVPR 2023] [Project] [Code]

Exploring Continual Learning of Diffusion Models
[Website]

DiracDiffusion: Denoising and Incremental Reconstruction with Assured Data-Consistency
[Website]

Class-Incremental Learning using Diffusion Model for Distillation and Replay
[ICCV 2023 VCL workshop best paper]

DiffusePast: Diffusion-based Generative Replay for Class Incremental Semantic Segmentation
[Website]

Remove Concept

Ablating Concepts in Text-to-Image Diffusion Models
[ICCV 2023] [Code] [Project]

Erasing Concepts from Diffusion Models
[ICCV 2023] [Code] [Project]

Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models
[Website] [Code]

Inst-Inpaint: Instructing to Remove Objects with Diffusion Models
[Website] [Code] [Project] [Demo]

Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models
[Website] [Code]

Towards Safe Self-Distillation of Internet-Scale Text-to-Image Diffusion Models
[ICML 2023 workshop] [Code]

Geom-Erasing: Geometry-Driven Removal of Implicit Concept in Diffusion Models
[Website]

New Concept Learning

⭐⭐⭐An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion
[ICLR 2023 top-25%] [Website] [Code] [Diffusers Doc] [Diffusers Code]
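
Textual Inversion freezes the whole generator and optimizes only a single new token embedding against the usual denoising loss. A toy of that setup, with a frozen linear map standing in for the model and a quadratic loss standing in for the denoising objective (both assumptions, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.eye(3) + 0.1 * rng.standard_normal((3, 3))  # frozen "model"
target = rng.standard_normal(3)                     # the concept to capture
v = np.zeros(3)                                     # the one trainable embedding

lr = 0.9 / np.linalg.norm(W.T @ W, 2)               # safe step size
for _ in range(500):
    residual = W @ v - target                       # loss = 0.5 * ||Wv - target||^2
    v -= lr * (W.T @ residual)                      # gradient w.r.t. v only

print(float(np.linalg.norm(W @ v - target)))        # ~0: the embedding fits
```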

⭐⭐⭐DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation
[CVPR 2023] [Official Dataset] [Unofficial Code] [Project] [Diffusers Doc] [Diffusers Code]

⭐⭐Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion
[CVPR 2023] [Code] [Project] [Diffusers Doc] [Diffusers Code]

⭐⭐ReVersion: Diffusion-Based Relation Inversion from Images
[Website] [Code] [Project]

FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention
[Website] [Code] [Demo] [Project]

Enhancing Detail Preservation for Customized Text-to-Image Generation: A Regularization-Free Approach
[Website] [Code]

SINE: SINgle Image Editing with Text-to-Image Diffusion Models
[CVPR 2023] [Project] [Code]

SVDiff: Compact Parameter Space for Diffusion Fine-Tuning
[Website] [Code]

A Neural Space-Time Representation for Text-to-Image Personalization
[Website] [Code] [Project]

Break-A-Scene: Extracting Multiple Concepts from a Single Image
[Website] [Project] [Code]

Concept Decomposition for Visual Exploration and Inspiration
[Website] [Project] [Code]

AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning
[Website] [Project] [Code]

Subject-Diffusion: Open Domain Personalized Text-to-Image Generation without Test-time Fine-tuning
[Website] [Project] [Code]

HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models
[Website] [Project]

Highly Personalized Text Embedding for Image Manipulation by Stable Diffusion
[Website] [Code] [Project]

BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing
[Neurips 2023] [Code] [Project]

Unsupervised Compositional Concepts Discovery with Text-to-Image Generative Models
[ICCV 2023] [Code] [Project]

ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation
[ICCV 2023] [Code] [Demo]

Cones: Concept Neurons in Diffusion Models for Customized Generation
[ICML 2023 oral] [Code]

ITI-GEN: Inclusive Text-to-Image Generation
[ICCV 2023] [Project]

Anti-DreamBooth: Protecting users from personalized text-to-image synthesis
[ICCV 2023] [Code] [Project]

DreamArtist: Towards Controllable One-Shot Text-to-Image Generation via Positive-Negative Prompt-Tuning
[Website] [Code] [Project]

Encoder-based Domain Tuning for Fast Personalization of Text-to-Image Models
[SIGGRAPH 2023] [Code] [Project]

The Hidden Language of Diffusion Models
[Website] [Code] [Project]

Inserting Anybody in Diffusion Models via Celeb Basis
[Website] [Code] [Project]

ConceptBed: Evaluating Concept Learning Abilities of Text-to-Image Diffusion Models
[Website] [Code] [Project]

Controlling Text-to-Image Diffusion by Orthogonal Finetuning
[Website] [Code] [Project]

SingleInsert: Inserting New Concepts from a Single Image into Text-to-Image Models for Flexible Editing
[Website] [Project] [Code]

Photoswap: Personalized Subject Swapping in Images
[Website] [Project] [Code]

ProSpect: Expanded Conditioning for the Personalization of Attribute-aware Image Generation
[Website] [Code]

Diffusion in Diffusion: Cyclic One-Way Diffusion for Text-Vision-Conditioned Generation
[Website] [Project]

ViCo: Detail-Preserving Visual Condition for Personalized Text-to-Image Generation
[Website] [Code]

Subject-driven Text-to-Image Generation via Apprenticeship Learning
[Website] [Project]

DisenBooth: Disentangled Parameter-Efficient Tuning for Subject-Driven Text-to-Image Generation
[Website]

Controllable Textual Inversion for Personalized Text-to-Image Generation
[Website] [Code]

Is This Loss Informative? Speeding Up Textual Inversion with Deterministic Objective Evaluation
[Website] [Code]

Multiresolution Textual Inversion
[Neurips 2022 workshop] [Code]

Key-Locked Rank One Editing for Text-to-Image Personalization
[SIGGRAPH 2023] [Project]

Towards Prompt-robust Face Privacy Protection via Adversarial Decoupling Augmentation Framework
[Website]

A Closer Look at Parameter-Efficient Tuning in Diffusion Models
[Website] [Code]

Taming Encoder for Zero Fine-tuning Image Customization with Text-to-Image Diffusion Models
[Website]

Domain-Agnostic Tuning-Encoder for Fast Personalization of Text-To-Image Models
[Website] [Project]

$P+$: Extended Textual Conditioning in Text-to-Image Generation
[Website] [Project]

CustomNet: Zero-shot Object Customization with Variable-Viewpoints in Text-to-Image Diffusion Models
[Website] [Project]

Gradient-Free Textual Inversion
[Website]

Identity Encoder for Personalized Diffusion
[Website]

PhotoVerse: Tuning-Free Image Customization with Text-to-Image Diffusion Models
[Website] [Project]

InstantBooth: Personalized Text-to-Image Generation without Test-Time Finetuning
[Website] [Project]

Cross-domain Compositing with Pretrained Diffusion Models
[Website] [Code]

Total Selfie: Generating Full-Body Selfies
[Website] [Project]

Unified Multi-Modal Latent Diffusion for Joint Subject and Text Conditional Image Generation
[Website]

ELODIN: Naming Concepts in Embedding Spaces
[Website]

Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models
[Website]

Cones 2: Customizable Image Synthesis with Multiple Subjects
[Website]

Generate Anything Anywhere in Any Scene
[Website]

Paste, Inpaint and Harmonize via Denoising: Subject-Driven Image Editing with Pre-Trained Diffusion Model
[Website]

Face0: Instantaneously Conditioning a Text-to-Image Model on a Face
[Website]

MagiCapture: High-Resolution Multi-Concept Portrait Customization
[Website]

DreamStyler: Paint by Style Inversion with Text-to-Image Diffusion Models
[Website]

A Data Perspective on Enhanced Identity Preservation for Diffusion Personalization
[Website]

Representation Learning

Denoising Diffusion Autoencoders are Unified Self-supervised Learners
[ICCV 2023] [Code]

Diffusion Model as Representation Learner
[ICCV 2023] [Code]

Diffusion Models as Masked Autoencoders
[ICCV 2023] [Project]

Additional Conditions

⭐⭐⭐Adding Conditional Control to Text-to-Image Diffusion Models
[Website] [Official Code] [Diffusers Doc] [Diffusers Code]
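
ControlNet bolts a trainable copy of the encoder onto a frozen base model and joins them through zero-initialized convolutions, so at initialization the control branch is a no-op and training starts exactly from the base model's behavior. A 1-D toy of that residual hookup:

```python
import numpy as np

def conv1d(x, w, b):
    # 'same'-padded 1-D convolution (kernel size 3).
    xp = np.pad(x, 1)
    return np.array([xp[i:i + 3] @ w for i in range(len(x))]) + b

rng = np.random.default_rng(0)
w_base = rng.standard_normal(3)
x, hint = rng.standard_normal(8), rng.standard_normal(8)

w_zero, b_zero = np.zeros(3), 0.0       # zero-initialized output conv
base_out = conv1d(x, w_base, 0.0)
ctrl_out = base_out + conv1d(conv1d(hint, w_base, 0.0), w_zero, b_zero)

print(np.allclose(ctrl_out, base_out))  # True: control branch starts as a no-op
```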

MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation
[ICML 2023] [Project] [Code] [Demo] [Diffusers Code] [Diffusers Doc] [Replicate Demo]
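
MultiDiffusion reconciles per-window denoising passes by averaging their predictions wherever windows overlap, which lets a fixed-size model cover a larger canvas or composite several controlled regions. The fusion step, as a 1-D sketch:

```python
import numpy as np

def fuse_windows(canvas_len, windows, preds):
    # Average overlapping per-window predictions into one canvas-sized update.
    acc = np.zeros(canvas_len)
    cnt = np.zeros(canvas_len)
    for (start, end), p in zip(windows, preds):
        acc[start:end] += p
        cnt[start:end] += 1
    return acc / np.maximum(cnt, 1)

windows = [(0, 6), (4, 10)]                 # two size-6 windows, overlap on 4..5
preds = [np.full(6, 1.0), np.full(6, 3.0)]  # each window's denoising prediction
fused = fuse_windows(10, windows, preds)
print(fused)  # [1. 1. 1. 1. 2. 2. 3. 3. 3. 3.]
```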

GLIGEN: Open-Set Grounded Text-to-Image Generation
[CVPR 2023] [Code] [Demo]

T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
[Website] [Code]

Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models
[Website] [Code] [Project]

IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models
[Website] [Code] [Project]

Composer: Creative and controllable image synthesis with composable conditions
[Website] [Code] [Project]

DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models
[Website] [Code] [Project]

Cocktail: Mixing Multi-Modality Controls for Text-Conditional Image Generation
[Website] [Code] [Project]

UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild
[Website] [Code] [Project]

FreeDoM: Training-Free Energy-Guided Conditional Diffusion Model
[ICCV 2023] [Code]

Collaborative Diffusion for Multi-Modal Face Generation and Editing
[CVPR 2023] [Code] [Project]

HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion
[Website] [Code] [Project]

Freestyle Layout-to-Image Synthesis
[CVPR 2023 highlight] [Code] [Project]

Universal Guidance for Diffusion Models
[Website] [Code]

Late-Constraint Diffusion Guidance for Controllable Image Synthesis
[Website] [Code] [Project]

HumanSD: A Native Skeleton-Guided Diffusion Model for Human Image Generation
[ICCV 2023] [Code] [Project]

Modulating Pretrained Diffusion Models for Multimodal Image Synthesis
[SIGGRAPH 2023] [Project]

Sketch-Guided Text-to-Image Diffusion Models
[SIGGRAPH 2023] [Project] [Code]

SpaText: Spatio-Textual Representation for Controllable Image Generation
[CVPR 2023] [Project]

SSMG: Spatial-Semantic Map Guided Diffusion Model for Free-form Layout-to-Image Generation
[Website]

Conditioning Diffusion Models via Attributes and Semantic Masks for Face Generation
[Website]

Control4D: Dynamic Portrait Editing by Learning 4D GAN from 2D Diffusion-based Editor
[Website] [Project]

Integrating Geometric Control into Text-to-Image Diffusion Models for High-Quality Detection Data Generation via Text Prompt
[Website]

Adding 3D Geometry Control to Diffusion Models
[Website]

LayoutDiffuse: Adapting Foundational Diffusion Models for Layout-to-Image Generation
[Website]

SketchKnitter: Vectorized Sketch Generation with Diffusion Models
[ICLR 2023] [Code]

Palette: Image-to-Image Diffusion Models
[SIGGRAPH 2022] [Website]

JointNet: Extending Text-to-Image Diffusion for Dense Distribution Modeling
[Website]

Spatial Control

MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation
[ICML 2023] [Project] [Code] [Demo] [Diffusers Code] [Diffusers Doc] [Replicate Demo]

TextDiffuser: Diffusion Models as Text Painters
[Website] [Project] [Code] [Demo]

BoxDiff: Text-to-Image Synthesis with Training-Free Box-Constrained Diffusion
[ICCV 2023] [Code]

Dense Text-to-Image Generation with Attention Modulation
[ICCV 2023] [Code]

Directed Diffusion: Direct Control of Object Placement through Attention Guidance
[Website] [Project] [Code]

Grounded Text-to-Image Synthesis with Attention Refocusing
[Website] [Project] [Code]

LayoutLLM-T2I: Eliciting Layout Guidance from LLM for Text-to-Image Generation
[Website] [Project] [Code]

LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models
[Website] [Project] [Code] [Demo] [Blog]

Training-Free Layout Control with Cross-Attention Guidance
[Website] [Project] [Code]

Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis
[Arxiv] [ICLR 2023 openreview] [Project] [Code]

ReCo: Region-Controlled Text-to-Image Generation
[Website] [CVPR 2023] [Code]

Visual Programming for Text-to-Image Generation and Evaluation
[Website] [Project] [Code]

SceneComposer: Any-Level Semantic Image Synthesis
[Website] [CVPR 2023] [Code] [Project]

Harnessing the Spatial-Temporal Attention of Diffusion Models for High-Fidelity Text-to-Image Synthesis
[ICCV 2023] [Code]

Compositional Text-to-Image Synthesis with Attention Map Control of Diffusion Models
[Website] [Code]

A-STAR: Test-time Attention Segregation and Retention for Text-to-image Synthesis
[Website]

Controllable Text-to-Image Generation with GPT-4
[Website]

Guided Image Synthesis via Initial Image Editing in Diffusion Model
[Website]

Localized Text-to-Image Generation for Free via Cross Attention Control
[Website]

Training-Free Location-Aware Text-to-Image Synthesis
[Website]

Composite Diffusion | whole >= Σ parts
[Website]

Continuous Layout Editing of Single Images with Diffusion Models
[Website]

Masked-Attention Diffusion Guidance for Spatially Controlling Text-to-Image Generation
[Website]

Zero-shot spatial layout conditioning for text-to-image diffusion models
[Website]

R&B: Region and Boundary Aware Zero-shot Grounded Text-to-image Generation
[Website]

T2I Diffusion Model Augmentation

⭐⭐Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models
[SIGGRAPH 2023] [Official Code] [Diffusers Code] [Diffusers Doc] [Project] [Replicate Demo]
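
Attend-and-Excite nudges the latent at sampling time so that every subject token attains a strong cross-attention response, penalizing the token that is attended to least. The loss it descends, as a sketch:

```python
import numpy as np

def attend_excite_loss(attn, subject_tokens):
    # attn: (pixels, tokens) cross-attention map.
    # Penalize the subject token with the weakest maximum attention anywhere.
    per_token_max = attn[:, subject_tokens].max(axis=0)
    return float(1.0 - per_token_max.min())

attn = np.array([[0.7, 0.2, 0.1],
                 [0.6, 0.3, 0.1],
                 [0.5, 0.4, 0.1]])
print(attend_excite_loss(attn, [1, 2]))  # 0.9: token 2 peaks at only 0.1
```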

Improving Sample Quality of Diffusion Models Using Self-Attention Guidance
[ICCV 2023] [Project] [Code Official] [Diffusers Code] [Diffusers Doc] [Demo]

Expressive Text-to-Image Generation with Rich Text
[ICCV 2023] [Project] [Code] [Demo]

SEGA: Instructing Diffusion using Semantic Dimensions
[Website] [Neurips 2023] [Code] [Diffusers Code] [Diffusers Doc]

MagicFusion: Boosting Text-to-Image Generation Performance by Fusing Diffusion Models
[ICCV 2023] [Project] [Code]

Discriminative Class Tokens for Text-to-Image Diffusion Models
[ICCV 2023] [Project] [Code]

Compositional Visual Generation with Composable Diffusion Models
[ECCV 2022] [Code] [Project]

ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation
[Neurips 2023] [Code]

Linguistic Binding in Diffusion Models: Enhancing Attribute Correspondence through Attention Map Alignment
[Neurips 2023] [Code]

ORES: Open-vocabulary Responsible Visual Synthesis
[Website] [Code]

Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness
[Website] [Code]

Editing Implicit Assumptions in Text-to-Image Diffusion Models
[ICCV 2023] [Project] [Demo]

Real-World Image Variation by Aligning Diffusion Inversion Chain
[Website] [Project] [Code]

SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with Large Language Models
[Website] [Code]

Detector Guidance for Multi-Object Text-to-Image Generation
[Website] [Code]

Designing a Better Asymmetric VQGAN for StableDiffusion
[Website] [Code]

FABRIC: Personalizing Diffusion Models with Iterative Feedback
[Website] [Code]

FreeU: Free Lunch in Diffusion U-Net
[Website] [Code] [Project]

ConceptLab: Creative Generation using Diffusion Prior Constraints
[Website] [Code] [Project]

Aligning Text-to-Image Diffusion Models with Reward Backpropagation
[Website] [Code] [Project]

Mini-DALLE3: Interactive Text to Image by Prompting Large Language Models
[Website] [Code] [Project]

ScaleCrafter: Tuning-free Higher-Resolution Visual Generation with Diffusion Models
[Website] [Code] [Project]

Energy-Based Cross Attention for Bayesian Context Update in Text-to-Image Diffusion Models
[Website] [Code]

Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models
[Website] [Code]

Progressive Text-to-Image Diffusion with Soft Latent Direction
[Website] [Code]

Hypernymy Understanding Evaluation of Text-to-Image Models via WordNet Hierarchy
[Website] [Code]

If at First You Don't Succeed, Try, Try Again: Faithful Diffusion-based Text-to-Image Generation by Selection
[Website] [Code]

LLM Blueprint: Enabling Text-to-Image Generation with Complex and Detailed Prompts
[Website] [Code]

Making Multimodal Generation Easier: When Diffusion Models Meet LLMs
[Website] [Code]

StyleDrop: Text-to-Image Generation in Any Style
[Website] [Project]

Diffusion Self-Guidance for Controllable Image Generation
[Website] [Project]

Amazing Combinatorial Creation: Acceptable Swap-Sampling for Text-to-Image Generation
[Website] [Project]

Divide & Bind Your Attention for Improved Generative Semantic Nursing
[Website] [Project]

MaskDiffusion: Boosting Text-to-Image Consistency with Conditional Mask
[Website]

Any-Size-Diffusion: Toward Efficient Text-Driven Synthesis for Any-Size HD Images
[Website]

Text2Layer: Layered Image Generation using Latent Diffusion Model
[Website]

AltDiffusion: A Multilingual Text-to-Image Diffusion Model
[Website]

It is all about where you start: Text-to-image generation with seed selection
[Website]

End-to-End Diffusion Latent Optimization Improves Classifier Guidance
[Website]

Stimulating the Diffusion Model for Image Denoising via Adaptive Embedding and Ensembling
[Website]

A Picture is Worth a Thousand Words: Principled Recaptioning Improves Image Generation
[Website]

Norm-guided latent space exploration for text-to-image generation
[Website]

DiffSketcher: Text Guided Vector Sketch Synthesis through Latent Diffusion Models
[Website]

Decompose and Realign: Tackling Condition Misalignment in Text-to-Image Diffusion Models
[Website]

Improving Compositional Text-to-image Generation with Large Vision-Language Models
[Website]

Multi-Concept T2I-Zero: Tweaking Only The Text Embeddings and Nothing Else
[Website]

Unseen Image Synthesis with Diffusion Models
[Website]

Segmentation Detection Tracking

⭐⭐ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models
[CVPR 2023 Highlight] [Project] [Code] [Demo]

Personalize Segment Anything Model with One Shot
[Website] [Code]

Diffuse, Attend, and Segment: Unsupervised Zero-Shot Segmentation using Stable Diffusion
[Website] [Code] [Project]

LD-ZNet: A Latent Diffusion Approach for Text-Based Image Segmentation
[Website] [ICCV 2023] [Project] [Code]

Stochastic Segmentation with Conditional Categorical Diffusion Models
[Website] [ICCV 2023] [Code]

DDP: Diffusion Model for Dense Visual Prediction
[Website] [ICCV 2023] [Code]

OVTrack: Open-Vocabulary Multiple Object Tracking
[Website] [CVPR 2023] [Project]

Diffusion-based Image Translation with Label Guidance for Domain Adaptive Semantic Segmentation
[Website] [ICCV 2023]

DiffusionDet: Diffusion Model for Object Detection
[ICCV 2023] [Code]

DiffuMask: Synthesizing Images with Pixel-level Annotations for Semantic Segmentation Using Diffusion Models
[Website] [Project]

DiffusionTrack: Diffusion Model For Multi-Object Tracking
[Website] [Code]

MosaicFusion: Diffusion Models as Data Augmenters for Large Vocabulary Instance Segmentation
[Website] [Code]

Beyond Generation: Harnessing Text to Image Models for Object Detection and Segmentation
[Website] [Code]

SLiMe: Segment Like Me
[Website]

MaskDiff: Modeling Mask Distribution with Diffusion Probabilistic Model for Few-Shot Instance Segmentation
[Website]

DiffusionSeg: Adapting Diffusion Towards Unsupervised Object Discovery
[Website]

Ref-Diff: Zero-shot Referring Image Segmentation with Generative Models
[Website]

Diffusion Model is Secretly a Training-free Open Vocabulary Semantic Segmenter
[Website]

Attention as Annotation: Generating Images and Pseudo-masks for Weakly Supervised Semantic Segmentation with Diffusion
[Website]

From Text to Mask: Localizing Entities Using the Attention of Text-to-Image Diffusion Models
[Website]

Factorized Diffusion Architectures for Unsupervised Image Generation and Segmentation
[Website]

Patch-based Selection and Refinement for Early Object Detection
[Website]

Few-Shot

Discriminative Diffusion Models as Few-shot Vision and Language Learners
[Website]

Few-shot Semantic Image Synthesis with Class Affinity Transfer
[CVPR 2023]

Few-Shot Diffusion Models
[Website] [Code]

DiffAlign: Few-shot learning using diffusion based synthesis and alignment
[Website]

Few-shot Image Generation with Diffusion Models
[Website]

Lafite2: Few-shot Text-to-Image Generation
[Website]

Drag Image Edit

Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold
[SIGGRAPH 2023] [Code] [Project]

DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing
[Website] [Code]

DragonDiffusion: Enabling Drag-style Manipulation on Diffusion Models
[Website] [Code]

DragNUWA: Fine-grained Control in Video Generation by Integrating Text, Image, and Trajectory
[Website] [Code] [Project]

SD-inpaint

Blended Diffusion for Text-driven Editing of Natural Images
[Website] [CVPR 2022] [Project] [Code]

Blended Latent Diffusion
[SIGGRAPH 2023] [Code] [Project]
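
Both Blended Diffusion papers keep the background on the source image's diffusion trajectory and let the model generate only inside the mask, re-blending at every denoising step. One blending step in numpy:

```python
import numpy as np

def blend_step(x_gen, x_src, mask, alpha_bar_t, rng):
    # Inside the mask: keep the generated latent.
    # Outside: the source image, noised to the current timestep.
    noise = rng.standard_normal(x_src.shape)
    x_src_t = np.sqrt(alpha_bar_t) * x_src + np.sqrt(1 - alpha_bar_t) * noise
    return mask * x_gen + (1 - mask) * x_src_t

rng = np.random.default_rng(0)
mask = np.array([1.0, 1.0, 0.0, 0.0])  # edit only the first two entries
x_gen = np.full(4, 5.0)
x_src = np.zeros(4)
out = blend_step(x_gen, x_src, mask, alpha_bar_t=1.0, rng=rng)
print(out)  # [5. 5. 0. 0.]: generated inside the mask, source outside
```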

Paint by Example: Exemplar-based Image Editing with Diffusion Models
[Website] [Code] [Diffusers Doc] [Diffusers Code]

GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models
[Website] [Code]

Reference-based Image Composition with Sketch via Structure-aware Diffusion Model
[Website] [Code]

Imagen Editor and EditBench: Advancing and Evaluating Text-Guided Image Inpainting
[CVPR 2023] [Code]

TF-ICON: Diffusion-Based Training-Free Cross-Domain Image Composition
[ICCV 2023] [Code] [Project]

Towards Coherent Image Inpainting Using Denoising Diffusion Implicit Models
[ICML 2023] [Website] [Code]

AnyDoor: Zero-shot Object-level Image Customization
[Website] [Project] [Code]

Delving Globally into Texture and Structure for Image Inpainting
[ACM MM 2022] [Code]

Image Inpainting via Iteratively Decoupled Probabilistic Modeling
[Website] [Code]

ControlCom: Controllable Image Composition using Diffusion Model
[Website] [Code]

Energy-Based Cross Attention for Bayesian Context Update in Text-to-Image Diffusion Models
[Website] [Code]

Uni-paint: A Unified Framework for Multimodal Image Inpainting with Pretrained Diffusion Model
[Website] [Code]

MagicRemover: Tuning-free Text-guided Image Inpainting with Diffusion Models
[Website] [Code]

360-Degree Panorama Generation from Few Unregistered NFoV Images
[ACM MM 2023] [Code]

SmartBrush: Text and Shape Guided Object Inpainting with Diffusion Model
[Website]

Gradpaint: Gradient-Guided Inpainting with Diffusion Models
[Website]

Infusion: Internal Diffusion for Video Inpainting
[Website]

I2I Translation

SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations
[Website] [ICLR 2022] [Project] [Code]

DiffusionCLIP: Text-Guided Diffusion Models for Robust Image Manipulation
[Website] [CVPR 2022] [Code]

Diffusion-based Image Translation using Disentangled Style and Content Representation
[Website] [ICLR 2023] [Code]

Diffusion Guided Domain Adaptation of Image Generators
[Website] [Code] [Project]

CycleNet: Rethinking Cycle Consistency in Text-Guided Diffusion for Image Manipulation
[Website] [Code] [Project]

FlexIT: Towards Flexible Semantic Image Translation
[Website] [Code]

Improving Diffusion-based Image Translation using Asymmetric Gradient Guidance
[Website] [Code]

Cross-Image Attention for Zero-Shot Appearance Transfer
[Website] [Project] [Code]

Diff-Retinex: Rethinking Low-light Image Enhancement with A Generative Diffusion Model
[ICCV 2023]

Zero-Shot Contrastive Loss for Text-Guided Diffusion Image Style Transfer
[ICCV 2023] [Code]

StyleDiffusion: Controllable Disentangled Style Transfer via Diffusion Models
[ICCV 2023]

ControlStyle: Text-Driven Stylized Image Generation Using Diffusion Priors
[Website]

Document Layout Generation

LayoutDM: Discrete Diffusion Model for Controllable Layout Generation
[CVPR 2023] [Code] [Project]

DLT: Conditioned layout generation with Joint Discrete-Continuous Diffusion Layout Transformer
[ICCV 2023] [Code]

LayoutDiffusion: Improving Graphic Layout Generation by Discrete Diffusion Probabilistic Models
[ICCV 2023] [Code]

LayoutDM: Transformer-based Diffusion Model for Layout Generation
[CVPR 2023]

Unifying Layout Generation with a Decoupled Diffusion Model
[CVPR 2023]

PLay: Parametrically Conditioned Layout Generation using Latent Diffusion
[ICML 2023]

Diffusion-based Document Layout Generation
[Website]

Dolfin: Diffusion Layout Transformers without Autoencoder
[Website]

Text Generation

DiffUTE: Universal Text Editing Diffusion Model
[Neurips 2023] [Code]

GlyphControl: Glyph Conditional Control for Visual Text Generation
[Website] [Code] [Neurips 2023]

TextDiffuser: Diffusion Models as Text Painters
[Website] [Neurips 2023] [Code] [Project]

Word-As-Image for Semantic Typography
[SIGGRAPH 2023] [Code] [Project]

Ambigram generation by a diffusion model
[ICDAR 2023] [Code]

AnyText: Multilingual Visual Text Generation And Editing
[Website] [Code]

Super Resolution

⭐⭐⭐Image Super-Resolution via Iterative Refinement
[TPAMI] [Code] [Project]
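
SR3 frames super-resolution as conditional denoising: the model refines pure noise into the high-res image while seeing the upsampled low-res input at every step. The sampling loop's shape, with an oracle predictor standing in for the trained UNet (an assumption: its implied clean image is the conditioning signal itself, so refinement should land on it):

```python
import numpy as np

def refine(x_lr_up, eps_model, alphas, rng):
    # Denoise from pure noise, conditioning every step on the
    # upsampled low-res input x_lr_up (DDIM-style deterministic update).
    y = rng.standard_normal(x_lr_up.shape)
    for a_cur, a_next in zip(alphas[::-1][:-1], alphas[::-1][1:]):
        eps = eps_model(y, x_lr_up, a_cur)
        y0 = (y - np.sqrt(1 - a_cur) * eps) / np.sqrt(a_cur)
        y = np.sqrt(a_next) * y0 + np.sqrt(1 - a_next) * eps
    return y

def oracle(y, cond, a):
    # Hypothetical stand-in predictor: the noise implied if cond were clean.
    return (y - np.sqrt(a) * cond) / np.sqrt(1 - a)

rng = np.random.default_rng(0)
alphas = np.linspace(0.9999, 0.02, 50)  # alpha_bar schedule, t = 0 .. T
x_lr_up = rng.standard_normal(4)
y = refine(x_lr_up, oracle, alphas, rng)
print(float(np.max(np.abs(y - x_lr_up))) < 0.2)  # True: lands near the target
```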

ResShift: Efficient Diffusion Model for Image Super-resolution by Residual Shifting
[Website] [Neurips 2023 spotlight] [Code] [Project]

Exploiting Diffusion Prior for Real-World Image Super-Resolution
[Website] [Code] [Project]

DiffIR: Efficient Diffusion Model for Image Restoration
[ICCV 2023] [Code]

Image Super-resolution Via Latent Diffusion: A Sampling-space Mixture Of Experts And Frequency-augmented Decoder Approach
[Website] [Code]

Pixel-Aware Stable Diffusion for Realistic Image Super-resolution and Personalized Stylization
[Website] [Code]

HSR-Diff: Hyperspectral Image Super-Resolution via Conditional Diffusion Models
[ICCV 2023]

Solving Diffusion ODEs with Optimal Boundary Conditions for Better Image Super-Resolution
[Website]

Dissecting Arbitrary-scale Super-resolution Capability from Pre-trained Diffusion Generative Models
[Website]

YODA: You Only Diffuse Areas. An Area-Masked Diffusion Approach For Image Super-Resolution
[Website]

Domain Transfer in Latent Space (DTLS) Wins on Image Super-Resolution -- a Non-Denoising Model
[Website]

X2I / X2X (Any-to-Image and Any-to-Any Generation)

GlueGen: Plug and Play Multi-modal Encoders for X-to-image Generation
[ICCV 2023] [Code]

CoDi: Any-to-Any Generation via Composable Diffusion
[Website] [NeurIPS 2023] [Code] [Project]

Video Generation

Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators
[ICCV 2023 Oral] [Code] [Project]

Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models
[CVPR 2023] [Website] [Project] [Code]

MagicAvatar: Multimodal Avatar Generation and Animation
[Website] [Project] [Code]

Video Diffusion Models
[Website] [ICLR 2022 workshop] [Code] [Project]

VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation
[CVPR 2023] [Code]

SinFusion: Training Diffusion Models on a Single Image or Video
[Website] [ICML 2023] [Project] [Code]

MCVD: Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation
[Website] [Openreview] [NeurIPS 2022] [Project] [Code]

Follow Your Pose: Pose-Guided Text-to-Video Generation using Pose-Free Videos
[Website] [Project] [Code]

GLOBER: Coherent Non-autoregressive Video Generation via GLOBal Guided Video DecodER
[NeurIPS 2023] [Code]

Conditional Image-to-Video Generation with Latent Flow Diffusion Models
[CVPR 2023] [Code]

Latent Video Diffusion Models for High-Fidelity Long Video Generation
[Website] [Project] [Code]

Make-Your-Video: Customized Video Generation Using Textual and Structural Guidance
[Website] [Project] [Code]

Preserve Your Own Correlation: A Noise Prior for Video Diffusion Models
[Website] [Project] [Code]

Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising
[Website] [Code] [Project]

Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models
[Website] [Code] [Project]

VideoComposer: Compositional Video Synthesis with Motion Controllability
[Website] [Project] [Code]

DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion
[Website] [Project] [Code]

LAVIE: High-Quality Video Generation with Cascaded Latent Diffusion Models
[Website] [Project] [Code]

Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation
[Website] [Project] [Code]

LAMP: Learn A Motion Pattern for Few-Shot-Based Video Generation
[Website] [Project] [Code]

LLM-grounded Video Diffusion Models
[Website] [Project] [Code]

FreeNoise: Tuning-Free Longer Video Diffusion Via Noise Rescheduling
[Website] [Project] [Code]

VideoCrafter1: Open Diffusion Models for High-Quality Video Generation
[Website] [Project] [Code]

VideoDreamer: Customized Multi-Subject Text-to-Video Generation with Disen-Mix Finetuning
[Website] [Project] [Code]

I2VGen-XL: High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models
[Website] [Project] [Code]

DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors
[Website] [Code]

Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator
[NeurIPS 2023] [Code]

Diffusion Probabilistic Modeling for Video Generation
[Website] [Code]

Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation
[Website] [Project]

Imagen Video: High Definition Video Generation with Diffusion Models
[Website] [Project]

SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction
[Website]

Dual-Stream Diffusion Net for Text-to-Video Generation
[Website]

SimDA: Simple Diffusion Adapter for Efficient Video Generation
[Website]

VideoFactory: Swap Attention in Spatiotemporal Diffusions for Text-to-Video Generation
[Website]

Empowering Dynamics-aware Text-to-Video Diffusion with Large Language Models
[Website]

ConditionVideo: Training-Free Condition-Guided Text-to-Video Generation
[Website]

LatentWarp: Consistent Diffusion Latents for Zero-Shot Video-to-Video Translation
[Website]

Optimal Noise pursuit for Augmenting Text-to-Video Generation
[Website]

Video Editing

FateZero: Fusing Attentions for Zero-shot Text-based Video Editing
[ICCV 2023] [Code] [Project]

Video-P2P: Video Editing with Cross-attention Control
[Website] [Code] [Project]

Vid2Vid-zero: Zero-Shot Video Editing Using Off-the-Shelf Image Diffusion Models
[Website] [Code]

CoDeF: Content Deformation Fields for Temporally Consistent Video Processing
[Website] [Code] [Project]

MagicEdit: High-Fidelity and Temporally Coherent Video Editing
[Website] [Project] [Code]

Edit Temporal-Consistent Videos with Image Diffusion Model
[Website]

Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
[ICCV 2023] [Code] [Project]

TokenFlow: Consistent Diffusion Features for Consistent Video Editing
[Website] [Code] [Project]

ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing
[Website] [Code] [Project]

Make-A-Protagonist: Generic Video Editing with An Ensemble of Experts
[Website] [Code] [Project]

MotionDirector: Motion Customization of Text-to-Video Diffusion Models
[Website] [Code] [Project]

Diffusion Video Autoencoders: Toward Temporally Consistent Face Video Editing via Disentangled Video Encoding
[CVPR 2023] [Code] [Project]

Text2LIVE: Text-Driven Layered Image and Video Editing
[ECCV 2022 Oral] [Project] [Code]

StableVideo: Text-driven Consistency-aware Diffusion Video Editing
[ICCV 2023] [Code]

Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models
[Website] [Project] [Code]

LOVECon: Text-driven Training-Free Long Video Editing with ControlNet
[Website] [Code]

Pix2Video: Video Editing Using Image Diffusion
[Website] [Code]

DynVideo-E: Harnessing Dynamic NeRF for Large-Scale Motion- and View-Change Human-Centric Video Editing
[Website] [Project]

Style-A-Video: Agile Diffusion for Arbitrary Text-based Video Style Transfer
[Website] [Code]

MagicProp: Diffusion-based Video Editing via Motion-aware Appearance Propagation
[Website]

FLATTEN: optical FLow-guided ATTENtion for consistent text-to-video editing
[Website] [Project]

VidEdit: Zero-Shot and Spatially Aware Text-Driven Video Editing
[Website] [Project]

Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation
[Website] [Project]

Shape-Aware Text-Driven Layered Video Editing
[CVPR 2023] [Project]

MeDM: Mediating Image Diffusion Models for Video-to-Video Translation with Temporal Correspondence Guidance
[Website] [Project]

Dreamix: Video Diffusion Models Are General Video Editors
[Website]

Towards Consistent Video Editing with Text-to-Image Diffusion Models
[Website]

EVE: Efficient zero-shot text-based Video Editing with Depth Map Guidance and Temporal Consistency Constraints
[Website]

CCEdit: Creative and Controllable Video Editing via Diffusion Models
[Website]

Fuse Your Latents: Video Editing with Multi-source Latent Diffusion Models
[Website]

Contributors

wangkai930418, tchuanm, yangqy91, ocram17, ernestchu, omriav
