Grounded-Segment-Anything

Marrying Grounding DINO with Segment Anything & Stable Diffusion & BLIP & Whisper: automatically detect, segment, and generate anything from image, text, and speech inputs.

License: Apache License 2.0

We plan to create a very interesting demo by combining Grounding DINO and Segment Anything! Right now this is still a simple, small project; we will keep improving it and building more interesting demos. Thanks to the community users who provided the Colab demo.

We are happy to help everyone share and promote new projects built on Segment Anything, and we highlight some excellent ones under Highlight Extension Projects. To add a project link, submit a new issue (with the project tag) or a pull request.

Why this project?

The core idea behind this project is to combine the strengths of different models to build a powerful pipeline for solving complex problems. It is worth emphasizing that this is a workflow for combining strong expert models: all parts can be used separately or together, and each can be replaced with a similar model (for example, swapping Grounding DINO for GLIP or another detector, swapping Stable Diffusion for ControlNet or GLIGEN, or combining with ChatGPT). A minimal sketch of this composition follows the list below.

  • Segment Anything is a strong segmentation model, but it needs prompts (such as boxes or points) to generate masks.
  • Grounding DINO is a strong zero-shot detector, capable of generating high-quality boxes and labels from free-form text.
  • Grounding DINO + SAM can detect and segment anything, at any level, from text inputs!
  • BLIP + Grounding DINO + SAM yields an automatic labeling system!
  • Grounding DINO + SAM + Stable Diffusion yields a data factory that generates new data!
  • Whisper + Grounding DINO + SAM detects and segments anything from speech!
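
As a minimal sketch of this plug-and-play idea (the function below is illustrative glue written for this README, not part of the repo's API), the whole workflow is just a composition of interchangeable callables:

# Illustrative composition: any text-prompted detector and any promptable
# segmenter plug into the same pipeline without knowing about each other.
def grounded_sam_pipeline(image, text_prompt, detector, segmenter):
    boxes = detector(image, text_prompt)   # e.g. Grounding DINO, or GLIP
    masks = segmenter(image, boxes)        # e.g. SAM prompted with the boxes
    return boxes, masks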

🔥 🔈 Speak to edit 🎨: Whisper + ChatGPT + Grounded-SAM + SD

Grounded-SAM

Grounded-SAM + Stable-Diffusion Inpainting: Data-Factory, Generating New Data!

BLIP + Grounded-SAM: Automatic Labeling System!

Use BLIP to generate a caption and extract tags from it, then use Grounded-SAM to generate boxes and masks. Here's the demo output:

Imagine Space

Some possible avenues for future work ...

  • Automatic image generation to construct new datasets.
  • Stronger foundation models with segmentation pre-training.
  • Collaboration with (Chat-)GPT.
  • A complete pipeline that automatically labels images (with boxes and masks) and generates new images.

More Examples

Tips

  • If you want to detect multiple objects in one sentence with Grounding DINO, we suggest separating each name with a period (.). An example: cat . dog . chair .

🔥 What's New

  • 🆕 Released the interactive fashion-edit playground here. Run it in the notebook and simply click to annotate points for further segmentation. Enjoy it!

  • 🆕 Check out our related human-face-edit branch here. We'll keep updating this branch with more interesting features. Here are some examples:

💡 Highlight Extension Projects

📑 Catalog

  • Grounding DINO Demo
  • Grounding DINO + Segment Anything Demo
  • Grounding DINO + Segment Anything + Stable-Diffusion Demo
  • BLIP + Grounding DINO + Segment Anything + Stable-Diffusion Demo
  • Whisper + Grounding DINO + Segment Anything + Stable-Diffusion Demo
  • Hugging Face Demo
  • Colab demo

📖 Notebook Demo

See our notebook file as an example.

๐Ÿ› ๏ธ Installation

The code requires python>=3.8, as well as pytorch>=1.7 and torchvision>=0.8. Please follow the instructions here to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.

Install Segment Anything:

python -m pip install -e segment_anything

Install Grounding DINO:

python -m pip install -e GroundingDINO

Install diffusers:

pip install --upgrade diffusers[torch]

The following optional dependencies are needed for mask post-processing, saving masks in COCO format, the example notebooks, and exporting the model in ONNX format. jupyter is also required to run the example notebooks.

pip install opencv-python pycocotools matplotlib onnxruntime onnx ipykernel

More details can be found in the install segment anything and install GroundingDINO instructions.

๐Ÿƒ Run Grounding DINO Demo

  • Download the checkpoint for Grounding DINO:
cd Grounded-Segment-Anything

wget https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth
  • Run demo
export CUDA_VISIBLE_DEVICES=0
python grounding_dino_demo.py \
  --config GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py \
  --grounded_checkpoint groundingdino_swint_ogc.pth \
  --input_image assets/demo1.jpg \
  --output_dir "outputs" \
  --box_threshold 0.3 \
  --text_threshold 0.25 \
  --text_prompt "bear" \
  --device "cuda"
  • The model prediction visualization will be saved in output_dir as follows:
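
The same detection step can also be scripted in Python. A minimal sketch using the inference helpers bundled with GroundingDINO (paths and thresholds mirror the command above; adjust them to your checkout):

import cv2
from groundingdino.util.inference import load_model, load_image, predict, annotate

# Load the Swin-T config and the checkpoint downloaded above.
model = load_model(
    "GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py",
    "groundingdino_swint_ogc.pth",
)

# load_image returns the original array plus the normalized tensor the model expects.
image_source, image = load_image("assets/demo1.jpg")

# Boxes come back as normalized cxcywh; phrases are the matched text spans.
boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="bear",
    box_threshold=0.3,
    text_threshold=0.25,
)

annotated = annotate(image_source=image_source, boxes=boxes, logits=logits, phrases=phrases)
cv2.imwrite("annotated_demo1.jpg", annotated)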

๐Ÿƒโ€โ™‚๏ธ Run Grounded-Segment-Anything Demo

  • Download the checkpoints for Segment Anything and Grounding DINO:
cd Grounded-Segment-Anything

wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
wget https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth
  • Run Demo
export CUDA_VISIBLE_DEVICES=0
python grounded_sam_demo.py \
  --config GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py \
  --grounded_checkpoint groundingdino_swint_ogc.pth \
  --sam_checkpoint sam_vit_h_4b8939.pth \
  --input_image assets/demo1.jpg \
  --output_dir "outputs" \
  --box_threshold 0.3 \
  --text_threshold 0.25 \
  --text_prompt "bear" \
  --device "cuda"
  • The model prediction visualization will be saved in output_dir as follows:
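
If you prefer to drive this from Python rather than the demo script, the two models chain together directly. A minimal sketch of the box-to-mask plumbing (the demo script additionally handles visualization and mask saving):

import torch
from groundingdino.util.inference import load_model, load_image, predict
from segment_anything import sam_model_registry, SamPredictor

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1. Detect boxes for the text prompt with Grounding DINO.
dino = load_model(
    "GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py",
    "groundingdino_swint_ogc.pth",
)
image_source, image = load_image("assets/demo1.jpg")
boxes, logits, phrases = predict(dino, image, "bear", box_threshold=0.3, text_threshold=0.25)

# 2. Prompt SAM with those boxes to get masks.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to(device)
predictor = SamPredictor(sam)
predictor.set_image(image_source)  # RGB HxWx3 uint8 array

# Grounding DINO outputs normalized cxcywh boxes; SAM expects absolute xyxy.
h, w, _ = image_source.shape
boxes_xyxy = boxes * torch.tensor([w, h, w, h])
boxes_xyxy[:, :2] -= boxes_xyxy[:, 2:] / 2   # cx,cy -> x1,y1
boxes_xyxy[:, 2:] += boxes_xyxy[:, :2]       # w,h  -> x2,y2

transformed = predictor.transform.apply_boxes_torch(boxes_xyxy, image_source.shape[:2]).to(device)
masks, _, _ = predictor.predict_torch(
    point_coords=None,
    point_labels=None,
    boxes=transformed,
    multimask_output=False,
)
# masks: (num_boxes, 1, H, W) boolean tensors, one mask per detected phrase.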

โ›ท๏ธ Run Grounded-Segment-Anything + Inpainting Demo

export CUDA_VISIBLE_DEVICES=0
python grounded_sam_inpainting_demo.py \
  --config GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py \
  --grounded_checkpoint groundingdino_swint_ogc.pth \
  --sam_checkpoint sam_vit_h_4b8939.pth \
  --input_image assets/inpaint_demo.jpg \
  --output_dir "outputs" \
  --box_threshold 0.3 \
  --text_threshold 0.25 \
  --det_prompt "bench" \
  --inpaint_prompt "A sofa, high quality, detailed" \
  --device "cuda"
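
Under the hood, this hands the object mask produced by Grounded-SAM to a diffusers inpainting pipeline. A minimal sketch of that final stage alone (the mask path is a hypothetical placeholder for a mask you saved earlier, and the model id is the stock diffusers inpainting checkpoint, which may differ from the one the demo loads):

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# White pixels in the mask are repainted; black pixels are kept.
init_image = Image.open("assets/inpaint_demo.jpg").convert("RGB").resize((512, 512))
mask_image = Image.open("outputs/mask.png").convert("L").resize((512, 512))  # hypothetical mask path

result = pipe(
    prompt="A sofa, high quality, detailed",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.jpg")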

๐ŸŒ๏ธ Run Grounded-Segment-Anything + Inpainting Gradio APP

python gradio_app.py
  • The gradio_app visualization is as follows:

🤖 Run Grounded-Segment-Anything + BLIP Demo

It is easy to generate pseudo labels automatically as follows:

  1. Use BLIP (or other caption models) to generate a caption.
  2. Extract tags from the caption. We use ChatGPT to handle potentially complicated sentences.
  3. Use Grounded-Segment-Anything to generate the boxes and masks.
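
The captioning in step 1 can be reproduced with the transformers BLIP checkpoint. A minimal sketch (the model id is the stock BLIP captioning checkpoint, which may differ from the one the demo loads):

from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Step 1: caption the image with BLIP.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

raw_image = Image.open("assets/demo3.jpg").convert("RGB")
inputs = processor(raw_image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(out[0], skip_special_tokens=True)

# Steps 2-3: extract noun tags from the caption (e.g. via ChatGPT), then pass
# them to Grounded-SAM as a text prompt like "tag1 . tag2 . tag3".
print(caption)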
  • Run Demo
export CUDA_VISIBLE_DEVICES=0
python automatic_label_demo.py \
  --config GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py \
  --grounded_checkpoint groundingdino_swint_ogc.pth \
  --sam_checkpoint sam_vit_h_4b8939.pth \
  --input_image assets/demo3.jpg \
  --output_dir "outputs" \
  --openai_key your_openai_key \
  --box_threshold 0.25 \
  --text_threshold 0.2 \
  --iou_threshold 0.5 \
  --device "cuda"
  • The pseudo labels and model prediction visualization will be saved in output_dir as follows:

😮 Run Grounded-Segment-Anything + Whisper Demo

Detect and segment anything with speech!

Install Whisper

pip install -U openai-whisper

See the official Whisper page if you have other questions about the installation.

Run Voice-to-Label Demo

Optional: Download the demo audio file

wget https://huggingface.co/ShilongLiu/GroundingDINO/resolve/main/demo_audio.mp3
export CUDA_VISIBLE_DEVICES=0
python grounded_sam_whisper_demo.py \
  --config GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py \
  --grounded_checkpoint groundingdino_swint_ogc.pth \
  --sam_checkpoint sam_vit_h_4b8939.pth \
  --input_image assets/demo4.jpg \
  --output_dir "outputs" \
  --box_threshold 0.3 \
  --text_threshold 0.25 \
  --speech_file "demo_audio.mp3" \
  --device "cuda"
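
The speech input itself is a single Whisper call. A minimal sketch of how the spoken prompt becomes an ordinary text prompt (the model size is illustrative):

import whisper

# Transcribe the spoken prompt; the text is then used exactly like --text_prompt.
model = whisper.load_model("base")
result = model.transcribe("demo_audio.mp3")
text_prompt = result["text"].strip()
print(text_prompt)  # e.g. "bear", passed on to Grounding DINO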

Run Voice-to-inpaint Demo

You can enable ChatGPT to automatically work out the object to detect and the inpainting instruction with --enable_chatgpt.

Or you can specify the object you want to inpaint (stored in args.det_speech_file) and the text you want to inpaint with (stored in args.inpaint_speech_file).

# Example: enable chatgpt
export CUDA_VISIBLE_DEVICES=0
export OPENAI_KEY=your_openai_key
python grounded_sam_whisper_inpainting_demo.py \
  --config GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py \
  --grounded_checkpoint groundingdino_swint_ogc.pth \
  --sam_checkpoint sam_vit_h_4b8939.pth \
  --input_image assets/inpaint_demo.jpg \
  --output_dir "outputs" \
  --box_threshold 0.3 \
  --text_threshold 0.25 \
  --prompt_speech_file assets/acoustics/prompt_speech_file.mp3 \
  --enable_chatgpt \
  --openai_key $OPENAI_KEY \
  --device "cuda"
# Example: without chatgpt
export CUDA_VISIBLE_DEVICES=0
python grounded_sam_whisper_inpainting_demo.py \
  --config GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py \
  --grounded_checkpoint groundingdino_swint_ogc.pth \
  --sam_checkpoint sam_vit_h_4b8939.pth \
  --input_image assets/inpaint_demo.jpg \
  --output_dir "outputs" \
  --box_threshold 0.3 \
  --text_threshold 0.25 \
  --det_speech_file "assets/acoustics/det_voice.mp3" \
  --inpaint_speech_file "assets/acoustics/inpaint_voice.mp3" \
  --device "cuda"

💘 Acknowledgements

Citation

If you find this project helpful for your research, please consider citing the following BibTeX entry.

@article{kirillov2023segany,
  title={Segment Anything}, 
  author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
  journal={arXiv:2304.02643},
  year={2023}
}

@inproceedings{ShilongLiu2023GroundingDM,
  title={Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection},
  author={Shilong Liu and Zhaoyang Zeng and Tianhe Ren and Feng Li and Hao Zhang and Jie Yang and Chunyuan Li and Jianwei Yang and Hang Su and Jun Zhu and Lei Zhang},
  year={2023}
}
