

clip_surgery's Issues

[feature proposal]


As you know, tensor activations tend to be noisy on unseen input images, so for cleaner segmentation it would be better to apply masking based on the tensor values. Please check the pull requests.
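A minimal sketch of the value-based masking I have in mind (assuming the similarity map is an H x W float tensor; the mean + std rule is just an example, not the repo's code):

    import torch

    def mask_from_similarity(similarity_map: torch.Tensor, k: float = 1.0) -> torch.Tensor:
        # Binarize the map with a statistics-based threshold (mean + k * std).
        threshold = similarity_map.mean() + k * similarity_map.std()
        return (similarity_map > threshold).float()  # 1 = foreground, 0 = background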

How can I get masks for open-vocabulary semantic segmentation?

Thank you for sharing excellent work!

I am trying to get (open-vocabulary) segmentation masks from the code.
I tried taking the argmax of the "similarity_map" from demo.py, but the performance was poor.

Is there any way to get a segmentation mask?
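For reference, this is roughly what I tried, plus a background threshold to suppress low-similarity pixels (a sketch under my own assumption that the per-class maps are stacked into a [num_classes, H, W] tensor; adjust to the actual shape returned by the demo):

    import torch

    def masks_from_similarity(sim_maps: torch.Tensor, bg_thresh: float = 0.5) -> torch.Tensor:
        # sim_maps: [num_classes, H, W] similarity maps, one per text query (assumed layout).
        labels = sim_maps.argmax(dim=0)                      # per-pixel class index
        confident = sim_maps.max(dim=0).values > bg_thresh   # keep only confident pixels
        return torch.where(confident, labels, torch.full_like(labels, -1))  # -1 = background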

Design of the category weight w.

Hi! Thanks for your great work.
Could you please provide a detailed explanation of how the category weight 'w' in formula (7) is designed? Why is it crucial to emphasize obvious classes?

Results of multi-label recognition

Thanks for your excellent work.

I failed to reproduce the multi-label recognition results in Table 7. For example, with CLIP ViT-B/16 and the softmax function, I only got 35% mAP on NUS-WIDE (42.85% in the paper). I used the cls token of the original CLIP without feature surgery. Could you share the details and the evaluation code for multi-label recognition?
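For reference, this is how I computed mAP on my side (my own evaluation sketch, not the authors' code; `scores` are per-image class scores and `labels` are binary ground-truth vectors):

    import numpy as np
    from sklearn.metrics import average_precision_score

    def multilabel_map(scores: np.ndarray, labels: np.ndarray) -> float:
        # scores, labels: [num_images, num_classes]; returns the class-averaged AP (mAP).
        aps = [average_precision_score(labels[:, c], scores[:, c])
               for c in range(labels.shape[1]) if labels[:, c].any()]
        return float(np.mean(aps))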

Error: can you help me?

I get an error in Jupyter:

AttributeError: module 'clip' has no attribute 'encode_text_with_prompt_ensemble'

"encode_text_with_prompt_ensemble" is a function? How can i solve it ?

Thanks!
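For anyone hitting the same error: my guess is that the pip-installed `clip` package is being imported instead of this repo's local `clip` package, which is where `encode_text_with_prompt_ensemble` lives. A minimal check (the repo path below is a placeholder):

    import sys
    sys.path.insert(0, "/path/to/CLIP_Surgery")  # placeholder: path to the cloned repo root

    import clip
    print(clip.__file__)  # should point inside the CLIP_Surgery repo, not site-packages
    print(hasattr(clip, "encode_text_with_prompt_ensemble"))  # True once the local package is used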

EVA-CLIP surgery?

Hi, thank you for the great work!

I wonder whether your method can also be applied to EVA-CLIP. Have you tried it?
Thanks to MIM pre-training, EVA-CLIP-L shows 15 mIoU zero-shot on the Cityscapes validation set, while the original CLIP shows 0 mIoU without any additional scheme (e.g., MaskCLIP, CLIP Surgery).
I therefore wonder whether your surgery method can also boost zero-shot EVA-CLIP.

How to control the accuracy of the generated points?

The segmentation works well when the queried object is actually present in the image, but when an arbitrary text prompt is given, a high-confidence point is still generated, which causes segmentation errors in the downstream SAM model. How can this situation be solved?
Here is a case: I need to find a bag in the image, but there is actually no bag in it; nevertheless, the generated point still gets a high score.
[image]
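One mitigation I have been considering (my own heuristic, not part of the repo): drop candidate points whose raw similarity falls below an absolute threshold before calling SAM, so that absent objects produce no prompt at all. A rough sketch; the threshold value would need tuning:

    import numpy as np

    def filter_points(points: np.ndarray, point_scores: np.ndarray, min_score: float = 0.8):
        # Keep only candidate points whose similarity score clears an absolute threshold.
        keep = point_scores >= min_score
        if not keep.any():
            return None, None  # no confident point: skip SAM for this prompt
        return points[keep], point_scores[keep]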

Questions about the open-vocabulary semantic segmentation.

Hi, thanks for your great work.

I am interested in the details of the open-vocabulary segmentation and have a few questions regarding this task.

  1. In the architecture surgery, I'm wondering whether the prediction for segmentation comes from the original path or the new path? Additionally, which features are used in the feature surgery? The paper said "Note that Eq. 9 is specifically designed for the explainability task", but I think the segmentation should use this too?

  2. This part of the [code](https://github.com/xmed-lab/CLIP_Surgery/blob/e346359d67e8fc4fe301467914151316d3982661/clip/clip_surgery_model.py#L349C36-L349C36) also confused me:

    x[0, :, :] = x_ori[0, :, :] # clip_surgery
    

    Why do you preserve the [cls] token from the original path? If my understanding is right, the [cls] token in the original path is not influenced by the new path, so for the multi-label recognition task the architecture surgery would have no effect?

  3. Could you give more details? And it would be of great help if you could release the code for the open-vocabulary segmentation.

Thanks again for your work!

About input sentence for SAM

Hi there

Thanks for this good work!

I am trying to feed a full sentence as input to get satisfactory SAM results with CLIP Surgery. Could you please provide some suggestions?

Best.

How can I make similarity_map_to_points select multiple objects?

First of all great work and thank you for sharing!

I have a question that I am still figuring out, but I thought I would post it here in case it is useful. I was able to get everything working quite successfully, but I ran into a problem case. As you can see below, I have a photo with two birds, and through clip.get_similarity_map I was able to select both birds.

[screenshot]

However, when I use clip.similarity_map_to_points to generate point prompts for SAM, it only seems to give me one of the birds.

[screenshot]

As you can guess from the title, I think this is a limitation of the similarity_map_to_points method, but I am not sure. Could you clarify how similarity_map_to_points works and/or suggest why this is happening?
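One workaround I am experimenting with (my own code, not part of the repo): threshold the similarity map, split it into connected components, and take the peak of each blob as a separate SAM point, so each bird gets its own prompt. Sketch assuming `sim` is an H x W numpy array:

    import numpy as np
    from scipy import ndimage

    def peaks_per_blob(sim: np.ndarray, thresh: float = 0.7):
        # Return one (x, y) point per connected high-similarity region.
        labeled, num = ndimage.label(sim > thresh)   # split the binary mask into blobs
        points = []
        for i in range(1, num + 1):
            ys, xs = np.where(labeled == i)
            j = np.argmax(sim[ys, xs])               # peak inside this blob
            points.append((int(xs[j]), int(ys[j])))  # (x, y) order for SAM point prompts
        return points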

Again, great work and thank you for posting!

Question about Equation 8

Thanks for your excellent work, but I don't understand how the redundant features Fr can be obtained with Equation 8. Can you help me with this question?

Inquiry about the open-vocabulary segmentation implementation

Hi, thanks for this great work. I've noticed that CLIP Surgery also achieves good performance in open-vocabulary segmentation. However, it requires specific words as input to obtain the corresponding segmentation results. How is CLIP Surgery applied in the open-vocabulary segmentation setting?

Train/fine-tune CLIP_Surgery

Hi! Thank you for this good work and the neat implementation.
Have you tried training or fine-tuning CLIP Surgery on out-of-domain datasets (medical scans, drawings, etc.)? Do you think that would improve the mIoU on these datasets, or would the model collapse?

mIoU evaluation for open-vocabulary segmentation

Thanks for your work and for sharing the code!

It is, however, unclear to me how the mIoU was computed for the open-vocabulary segmentation tasks.
In the paper (sec.5.1.3 page 10), you mention that:

"Specifically, the mIoU is also used to measure the visualization quality. While each positive label is evaluated independently with a grid search threshold to identify the foreground."

I have several questions:

  1. For each image, do you only evaluate the mIoU of the classes present in the image? i.e., if there is only a "cat" and "dog" in the image, you only compute the mIoU for those 2 classes?
  2. It appears to me that you define one similarity threshold for every class. Hence, how do you find the optimal threshold?
  3. Is the computed threshold the same across the dataset, or do you compute it for each image?
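For concreteness, here is my current reading of the quoted passage as code (my own sketch of a per-label grid-searched threshold, not the authors' evaluation script; `sim` is the similarity map for one positive label and `gt` its binary ground-truth mask):

    import numpy as np

    def best_iou(sim: np.ndarray, gt: np.ndarray, num_steps: int = 20) -> float:
        # Grid-search a threshold on the similarity map and keep the best IoU for this label.
        best = 0.0
        for t in np.linspace(sim.min(), sim.max(), num_steps):
            pred = sim > t
            union = np.logical_or(pred, gt).sum()
            if union > 0:
                best = max(best, np.logical_and(pred, gt).sum() / union)
        return best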

Thanks,

About the results of CLIP Surgery with SAM

  1. I ran the demo with CLIP "CS-ViT-B/16", and the result for "person" is as follows, with wrong points:
    [image]

Could you help me find my errors?

  2. Also, I tried "a person on the bench" and other sentences. Do inputs with a long sentence or some extra description not work?
  3. Could you share the training dataset of "CS-ViT-B/16"?

Thanks a lot.

ONNX conversion

Hello
Thanks for the great work.
Do you have a script to convert CLIP_Surgery to ONNX format?
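In case it helps, a minimal sketch of how I would attempt it with `torch.onnx.export` on the image encoder (untested; it assumes the repo keeps OpenAI CLIP's `clip.load` interface, and the text encoder and dynamic input sizes would need separate handling):

    import torch
    import clip  # the repo's local clip package

    model, _ = clip.load("CS-ViT-B/16", device="cpu")
    model.eval()

    dummy = torch.randn(1, 3, 224, 224)
    torch.onnx.export(
        model.visual,                   # export only the image encoder
        dummy,
        "clip_surgery_visual.onnx",
        input_names=["image"],
        output_names=["features"],
        opset_version=17,
    )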

About mSC

Hi, thank you for your great work! I just want to ask about the exact definition of $m_c$ and $m_s$ in the mSC metric. Could you provide a specific example to illustrate it? Thank you again!

question about section 5.6 in the paper

Hello,
I don't quite understand Section 5.6 of the paper. CLIP is trained on sentence-image pairs, so how do you obtain the similarity between each individual word and the image? In Eq. (9), Nt is the number of text tokens; in the experiment of Section 5.6, is each word in the sentence treated as a text token?

Question about image resolutions different from 224x224

Hi, thanks for the awesome repo.

I have a question about how CLIP processes images that differ from 224x224, specifically the high-resolution images in the demo.
When I load the ViT-B/16 model and print the shape of model.visual.positional_embedding, it is 197x768. But when I encode a high-resolution image (512x512) with model.visual, the shape of the positional embeddings automatically changes to more than 197 to stay compatible with the image tokens. Can you tell me where in the code the positional embeddings change dynamically with the input image size?
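For reference, the usual way to handle this is to interpolate the spatial positional embeddings to the new token grid at encode time; a minimal sketch of that general technique (my own illustration, not necessarily the exact code path in this repo):

    import torch
    import torch.nn.functional as F

    def resize_pos_embed(pos_embed: torch.Tensor, new_hw: int, old_hw: int = 14) -> torch.Tensor:
        # pos_embed: [1 + old_hw*old_hw, dim] -> [1 + new_hw*new_hw, dim] (cls token kept as-is).
        cls_pos, patch_pos = pos_embed[:1], pos_embed[1:]
        dim = pos_embed.shape[-1]
        patch_pos = patch_pos.reshape(1, old_hw, old_hw, dim).permute(0, 3, 1, 2)  # [1, dim, H, W]
        patch_pos = F.interpolate(patch_pos, size=(new_hw, new_hw), mode="bicubic", align_corners=False)
        patch_pos = patch_pos.permute(0, 2, 3, 1).reshape(new_hw * new_hw, dim)
        return torch.cat([cls_pos, patch_pos], dim=0)

For a 512x512 input with 16-pixel patches, new_hw would be 32, giving 1 + 32*32 = 1025 positions, which matches the larger-than-197 shape you are seeing.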

SAM struggles to correctly mask small objects

Hi,

First of all, great work! I have implemented CLIP_Surgery in my project and can confirm that it's better than clipseg at certain tasks.

However, I'm having a hard time getting it to make decent selections of small objects when using SAM. Let me give you an example:

Source image:

CLIP_Surgery selection of "hand" without SAM:

CLIP_Surgery selection of "hand" with SAM (it selected the background?):

clipseg selection of "hand":


Now, SAM outputs 3 different masks, and the one above was selected as the one with the highest score via masks[np.argmax(scores)]. But if I look at the outputs, I can see that it really should have preferred mask 0 in this case:

[image]

Is this an issue with CLIP_Surgery's implementation of SAM or SAM itself?

Also, even the best SAM mask seems to include a lot of background noise not present in clipseg's output. Is there an easy way to filter that out?
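A workaround I have been considering (my own heuristic, not something from the repo): instead of trusting SAM's own score, re-rank the candidate masks by the mean CLIP_Surgery similarity inside each mask, which tends to favor the small object the text actually refers to. Sketch:

    import numpy as np

    def pick_mask_by_similarity(masks: np.ndarray, sim: np.ndarray) -> int:
        # masks: [3, H, W] boolean masks from SAM; sim: [H, W] similarity map
        # (assumed to be resized to the mask resolution beforehand).
        scores = [sim[m].mean() if m.any() else 0.0 for m in masks]
        return int(np.argmax(scores))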

Thanks!
