
Dino_V2

Learning Robust Visual Features without Supervision

Check out the paper: DINOv2: Learning Robust Visual Features without Supervision

feature_visualization

Inspired by the original Meta AI (Facebook) DINOv2 repo.

1. Feature Visualization:

  • As mentioned in the paper, I used a two-step PCA on the image patch features to visualize them in the fashion shown in the paper. The visualization above is the result.
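The two-step procedure can be sketched as follows. This is a minimal numpy version (PCA done via SVD rather than a library call); the function names, the zero background threshold, and the min-max normalization are assumptions about the workflow, not the exact notebook code:

```python
import numpy as np

def pca_project(X, k):
    """PCA via SVD: project centered X onto its top-k principal directions."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def two_step_pca(patch_features, bg_threshold=0.0):
    """patch_features: (n_patches, dim) array of DINOv2 patch embeddings."""
    # Step 1: the first PCA component separates foreground from background;
    # patches scoring below the threshold are treated as background.
    first = pca_project(patch_features, 1)[:, 0]
    fg = first > bg_threshold
    # Step 2: a 3-component PCA on the foreground patches only, with each
    # component min-max normalized into [0, 1] and used as an RGB channel.
    fg_rgb = pca_project(patch_features[fg], 3)
    fg_rgb = (fg_rgb - fg_rgb.min(0)) / (fg_rgb.max(0) - fg_rgb.min(0) + 1e-8)
    rgb = np.zeros((patch_features.shape[0], 3))
    rgb[fg] = fg_rgb
    return rgb, fg
```

The returned `rgb` array can then be reshaped to the patch grid (e.g. 16×16 for a 224-pixel image with patch size 14) and displayed as an image.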

2. Image Classification:

  • I have also used DINOv2 for classification and compared it with ResNets (which may not be a fair comparison, transformers vs. CNNs).

Classification
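The classification setup amounts to putting a linear head on top of the DINOv2 backbone. A hedged sketch (the class and argument names here are illustrative, not the repo's exact code; `dinov2_vits14` and its 384-dim embedding are from the official torch.hub entry point):

```python
import torch
import torch.nn as nn

class DinoClassifier(nn.Module):
    """DINOv2 backbone + linear classification head.

    By default every parameter is trainable (full fine-tuning); pass
    freeze_backbone=True for linear probing on frozen features.
    """
    def __init__(self, backbone, embed_dim, num_classes, freeze_backbone=False):
        super().__init__()
        self.backbone = backbone
        if freeze_backbone:
            for p in self.backbone.parameters():
                p.requires_grad = False
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        feats = self.backbone(x)   # (B, embed_dim) image embedding
        return self.head(feats)

# Usage (downloads weights; embed_dim is 384 for ViT-S/14):
# backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
# model = DinoClassifier(backbone, embed_dim=384, num_classes=10)
```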

3. Image Search/Retrieval:

  • I have also used DINOv2 for image similarity: I extracted features from a database of images, then queried that database with a query image's features. FAISS is used for fast retrieval.

ImageRetrieval
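The retrieval logic can be sketched in a few lines. This is a brute-force numpy version of what `faiss.IndexFlatL2` does on normalized features (the FAISS equivalent is shown in the docstring); the function name is illustrative:

```python
import numpy as np

def search_database(db_feats, query_feats, k=5):
    """Nearest-neighbour search on L2-normalized features.

    Equivalent in ranking to faiss.IndexFlatL2 over normalized vectors:
        index = faiss.IndexFlatL2(d)
        index.add(db)
        distances, indices = index.search(queries, k)
    """
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    sims = q @ db.T                          # cosine similarity matrix
    idx = np.argsort(-sims, axis=1)[:, :k]   # top-k per query, best first
    return idx, np.take_along_axis(sims, idx, axis=1)
```

In practice the database features are computed once (DINOv2 forward pass per image) and stored, so only the query image needs a forward pass at search time.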

Notes:

  • Parallel processing is not required at FAISS search time, since FAISS already parallelizes the search.
  • Parallel processing does help when creating features for the database images.
  • We currently use faiss.IndexFlatL2 with normalized vectors, which is equivalent to cosine similarity. FAISS's IVFPQ (Inverted File with Product Quantization) and HNSW indexes can search billions of points in milliseconds and can be added later.
  • Training/fine-tuning DINOv2 on custom data is the same as training regular computer vision models.
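The normalized-L2-equals-cosine point above can be checked directly: for unit vectors, ||a − b||² = 2 − 2·cos(a, b), so squared L2 distance and cosine similarity are monotonically related and produce the same ranking. A small numpy check:

```python
import numpy as np

rng = np.random.default_rng(0)
db = rng.normal(size=(100, 8))
q = rng.normal(size=(1, 8))

# L2-normalize both database and query vectors.
db /= np.linalg.norm(db, axis=1, keepdims=True)
q /= np.linalg.norm(q, axis=1, keepdims=True)

l2_sq = ((db - q) ** 2).sum(axis=1)   # squared L2 distances
cos = (db @ q.T).ravel()              # cosine similarities

# ||a - b||^2 = 2 - 2*cos(a, b) for unit vectors, so the nearest
# neighbour under L2 is the most cosine-similar vector.
```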

CLIP: Check Out My CLIP For Image Retrieval Repo here

TODO:

  • Add PCA visualization
  • Add DINOv2 vs. ResNet classification
  • Add FAISS indexing in ImageRetrieval

Please give credit if you use this repo for any purpose. It would be much appreciated. Thank you!

Citation

@misc{oquab2023dinov2,
  title={DINOv2: Learning Robust Visual Features without Supervision},
  author={Oquab, Maxime and Darcet, Timothée and Moutakanni, Theo and Vo, Huy V. and Szafraniec, Marc and Khalidov, Vasil and Fernandez, Pierre and Haziza, Daniel and Massa, Francisco and El-Nouby, Alaaeldin and Howes, Russell and Huang, Po-Yao and Xu, Hu and Sharma, Vasu and Li, Shang-Wen and Galuba, Wojciech and Rabbat, Mike and Assran, Mido and Ballas, Nicolas and Synnaeve, Gabriel and Misra, Ishan and Jegou, Herve and Mairal, Julien and Labatut, Patrick and Joulin, Armand and Bojanowski, Piotr},
  journal={arXiv:2304.07193},
  year={2023}
}


Issues

Train the classification model without setting DinoV2's param.requires_grad = False

Hello,

Thank you for sharing your great article on Medium and GitHub. I was inspired a lot.
In 3.DinoV2_VS_ResnetClassification.ipynb, you load the dinov2_vits14 model, and I don't see anywhere that you freeze the parameters of the DINOv2 backbone.

Does that mean that during training you tune all the parameters of DinoVisionTransformerClassifier (both the transformer and the linear layer)? And if so, is the benefit of the pretrained model and its curated data discarded?

Question about transformer.norm

Hello! Thank you for sharing the great code. I have a question. Could you please explain why you applied 'transformer.norm' in the 'forward' function of 'DinoVisionTransformerClassifier'?

def forward(self, x):
    x = self.transformer(x)
    x = self.transformer.norm(x)
    x = self.classifier(x)
    return x

How do you know the threshold when visualizing the feature?

Hi, your repo is really helpful. I have one minor question: how did you choose the threshold in pca_features_bg = pca_features[:, 0] > 0.35 (third-party/Dino_V2/2.PCA_visualization.ipynb, cell [10])? I am not sure how to infer this number from the first histogram. Could you please explain it?

Moreover, the original DINOv2 paper says "Background is removed by removing patches with a negative score of the first PCA component" (Figure 9 caption). May I ask what the relationship is between the negative score and your chosen threshold? Many thanks!
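For context on this question: the paper's rule is a threshold of 0 on the first PCA component, while the notebook uses 0.35. One relevant detail is that a PCA component's sign and scale are arbitrary (they depend on the data and the solver), so the exact cutoff that separates background can reasonably be tuned from the score histogram. A hedged numpy sketch of the masking step (function name is illustrative):

```python
import numpy as np

def background_mask(patch_features, threshold=0.0):
    """Flag patches whose first-PCA score exceeds threshold as foreground.

    The paper's rule corresponds to threshold=0.0 (negative scores are
    background); the notebook's 0.35 is an empirical choice read off the
    histogram of scores. Note the sign of a principal component is
    arbitrary, so it may need flipping so that foreground scores positive.
    """
    Xc = patch_features - patch_features.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[0]          # projection onto the first PCA component
    return scores > threshold, scores
```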
