
Comments (6)

nkulkarni3297 commented on May 28, 2024

@AlvaroCavalcante Thank you for the response. I'll also look for a way to contribute an alternate solution to this question that could be useful in the future, and I will definitely post it here.

from auto_annotate.

github-actions commented on May 28, 2024

Thanks for contributing this issue! We will be replying soon.


AlvaroCavalcante commented on May 28, 2024

Hello @nkulkarni3297, thank you for using the project!

Actually, the bounding box dimensions are determined by the model itself (through the model's inference), so some incorrect dimensions are expected if your model makes mistakes, as you showed. The intention of this project is to be a semi-supervised helper for image labeling, so you'll need to manually fix the bad predictions.

As I explained, you'll need to manually label some images to train an initial model, and then use this model to help in the annotation of the complete dataset.

That said, if your initial model was trained with too few images or you used an oversimplified architecture (like ssd_mobilenet_320x320), you'll probably get some poor predictions and only get detections on some images (like the 12-15 you mentioned).

I recommend using at least 100 images for the initial training, and trying a more robust model (EfficientDet, ResNet) or fine-tuning your SSD model to get better results. After that, this library will definitely work better for you!

About the error in the label ("N/A"): this is actually very strange and is probably caused by something wrong in your label_map.pbtxt. Please follow the same format as shown in the TensorFlow documentation!
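For reference, a label map for the TensorFlow Object Detection API is a plain-text protobuf file where each class gets an `item` entry with a unique, 1-based `id` and a `name`. A minimal sketch (the class names here are just placeholders for a sign-language dataset, not taken from the thread):

```proto
item {
  id: 1
  name: 'hello'
}
item {
  id: 2
  name: 'thanks'
}
```

If the `id` values don't start at 1, are not unique, or the `name` values don't match the labels used during training, the visualization tools typically fall back to showing "N/A" for the class.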


nkulkarni3297 commented on May 28, 2024

Hello @AlvaroCavalcante, you're suggesting that I manually label images to train an initial model. Here is what would happen then.

I want to auto-annotate images to create XML files so that I can train a model to detect the signs. But if I create an initial model and run auto-annotation with it, then I could just use that initial model directly to train my final model.

Clarifying my point here:

I am working on Sign Language Detection using this repo
https://github.com/nicknochnack/RealTimeObjectDetection

Now, here I need to manually create XML files for some images and train on those files and images on top of the ssd_model to get the detections. So my flow would be like this:

  • Label some images manually and create XML files for the initial model.
  • Use that model to label the entire dataset and create XML files.
  • Train those files again on top of the SSD model.
  • Run the detection model to get the detections.

In this process I could just use the initial model directly. So what would be the use of auto_annotate? I want to reduce the steps, which is why I'm trying this.

If you could guide me a little bit on this, it would help me a lot.


AlvaroCavalcante commented on May 28, 2024

Hello @nkulkarni3297, I'm not sure if I understood your whole context, but I'll try to explain based on what you asked.

The idea of the auto annotation package is to be used as a semi-supervised tool, so it's impossible to avoid the manual annotation part unless you find an open source model that was trained by someone else to be used as this "initial model".

Given that fact, your flow will be something like this:

  • Manually annotate some images of your dataset.
  • Train your initial model.
  • Use your initial model with auto_annotate to create new labels for the entire dataset.
  • Review the auto-generated annotations to improve the quality.
  • Retrain your model and be happy.
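The review step above is where most of the time savings come from: you only need to fix the predictions the initial model got wrong. A common trick is to discard low-confidence detections before reviewing, so fewer bad boxes reach the XML files. A minimal sketch of that idea (the function name and threshold are illustrative, not the actual auto_annotate API):

```python
# Hypothetical post-processing step: keep only detections whose score
# passes a confidence threshold before writing annotations for review.
CONFIDENCE_THRESHOLD = 0.5

def filter_detections(boxes, scores, classes, threshold=CONFIDENCE_THRESHOLD):
    """Return only the boxes, scores, and class ids above the threshold."""
    keep = [i for i, score in enumerate(scores) if score >= threshold]
    return ([boxes[i] for i in keep],
            [scores[i] for i in keep],
            [classes[i] for i in keep])
```

Tuning the threshold trades recall for precision: a higher value means fewer boxes to delete during review, but more objects you'll have to draw by hand.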

Let's suppose you have 1000 images in your dataset. With that flow, you'll only spend time labeling 100 images and quickly reviewing the auto-generated labels.

In a "normal" scenario, you would need to manually label your 1000 images, which would use much more time!

In the end, this package is very simple, since we just use your model's predictions to create an XML structure according to the Pascal VOC format!
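To make that concrete, a Pascal VOC annotation is just an XML tree with the filename, the image size, and one `object` element (with a `bndbox`) per detection. Here is a minimal, hedged sketch of turning predictions into that structure with the standard library; it is an illustration of the format, not the package's actual code:

```python
import xml.etree.ElementTree as ET

def detections_to_voc_xml(filename, width, height, detections):
    """Build a Pascal VOC annotation string from model detections.

    detections: list of (label, xmin, ymin, xmax, ymax) tuples,
    with pixel coordinates.
    """
    ann = ET.Element("annotation")
    ET.SubElement(ann, "filename").text = filename
    size = ET.SubElement(ann, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    ET.SubElement(size, "depth").text = "3"  # assume RGB images
    for label, xmin, ymin, xmax, ymax in detections:
        obj = ET.SubElement(ann, "object")
        ET.SubElement(obj, "name").text = label
        box = ET.SubElement(obj, "bndbox")
        for tag, value in zip(("xmin", "ymin", "xmax", "ymax"),
                              (xmin, ymin, xmax, ymax)):
            ET.SubElement(box, tag).text = str(int(value))
    return ET.tostring(ann, encoding="unicode")
```

The resulting string can be written to a `.xml` file next to each image, which is exactly the layout tools like labelImg expect when you open the dataset for review.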

If you have more questions, let me know!!


AlvaroCavalcante commented on May 28, 2024

Awesome, @nkulkarni3297, thank you for your contribution! This week I released the new version of this library; check this Medium article to see the details. I hope this version can help you more.

