
samexporter's Introduction

SAM Exporter

Exporting Segment Anything models to different formats.

The Segment Anything repository does not provide a way to export the encoder to ONNX format. There are pull requests for this feature, but they have not been accepted by the SAM authors. This tool therefore aims to make exporting Segment Anything models to different output formats easy.

Supported models:

  • SAM ViT-B
  • SAM ViT-L
  • SAM ViT-H
  • MobileSAM

Installation

From PyPI:

pip install samexporter

From source:

git clone https://github.com/vietanhdev/samexporter
cd samexporter
pip install -e .
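
To verify the installation, check that the package imports cleanly (see the "No module named 'samexporter'" issue below if it does not):

python -c "import samexporter"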

Usage

Download the original model checkpoints and place them in an original_models folder:

original_models
   + sam_vit_b_01ec64.pth
   + sam_vit_h_4b8939.pth
   + sam_vit_l_0b3195.pth
   + mobile_sam.pt
   ...
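
The SAM checkpoints are published by Meta. A minimal download sketch; the URLs below are the well-known Segment Anything release links, but verify them against the facebookresearch/segment-anything README before relying on them:

import urllib.request
from pathlib import Path

# Official Segment Anything checkpoint URLs (verify against the
# facebookresearch/segment-anything README). mobile_sam.pt is distributed
# separately via the ChaoningZhang/MobileSAM repository.
CHECKPOINTS = {
    "sam_vit_b_01ec64.pth": "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth",
    "sam_vit_l_0b3195.pth": "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth",
    "sam_vit_h_4b8939.pth": "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth",
}

out_dir = Path("original_models")
out_dir.mkdir(exist_ok=True)
for name, url in CHECKPOINTS.items():
    dest = out_dir / name
    if not dest.exists():
        print(f"Downloading {name} ...")
        urllib.request.urlretrieve(url, str(dest))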
  • Convert the SAM-H encoder to ONNX format:
python -m samexporter.export_encoder --checkpoint original_models/sam_vit_h_4b8939.pth \
    --output output_models/sam_vit_h_4b8939.encoder.onnx \
    --model-type vit_h \
    --quantize-out output_models/sam_vit_h_4b8939.encoder.quant.onnx \
    --use-preprocess
  • Convert the SAM-H decoder to ONNX format:
python -m samexporter.export_decoder --checkpoint original_models/sam_vit_h_4b8939.pth \
    --output output_models/sam_vit_h_4b8939.decoder.onnx \
    --model-type vit_h \
    --quantize-out output_models/sam_vit_h_4b8939.decoder.quant.onnx \
    --return-single-mask

Remove --return-single-mask if you want the decoder to return multiple masks.
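
The exported models can be sanity-checked with onnxruntime before use. A minimal sketch, assuming onnxruntime is installed and the encoder export above has already been run:

import onnxruntime as ort

# Load the exported encoder and print its input/output signatures.
session = ort.InferenceSession(
    "output_models/sam_vit_h_4b8939.encoder.onnx",
    providers=["CPUExecutionProvider"],
)
for inp in session.get_inputs():
    print("input: ", inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print("output:", out.name, out.shape, out.type)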

  • Run inference with the exported ONNX models:
python -m samexporter.inference \
    --encoder_model output_models/sam_vit_h_4b8939.encoder.onnx \
    --decoder_model output_models/sam_vit_h_4b8939.decoder.onnx \
    --image images/truck.jpg \
    --prompt images/truck_prompt.json \
    --output output_images/truck.png \
    --show
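
The --prompt argument points to a JSON file describing point or box prompts. A minimal sketch of writing one; the schema shown (a list of objects with type, data, and label fields) is inferred from the sample files shipped in images/, so treat images/truck_prompt.json as the authoritative reference:

import json

# Hypothetical single-point prompt; coordinates are in image pixels.
# Schema assumed from the repository's sample prompt files.
prompt = [
    {"type": "point", "data": [575, 750], "label": 1},  # label 1 = foreground
]

with open("images/my_prompt.json", "w") as f:
    json.dump(prompt, f, indent=2)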

[Example result: truck]

python -m samexporter.inference \
    --encoder_model output_models/sam_vit_h_4b8939.encoder.onnx \
    --decoder_model output_models/sam_vit_h_4b8939.decoder.onnx \
    --image images/plants.png \
    --prompt images/plants_prompt1.json \
    --output output_images/plants_01.png \
    --show

[Example result: plants_01]

python -m samexporter.inference \
    --encoder_model output_models/sam_vit_h_4b8939.encoder.onnx \
    --decoder_model output_models/sam_vit_h_4b8939.decoder.onnx \
    --image images/plants.png \
    --prompt images/plants_prompt2.json \
    --output output_images/plants_02.png \
    --show

[Example result: plants_02]

Convenience scripts:

  • Convert all of Meta's SAM models to ONNX format:
bash convert_all_meta_sam.sh
  • Convert MobileSAM to ONNX format:
bash convert_mobile_sam.sh

Tips

  • Use "quantized" models for faster inference and smaller model size. However, the accuracy may be lower than the original models.
  • SAM-B is the most lightweight model, but it has the lowest accuracy. SAM-H is the most accurate model, but it has the largest model size. SAM-M is a good trade-off between accuracy and model size.
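
The --quantize-out flags in the export commands above produce these quantized variants. An already-exported model can also be quantized after the fact with onnxruntime's dynamic quantization; a minimal sketch (whether these settings match what samexporter uses internally is an assumption):

from onnxruntime.quantization import QuantType, quantize_dynamic

# Quantize weights to uint8; activations remain float at runtime.
quantize_dynamic(
    model_input="output_models/sam_vit_h_4b8939.encoder.onnx",
    model_output="output_models/sam_vit_h_4b8939.encoder.quant.onnx",
    weight_type=QuantType.QUInt8,
)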

AnyLabeling

This package was originally developed for the auto-labeling feature in the AnyLabeling project. However, you can use it for other purposes.

License

This project is licensed under the MIT License - see the LICENSE file for details.


samexporter's Issues

Where does the 684 height size come from?

Not an issue; the code works perfectly for me. I saw that the original code expects a 1024x1024 image size, so I was wondering where the fixed height of 684 comes from. Thank you.

How to export the decoder for the MobileSAM model?

I've exported a MobileSAM encoder using the export_encoder script with the MobileSAM weights. Now how do I export the decoder for this model?
I cannot see a mobile model type in the export_decoder script. Also, should I pass the same mobile_sam.pt or a vit_h.pth model as the checkpoint for the decoder?
Any help is appreciated.

No module named 'samexporter'

python.exe: Error while finding module specification for 'samexporter.export_encoder' (ModuleNotFoundError: No module named 'samexporter')
python.exe: Error while finding module specification for 'samexporter.export_decoder' (ModuleNotFoundError: No module named 'samexporter')

Encoder only exports for vit_h

Great job on this, exactly what I was looking for!

However, when I attempt to export the encoder using:

python -m samexporter.export_encoder --checkpoint original_models/sam_vit_l_0b3195.pth --output output_models/sam_vit_l_0b3195.encoder/model.onnx --model-type vit_l --quantize-out output_models/sam_vit_l_0b3195.encoder.quant.onnx --use-preprocess

python -m samexporter.export_encoder --checkpoint original_models/sam_vit_b_01ec64.pth --output output_models/sam_vit_b_01ec64.encoder/model.onnx --model-type vit_b --quantize-out output_models/sam_vit_b_01ec64.encoder.quant.onnx --use-preprocess

Error output:

Loading model...
Exporting onnx model to output_models/sam_vit_l_0b3195.encoder/model.onnx...
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\ProgramData\Anaconda3\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Work\AR\samexporter\samexporter\export_encoder.py", line 178, in <module>
    run_export(
  File "C:\Work\AR\samexporter\samexporter\export_encoder.py", line 157, in run_export
    with open(output, "wb") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'output_models/sam_vit_l_0b3195.encoder/model.onnx'

I see the flag on export_encoder.py, but I'm not sure what this should look like for vit_l and vit_b?

Thanks,
Steven.
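
A likely cause: open() cannot create intermediate directories, so the nested output folder has to exist before the export runs. A hypothetical pre-step (untested):

import os

# Create the nested output directory before running the export command.
os.makedirs("output_models/sam_vit_l_0b3195.encoder", exist_ok=True)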
