
Comments (24)

OpenVINO-dev-contest avatar OpenVINO-dev-contest commented on July 21, 2024 1

Hi @akashAD98, the code is updated; you can trigger the grid model by adding the parameter --grid True to the inference script.

from yolov7_openvino_cpp-python.

OpenVINO-dev-contest avatar OpenVINO-dev-contest commented on July 21, 2024

Hi @akashAD98, we have this notebook demonstrating the post-processing for the --grid parameter:
https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/226-yolov7-optimization/226-yolov7-optimization.ipynb


OpenVINO-dev-contest avatar OpenVINO-dev-contest commented on July 21, 2024

This means you can feed the model's single output directly into the NMS module without concatenating the three of them.
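That single output can be decoded and suppressed in one pass. Here is a minimal NumPy sketch (not the repo's actual implementation), assuming the usual YOLOv7 row layout of cx, cy, w, h, objectness, then one score per class, in input-image pixels:

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.45):
    """Greedy non-maximum suppression on xyxy boxes; returns kept indices."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of box i with the remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou < iou_thr]
    return keep

def postprocess(output, conf_thr=0.25):
    """output: [1, N, 5 + num_classes] grid-model tensor -> boxes, scores, class ids."""
    pred = output[0]
    obj = pred[:, 4]
    cls_scores = pred[:, 5:]
    cls_ids = cls_scores.argmax(axis=1)
    scores = obj * cls_scores.max(axis=1)
    mask = scores > conf_thr
    pred, scores, cls_ids = pred[mask], scores[mask], cls_ids[mask]
    # convert (cx, cy, w, h) to (x1, y1, x2, y2)
    boxes = np.empty((pred.shape[0], 4))
    boxes[:, 0] = pred[:, 0] - pred[:, 2] / 2
    boxes[:, 1] = pred[:, 1] - pred[:, 3] / 2
    boxes[:, 2] = pred[:, 0] + pred[:, 2] / 2
    boxes[:, 3] = pred[:, 1] + pred[:, 3] / 2
    keep = nms(boxes, scores)
    return boxes[keep], scores[keep], cls_ids[keep]
```

The score here is objectness times the best class probability, which is the common YOLO convention; check the repo's own post-processing for the authoritative version.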


akashAD98 avatar akashAD98 commented on July 21, 2024

@OpenVINO-dev-contest Yes, I tried that repo, but I'm facing issues with inference on video/webcams, so I want to use your code for webcam/video inference.


akashAD98 avatar akashAD98 commented on July 21, 2024

Without --grid I'm getting no detections here:
(screenshot)


akashAD98 avatar akashAD98 commented on July 21, 2024

Also, I'm not able to read my custom model, which was exported without the --grid parameter:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-13-db161e3ad74f> in <module>
      2 core = Core()
      3 # read converted model
----> 4 model = core.read_model('model/best_veh_withbgnew.xml')
      5 # load model on CPU device
      6 compiled_model = core.compile_model(model, 'CPU')

RuntimeError: Check 'false' failed at C:\Jenkins\workspace\private-ci\ie\build-windows-vs2019\b\repos\openvino\src\frontends\common\src\frontend.cpp:54:
Converting input model
Incorrect weights in bin file!



OpenVINO-dev-contest avatar OpenVINO-dev-contest commented on July 21, 2024

> Also, I'm not able to read my custom model: "Incorrect weights in bin file!"

Did you get the model from Model Optimizer, and did the error happen during offline model conversion?


OpenVINO-dev-contest avatar OpenVINO-dev-contest commented on July 21, 2024

> Without --grid I'm getting no detections

Yes, this notebook is only for the grid model.


akashAD98 avatar akashAD98 commented on July 21, 2024

> Did you get the model from Model Optimizer, and did the error happen during offline model conversion?

I'm using:


from openvino.tools import mo
from openvino.runtime import serialize

model = mo.convert_model('model/best_veh_withbgnew.onnx')
# serialize model for saving IR
serialize(model, 'model/best_veh_withbgnew.xml')

The .xml file from the conversion is saved on disk, but I'm not able to read it.


akashAD98 avatar akashAD98 commented on July 21, 2024

> Yes, this notebook is only for the grid model.

1. Can you please give some suggestions on how I can use --grid with your yolov.py code?
2. To convert the .xml model into INT8 format using NNCF, do I need to pass data? For a custom model, what data format should I provide, and does it require annotations? Can I directly use YOLO format (images and .txt files)?


OpenVINO-dev-contest avatar OpenVINO-dev-contest commented on July 21, 2024

> The .xml file from the conversion is saved on disk, but I'm not able to read it.

Did you get a .bin file in the same folder as the .xml file?
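"Incorrect weights in bin file!" usually means the .bin next to the .xml is missing, stale, or truncated. A small sanity check you could run before core.read_model (the file name below is just a placeholder):

```python
from pathlib import Path

def check_ir_pair(xml_path):
    """Sanity-check that an OpenVINO IR .xml has its companion .bin weights file."""
    xml = Path(xml_path)
    bin_file = xml.with_suffix(".bin")
    if not xml.is_file():
        return f"missing {xml.name}"
    if not bin_file.is_file():
        return f"missing {bin_file.name}"
    if bin_file.stat().st_size == 0:
        return f"{bin_file.name} is empty"
    return "ok"

# e.g. check_ir_pair('model/best_veh_withbgnew.xml')
```

If the check passes but read_model still fails, re-running the conversion so the .xml and .bin come from the same run is the usual fix.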


akashAD98 avatar akashAD98 commented on July 21, 2024

@OpenVINO-dev-contest Yes, I got both the .bin and .xml files in the folder.


akashAD98 avatar akashAD98 commented on July 21, 2024

@OpenVINO-dev-contest The issue has been solved; I restarted my system and that fixed it.

I have another question about NNCF post-training quantization: for a custom model, what format of data do I need to pass? Should I use the COCO val2017 data or my own data?

My goal is to convert to INT8 format, and I think that's not possible without data.

quantized_model = nncf.quantize(model, quantization_dataset, preset=nncf.QuantizationPreset.MIXED)

serialize(quantized_model, 'model/yolov7-tiny_int8.xml')


OpenVINO-dev-contest avatar OpenVINO-dev-contest commented on July 21, 2024


You should define your dataloader and preprocessing first. In the notebook example, we use the COCO format.
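For basic post-training quantization, NNCF only needs representative input tensors for calibration, not labels, so a plain folder of images from your own domain is enough. A minimal sketch of the preprocessing you might wrap in a transform function (nearest-neighbor resize for brevity; the notebook uses proper letterboxing):

```python
import numpy as np

def preprocess(image, size=640):
    """HWC uint8 image -> NCHW float32 in [0, 1], resized to size x size.
    Uses nearest-neighbor resizing to stay dependency-free."""
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows][:, cols]          # nearest-neighbor resize
    x = resized.astype(np.float32) / 255.0  # normalize to [0, 1]
    x = x.transpose(2, 0, 1)[None]          # HWC -> NCHW, add batch dim
    return x

# Then calibration is roughly:
#   quantization_dataset = nncf.Dataset(list_of_images, preprocess)
#   quantized_model = nncf.quantize(model, quantization_dataset,
#                                   preset=nncf.QuantizationPreset.MIXED)
```

The key point is that the transform function must produce exactly the tensor layout the model expects at inference time.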



akashAD98 avatar akashAD98 commented on July 21, 2024

@bbartling I was able to run it on both Windows and Ubuntu Linux systems.



akashAD98 avatar akashAD98 commented on July 21, 2024

Size in terms of width/height, or model memory size? The input is 640x640, and yolov7-tiny.onnx / yolov7-tiny.bin is 24 MB.


superkido511 avatar superkido511 commented on July 21, 2024

> Hi @akashAD98, the code is updated; you can trigger the grid model by adding the parameter --grid True to the inference script.

Hello, could you also add the grid option for the C++ script?


OpenVINO-dev-contest avatar OpenVINO-dev-contest commented on July 21, 2024

> Hello, could you also add the grid option for the C++ script?

Updated; you can append true to the C++ run command.


superkido511 avatar superkido511 commented on July 21, 2024

Thank you so much!



superkido511 avatar superkido511 commented on July 21, 2024

Just one more question: what does total_num = 25200 mean? My custom model only has 10 classes instead of 80, so I think I also need to change this number along with changing 85 to 15 in the code?


OpenVINO-dev-contest avatar OpenVINO-dev-contest commented on July 21, 2024

> Just one more question: what does total_num = 25200 mean?

25200 is the maximum number of boxes the model can output, and yes, you should switch 85 to 15. Please make sure your model's output shape is [1, 25200, 15] before you change the code.
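For what it's worth, the 25200 figure falls out of the standard YOLOv5/v7 head geometry at a 640x640 input: three detection scales (strides 8, 16, 32) with three anchors per grid cell. A quick check:

```python
def yolo_output_rows(img_size=640, strides=(8, 16, 32), anchors_per_cell=3):
    """Number of candidate predictions a YOLOv5/v7-style head emits:
    three anchor boxes for every cell of every feature map."""
    return sum(anchors_per_cell * (img_size // s) ** 2 for s in strides)

def yolo_output_cols(num_classes):
    """Columns per prediction: x, y, w, h, objectness + one score per class."""
    return 5 + num_classes

print(yolo_output_rows())    # 3 * (80^2 + 40^2 + 20^2) = 25200
print(yolo_output_cols(10))  # 15 for a 10-class custom model
```

So the 25200 only changes if you change the input resolution or head layout, while the last dimension changes with the class count.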


superkido511 avatar superkido511 commented on July 21, 2024


I got it. Thank you

