Comments (24)
Hi @akashAD98, the code is updated; you can trigger the grid model by adding the parameter --grid True
to the inference script.
from yolov7_openvino_cpp-python.
Hi @akashAD98, we have this notebook to demonstrate the post-processing with the --grid
parameter:
https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/226-yolov7-optimization/226-yolov7-optimization.ipynb
This means you can feed the model's single output directly into the NMS
module without concatenating the 3 of them.
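As a sketch of what that single-output post-processing could look like (a minimal NumPy version, not the repo's actual code; it assumes the grid model emits one [1, 25200, 85] tensor of [cx, cy, w, h, objectness, 80 class scores] rows):

```python
import numpy as np

def nms(boxes, scores, iou_thres=0.45):
    """Minimal NumPy NMS: boxes are [x1, y1, x2, y2]; returns kept indices."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the top-scoring box against the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thres]
    return keep

def postprocess(pred, conf_thres=0.25):
    """pred: [N, 85] rows of [cx, cy, w, h, objectness, class scores...]."""
    pred = pred[pred[:, 4] > conf_thres]           # objectness filter
    scores = pred[:, 4] * pred[:, 5:].max(axis=1)  # objectness * best class score
    cls = pred[:, 5:].argmax(axis=1)
    # cxcywh -> xyxy corner format for NMS
    boxes = np.stack([pred[:, 0] - pred[:, 2] / 2, pred[:, 1] - pred[:, 3] / 2,
                      pred[:, 0] + pred[:, 2] / 2, pred[:, 1] + pred[:, 3] / 2], axis=1)
    keep = nms(boxes, scores)
    return boxes[keep], scores[keep], cls[keep]
```

With the --grid model the tensor fed to postprocess() is the single flattened output; without --grid you would first have to decode and concatenate the three per-scale heads yourself.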
@OpenVINO-dev-contest Yes, I tried that repo, but I'm facing issues with inference on video/webcams, so I want to use your code for webcam/video inference.
Without --grid I'm getting no detections here.
Also, I'm not able to read my custom model, which was exported without the --grid parameter:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-db161e3ad74f> in <module>
2 core = Core()
3 # read converted model
----> 4 model = core.read_model('model/best_veh_withbgnew.xml')
5 # load model on CPU device
6 compiled_model = core.compile_model(model, 'CPU')
RuntimeError: Check 'false' failed at C:\Jenkins\workspace\private-ci\ie\build-windows-vs2019\b\repos\openvino\src\frontends\common\src\frontend.cpp:54:
Converting input model
Incorrect weights in bin file!
Did you get the model from Model Optimizer, and did the error happen during offline model conversion?
Yes, this notebook is only for the grid model.
I'm using:

from openvino.tools import mo
from openvino.runtime import serialize

# convert the ONNX model to OpenVINO IR
model = mo.convert_model('model/best_veh_withbgnew.onnx')
# serialize the model to save the IR
serialize(model, 'model/best_veh_withbgnew.xml')

for the conversion. The .xml file is stored on disk, but I'm not able to read it.
1. Can you please give some suggestions on how I can use --grid with your yolov.py code?
2. In order to convert the .xml model to INT8 format using NNCF, do I need to pass data? For a custom model, what data format do I need to provide, and does it require an annotation format? (Can I directly use YOLO format, i.e. images and .txt files?)
Did you get a .bin file in the same folder as the .xml file?
@OpenVINO-dev-contest Yes, I got the .bin & .xml files in the folder.
@OpenVINO-dev-contest The issue has been solved; I restarted my system and that fixed it.
I have another question about NNCF post-training quantization: for a custom model, what data format do I need to pass? Should I keep the COCO val2017 data or use my own data?
My goal is to convert to INT8 format, and I think that's not possible without data.
quantized_model = nncf.quantize(model, quantization_dataset, preset=nncf.QuantizationPreset.MIXED)
serialize(quantized_model, 'model/yolov7-tiny_int8.xml')
You should first define your dataloader and preprocessing. In the notebook example, we use the COCO format.
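As an illustration of that "dataloader and preprocessing" step, here is a minimal sketch of a calibration transform (the function name transform_fn and the 640x640 input size are assumptions, not the notebook's exact code; the NNCF calls themselves are shown only as comments):

```python
import numpy as np

def transform_fn(image_hwc_uint8):
    """Preprocess one calibration image: HWC uint8 -> NCHW float32 in [0, 1].
    Assumes the image was already resized/letterboxed to the model's 640x640 input."""
    img = image_hwc_uint8.astype(np.float32) / 255.0
    img = img.transpose(2, 0, 1)[None]  # HWC -> CHW, then add the batch dimension
    return img

# The calibration images are then wrapped and passed to NNCF, roughly:
#   quantization_dataset = nncf.Dataset(dataloader, transform_fn)
#   quantized_model = nncf.quantize(model, quantization_dataset,
#                                   preset=nncf.QuantizationPreset.MIXED)
```

The calibration data only needs to match the model's input preprocessing; for post-training quantization the annotations (YOLO .txt labels) are not consumed by nncf.quantize itself, only the images are.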
@bbartling I was able to run it on both Windows & Ubuntu Linux systems.
Size in terms of w x h, or model memory size? It's 640x640, and yolov7-tiny.onnx & yolov7-tiny.bin are 24 MB.
Hello, could you also add the grid option to the C++ script?
Updated; you can append a true after the C++ run command.
Thank you so much!
Just one more question: what does total_num = 25200 mean? My custom model has only 10 classes, so I think I also need to change this number along with changing 85 to 15 in the code?
25200 is the maximum number of objects the model can detect, and yes, you should switch 85 to 15. Please ensure your model's output shape is [1, 25200, 15] before you change the code.
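For reference, the 25200 figure can be derived from the three YOLOv7 detection strides on a 640x640 input (a quick sanity check, assuming the standard strides of 8, 16, and 32 with 3 anchors per grid cell):

```python
# Where 25200 comes from for a 640x640 input: YOLOv7 predicts on three
# strides (8, 16, 32), and each grid cell carries 3 anchor boxes.
strides = [8, 16, 32]
anchors_per_cell = 3
total = sum((640 // s) ** 2 for s in strides) * anchors_per_cell
print(total)  # prints 25200: (80*80 + 40*40 + 20*20) * 3

# Per-box vector length = 4 (box) + 1 (objectness) + num_classes,
# so 80 classes gives 85, and 10 classes gives 15 -> output [1, 25200, 15].
```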
I got it. Thank you
Related Issues (20)
- Yolov7 Tiny setting confidence Thres HOT 4
- Link not working HOT 1
- [Bug] The line `img.transpose(2, 0, 1)` should be `img = img.transpose(2, 0, 1)`. NumPy's transpose operation does not support in-place assignment. HOT 1
- output processing is slow HOT 24
- adding tracker deepsort/sort (int 8 or openvo ir ) to object detection .onnx file or .int8 format file HOT 31
- fps code is not working HOT 2
- float data1[img_h*img_w*3] compile error HOT 1
- Inference with 1280 images HOT 4
- fps im getting is varing too much
- Yolov7-seg support HOT 2
- Downloading Yolo7 modex HOT 4
- hardware to run HOT 4
- Process multiple video feeds ansyc HOT 5
- Python Run Issue
- c++ has encountered an error HOT 1
- getting setup HOT 11
- webcam.py HOT 13
- 4 anchor boxes instead of 3 HOT 1
- YOLOv7 with Multiple Object Tracker - SORT Algorithm HOT 10