jason-li-831202 / vehicle-cv-adas

The project achieves FCWS, LDWS, and LKAS functions using only visual sensors, built on YOLOv5 / YOLOv5-lite / YOLOv6 / YOLOv7 / YOLOv8 / YOLOv9 / EfficientDet and Ultra-Fast-Lane-Detection-v2.

License: GNU General Public License v3.0


vehicle-cv-adas's Introduction

Vehicle-CV-ADAS


Example scripts for lane detection using the Ultra-Fast-Lane-Detection-v2 model in ONNX/TensorRT.

Example scripts for object detection using the YOLOv5 / YOLOv5-lite / YOLOv6 / YOLOv7 / YOLOv8 / YOLOv9 / EfficientDet models in ONNX/TensorRT.

Adds ByteTrack to assign IDs to vehicles, determine their driving direction, and perform trajectory tracking.
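As a rough illustration, a tracked vehicle's driving direction can be inferred from its centroid history. This is a minimal sketch only, not the project's actual ByteTrack integration; the function name and threshold are hypothetical.

```python
def estimate_direction(centroids, min_disp=5.0):
    """Classify a tracked vehicle's motion from its centroid history.

    centroids: list of (x, y) image coordinates, oldest first.
    Image y grows downward, so a vehicle whose centroid moves down
    the frame is treated as approaching the camera.
    """
    if len(centroids) < 2:
        return "static"
    dy = centroids[-1][1] - centroids[0][1]  # net vertical displacement
    if dy > min_disp:
        return "approaching"
    if dy < -min_disp:
        return "receding"
    return "static"

print(estimate_direction([(100, 50), (102, 80)]))  # approaching
```

A real tracker smooths over many frames and uses the full 2-D displacement; this sketch keeps only the core idea of comparing the oldest and newest track positions.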

➤ Contents

  1. Requirements

  2. Examples

  3. Demo

  4. License

[Image: ADAS on video]

➤ Requirements

  • Python 3.7+

  • OpenCV, scikit-learn, onnxruntime, PyCUDA, and PyTorch.

  • Install :

    The requirements.txt file lists all the Python libraries this project depends on; install them with:

    pip install -r requirements.txt
    

➤ Examples

  • Download YOLO-series ONNX models :

    Use the Google Colab notebooks below to convert :

    Model        Release version  Link
    YOLOv5       v6.2             Open In Colab
    YOLOv6/Lite  0.4.0            Open In Colab
    YOLOv7       v0.1             Open In Colab
    YOLOv8       8.1.27           Open In Colab
    YOLOv9       v0.1             Open In Colab
  • Convert ONNX to TensorRT model :

    Need to modify onnx_model_path and trt_model_path before converting.

    python convertOnnxToTensorRT.py -i <path-of-your-onnx-model>  -o <path-of-your-trt-model>
    
  • Quantize ONNX models :

    Converting a model from float32 to float16 roughly halves its size.

    python onnxQuantization.py -i <path-of-your-onnx-model>
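The size reduction comes directly from the element width: float16 stores two bytes per value versus four for float32, so weight storage roughly halves (at a possible small cost in accuracy). A quick NumPy illustration with a dummy tensor standing in for an ONNX initializer (not the script's actual code):

```python
import numpy as np

# Dummy weight tensor standing in for an ONNX initializer.
weights_fp32 = np.ones((1024, 1024), dtype=np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4194304 bytes (4 MiB)
print(weights_fp16.nbytes)  # 2097152 bytes (2 MiB), half the size
```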
    
  • Video Inference :

    • Setting Config :

      Note : both ONNX and TensorRT model files are supported, but the model file must match the specified model_type.

    lane_config = {
     "model_path": "./TrafficLaneDetector/models/culane_res18.trt",
     "model_type" : LaneModelType.UFLDV2_CULANE
    }
    
    object_config = {
     "model_path": './ObjectDetector/models/yolov8l-coco.trt',
     "model_type" : ObjectModelType.YOLOV8,
     "classes_path" : './ObjectDetector/models/coco_label.txt',
     "box_score" : 0.4,
     "box_nms_iou" : 0.45
    }
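Because a wrong path or a file/type mismatch only surfaces when the model is loaded, it can help to validate a config dict up front. A minimal sketch, assuming a `check_lane_config` helper and enum values that mirror (but are not) the project's own definitions:

```python
import os
from enum import Enum

class LaneModelType(Enum):
    # Hypothetical stand-ins for the project's enum members.
    UFLDV2_TUSIMPLE = "UFLDV2_TUSIMPLE"
    UFLDV2_CULANE = "UFLDV2_CULANE"

def check_lane_config(cfg):
    """Fail fast on a missing model file or an unsupported format."""
    path = cfg["model_path"]
    if not path.endswith((".onnx", ".trt")):
        raise ValueError(f"model_path must be an .onnx or .trt file: {path}")
    if not os.path.isfile(path):
        raise FileNotFoundError(f"model file not found: {path}")
    if not isinstance(cfg["model_type"], LaneModelType):
        raise TypeError("model_type must be a LaneModelType member")
    return True
```

A config pointing at a nonexistent file then fails with a clear FileNotFoundError before any CUDA context or ONNX Runtime session is created, instead of a cryptic failure deep inside engine deserialization.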
    Target  Model Type                     Description
    Lanes   LaneModelType.UFLD_TUSIMPLE    Supports TuSimple data with ResNet18 backbone.
    Lanes   LaneModelType.UFLD_CULANE      Supports CULane data with ResNet18 backbone.
    Lanes   LaneModelType.UFLDV2_TUSIMPLE  Supports TuSimple data with ResNet18/34 backbone.
    Lanes   LaneModelType.UFLDV2_CULANE    Supports CULane data with ResNet18/34 backbone.
    Object  ObjectModelType.YOLOV5         Supports yolov5n/s/m/l/x models.
    Object  ObjectModelType.YOLOV5_LITE    Supports yolov5lite-e/s/c/g models.
    Object  ObjectModelType.YOLOV6         Supports yolov6n/s/m/l and yolov6lite-s/m/l models.
    Object  ObjectModelType.YOLOV7         Supports yolov7 tiny/x/w/e/d models.
    Object  ObjectModelType.YOLOV8         Supports yolov8n/s/m/l/x models.
    Object  ObjectModelType.YOLOV9         Supports yolov9s/m/c/e models.
    Object  ObjectModelType.EfficientDet   Supports efficientDet b0/b1/b2/b3 models.
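The `box_score` and `box_nms_iou` values in the config above are the two standard detection thresholds: detections scoring below `box_score` are discarded, and greedy non-maximum suppression then drops any box whose IoU with an already-kept, higher-scoring box exceeds `box_nms_iou`. A minimal pure-Python sketch of that logic (illustrative only, not the project's post-processing code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, score_thr=0.4, iou_thr=0.45):
    """Greedy NMS: drop low-score boxes, then suppress heavy overlaps."""
    order = [i for i in sorted(range(len(boxes)), key=lambda i: -scores[i])
             if scores[i] >= score_thr]
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep

# Two overlapping detections of the same car plus one distant box:
print(nms([(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)],
          [0.9, 0.8, 0.5]))  # [0, 2]
```

Raising `box_nms_iou` keeps more overlapping boxes; raising `box_score` trades recall for precision.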
    • Run :
    python demo.py
    

➤ Demo

  • Demo Youtube Video

  • Display

    [Image: ADAS on video]

  • Front Collision Warning System (FCWS)

    [Image: FCWS]

  • Lane Departure Warning System (LDWS)

    [Image: LDWS]

  • Lane Keeping Assist System (LKAS)

    [Image: LKAS]

➤ License

Vehicle-CV-ADAS is licensed under the GNU General Public License v3.0 (GPLv3).

Key GPLv3 requirements :

  • Disclose Source
  • License and Copyright Notice
  • Same License
  • State Changes

vehicle-cv-adas's People

Contributors

jason-li-831202

vehicle-cv-adas's Issues

imageDetection.py

When I run TrafficLaneDetector/imageDetection.py, it shows "No module named 'ultrafastLaneDetector.ultrafastLane'".

pycuda._driver issue

from pycuda._driver import *  # noqa
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

SystemError: type Boost.Python.enum has the Py_TPFLAGS_HAVE_GC flag but has no traverse function

Data testing

Can you provide the videos shown in the GIF, or a link to them? Thanks a lot!

Error when running demo.py

The ONNX model runs fine, but running the TRT model aborts immediately with the following output:
[2024-01-17 10:25:43] [INFO] [Pycuda] Cuda Version: (11, 4, 0)
[2024-01-17 10:25:43] [INFO] [Driver] Cuda Version: 12020
[2024-01-17 10:25:43] [INFO] ----------------------------------------
[2024-01-17 10:25:43] [INFO] UfldDetector Model Type : UFLDV2_CULANE

python demo.py

Hello! When I run python demo.py, the terminal shows the following output with no errors, but the video saved in the folder is only 1 KB:
[2023-12-28 15:49:45] [INFO] [Pycuda] Cuda Version: (11, 4, 0)
[2023-12-28 15:49:45] [INFO] [Driver] Cuda Version: 12020
[2023-12-28 15:49:45] [INFO] UfldDetector Model Type : UFLDV2_CULANE

video demo

Hi, could you share the original video with me? Thanks!

convertPytorchToONNX.py

Hello author, your project is very interesting, but I ran into some issues. Using convertOnnxToTensorRT.py I successfully converted the YOLO model file to trt. However, when running convertPytorchToONNX.py, the process terminated unexpectedly without a specific error message, as follows:

C:\Users\86152\anaconda3\envs\lane_det\python.exe C:/Users/86152/Desktop/AI/code/Vehicle-CV-ADAS-master/TrafficLaneDetector/convertPytorchToONNX.py

Model Type : UFLDV2_TUSIMPLE

C:\Users\86152\anaconda3\envs\lane_det\lib\site-packages\torchvision\models\_utils.py:209: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.

C:\Users\86152\anaconda3\envs\lane_det\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.

warnings.warn(msg)

Process completed with exit code -1073741819 (0xC0000005)

Could the author please provide the converted TRT file required for the project model? Thank you very much!

qt.qpa.xcb: could not connect to display :.0

I ran into an issue when running demo.py.
The error was:
qt.qpa.xcb: could not connect to display :.0
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/nijuan/anaconda3/envs/python3.7/lib/python3.7/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: xcb.

Aborted (core dumped)

My local machine runs Windows, and I run the code on a remote Ubuntu server, which then reports this error. Could you tell me the reason for this? Thanks!

Error when running demo.py; have you run into this before?

[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
File "demo.py", line 221, in <module>
laneDetector = UltrafastLaneDetectorV2(logger=LOGGER)
File "/media/huangzd5/handaphoser/projects/autodrive/Vehicle-CV-ADAS-master/TrafficLaneDetector/ultrafastLaneDetector/ultrafastLaneDetectorV2.py", line 208, in __init__
self._initialize_model(self.model_path, self.cfg)
File "/media/huangzd5/handaphoser/projects/autodrive/Vehicle-CV-ADAS-master/TrafficLaneDetector/ultrafastLaneDetector/ultrafastLaneDetectorV2.py", line 213, in _initialize_model
self.infer = TensorRTEngine(model_path, cfg)
File "/media/huangzd5/handaphoser/projects/autodrive/Vehicle-CV-ADAS-master/TrafficLaneDetector/ultrafastLaneDetector/ultrafastLaneDetectorV2.py", line 60, in __init__
self.context = self._create_context(engine)
File "/media/huangzd5/handaphoser/projects/autodrive/Vehicle-CV-ADAS-master/TrafficLaneDetector/ultrafastLaneDetector/ultrafastLaneDetectorV2.py", line 93, in _create_context
return engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'

PyCUDA ERROR: The context stack was not empty upon module cleanup.

A context was still active when the context stack was being
cleaned up. At this point in our execution, CUDA may already
have been deinitialized, so there is no way we can finish
cleanly. The program will be aborted now.
Use Context.pop() to avoid this problem.

convertPytorchToONNX.py

I successfully converted curvelanes/tusimple.pth into curvelanes/tusimple.onnx. When I tried to obtain culane.onnx, the process was killed at torch.onnx.export(net, img, onnx_file_path, verbose=True).

Whether there is a train and test code

Hello! Sorry to bother you; I have an issue with the project.
If the model has no training code, how can it efficiently detect video clips from datasets with different field-of-view characteristics? For example, the model detects the lanes in these videos well, but I want to use a dataset with different lane parameters.
If there is training code, could you tell me where it is (and the test code as well)?
How can I evaluate model accuracy and detection efficiency?

Thank you very much!

Documentation Requirement

I want to submit this project with some modifications for my final-year project, so do you have any documentation and metrics?

Errors while running!!

[2023-10-04 21:25:55] [INFO] [Pycuda] Cuda Version: (12, 2, 0)
[2023-10-04 21:25:55] [INFO] [Driver] Cuda Version: 12020
[2023-10-04 21:25:55] [INFO] UfldDetector Model Type : UFLDV2_CULANE
[10/04/2023-21:25:59] [TRT] [E] 3: [engine.cpp::nvinfer1::rt::Engine::getUserRegion::1312] Error Code 3: Internal Error (call of getBindingDimensions with invalid bindingIndex -1)
[2023-10-04 21:25:59] [INFO] UfldDetector Type : [trt] || Version : [CUDAExecutionProvider]
[2023-10-04 21:25:59] [INFO] YoloDetector Model Type : YOLOV8
[2023-10-04 21:25:59] [INFO] YoloDetector Type : [trt] || Version : [CUDAExecutionProvider]
Traceback (most recent call last):
File "c:\Users\us\Desktop\Vehicle-CV-ADAS\demo.py", line 267, in <module>
laneDetector.DetectFrame(frame)
File "c:\Users\us\Desktop\Vehicle-CV-ADAS\TrafficLaneDetector\ultrafastLaneDetector\ultrafastLaneDetectorV2.py", line 271, in DetectFrame
self.lanes_points, self.lanes_detected = self.process_output(output, self.cfg, original_image_width = self.img_width, original_image_height = self.img_height)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\us\Desktop\Vehicle-CV-ADAS\TrafficLaneDetector\ultrafastLaneDetector\ultrafastLaneDetectorV2.py", line 426, in process_output
return np.array(list(lanes_points.values())), list(lanes_detected.values())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (4,) + inhomogeneous part.

PyCUDA ERROR: The context stack was not empty upon module cleanup.

A context was still active when the context stack was being
cleaned up. At this point in our execution, CUDA may already
have been deinitialized, so there is no way we can finish
cleanly. The program will be aborted now.
Use Context.pop() to avoid this problem.

Hello, a beginner question: do I need to create this file myself?

if __name__ == '__main__':
onnx_model_path = "./ObjectDetector/models/yolov8m-coco.onnx"
trt_model_path = "./ObjectDetector/models/yolov8m-coco.trt"

When I run python3 convertOnnxToTensorRT.py, I get the following error:

File=[ ./ObjectDetector/models/yolov8m-coco.onnx ] is not exist. Please check it !

python demo.py

Why does running python demo.py report no errors, but the saved video is only 1 KB?

Use GPU

To use this code I have to use a GPU, right?
I'm getting an error and can't install TensorRT!
