vehicle-cv-adas's Issues
When I run TrafficLaneDetector/imageDetection.py, it shows "No module named 'ultrafastLaneDetector.ultrafastLane'".
from pycuda._driver import * # noqa
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SystemError: type Boost.Python.enum has the Py_TPFLAGS_HAVE_GC flag but has no traverse function
Can you provide the video used in the GIF, or a link to it? Thanks a lot!
I get this error when running demo.py.
The ONNX model runs fine, but the TRT model aborts immediately with the following output:
[2024-01-17 10:25:43] [INFO] [Pycuda] Cuda Version: (11, 4, 0)
[2024-01-17 10:25:43] [INFO] [Driver] Cuda Version: 12020
[2024-01-17 10:25:43] [INFO] ----------------------------------------
[2024-01-17 10:25:43] [INFO] UfldDetector Model Type : UFLDV2_CULANE
Hello! When I run python demo.py, the terminal shows the following output with no errors, but the video saved in the folder is only 1 KB:
[2023-12-28 15:49:45] [INFO] [Pycuda] Cuda Version: (11, 4, 0)
[2023-12-28 15:49:45] [INFO] [Driver] Cuda Version: 12020
[2023-12-28 15:49:45] [INFO] UfldDetector Model Type : UFLDV2_CULANE
Is there any requirement on the input image resolution? Does the YOLOv8m input need to match the lane detection input?
Hi, could you share the original video with me? Thanks!
Thanks for this great work. Just wondering about the accuracy of the distance calculation: does it use a fixed obstacle size for each label?
Best regards
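The question above concerns estimating distance from a fixed, per-label obstacle size. A minimal pinhole-camera sketch of that general idea follows; the widths, focal length, and helper name below are illustrative assumptions, not the project's calibrated values or actual code:

```python
# Assumed real-world widths (meters) for each detection label.
KNOWN_WIDTH_M = {"car": 1.8, "truck": 2.5, "person": 0.5}
FOCAL_LENGTH_PX = 1000.0  # assumed camera focal length in pixels


def estimate_distance(label: str, bbox_width_px: float) -> float:
    """Pinhole model: distance = real_width * focal_length / pixel_width."""
    return KNOWN_WIDTH_M[label] * FOCAL_LENGTH_PX / bbox_width_px


# A car whose bounding box is 100 px wide is estimated at 18 m away.
print(estimate_distance("car", 100.0))  # -> 18.0
```

Accuracy degrades when a real object deviates from the assumed class width, so fixed-size schemes are a rough heuristic rather than a calibrated range measurement.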
Hello author, your project is very interesting, but I ran into some issues. I successfully ran convertOnnxToTensorRT.py to convert the YOLO model file to TRT. However, when running convertPytorchToONNX.py, the process terminated unexpectedly with no specific error message, as follows:
C:\Users\86152\anaconda3\envs\lane_det\python.exe C:/Users/86152/Desktop/AI/code/Vehicle-CV-ADAS-master/TrafficLaneDetector/convertPytorchToONNX.py
Model Type : UFLDV2_TUSIMPLE
C:\Users\86152\anaconda3\envs\lane_det\lib\site-packages\torchvision\models\_utils.py:209: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
C:\Users\86152\anaconda3\envs\lane_det\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Process finished with exit code -1073741819 (0xC0000005)
Could the author please provide the converted TRT file required for the project model? Thank you very much!
I ran into an issue when running demo.py.
The error was:
qt.qpa.xcb: could not connect to display :.0
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/nijuan/anaconda3/envs/python3.7/lib/python3.7/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: xcb.
Aborted (core dumped)
My local machine runs Windows, and I run the code on a remote Ubuntu server, which then reports this error. Could you tell me the reason? Thanks!
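The xcb error above typically occurs because cv2.imshow() needs an X display, which a remote SSH session usually lacks. A hedged sketch of a guard follows; can_use_gui is a hypothetical helper, not part of the project:

```python
import os


def can_use_gui() -> bool:
    """True only when an X display is reachable (DISPLAY is set).

    cv2.imshow() loads Qt's "xcb" platform plugin, which aborts on a
    headless server; checking DISPLAY lets the demo skip GUI calls.
    """
    return bool(os.environ.get("DISPLAY"))


# In the demo loop, guard every GUI call so the script still runs over SSH:
#     if can_use_gui():
#         cv2.imshow("ADAS", frame)
#     # otherwise rely only on the output video file the demo writes
```

Alternatives are enabling X forwarding (ssh -X) or running with QT_QPA_PLATFORM=offscreen so Qt never tries to open a window.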
From where can we get the onnx and/or trt files for the models?
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
File "demo.py", line 221, in <module>
laneDetector = UltrafastLaneDetectorV2(logger=LOGGER)
File "/media/huangzd5/handaphoser/projects/autodrive/Vehicle-CV-ADAS-master/TrafficLaneDetector/ultrafastLaneDetector/ultrafastLaneDetectorV2.py", line 208, in __init__
self._initialize_model(self.model_path, self.cfg)
File "/media/huangzd5/handaphoser/projects/autodrive/Vehicle-CV-ADAS-master/TrafficLaneDetector/ultrafastLaneDetector/ultrafastLaneDetectorV2.py", line 213, in _initialize_model
self.infer = TensorRTEngine(model_path, cfg)
File "/media/huangzd5/handaphoser/projects/autodrive/Vehicle-CV-ADAS-master/TrafficLaneDetector/ultrafastLaneDetector/ultrafastLaneDetectorV2.py", line 60, in __init__
self.context = self._create_context(engine)
File "/media/huangzd5/handaphoser/projects/autodrive/Vehicle-CV-ADAS-master/TrafficLaneDetector/ultrafastLaneDetector/ultrafastLaneDetectorV2.py", line 93, in _create_context
return engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
PyCUDA ERROR: The context stack was not empty upon module cleanup.
A context was still active when the context stack was being
cleaned up. At this point in our execution, CUDA may already
have been deinitialized, so there is no way we can finish
cleanly. The program will be aborted now.
Use Context.pop() to avoid this problem.
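The 'NoneType' traceback above means deserialize_cuda_engine() returned None: TensorRT engines are tied to the TensorRT/CUDA version and GPU they were built on, so a .trt file built elsewhere often fails to load. A sketch of failing early with a clear message; load_engine is a hypothetical helper, and runtime stands in for a tensorrt.Runtime instance:

```python
def load_engine(runtime, engine_path: str):
    """Deserialize a TensorRT engine, raising a clear error on failure.

    deserialize_cuda_engine() returns None (rather than raising) when the
    engine file is incompatible with the installed TensorRT version.
    """
    with open(engine_path, "rb") as f:
        engine = runtime.deserialize_cuda_engine(f.read())
    if engine is None:
        raise RuntimeError(
            f"Could not deserialize {engine_path}; the engine was likely "
            "built with a different TensorRT/CUDA version. Rebuild it on "
            "this machine from the ONNX file."
        )
    return engine
```

The practical fix is usually to regenerate the .trt file locally with convertOnnxToTensorRT.py instead of copying one built on another setup.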
I successfully converted the curvelanes/tusimple.pth into curvelanes/tusimple.onnx. When I try to achieve culane.onnx, the process was killed at torch.onnx.export(net, img, onnx_file_path, verbose=True).
Hello! I'm sorry to bother you; I have an issue with the project.
If the model does not come with training code, how can it efficiently detect video clips from datasets with different field-of-view characteristics? For example, the model detects lanes well on these videos, but I want to use a dataset with different lane parameters.
If the model does have training code, could you tell me where it is? (And please also point me to the test code.)
How to evaluate model accuracy and detection efficiency?
Thank you very much!
I want to submit this project with some modifications for my final-year project; do you have any documentation and metrics?
[2023-10-04 21:25:55] [INFO] [Pycuda] Cuda Version: (12, 2, 0)
[2023-10-04 21:25:55] [INFO] [Driver] Cuda Version: 12020
[2023-10-04 21:25:55] [INFO] UfldDetector Model Type : UFLDV2_CULANE
[10/04/2023-21:25:59] [TRT] [E] 3: [engine.cpp::nvinfer1::rt::Engine::getUserRegion::1312] Error Code 3: Internal Error (call of getBindingDimensions with invalid bindingIndex -1)
[2023-10-04 21:25:59] [INFO] UfldDetector Type : [trt] || Version : [CUDAExecutionProvider]
[2023-10-04 21:25:59] [INFO] YoloDetector Model Type : YOLOV8
[2023-10-04 21:25:59] [INFO] YoloDetector Type : [trt] || Version : [CUDAExecutionProvider]
Traceback (most recent call last):
File "c:\Users\us\Desktop\Vehicle-CV-ADAS\demo.py", line 267, in <module>
laneDetector.DetectFrame(frame)
File "c:\Users\us\Desktop\Vehicle-CV-ADAS\TrafficLaneDetector\ultrafastLaneDetector\ultrafastLaneDetectorV2.py", line 271, in DetectFrame
self.lanes_points, self.lanes_detected = self.process_output(output, self.cfg, original_image_width = self.img_width, original_image_height = self.img_height)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\us\Desktop\Vehicle-CV-ADAS\TrafficLaneDetector\ultrafastLaneDetector\ultrafastLaneDetectorV2.py", line 426, in process_output
return np.array(list(lanes_points.values())), list(lanes_detected.values())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (4,) + inhomogeneous part.
PyCUDA ERROR: The context stack was not empty upon module cleanup.
A context was still active when the context stack was being
cleaned up. At this point in our execution, CUDA may already
have been deinitialized, so there is no way we can finish
cleanly. The program will be aborted now.
Use Context.pop() to avoid this problem.
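The ValueError above comes from recent NumPy versions refusing to build a rectangular array from ragged per-lane point lists. A sketch of the underlying issue and the usual workaround; the sample data is illustrative, not the detector's real output:

```python
import numpy as np

# Each lane can contain a different number of points, so the per-lane
# lists are ragged. np.array() on ragged input raises "inhomogeneous
# shape" on NumPy >= 1.24; passing dtype=object keeps one Python list
# per lane instead of forcing a rectangular array.
lanes_points = {0: [(10, 5), (11, 6)], 1: [(50, 5)]}  # illustrative ragged data
arr = np.array(list(lanes_points.values()), dtype=object)  # shape (2,)
```

Pinning an older NumPy also avoids the error, but the dtype=object form is explicit about the data being ragged.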
if __name__ == '__main__':
    onnx_model_path = "./ObjectDetector/models/yolov8m-coco.onnx"
    trt_model_path = "./ObjectDetector/models/yolov8m-coco.trt"
When I run python3 convertOnnxToTensorRT.py, I get the following error:
File=[ ./ObjectDetector/models/yolov8m-coco.onnx ] is not exist. Please check it !
Why does running python demo.py report no errors, yet the saved video is only 1 KB?
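A roughly 1 KB output file usually means cv2.VideoWriter only wrote a container header: write() drops frames silently when the codec is unavailable or the frame size passed to the writer differs from the frames actually written. A hedged post-run sanity check; looks_empty and the threshold are illustrative, not part of the project:

```python
import os


def looks_empty(video_path: str, min_bytes: int = 10_000) -> bool:
    """A real output video is far larger than a bare container header;
    a file of ~1 KB means no frames were actually encoded."""
    return os.path.getsize(video_path) < min_bytes
```

When this triggers, check that vout.isOpened() is True right after constructing the VideoWriter, and that each frame's shape matches the (width, height) the writer was created with.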
To use this code, I have to use a GPU, right?
I'm getting an error; I can't install TensorRT!