
darknet's People

Contributors

acxz, adujardin, alexeyab, aughey, avensun, bouncyphoton, cenit, compaile, cyyever, davidssmith, duohappy, enesozi, ilyaovodov, imaami, jaledmc, jklemmack, judwhite, jveitchmichaelis, lineofbestgit, lordofkillz, marvinklemp, mmaaz60, pjreddie, stephanecharette, tiagoshibata, tigerhawkvok, tinohager, tomheaven, vinjn, willbattel


darknet's Issues

darknet.py references load_image_color, which was removed

load_image_color was removed here: d3e3339#diff-1669d998bbe197992292514f3da1f46a151d50cd474e33d5407a307897942fa0

It's still referenced here:

load_image = lib.load_image_color

Which causes python code to fail with: AttributeError: /usr/lib/libdarknet.so: undefined symbol: load_image_color

This change seems to work for me, but I'm not familiar enough with this project to know whether other changes are also needed:

load_image = lib.load_image
load_image.argtypes = [c_char_p, c_int, c_int, c_int]
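A more defensive binding could probe for whichever symbol the loaded library actually exports. This is only a sketch, not the project's official fix; the symbol names and signatures come from the snippets above (load_image_color took three arguments, load_image takes four, the last being the channel count):

```python
from ctypes import c_char_p, c_int

def bind_image_loader(lib):
    """Bind whichever image-loading symbol the library exports.

    Per the report above: older libdarknet builds exported
    load_image_color(file, w, h); newer builds only export
    load_image(file, w, h, channels).
    """
    fn = getattr(lib, "load_image_color", None)
    if fn is not None:
        fn.argtypes = [c_char_p, c_int, c_int]
        return lambda path, w, h: fn(path, w, h)
    fn = getattr(lib, "load_image", None)
    if fn is None:
        raise AttributeError(
            "library exports neither load_image_color nor load_image")
    fn.argtypes = [c_char_p, c_int, c_int, c_int]
    # channels=3 reproduces the old load_image_color behaviour
    return lambda path, w, h: fn(path, w, h, 3)
```

With a helper like this, darknet.py would keep working across both library versions instead of crashing with an undefined-symbol AttributeError at import time.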

On Windows, a build issue forces applications to link against darknet.exe instead of darknet.dll

Strange CMake + Visual Studio bug. If the library and the executable have the same name, it causes linker issues. In this case, we have darknet.dll and darknet.exe, so third-party apps that attempt to link against darknet.dll end up linking against darknet.exe instead.

When you run Dependency Walker against an application that links against darknet.lib / darknet.dll, it will instead report that the application is linked against darknet.exe.
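One common way around this class of collision is to give the two targets distinct base names, so that darknet.lib (the DLL's import library) and the executable never share a filename. The target names below (darknet_lib and darknet_cli) are hypothetical; the actual target names in this project's CMakeLists may differ:

```cmake
# Hypothetical sketch: keep the DLL and its import library named
# darknet.*, but emit the executable under a different base name so
# the MSVC linker can no longer resolve "darknet" to the .exe.
set_target_properties(darknet_lib PROPERTIES OUTPUT_NAME "darknet")
set_target_properties(darknet_cli PROPERTIES OUTPUT_NAME "darknet_cli")
```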

bottom of chart.png shows text rendering issues

In chart.png, when the user is in a locale with a long timezone name, the text is rendered incorrectly. It looks like the area where the timestamp is drawn is not erased completely, or not erased far enough.

For example:

[screenshot omitted]

Problem when using Darknet in demo mode

I am not 100% sure, but it seems that when using Darknet in demo mode on Ubuntu with the command "./darknet detector demo ... -out_filename ...XXX.avi" and a local video, the resulting video draws the boxes around the objects one iteration ahead of the frames.

Yolov9

YOLOv9 was published roughly a week ago, in Feb 2024: AlexeyAB/darknet#8887
Feature request: Darknet framework compatible with YOLOv9.

the role of compare_yolo_class

Hi guys, I want to apologize if this is a misleading title. I'm trying to comprehend the YOLO loss calculation in yolo_layer.cpp, and one detail in particular seems like a bug. On line 603: int class_id_match = compare_yolo_class(l.output, l.classes, class_index, l.w * l.h, objectness, class_id, 0.25f); Judging by the variable name, this function is supposed to return whether the truth class we are comparing the box against appears in the output in any way. Yet inside the function we simply return true if any class at all is above a certain threshold. Am I wrong? Moreover, it takes class_id as an argument but never uses it. This seems really counterintuitive. Again, sorry if this was already brought up, or if it should be this way for some unknown reason. If the behaviour is correct, maybe the function can be renamed to any_class_response or something similar, to better reflect its purpose and return value?
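For reference, here is a small Python paraphrase of the behaviour the reporter describes. This is an approximation built from the description above, not a verified transcription of yolo_layer.cpp: the function returns true as soon as any class score clears the threshold, and class_id plays no role.

```python
def any_class_above_threshold(output, classes, class_index, stride,
                              class_id=None, thresh=0.25):
    """Paraphrase of the reported compare_yolo_class() behaviour:
    return True as soon as ANY class score clears the threshold.
    class_id is accepted but deliberately unused, mirroring the
    reporter's observation."""
    for j in range(classes):
        if output[class_index + stride * j] > thresh:
            return True
    return False
```

Seen this way, a name like any_class_response would indeed describe the function more accurately than class_id_match.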

Cannot open include file: 'cuda_runtime.h'

I am unable to build, so I cannot provide a darknet version. My issue is as of commit 4f93ab7 on Dec 1.

I have created a clean install of Windows 11 Pro 22H2 (VM), and followed the instructions in the README.

First, I got the following error from cmake:

C:\src\darknet\build>cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_TOOLCHAIN_FILE=C:/src/vcpkg/scripts/buildsystems/vcpkg.cmake ..
-- Building for: Visual Studio 17 2022
CMake Error at CMakeLists.txt:28 (PROJECT):
  VERSION ".." format invalid.


-- Configuring incomplete, errors occurred!

I had to modify CM_version.cmake to explicitly set DARKNET_VERSION_SHORT:

SET (DARKNET_VERSION_SHORT 1.0.0)

Now I can run cmake.

Then, running the next step (msbuild, either from the command line or from Visual Studio), I get several dozen errors saying "Cannot open include file: 'cuda_runtime.h'":

>C:\src\darknet\src\darknet.h(41,10): error C1083: Cannot open include file: 'cuda_runtime.h': No such file or directory [C:\src\darknet\build\src\darknetobjlib.vcxproj]
         (compiling source file '../../src/Chart.cpp')

     7>C:\src\darknet\src\darknet.h(41,10): error C1083: Cannot open include file: 'cuda_runtime.h': No such file or directory [C:\src\darknet\build\src\darknetobjlib.vcxproj]
         (compiling source file '../../src/activation_layer.cpp')

         activations.cpp
     7>C:\src\darknet\src\darknet.h(41,10): error C1083: Cannot open include file: 'cuda_runtime.h': No such file or directory [C:\src\darknet\build\src\darknetobjlib.vcxproj]
         (compiling source file '../../src/activations.cpp')

         art.cpp
     7>C:\src\darknet\src\darknet.h(41,10): error C1083: Cannot open include file: 'cuda_runtime.h': No such file or directory [C:\src\darknet\build\src\darknetobjlib.vcxproj]
         (compiling source file '../../src/art.cpp')

I have verified the files exist at C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3\include, but I'm not sure what I've missed that would cause that folder to be added to the include path during compilation.

Training with multiple GPUs is not faster than 1 GPU???

I followed the guide to train my dataset with multiple GPUs, but the speed in both cases is the same. I use the same config:

batch=64
subdivisions=32     # 16 OOM
width=512
height=512
...
max_batches=10000

I checked GPU usage, and almost all GPUs are being used.

@AlexeyAB
Could you help me?
I use the same batch, max_batches, and subdivisions for 1 GPU and for multiple GPUs, but the training time is the same.

I read issue AlexeyAB/darknet#1165, and @AlexeyAB, you commented on that issue as well.

As I understand it, if we use 4 GPUs we need to reduce max_batches by a factor of 4 (compared to the 1-GPU case) to get better overall training time, because with more GPUs more images are processed in each iteration, and we change lr and burn_in if needed as described in https://github.com/AlexeyAB/darknet/tree/64efa721ede91cd8ccc18257f98eeba43b73a6af#how-to-train-with-multi-gpu. Is that right?
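As a worked example of that scaling (my reading of the linked multi-GPU instructions, so treat the exact factors as an assumption rather than confirmed behaviour): with N GPUs each iteration consumes N times as many images, so max_batches shrinks by N, while the usual advice is to divide learning_rate by N and multiply burn_in by N.

```python
def scale_cfg_for_gpus(max_batches, learning_rate, burn_in, n_gpus):
    """Scale single-GPU .cfg values for an n_gpus run (sketch only).

    Rationale: each iteration now sees n_gpus * batch images, so fewer
    iterations cover the same amount of data.
    """
    return {
        "max_batches": max_batches // n_gpus,
        "learning_rate": learning_rate / n_gpus,
        "burn_in": burn_in * n_gpus,
    }
```

For the cfg above moved from 1 to 4 GPUs, max_batches=10000 would become 2500, which is why the wall-clock time per iteration staying constant still means a roughly 4x shorter training run.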

darknet crashes when calculating mAP% at iteration #1000

User "cmorzy" reported today that they're still seeing the error/crash when Darknet reaches iteration #1000. A copy of the dataset, .names, and .cfg is available.

The exact message they're seeing is:

* * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* A fatal error has been detected.  Darknet will now exit.
* Error location: ./src/convolutional_kernels.cu, forward_convolutional_layer_gpu(), line #546
* Error message:  cuDNN current error: status=3, CUDNN_STATUS_BAD_PARAM
* * * * * * * * * * * * * * * * * * * * * * * * * * * * *
backtrace (13 entries):
1/13: ./darknet(log_backtrace+0x38) [0x560b3fb79128]
2/13: ./darknet(darknet_fatal_error+0x19d) [0x560b3fb7936d]
3/13: ./darknet(cudnn_check_error_extended+0x83) [0x560b3fb7bf83]
4/13: ./darknet(forward_convolutional_layer_gpu+0x2c5) [0x560b3fc56985]
5/13: ./darknet(forward_network_gpu+0xe1) [0x560b3fc6af81]
6/13: ./darknet(network_predict_gpu+0x140) [0x560b3fc6d800]
7/13: ./darknet(validate_detector_map+0xa49) [0x560b3fc02f29]
8/13: ./darknet(train_detector+0x1ce0) [0x560b3fc05f70]
9/13: ./darknet(run_detector+0x9f6) [0x560b3fc09996]
10/13: ./darknet(main+0x4b3) [0x560b3fb308b3]
11/13: /lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x7f6ed5bd7d90]
12/13: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x7f6ed5bd7e40]
13/13: ./darknet(_start+0x25) [0x560b3fb32b25]
Segmentation fault (core dumped)

error: building tiff:x64-windows failed with: BUILD_FAILED

Hi! I'm trying to follow the steps for building darknet on Windows 11, but I'm facing this error when I try to install OpenCV:

C:\src\vcpkg>.\vcpkg.exe install opencv[contrib,dnn,freetype,jpeg,openmp,png,webp,world]:x64-windows
Computing installation plan...
The following packages will be built and installed:

  • leptonica:[email protected]
  • libarchive[bzip2,core,crypto,libxml2,lz4,lzma,zstd]:[email protected]
  • libiconv:[email protected]#3
  • libxml2[core,iconv,lzma,zlib]:[email protected]
  • lz4:[email protected]#1
    opencv[contrib,core,default-features,dnn,freetype,jpeg,openmp,png,webp,world]:[email protected]#1
  • opencv4[contrib,core,default-features,dnn,freetype,jpeg,openmp,png,quirc,tiff,webp,world]:[email protected]#15
  • openssl:[email protected]#2
  • protobuf:[email protected]#1
  • quirc:[email protected]
  • tesseract:[email protected]
  • tiff[core,jpeg,lzma,zip]:x64-windows@4.6.0#4
  • vcpkg-cmake-get-vars:x64-windows@2023-12-31
  • vcpkg-get-python-packages:x64-windows@2024-01-24
  • zstd:[email protected]#2
    Additional packages (*) will be modified to complete this operation.
    Detecting compiler hash for triplet x64-windows...
    Compiler found: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.39.33519/bin/Hostx64/x64/cl.exe
    Restored 3 package(s) from C:\Users\Radu\AppData\Local\vcpkg\archives in 973 ms. Use --debug to see more details.
    Installing 1/15 tiff[core,jpeg,lzma,zip]:x64-windows@4.6.0#4...
    Building tiff[core,jpeg,lzma,zip]:x64-windows@4.6.0#4...
    -- Using cached libtiff-libtiff-v4.6.0.tar.gz.
    -- Cleaning sources at C:/src/vcpkg/buildtrees/tiff/src/v4.6.0-cea7694842.clean. Use --editable to skip cleaning for the packages you specify.
    -- Extracting source C:/src/vcpkg/downloads/libtiff-libtiff-v4.6.0.tar.gz
    -- Applying patch FindCMath.patch
    -- Applying patch requires-lerc.patch
    -- Using source at C:/src/vcpkg/buildtrees/tiff/src/v4.6.0-cea7694842.clean
    -- Found external ninja('1.11.0').
    -- Configuring x64-windows
    CMake Error at scripts/cmake/vcpkg_execute_required_process.cmake:112 (message):
    Command failed: "C:/Program Files/Microsoft Visual Studio/2022/Community/Common7/IDE/CommonExtensions/Microsoft/CMake/Ninja/ninja.exe" -v
    Working Directory: C:/src/vcpkg/buildtrees/tiff/x64-windows-rel/vcpkg-parallel-configure
    Error code: 1
    See logs for more information:
    C:\src\vcpkg\buildtrees\tiff\config-x64-windows-dbg-CMakeCache.txt.log
    C:\src\vcpkg\buildtrees\tiff\config-x64-windows-rel-CMakeCache.txt.log
    C:\src\vcpkg\buildtrees\tiff\config-x64-windows-out.log

Call Stack (most recent call first):
installed/x64-windows/share/vcpkg-cmake/vcpkg_cmake_configure.cmake:252 (vcpkg_execute_required_process)
ports/tiff/portfile.cmake:33 (vcpkg_cmake_configure)
scripts/ports.cmake:175 (include)

error: building tiff:x64-windows failed with: BUILD_FAILED
Elapsed time to handle tiff:x64-windows: 7.7 s
Please ensure you're using the latest port files with git pull and vcpkg update.
Then check for known issues at:
https://github.com/microsoft/vcpkg/issues?q=is%3Aissue+is%3Aopen+in%3Atitle+tiff
You can submit a new issue at:
https://github.com/microsoft/vcpkg/issues/new?title=[tiff]+Build+error+on+x64-windows&body=Copy+issue+body+from+C%3A%2Fsrc%2Fvcpkg%2Finstalled%2Fvcpkg%2Fissue_body.md

Error after compiling on windows 11 using Cuda 12.3

Hello,
I'm having issues running the project on Windows 11. I receive the same error with different models, input sources, etc.

Full command with logs:
c:\Darknet\bin>C:\Darknet\bin\darknet.exe detect cfg/coco.names cfg/yolov7.cfg yolov7.weights test.jpg
Darknet v2.0-75-g31412920
CUDA runtime version 12030 (v12.3), driver version 12030 (v12.3)
cuDNN version 12020 (v8.9.6), use of half-size floats is ENABLED
=> 0: NVIDIA GeForce RTX 3070 Ti [#8.6], 8.0 GiB
OpenCV v4.8.0


  • Error location: C:\src\darknet\src-cli\darknet.cpp, darknet_signal_handler(), line #431
  • Error message: signal handler invoked for signal #11
  • Version v2.0-75-g31412920 built on Dec 29 2023 13:25:41

backtrace (15 entries):
1/15: řJ()
2/15: řJ()
3/15: řJ()
4/15: log2f()
5/15: log2f()
6/15: _C_specific_handler()
7/15: _chkstk()
8/15: RtlFindCharInUnicodeString()
9/15: KiUserExceptionDispatcher()
10/15: KiUserExceptionDispatcher()
11/15: KiUserExceptionDispatcher()
12/15: KiUserExceptionDispatcher()
13/15: KiUserExceptionDispatcher()
14/15: BaseThreadInitThunk()
15/15: RtlUserThreadStart()

I would much appreciate some help!
Thanks

yolo-seg

Is there a yolo-seg implementation? Thanks.

Error Building in Docker

Issue: Darknet GPU Detection Error in Docker

I'm trying to get DarkMark working in Docker with GPU support. Although my GPU setup seems correct, I'm encountering errors while building this repository's version of Darknet. Here’s my Dockerfile:

# Use an official Ubuntu base image
FROM nvidia/cuda:11.0.3-devel-ubuntu18.04
ENV DEBIAN_FRONTEND=noninteractive

# Install necessary packages
RUN apt-get update && apt-get install -y \
    build-essential \
    wget \
    git \
    libopencv-dev \
    pkg-config \
    x11-apps \
    libgtk2.0-dev \
    libcanberra-gtk-module \
    libcanberra-gtk3-module \
    libtclap-dev \
    libmagic-dev \
    libx11-dev \
    libfreetype6-dev \
    libxrandr-dev \
    libxinerama-dev \
    libxcursor-dev \
    libpoppler-cpp-dev \
    software-properties-common && \
    add-apt-repository ppa:ubuntu-toolchain-r/test -y && \
    apt-get update && \
    apt-get install -y gcc-9 g++-9 && \
    update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 60 --slave /usr/bin/g++ g++ /usr/bin/g++-9 && \
    rm -rf /var/lib/apt/lists/*

# Install a newer version of CMake
RUN wget https://github.com/Kitware/CMake/releases/download/v3.25.0/cmake-3.25.0-linux-x86_64.sh -O /tmp/cmake.sh && \
    mkdir /opt/cmake && \
    sh /tmp/cmake.sh --skip-license --prefix=/opt/cmake && \
    ln -s /opt/cmake/bin/cmake /usr/local/bin/cmake && \
    rm /tmp/cmake.sh && \
    cmake --version

# Download and install cuDNN
RUN wget https://gitea.privateserver.com/private/OpenCV-Cudnn/raw/branch/main/cudnn.tar.xz -O /tmp/cudnn.tar.xz && \
    tar -xJf /tmp/cudnn.tar.xz -C /tmp && \
    cp /tmp/cudnn/include/cudnn*.h /usr/local/cuda/include/ && \
    cp /tmp/cudnn/lib/libcudnn* /usr/local/cuda/lib64/ && \
    chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn* && \
    rm -rf /tmp/cudnn.tar.xz /tmp/cudnn

ENV CUDA_HOME=/usr/local/cuda
ENV PATH=$CUDA_HOME/bin:$PATH
ENV LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH

# Download and unpack OpenCV and contrib modules
RUN wget https://gitea.privateserver.com/private/OpenCV-Cudnn/raw/branch/main/opencv.tar.gz -O /opencv.tar.gz && \
    wget https://gitea.privateserver.com/private/OpenCV-Cudnn/raw/branch/main/opencv_contrib.tar.gz -O /opencv_contrib.tar.gz && \
    tar -zxvf /opencv.tar.gz && mv opencv-* opencv && \
    tar -zxvf /opencv_contrib.tar.gz && mv opencv_contrib-* opencv_contrib && \
    rm /opencv.tar.gz /opencv_contrib.tar.gz

# Build and install OpenCV with CUDA support
WORKDIR /opencv/build
RUN /opt/cmake/bin/cmake -D CMAKE_BUILD_TYPE=Release \
                         -D CMAKE_INSTALL_PREFIX=/usr/local \
                         -D OPENCV_EXTRA_MODULES_PATH=/opencv_contrib/modules \
                         -D WITH_CUDA=ON \
                         -D WITH_CUDNN=ON \
                         -D OPENCV_DNN_CUDA=ON \
                         -D ENABLE_FAST_MATH=1 \
                         -D WITH_GTK=ON \
                         -D CUDA_FAST_MATH=1 \
                         -D WITH_CUBLAS=1 \
                         -D CUDA_ARCH_BIN="7.5" \
                         -D OPENCV_GENERATE_PKGCONFIG=ON \
                         -D CUDA_ARCH_PTX="" \
                         -D BUILD_EXAMPLES=OFF .. && \
    make -j$(($(nproc) - 2)) && make install && ldconfig

# Clone the darknet repository
WORKDIR /
RUN git clone https://github.com/hank-ai/darknet /darknet

# Create build directory and build Darknet
WORKDIR /darknet/build
RUN cmake -DCMAKE_BUILD_TYPE=Release .. && \
    make -j$(($(nproc) - 2))

Error Message

The error I receive during the build process is:

...
2.227 CMake Error in src-lib/CMakeLists.txt:
2.227   CUDA_ARCHITECTURES is set to "native", but no GPU was detected.
2.227 
2.233 CMake Error in src-lib/CMakeLists.txt:
2.233   CUDA_ARCHITECTURES is set to "native", but no GPU was detected.
2.233 
2.238 CMake Error in src-cli/CMakeLists.txt:
2.238   CUDA_ARCHITECTURES is set to "native", but no GPU was detected.
2.238 
2.242 -- Generating done
2.242 CMake Generate step failed.  Build files cannot be regenerated correctly.
...

When I add the flag -DDARKNET_CUDA_ARCHITECTURES="75", I still get the same error.

When I modify CM_dependencies.cmake directly with sed, i.e.,

RUN sed -i 's/SET (DARKNET_CUDA_ARCHITECTURES "native")/# SET (DARKNET_CUDA_ARCHITECTURES "native")/' CM_dependencies.cmake && \
    sed -i 's/#\s*SET (DARKNET_CUDA_ARCHITECTURES "75")/SET (DARKNET_CUDA_ARCHITECTURES "75")/' CM_dependencies.cmake

I get the following error:

...
nvcc fatal   : 'avx': expected a number
make[2]: *** [src-lib/CMakeFiles/darknetobjlib.dir/activation_kernels.cu.o] Error 1
src-lib/CMakeFiles/darknetobjlib.dir/build.make:107: recipe for target 'src-lib/CMakeFiles/darknetobjlib.dir/activation_kernels.cu.o' failed
...
make: *** [all] Error 2
...
ERROR: failed to solve: process "/bin/sh -c cmake -DCMAKE_BUILD_TYPE=Release -DDARKNET_CUDA_ARCHITECTURES=\"75\" .. && make -j$(($(nproc) - 2))" did not complete successfully: exit code: 2
Docker build failed, not running the container.
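The `'avx': expected a number` failure is consistent with a host-compiler flag such as -mavx reaching nvcc directly, where nvcc parses -m as its machine-width option (-m32/-m64). This diagnosis is my assumption, not confirmed from the build files; a hedged sketch of the usual CMake remedy, using the darknetobjlib target name that appears in the log above, is to wrap host flags for CUDA sources:

```cmake
# Sketch: apply -mavx to C++ host code directly, but forward it through
# -Xcompiler for CUDA sources so nvcc does not read it as "-m avx".
target_compile_options(darknetobjlib PRIVATE
    $<$<COMPILE_LANGUAGE:CXX>:-mavx>
    $<$<COMPILE_LANGUAGE:CUDA>:-Xcompiler=-mavx>)
```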

NVIDIA-SMI Information

Here’s the output of nvidia-smi for additional context:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01             Driver Version: 535.183.01   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 2080 Ti     Off | 00000000:01:00.0  On |                  N/A |
|  0%   51C    P8              17W / 260W |    460MiB / 11264MiB |      2%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
+---------------------------------------------------------------------------------------+

Additional Information

I have GPU support set up and everything else is working correctly. Any help would be greatly appreciated!

Can't build on Ubuntu 22.04 with commands in README.md in Docker

In README.md it states: "These instructions assume a system running Ubuntu 22.04.", which implies that the commands that follow are sufficient to build this repo.

However, Ubuntu 22.04 (at least the Docker image nvidia/cuda:12.2.2-cudnn8-devel-ubuntu22.04) ships CMake version 3.22.1, which conflicts with CMAKE_MINIMUM_REQUIRED (VERSION 3.24) in CMakeLists.txt. And if one lowers that minimum, then SET (DARKNET_CUDA_ARCHITECTURES "native") in CM_dependencies.cmake does not work either; one has to comment it out and uncomment # SET (DARKNET_CUDA_ARCHITECTURES "75;80;86").

It is not a huge deal, but it seems like an inaccuracy in the README.md.

reduce the detection frame rate of tiny

My video frame rate is 90 Hz, but the detection rate of yolov4-tiny on Ubuntu 18 reaches 200 Hz on a 4090 graphics card. How can I reduce the detection frame rate of tiny to reduce the power draw of the graphics card? The card also has to handle other video detection work.
thanks
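Darknet itself has no documented frame-rate cap that I'm aware of, but if you drive detection from your own loop (for example via darknet.py), you can pace the calls yourself. A minimal sketch of the pacing arithmetic, where grab_frame() and detect() are hypothetical stand-ins for your actual capture and inference calls:

```python
import time

def sleep_needed(last_call, now, target_fps):
    """Seconds to wait so detections run at most target_fps per second."""
    period = 1.0 / target_fps
    return max(0.0, period - (now - last_call))

# Sketch of a capture loop capped at ~30 detections per second:
#
#   last = time.monotonic()
#   while True:
#       frame = grab_frame()
#       time.sleep(sleep_needed(last, time.monotonic(), 30))
#       last = time.monotonic()
#       detect(frame)
```

Sleeping between inferences leaves the GPU idle for part of each period, which directly reduces its average power draw and frees it for the other video work.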

Windows Vs Linux Performance Difference

Has anyone else ever taken the exact same dataset with the same bounding boxes, trained and run it on Linux, and then trained and run it on Windows?

If so, did you witness a performance difference?

I'm getting thousands of boxes in Windows when I lower the confidence threshold, but that is not happening in Linux. In Linux I'm getting logical detections, but in Windows the detections are mostly illogical.

Curious if anyone else has witnessed this.

Thanks

What is the difference between yolov4.weights and yolov4_conv_137.weights?

What is the difference between yolov4.weights and yolov4_conv_137.weights? I want to fine-tune a YOLOv4 model on my own dataset with no_classes=4; which one should I use?

And I will use the command

./darknet detector train ./my_data.data yolov4_custom.cfg <path-to-weights> -dont_show -clear

Is this command correct?
Thanks.

live webcam command does not work: "Video stream has stopped."

Attempting to run this command: darknet detector demo cfg/coco.data cfg/yolov4-tiny.cfg yolov4-tiny.weights -c 0

Instead of it showing the webcam, console messages are logged:

Video stream has stopped.
Video stream has stopped.
Video stream has stopped.
Video stream has stopped.
...etc...

darknet.py fails to load a network successfully if darknet file paths contain white space (similar to #40)

This issue is similar to issue #40

When attempting to load a network using darknet.py using paths containing white space, an error is raised

[screenshot omitted]
The specific error is triggered by the darknet.data file containing the path to the names folder.
Using double quotes in darknet.data does not fix the error either; in fact, the quotation marks end up included in the path.
It seems an internal function, get_paths() in src-lib/data.cpp, raises this error after trimming the white space from the path.

Input example:

# Dynamically locates Darknet Projects folder located in home/user/Documents/Darknet Projects/...
home_dir = os.path.expanduser("~")
documents_dir = os.path.join(home_dir, "Documents")
darknet_pth = os.path.join(documents_dir, "Darknet Projects/")

yolo_weights_pth = darknet_pth + str(yolo_weights)
config_pth = darknet_pth + str(yolo_weights).split('.')[0] + ".cfg"
data_pth = darknet_pth + str(yolo_weights).split('.')[0] + ".data"
model, class_names, class_colors = load_network(config_pth, data_pth, yolo_weights_pth)

If any of the paths config_pth, data_pth, or yolo_weights_pth contains white space, it will fail in every case.
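Until the trimming in src-lib/data.cpp is fixed, a pre-flight check on the Python side at least fails with a readable message instead of a mysterious load error deep inside the library. This helper is hypothetical, not part of darknet.py:

```python
import os

def assert_darknet_safe_path(path):
    """Hypothetical guard: reject paths darknet reportedly mishandles.

    Based on the behaviour described above (whitespace in any path
    component makes load_network fail), raise early and clearly.
    """
    if any(ch.isspace() for ch in path):
        raise ValueError(
            f"darknet cannot currently handle whitespace in paths: {path!r}")
    if not os.path.exists(path):
        raise FileNotFoundError(path)
    return path
```

Calling it on config_pth, data_pth, and yolo_weights_pth before load_network() would surface the problem immediately, instead of failing inside the C++ code.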

Expected output:
Successfully load the network in all cases, even if the path contains white space.

Considering this error occurs in two different cases (darknet.py, and the CLI tool as mentioned in issue #40), it's a low-level issue originating in src-lib/data.cpp.

Please look into it.

Training Fails When Folder Name Contains Spaces

I've encountered a bug while attempting to train a model using darknet. The training process is interrupted and fails to continue when a folder name contains a space (e.g., "new folder"). Below are the details for replicating and understanding the issue.

Bug Description:
When initiating training, if the dataset is located within a folder whose name contains a space character, training does not start and the process exits prematurely with error messages pointing to the folder name as the problem.

Steps to Reproduce:

Create a folder with a space in the name, such as "new folder".
Place the dataset inside this folder.
Run the usual command to start training (darknet detector -map -dont_show train new folder.data animals.cfg).
Observe that the training process halts unexpectedly.
Expected Behavior:
The training process should either handle folder names with spaces or provide a clear error message indicating the issue.

Actual Behavior:
The training process stops without a clear indication as to the reason.

Environment:

Darknet version: (Darknet v2.0-85-g8f8746d7-dirty)
Operating System: (windows 10/11 home and pro)
This issue was encountered on multiple attempts, and renaming the folder to eliminate the space character resolved the problem, confirming the bug's nature.
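The failure in step 3 can be seen without darknet at all: the shell splits the unquoted name before the program runs, so "new folder.data" arrives as two separate arguments. A quick demonstration (quoting fixes the splitting at the shell level, though per other reports in this thread darknet may still mishandle spaces internally, so a space-free folder name remains the reliable workaround):

```shell
# Count how many arguments actually reach a program.
count_args() { echo "$#"; }

count_args new folder.data      # prints 2 -- the shell split the name
count_args "new folder.data"    # prints 1 -- quoting keeps it whole
```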

LOSS is zero when training on RTX4060

[screenshots of the training output omitted]

I changed the CMake configuration to

SET (DARKNET_CUDA_ARCHITECTURES "75;80;86;89")

Darknet can now run on the RTX 4060, but after about 300 iterations the loss is always 0. The same setup running on an RTX 3060 is still fine.
How can I fix this?
Thank you.
@stephanecharette

segfault when training a network

Following the recent cleanup in free_layer_custom(), we're now seeing a segfault while training a network. Reported by CrazyBoris on Discord:

darknet detector train ships.data yolov4-custom.cfg -map
Darknet v2.0-108-g4eaec0f7
...
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.05
nms_kind: greedynms (1), beta = 0.600000 
Total BFLOPS 59.563 
avg_outputs = 489778 
Allocating workspace to transfer between CPU and GPU:  50.0 MiB

* * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* Error location: /home/borys/Tools/darknet/src-cli/darknet.cpp, darknet_signal_handler(), line #431
* Error message:  signal handler invoked for signal #11 (Segmentation fault)
* Version v2.0-108-g4eaec0f7 built on Feb 17 2024 00:31:24
* * * * * * * * * * * * * * * * * * * * * * * * * * * * *
backtrace (11 entries):
1/11: darknet(_Z13log_backtracev+0x38) [0x5f0b4d1b6568]
2/11: darknet(darknet_fatal_error+0x208) [0x5f0b4d1b6818]
3/11: /lib/x86_64-linux-gnu/libc.so.6(+0x42520) [0x74ef5ce42520]
4/11: /lib/x86_64-linux-gnu/libc.so.6(free+0x1e) [0x74ef5cea53fe]
5/11: darknet(free_layer_custom+0x8ef) [0x5f0b4d17827f]
6/11: darknet(train_detector+0x3220) [0x5f0b4d1150c0]
7/11: darknet(_Z12run_detectoriPPc+0xb3d) [0x5f0b4d11829d]
8/11: darknet(main+0x44f) [0x5f0b4d09790f]
9/11: /lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x74ef5ce29d90]
10/11: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x74ef5ce29e40]
11/11: darknet(_start+0x25) [0x5f0b4d09a8a5]

Could not build under EndeavourOS (Archlinux)

Compilation produces only the CPU version:

vfbsilva@isengard ~/Source/darknet_hank-ai/darknet/build $ cmake -DCMAKE_BUILD_TYPE=Release ..
-- Darknet v2.0-145-gc876e4e5
CMake Warning at CM_dependencies.cmake:47 (MESSAGE):
  CUDA not found.  Darknet will be CPU-only.
Call Stack (most recent call first):
  CMakeLists.txt:34 (INCLUDE)


-- Hardware is 32-bit or 64-bit, and seems to be Intel or AMD:  x86_64
-- Found Threads 
-- Found OpenCV 4.9.0
-- Found OpenMP 
-- Enabling AVX and SSE optimizations.
-- Making an optimized release build.
-- Setting up DARKNET OBJ
-- Setting up DARKNET LIB
-- Setting up DARKNET CLI
-- Configuring done (0.1s)
-- Generating done (0.0s)
-- Build files have been written to: /home/vfbsilva/Source/darknet_hank-ai/darknet/build

But nvidia-smi:

vfbsilva@isengard ~/Source/darknet_hank-ai/darknet/build $ nvidia-smi
Thu Apr 18 22:15:56 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.67                 Driver Version: 550.67         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3060        Off |   00000000:01:00.0 Off |                  N/A |
|  0%   34C    P8              8W /  170W |       8MiB /  12288MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A       847      G   /usr/lib/Xorg                                   4MiB |
+-----------------------------------------------------------------------------------------+

And nvcc:

$ nvcc
nvcc fatal   : No input files specified; use option --help for more information

Both are available in the shell. What am I missing?

Model Won't Train on Server

Hi there, I am trying to run a yolov3-tiny model on a server, and I get a few errors. Here are the steps I took and the resulting messages:

module load nvidia/sdk/21.3

cd /users/XXX/Hank_Darknet/darknet/build

cmake -DCMAKE_BUILD_TYPE=Release \
  -DCUDAToolkit_CUPTI_INCLUDE_DIR=/opt/software/nvidia/sdk/Linux_x86_64/21.3/cuda/11.2/extras/CUPTI/include \
  -DCMAKE_CXX_FLAGS="-I/opt/software/nvidia/sdk/Linux_x86_64/21.3/cuda/11.2/include" ..

make -j$(nproc)

And this all seemed to build ok.

The darknet binary was not where I expected it to be; it was in the src folder inside the build folder.

I then tried to run the training command from the src folder:

./darknet detector train /users/XXX/Hank_Darknet/darknet/mydata/coco.data /users/XXX/Hank_Darknet/darknet/mydata/yolov3-tiny.cfg /users/XXX/Hank_Darknet/darknet/mydata/yolov3-tiny.conv.15 -map

It didn’t train as it aborted with the following:

CUDA runtime version 11020 (v11.2), driver version 11060 (v11.6)
cuDNN is DISABLED
=> NVIDIA A100-PCIE-40GB [#8.0], 39.4 GiB
OpenCV version: 4.5.5
Prepare additional network for mAP calculation...
0 : compute_capability = 800, cudnn_half = 0, GPU: NVIDIA A100-PCIE-40GB
net.optimized_memory = 0
mini_batch = 1, batch = 64, time_steps = 1, train = 0
layer filters size/strd(dil) input output
0 Create CUDA-stream - 0 conv 16 3 x 3/ 1 640 x 640 x 3 -> 640 x 640 x 16 0.354 BF
1 max 2x 2/ 2 640 x 640 x 16 -> 320 x 320 x 16 0.007 BF
2 conv 32 3 x 3/ 1 320 x 320 x 16 -> 320 x 320 x 32 0.944 BF
3 max 2x 2/ 2 320 x 320 x 32 -> 160 x 160 x 32 0.003 BF
4 conv 64 3 x 3/ 1 160 x 160 x 32 -> 160 x 160 x 64 0.944 BF
5 max 2x 2/ 2 160 x 160 x 64 -> 80 x 80 x 64 0.002 BF
6 conv 128 3 x 3/ 1 80 x 80 x 64 -> 80 x 80 x 128 0.944 BF
7 max 2x 2/ 2 80 x 80 x 128 -> 40 x 40 x 128 0.001 BF
8 conv 256 3 x 3/ 1 40 x 40 x 128 -> 40 x 40 x 256 0.944 BF
9 max 2x 2/ 2 40 x 40 x 256 -> 20 x 20 x 256 0.000 BF
10 conv 512 3 x 3/ 1 20 x 20 x 256 -> 20 x 20 x 512 0.944 BF
11 max 2x 2/ 1 20 x 20 x 512 -> 20 x 20 x 512 0.001 BF
12 conv 1024 3 x 3/ 1 20 x 20 x 512 -> 20 x 20 x1024 3.775 BF
13 conv 256 1 x 1/ 1 20 x 20 x1024 -> 20 x 20 x 256 0.210 BF
14 conv 512 3 x 3/ 1 20 x 20 x 256 -> 20 x 20 x 512 0.944 BF
15 conv 18 1 x 1/ 1 20 x 20 x 512 -> 20 x 20 x 18 0.007 BF
16 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
17 route 13 -> 20 x 20 x 256
18 conv 128 1 x 1/ 1 20 x 20 x 256 -> 20 x 20 x 128 0.026 BF
19 upsample 2x 20 x 20 x 128 -> 40 x 40 x 128
20 route 19 8 -> 40 x 40 x 384
21 conv 256 3 x 3/ 1 40 x 40 x 384 -> 40 x 40 x 256 2.831 BF
22 conv 18 1 x 1/ 1 40 x 40 x 256 -> 40 x 40 x 18 0.015 BF
23 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
Total BFLOPS 12.894
avg_outputs = 768866
Allocating workspace to transfer between CPU and GPU: 56.2 MiB
Remembering 1 class: -> class #0 (Pole) will use colour #FF00FF
0 : compute_capability = 800, cudnn_half = 0, GPU: NVIDIA A100-PCIE-40GB
net.optimized_memory = 0
mini_batch = 1, batch = 64, time_steps = 1, train = 1
layer filters size/strd(dil) input output
0 conv 16 3 x 3/ 1 640 x 640 x 3 -> 640 x 640 x 16 0.354 BF
1 max 2x 2/ 2 640 x 640 x 16 -> 320 x 320 x 16 0.007 BF
2 conv 32 3 x 3/ 1 320 x 320 x 16 -> 320 x 320 x 32 0.944 BF
3 max 2x 2/ 2 320 x 320 x 32 -> 160 x 160 x 32 0.003 BF
4 conv 64 3 x 3/ 1 160 x 160 x 32 -> 160 x 160 x 64 0.944 BF
5 max 2x 2/ 2 160 x 160 x 64 -> 80 x 80 x 64 0.002 BF
6 conv 128 3 x 3/ 1 80 x 80 x 64 -> 80 x 80 x 128 0.944 BF
7 max 2x 2/ 2 80 x 80 x 128 -> 40 x 40 x 128 0.001 BF
8 conv 256 3 x 3/ 1 40 x 40 x 128 -> 40 x 40 x 256 0.944 BF
9 max 2x 2/ 2 40 x 40 x 256 -> 20 x 20 x 256 0.000 BF
10 conv 512 3 x 3/ 1 20 x 20 x 256 -> 20 x 20 x 512 0.944 BF
11 max 2x 2/ 1 20 x 20 x 512 -> 20 x 20 x 512 0.001 BF
12 conv 1024 3 x 3/ 1 20 x 20 x 512 -> 20 x 20 x1024 3.775 BF
13 conv 256 1 x 1/ 1 20 x 20 x1024 -> 20 x 20 x 256 0.210 BF
14 conv 512 3 x 3/ 1 20 x 20 x 256 -> 20 x 20 x 512 0.944 BF
15 conv 18 1 x 1/ 1 20 x 20 x 512 -> 20 x 20 x 18 0.007 BF
16 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
17 route 13 -> 20 x 20 x 256
18 conv 128 1 x 1/ 1 20 x 20 x 256 -> 20 x 20 x 128 0.026 BF
19 upsample 2x 20 x 20 x 128 -> 40 x 40 x 128
20 route 19 8 -> 40 x 40 x 384
21 conv 256 3 x 3/ 1 40 x 40 x 384 -> 40 x 40 x 256 2.831 BF
22 conv 18 1 x 1/ 1 40 x 40 x 256 -> 40 x 40 x 18 0.015 BF
23 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
Total BFLOPS 12.894
avg_outputs = 768866
Allocating workspace to transfer between CPU and GPU: 56.2 MiB
Loading weights from /users/XXX/Hank_Darknet/darknet/mydata/yolov3-tiny.conv.15... seen 64, trained: 0 K-images (0 Kilo-batches_64)
Done! Loaded 15 layers from weights-file
Learning Rate: 0.001, Momentum: 0.9, Decay: 0.0005
Detection layer #16 is type 28 (yolo)
Detection layer #23 is type 28 (yolo)
mAP calculations will be every 100 iterations
weights will be saved every 1000 iterations
Resizing, random_coef = 1.40
928 x 928
Create 6 permanent cpu-threads
Allocating workspace to transfer between CPU and GPU: 118.3 MiB
Workspace begins at 0x14acee000000
loaded 64 images in 357.070 milliseconds
v3 (mse loss, Normalizer: (iou: 0.75, obj: 1.00, cls: 1.00) Region 16 Avg (IOU: 0.000000), count: 1, class_loss = 529.588867, iou_loss = 0.000000, total_loss = 529.588867
v3 (mse loss, Normalizer: (iou: 0.75, obj: 1.00, cls: 1.00) Region 23 Avg (IOU: 0.212021), count: 2, class_loss = 2539.934326, iou_loss = 22.478271, total_loss = 2562.412598
total_bbox=2, rewritten_bbox=0.000000%
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.5.5) /users/acs03114/software/opencv/opencv-4.5.5_build/modules/highgui/src/window.cpp:1334: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvWaitKey'
* * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* A fatal error has been detected.  Darknet will now exit.
* Errno 2: No such file or directory * Error location: /users/XXX/Hank_Darknet/darknet/src/darknet.cpp, darknet_signal_handler(), line #443 * Error message: signal handler invoked for signal #6 (Aborted) * Version v2.0-11-gd01e285a-dirty built on Nov 10 2023 12:45:29 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * backtrace (20 entries): 1/20: ./darknet(_Z13log_backtracev+0x21) [0x5329b1] 2/20: ./darknet(darknet_fatal_error+0x181) [0x532bc1] 3/20: /lib64/libc.so.6(+0x37400) [0x14add1562400] 4/20: /lib64/libc.so.6(gsignal+0x10f) [0x14add156237f] 5/20: /lib64/libc.so.6(abort+0x127) [0x14add154cdb5] 6/20: /opt/software/anaconda/python-3.9.7/2021.11/lib/libstdc++.so.6(_ZN9__gnu_cxx27__verbose_terminate_handlerEv+0xbc) [0x14ade6b91872] 7/20: /opt/software/anaconda/python-3.9.7/2021.11/lib/libstdc++.so.6(+0xacf6f) [0x14ade6b8ff6f] 8/20: /opt/software/anaconda/python-3.9.7/2021.11/lib/libstdc++.so.6(+0xacfb1) [0x14ade6b8ffb1] 9/20: /opt/software/anaconda/python-3.9.7/2021.11/lib/libstdc++.so.6(__cxa_rethrow+0) [0x14ade6b9019a] 10/20: /opt/software/anaconda/python-3.9.7/2021.11/opencv-4.5.5.20220304/lib64/libopencv_core.so.405(+0x96468) [0x14add2d7e468] 11/20: /opt/software/anaconda/python-3.9.7/2021.11/opencv-4.5.5.20220304/lib64/libopencv_core.so.405(_ZN2cv5errorEiRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPKcS9_i+0x5f) [0x14add2fb7f6f] 12/20: /opt/software/anaconda/python-3.9.7/2021.11/opencv-4.5.5.20220304/lib64/libopencv_highgui.so.405(cvWaitKey+0x122) [0x14add8085832] 13/20: /opt/software/anaconda/python-3.9.7/2021.11/opencv-4.5.5.20220304/lib64/libopencv_highgui.so.405(_ZN2cv9waitKeyExEi+0x135) [0x14add8086815] 14/20: /opt/software/anaconda/python-3.9.7/2021.11/opencv-4.5.5.20220304/lib64/libopencv_highgui.so.405(_ZN2cv7waitKeyEi+0x21) [0x14add8086911] 15/20: ./darknet(train_network_waitkey+0x431) [0x50f861] 16/20: ./darknet(train_detector+0x1c86) [0x4a0866] 17/20: ./darknet(_Z12run_detectoriPPc+0x875) [0x4a3f55] 18/20: ./darknet(main+0x4c3) 
[0x437283] 19/20: /lib64/libc.so.6(__libc_start_main+0xf3) [0x14add154e493] 20/20: ./darknet(_start+0x2e) [0x439c1e]

I tried building with OpenCV disabled:

cmake -DCMAKE_BUILD_TYPE=Release \
      -DCUDAToolkit_CUPTI_INCLUDE_DIR=/opt/software/nvidia/sdk/Linux_x86_64/21.3/cuda/11.2/extras/CUPTI/include \
      -DCMAKE_CXX_FLAGS="-I/opt/software/nvidia/sdk/Linux_x86_64/21.3/cuda/11.2/include" \
      -DENABLE_OPENCV=OFF ..

But that didn’t work either.

Switch DIVX encoding to something better?

Any reason why the DIVX encoder is being used instead of a traditional MP4V?
The output video is not compatible with <video> tags in a web browser.
It looks like line 242 of demo.cpp hard-codes DIVX encoding, which is archaic at this point.

  • the output of the darknet version command
root@8f937a377045:/opt/nn/hats# darknet version
CUDA runtime version 12020 (v12.2), driver version 12020 (v12.2)
cuDNN is DISABLED
=> NVIDIA GeForce RTX 3070 Ti [#8.6], 7.8 GiB
OpenCV version: 4.5.4
Darknet v2.0-7-g67ba95cc-dirty
  • the exact command you ran
darknet detector demo -dont_show hats.data hats.cfg hats_best.weights GOPRO_1697242189.MP4 -out_filename prediction.mp4 -ext_output > prediction.txt
  • the operating system you are using

Ubuntu 22.04 Container based on nvcr.io/nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04


Output:

CUDA runtime version 12020 (v12.2), driver version 12020 (v12.2)
cuDNN is DISABLED
=> NVIDIA GeForce RTX 3070 Ti [#8.6], 7.8 GiB
OpenCV version: 4.5.4
Darknet v2.0-7-g67ba95cc-dirty
root@8f937a377045:/opt/nn/hats# darknet detector demo -dont_show hats.data hats.cfg hats_best.weights GOPRO_1697242189.MP4 -out_filename prediction.mp4 -ext_output > prediction.txt
 0 : compute_capability = 860, cudnn_half = 0, GPU: NVIDIA GeForce RTX 3070 Ti 
   layer   filters  size/strd(dil)      input                output
   0 conv     32       3 x 3/ 2    416 x 416 x   3 ->  208 x 208 x  32 0.075 BF
   1 conv     64       3 x 3/ 2    208 x 208 x  32 ->  104 x 104 x  64 0.399 BF
   2 conv     64       3 x 3/ 1    104 x 104 x  64 ->  104 x 104 x  64 0.797 BF
   3 route  2 		                       1/2 ->  104 x 104 x  32 
   4 conv     32       3 x 3/ 1    104 x 104 x  32 ->  104 x 104 x  32 0.199 BF
   5 conv     32       3 x 3/ 1    104 x 104 x  32 ->  104 x 104 x  32 0.199 BF
   6 route  5 4 	                           ->  104 x 104 x  64 
   7 conv     64       1 x 1/ 1    104 x 104 x  64 ->  104 x 104 x  64 0.089 BF
   8 route  2 7 	                           ->  104 x 104 x 128 
   9 max                2x 2/ 2    104 x 104 x 128 ->   52 x  52 x 128 0.001 BF
  10 conv    128       3 x 3/ 1     52 x  52 x 128 ->   52 x  52 x 128 0.797 BF
  11 route  10 		                       1/2 ->   52 x  52 x  64 
  12 conv     64       3 x 3/ 1     52 x  52 x  64 ->   52 x  52 x  64 0.199 BF
  13 conv     64       3 x 3/ 1     52 x  52 x  64 ->   52 x  52 x  64 0.199 BF
  14 route  13 12 	                           ->   52 x  52 x 128 
  15 conv    128       1 x 1/ 1     52 x  52 x 128 ->   52 x  52 x 128 0.089 BF
  16 route  10 15 	                           ->   52 x  52 x 256 
  17 max                2x 2/ 2     52 x  52 x 256 ->   26 x  26 x 256 0.001 BF
  18 conv    256       3 x 3/ 1     26 x  26 x 256 ->   26 x  26 x 256 0.797 BF
  19 route  18 		                       1/2 ->   26 x  26 x 128 
  20 conv    128       3 x 3/ 1     26 x  26 x 128 ->   26 x  26 x 128 0.199 BF
  21 conv    128       3 x 3/ 1     26 x  26 x 128 ->   26 x  26 x 128 0.199 BF
  22 route  21 20 	                           ->   26 x  26 x 256 
  23 conv    256       1 x 1/ 1     26 x  26 x 256 ->   26 x  26 x 256 0.089 BF
  24 route  18 23 	                           ->   26 x  26 x 512 
  25 max                2x 2/ 2     26 x  26 x 512 ->   13 x  13 x 512 0.000 BF
  26 conv    512       3 x 3/ 1     13 x  13 x 512 ->   13 x  13 x 512 0.797 BF
  27 conv    256       1 x 1/ 1     13 x  13 x 512 ->   13 x  13 x 256 0.044 BF
  28 conv    512       3 x 3/ 1     13 x  13 x 256 ->   13 x  13 x 512 0.399 BF
  29 conv     27       1 x 1/ 1     13 x  13 x 512 ->   13 x  13 x  27 0.005 BF
  30 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.05
  31 route  27 		                           ->   13 x  13 x 256 
  32 conv    128       1 x 1/ 1     13 x  13 x 256 ->   13 x  13 x 128 0.011 BF
  33 upsample                 2x    13 x  13 x 128 ->   26 x  26 x 128
  34 route  33 23 	                           ->   26 x  26 x 384 
  35 conv    256       3 x 3/ 1     26 x  26 x 384 ->   26 x  26 x 256 1.196 BF
  36 conv     27       1 x 1/ 1     26 x  26 x 256 ->   26 x  26 x  27 0.009 BF
  37 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.05
Loading weights from hats_best.weights...Done! Loaded 38 layers from weights-file 
OpenCV: FFMPEG: tag 0x58564944/'DIVX' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
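The fourcc tags in that FFMPEG warning are just the four codec characters packed into a little-endian 32-bit integer, which is why 'DIVX' shows up as 0x58564944 and 'mp4v' as 0x7634706d. A quick sanity check in plain Python (no OpenCV needed):

```python
def fourcc(code: str) -> int:
    """Pack a 4-character codec tag into the 32-bit integer FFMPEG reports."""
    assert len(code) == 4
    value = 0
    for i, ch in enumerate(code):
        value |= ord(ch) << (8 * i)  # little-endian: first char in lowest byte
    return value

print(hex(fourcc("DIVX")))  # 0x58564944, the tag rejected for .mp4 containers
print(hex(fourcc("mp4v")))  # 0x7634706d, the tag FFMPEG falls back to
```

So in this case OpenCV silently substitutes 'mp4v' and the output file is still written; changing the hard-coded tag in demo.cpp would just avoid the warning.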

CMake error 2 building on Linux

Hello,
I'm sorry if I'm missing something obvious here! I tried looking through the FAQs and I couldn't work out what I'm doing wrong. My issue is that I'm following the Linux build instructions in the README, and when trying to run 'make -j4 package', I get an Error 2 from CMake. As a result, there's no darknet-VERSION.deb for the next step. I'm using CMake version 3.29.1. Running 'cmake -DCMAKE_BUILD_TYPE=Release ..' I get:
-- Darknet v2.0-196-ga6c3224e
-- CUDA detected. Darknet will use the GPU.
-- Found cuDNN: /usr/lib/x86_64-linux-gnu/libcudnn.so
-- Hardware is 32-bit or 64-bit, and seems to be Intel or AMD: x86_64
-- Found Threads
-- Found OpenCV 4.5.4
-- Found OpenMP
-- Enabling AVX and SSE optimizations.
-- Making an optimized release build.
-- Skipping Doxygen (not found)
-- Setting up DARKNET OBJ
-- Setting up DARKNET LIB
-- Setting up DARKNET CLI
-- Configuring done (0.1s)
-- Generating done (0.0s)
-- Build files have been written to: /home/luigi/src/darknet/build
which looks sensible to me. Is there anything obvious I'm doing wrong?

cuDNN did not find FWD algo for convolution

Hi! Thanks first of all for your documentation!

Here's the problem that occurs when I start to fine-tune on the TT100K dataset, using the pre-trained YOLOv4 model.

  • the output of the darknet version command
Darknet v2.0-225-g277ed9f4
CUDA runtime version 12030 (v12.3), driver version 12030 (v12.3)
cuDNN version 12020 (v8.9.7), use of half-size floats is ENABLED
=> 0: NVIDIA GeForce RTX 4060 Laptop GPU [#8.9], 8.0 GiB
OpenCV v4.8.0

If relevant, please attach a screenshot showing the problem.

[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.10
nms_kind: greedynms (1), beta = 0.600000
 151 route  147                                            ->   38 x  38 x 256
 152 conv    512       3 x 3/ 2     38 x  38 x 256 ->   19 x  19 x 512 0.852 BF
 153 route  152 116                                ->   19 x  19 x1024
 154 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
 155 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
 156 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
 157 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
 158 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
 159 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
 160 conv    150       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 150 0.111 BF
 161 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.05
nms_kind: greedynms (1), beta = 0.600000
Total BFLOPS 125.193
avg_outputs = 1052114
Allocating workspace to transfer between CPU and GPU:  92.1 MiB
Warning: batch=... is set quite low!  It is recommended to set batch=64.
Learning Rate: 0.001, Momentum: 0.949, Decay: 0.0005
Detection layer #139 is type 28 (yolo)
Detection layer #150 is type 28 (yolo)
Detection layer #161 is type 28 (yolo)
mAP calculations will be every 297 iterations
weights will be saved every 10000 iterations

Resizing, random_coef = 1.40
Creating 6 permanent CPU threads to load images and bounding boxes.

 896 x 896

* * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* Error location: C:\src\darknet\src-lib\convolutional_layer.cpp, cudnn_convolutional_setup(), line #440
* Error message:  cuDNN did not find FWD algo for convolution
* Thread #29256: main darknet thread
* Version v2.0-225-g277ed9f4 built on Jun  3 2024 23:09:27
* * * * * * * * * * * * * * * * * * * * * * * * * * * * *

Small bug when using the -thresh parameter

Hi,

we found a small bug when using the -thresh 0.15 parameter, for example during the "detector test" call.
The 0.15 gets added into input_fn in run_detector. This is because Darknet::CfgAndState::process_arguments does not handle parameters and puts them into additional_arguments. I think the easiest fix might be to skip the next index if the current argument expects a parameter. But since I'm not quite sure, I'd rather let you decide on the fix.

sample command
.\darknet.exe detector test .\xy.data .\xy.cfg .\backup\xy_best.weights -thresh 0.1
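The proposed fix — advance past the following token whenever a flag takes a value — can be sketched in a few lines. This is a hypothetical illustration in Python, not Darknet's actual parser; `FLAGS_WITH_VALUE` is an assumed set of value-taking flags:

```python
# Hypothetical sketch of the proposed fix: flags that take a value
# consume the following token instead of leaving it as a positional.
FLAGS_WITH_VALUE = {"-thresh", "-iou_thresh", "-out"}  # assumed set

def split_arguments(argv):
    flags = {}
    positionals = []
    i = 0
    while i < len(argv):
        token = argv[i]
        if token in FLAGS_WITH_VALUE:
            flags[token] = argv[i + 1]
            i += 2  # skip the value so it is not treated as a filename
        elif token.startswith("-"):
            flags[token] = True
            i += 1
        else:
            positionals.append(token)
            i += 1
    return flags, positionals

flags, positionals = split_arguments(
    ["detector", "test", "xy.data", "xy.cfg", "xy_best.weights",
     "-thresh", "0.1"])
# "0.1" ends up as the value of -thresh, not as an extra positional argument
```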

Darknet detector test interprets flags as image argument

I have successfully trained a test model with the repo. However, I want to evaluate my test set with the 'darknet detector test' command on a batch of images from the file test.txt.

Previously, on an earlier version of the AlexeyAB fork, I used a command like this, which worked fine:
darknet detector test obj.data cfg\detector_test.cfg modelWeights\detector_test_best.weights -ext_output -dont_show -out result.json < test.txt

However, now this results in the following error:

Error location: C:\src\darknet\src-lib\image_opencv.cpp, load_image_mat_cv(), line #72
Error message: failed to load image file "result.json"
Thread #62760: main darknet thread
Version v2.0-190-g3c9722d5 built on May 1 2024 16:56:45

Funnily enough, the variant that writes the results to a text file does work. The working command looks like this:
darknet detector test obj.data cfg\detector_test.cfg modelWeights\detector_test_best.weights -dont_show -ext_output < test.txt > result.txt

The results show up in result.txt, but the same problem as with the .json output comes up when I try to add the -thresh or -iou_thresh flag to test performance with different values. If I use this command:

darknet detector test obj.data cfg\detector_test.cfg modelWeights\detector_test_best.weights -thresh 0.75 -dont_show -ext_output < test.txt > result.txt

It does not work and gives the same type of error as with the .json command:

Errno 2: No such file or directory
Error location: C:\src\darknet\src-lib\image_opencv.cpp, load_image_mat_cv(), line #72
Error message: failed to load image file "0.75"

I have tried all commands with absolute paths and relative paths and flags in different order but functionality remains the same. At this moment I'm not sure why the .json or the threshold flags are interpreted as the image argument in the command.

I am running darknet on windows 11. With the following output of the darknet --version command:

Darknet v2.0-190-g3c9722d5
CUDA runtime version 11080 (v11.8), driver version 12000 (v12.0)
cuDNN version 11080 (v8.9.6), use of half-size floats is ENABLED
=> 0: NVIDIA RTX 2000 Ada Generation Laptop GPU [#8.9], 8.0 GiB
OpenCV v4.8.0

red and blue channels are reversed when saving predictions.jpg

Reported by user kenmoini on the discord server. When Darknet saves the predictions.jpg file, the red and blue channels are reversed.

@stephanecharette reproduced the problem. This command seems to show the correct output:

darknet detector test bird_of_prey.data bird_of_prey.cfg bird_of_prey_best.weights video_import_2023-10-08_11-03-54_ezgif-4-5f619ef427_mp4/ezgif-4-5f619ef427_frame_000025.jpg

image

But if you add -dont_show to the command, then predictions.jpg shows up reversed:

image
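OpenCV stores pixel data in BGR order, so an image buffer assembled as RGB that is written out without conversion comes out with the red and blue channels swapped, which matches the symptom above. The fix is a channel reversal before saving; here is an illustrative sketch in plain Python on a tiny pixel list (not Darknet's actual code):

```python
def rgb_to_bgr(pixels):
    """Reverse the channel order of every (r, g, b) pixel tuple."""
    return [tuple(reversed(px)) for px in pixels]

rgb = [(255, 0, 0), (0, 0, 255)]  # pure red, pure blue
bgr = rgb_to_bgr(rgb)
# pure red (255, 0, 0) becomes (0, 0, 255) in BGR order, and vice versa
```

In real code the equivalent one-liner is `cv::cvtColor(img, img, cv::COLOR_RGB2BGR)` before `cv::imwrite`.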

Pretrained weights without a ".weights" suffix are silently ignored

To report a bug, please provide the following:

  • the output of the darknet version command
Darknet v2.0-208-ga7901e02-dirty
CUDA runtime version 12010 (v12.1), driver version 12020 (v12.2)
cuDNN version 12020 (v8.9.5), use of half-size floats is ENABLED
=> 0: Tesla T4 [#7.5], 14.6 GiB
OpenCV v4.2.0
  • the exact command you ran
    Something like this:
./darknet detector train data/obj.data yolo-obj.cfg yolov4.conv.137
  • the operating system you are using
    Ubuntu 22.04

I found that pre-trained weights without the ".weights" suffix are silently ignored in this fork of darknet.
This is a critical issue for users transitioning from AlexeyAB, where pre-trained models or custom model weights
are listed without a .weights suffix (AlexeyAB documentation). (Instead, model files use the final layer number as a suffix, e.g. 137 for yolov4.conv.137.)

Users may incorrectly assume that pre-trained weights are loaded with HankAI, and they will either 1) achieve worse performance without ever noticing, or 2) wonder why the model converges much more slowly than with AlexeyAB - at least I did.

Screenshot from 2024-06-10 14-15-01
Showing hankai and Alexey trainings with and without pretrained weights

The line for mapping command line arguments to weight filename is:

// From line 146 in darknet_cfg_and_state.cpp
if (extension == ".weights" and weights_filename.empty()) { weights_filename = path; }

A fix could be to throw an error if not all arguments are parsed correctly, or to print an error message if a user tries to load a model with an integer suffix.
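That second option could be as simple as pattern-matching arguments that look like AlexeyAB-style pre-trained weights. A hypothetical check (illustrative only; the pattern and message are my own, not Darknet's):

```python
import re

def looks_like_pretrained_weights(path: str) -> bool:
    """Return True for AlexeyAB-style names such as 'yolov4.conv.137'
    whose suffix is the final layer number rather than '.weights'."""
    return bool(re.search(r"\.conv\.\d+$", path))

for arg in ["yolov4.conv.137", "obj.data", "yolo-obj.cfg"]:
    if looks_like_pretrained_weights(arg):
        print(f"warning: '{arg}' looks like pre-trained weights "
              f"but lacks a .weights suffix and would be ignored")
```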

What is not working on Darknet for ARM?

I noticed the following item on the roadmap "fix ARM build (Jetson devices)" and was wondering what exactly is not working on ARM devices.

(I've been trying to get Darknet working on Jetpack 5 for the last 48h and was wondering if you know why it's not working. You can read up on my progress here opendatacam/opendatacam#607)

Update Ubuntu build directions?

I had trouble building Darknet on my system according to the directions; it appears they need an update to save others from going through a similar process.

TLDR: changes I think are needed in the Ubuntu build directions + maybe the build configuration:

  1. Document that the file package needs to be installed via apt-get
  2. Update docs to use the build_ubuntu.sh script
  3. Maybe check for the file package in the build (seems many similar checks already exist)

EDIT: I now realize my initial report was due to issues in v3.0.0; I'm making some updates here, but I hope at least part of my comments may still be helpful.


Context for the test: for now, I'm working on an M2 MacBook (Sonoma) and experimenting in a custom Dockerfile based on nvidia/cuda:12.5.1-cudnn-runtime-ubuntu22.04. Yes, I know CUDA won't work on my Mac; I'm just testing out my Dockerfile locally first. Trying to build Darknet from master.

I initially tripped over problems running this command against the Darknet v3.0 tag.

-- Darknet v3.0
CMake Error at CMakeLists.txt:28 (PROJECT):
  VERSION ".." format invalid.

After switching to master and poking around, I found that there was a different way to do the build using build_ubuntu.sh that mostly worked, but failed at the end:

# ./build_ubuntu.sh
... [lots of output]
Run CPack packaging tool...
/usr/bin/cpack --config ./CPackConfig.cmake
CPack: Create package using DEB
CPack: Install projects
CPack: - Run preinstall target for: Darknet
CPack: - Install project: Darknet []
CPack: Create package
CMake Error at /usr/share/cmake-3.30/Modules/Internal/CPack/CPackDeb.cmake:224 (message):
  CPackDeb: file utility is not available.  CPACK_DEBIAN_PACKAGE_SHLIBDEPS
  and CPACK_DEBIAN_PACKAGE_GENERATE_SHLIBS options are not available.
Call Stack (most recent call first):
  /usr/share/cmake-3.30/Modules/Internal/CPack/CPackDeb.cmake:888 (cpack_deb_prepare_package_vars)


CPack Error: Error while execution CPackDeb.cmake
CPack Error: Problem compressing the directory
CPack Error: Error when generating package: darknet
make: *** [Makefile:74: package] Error 1

After searching around, I found that the file utility (not mentioned in the directions) wasn't installed on my system, so I added it and that seemed to work.

# apt-get install file
[lots of output]
./build_ubuntu.sh
[lots of output]
Run CPack packaging tool...
/usr/bin/cpack --config ./CPackConfig.cmake
CPack: Create package using DEB
CPack: Install projects
CPack: - Run preinstall target for: Darknet
CPack: - Install project: Darknet []
CPack: Create package
CPackDeb: - Generating dependency list
CPack: - package: /usr/local/yolo-test/src/darknet/build/darknet-2.0.226-Linux.deb generated.
Done!
Make sure you install the .deb file:
-rw-r--r-- 1 root root 1.4M Aug  9 17:52 darknet-2.0.226-Linux.deb

Error message: signal handler invoked for signal #11

I have successfully built Darknet,
Training on YOLOv7 works fine, but when training on YOLOv3 an error occurs (the old C version runs with no error).

F:\xxx\x641>darknet detector -map -dont_show train cfg/obj.data cfg/obj_1.cfg
Darknet v2.0-143-g2d53a761
CUDA runtime version 12030 (v12.3), driver version 12040 (v12.4)
cuDNN version 12020 (v8.9.7), use of half-size floats is ENABLED
=> 0: NVIDIA GeForce RTX 2070 SUPER [#7.5], 8.0 GiB
OpenCV v4.8.0
Prepare additional network for mAP calculation...
 0 : compute_capability = 750, cudnn_half = 1, GPU: NVIDIA GeForce RTX 2070 SUPER
net.optimized_memory = 0
mini_batch = 1, batch = 40, time_steps = 1, train = 0
   layer   filters  size/strd(dil)      input                output
   0 Create CUDA-stream - 0
 Create cudnn-handle 0
conv     32       3 x 3/ 1    608 x 608 x   3 ->  608 x 608 x  32 0.639 BF
   1 conv     64       3 x 3/ 2    608 x 608 x  32 ->  304 x 304 x  64 3.407 BF
   2 conv     32       1 x 1/ 1    304 x 304 x  64 ->  304 x 304 x  32 0.379 BF
   3 conv     64       3 x 3/ 1    304 x 304 x  32 ->  304 x 304 x  64 3.407 BF
   4 Shortcut Layer: 1,  wt = 0, wn = 0, outputs: 304 x 304 x  64 0.006 BF
   5 conv    128       3 x 3/ 2    304 x 304 x  64 ->  152 x 152 x 128 3.407 BF
   6 conv     64       1 x 1/ 1    152 x 152 x 128 ->  152 x 152 x  64 0.379 BF
   7 conv    128       3 x 3/ 1    152 x 152 x  64 ->  152 x 152 x 128 3.407 BF
   8 Shortcut Layer: 5,  wt = 0, wn = 0, outputs: 152 x 152 x 128 0.003 BF
   9 conv     64       1 x 1/ 1    152 x 152 x 128 ->  152 x 152 x  64 0.379 BF
  10 conv    128       3 x 3/ 1    152 x 152 x  64 ->  152 x 152 x 128 3.407 BF
  11 Shortcut Layer: 8,  wt = 0, wn = 0, outputs: 152 x 152 x 128 0.003 BF
  12 conv    256       3 x 3/ 2    152 x 152 x 128 ->   76 x  76 x 256 3.407 BF
  13 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  14 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
  15 Shortcut Layer: 12,  wt = 0, wn = 0, outputs:  76 x  76 x 256 0.001 BF
  16 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  17 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
  18 Shortcut Layer: 15,  wt = 0, wn = 0, outputs:  76 x  76 x 256 0.001 BF
  19 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  20 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
  21 Shortcut Layer: 18,  wt = 0, wn = 0, outputs:  76 x  76 x 256 0.001 BF
  22 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  23 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
  24 Shortcut Layer: 21,  wt = 0, wn = 0, outputs:  76 x  76 x 256 0.001 BF
  25 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  26 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
  27 Shortcut Layer: 24,  wt = 0, wn = 0, outputs:  76 x  76 x 256 0.001 BF
  28 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  29 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
  30 Shortcut Layer: 27,  wt = 0, wn = 0, outputs:  76 x  76 x 256 0.001 BF
  31 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  32 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
  33 Shortcut Layer: 30,  wt = 0, wn = 0, outputs:  76 x  76 x 256 0.001 BF
  34 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  35 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
  36 Shortcut Layer: 33,  wt = 0, wn = 0, outputs:  76 x  76 x 256 0.001 BF
  37 conv    512       3 x 3/ 2     76 x  76 x 256 ->   38 x  38 x 512 3.407 BF
  38 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  39 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  40 Shortcut Layer: 37,  wt = 0, wn = 0, outputs:  38 x  38 x 512 0.001 BF
  41 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  42 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  43 Shortcut Layer: 40,  wt = 0, wn = 0, outputs:  38 x  38 x 512 0.001 BF
  44 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  45 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  46 Shortcut Layer: 43,  wt = 0, wn = 0, outputs:  38 x  38 x 512 0.001 BF
  47 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  48 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  49 Shortcut Layer: 46,  wt = 0, wn = 0, outputs:  38 x  38 x 512 0.001 BF
  50 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  51 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  52 Shortcut Layer: 49,  wt = 0, wn = 0, outputs:  38 x  38 x 512 0.001 BF
  53 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  54 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  55 Shortcut Layer: 52,  wt = 0, wn = 0, outputs:  38 x  38 x 512 0.001 BF
  56 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  57 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  58 Shortcut Layer: 55,  wt = 0, wn = 0, outputs:  38 x  38 x 512 0.001 BF
  59 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  60 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  61 Shortcut Layer: 58,  wt = 0, wn = 0, outputs:  38 x  38 x 512 0.001 BF
  62 conv   1024       3 x 3/ 2     38 x  38 x 512 ->   19 x  19 x1024 3.407 BF
  63 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
  64 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
  65 Shortcut Layer: 62,  wt = 0, wn = 0, outputs:  19 x  19 x1024 0.000 BF
  66 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
  67 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
  68 Shortcut Layer: 65,  wt = 0, wn = 0, outputs:  19 x  19 x1024 0.000 BF
  69 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
  70 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
  71 Shortcut Layer: 68,  wt = 0, wn = 0, outputs:  19 x  19 x1024 0.000 BF
  72 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
  73 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
  74 Shortcut Layer: 71,  wt = 0, wn = 0, outputs:  19 x  19 x1024 0.000 BF
  75 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
  76 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
  77 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
  78 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
  79 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
  80 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
  81 conv     18       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x  18 0.013 BF
  82 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
  83 route  79                                     ->   19 x  19 x 512
  84 conv    256       1 x 1/ 1     19 x  19 x 512 ->   19 x  19 x 256 0.095 BF
  85 upsample                 2x    19 x  19 x 256 ->   38 x  38 x 256
  86 route  85 61                                  ->   38 x  38 x 768
  87 conv    256       1 x 1/ 1     38 x  38 x 768 ->   38 x  38 x 256 0.568 BF
  88 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  89 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  90 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  91 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  92 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  93 conv     18       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x  18 0.027 BF
  94 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
  95 route  91                                     ->   38 x  38 x 256
  96 conv    128       1 x 1/ 1     38 x  38 x 256 ->   38 x  38 x 128 0.095 BF
  97 upsample                 2x    38 x  38 x 128 ->   76 x  76 x 128
  98 route  97 36                                  ->   76 x  76 x 384
  99 conv    128       1 x 1/ 1     76 x  76 x 384 ->   76 x  76 x 128 0.568 BF
 100 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
 101 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
 102 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
 103 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
 104 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
 105 conv     18       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x  18 0.053 BF
 106 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
Total BFLOPS 139.496
avg_outputs = 1103769
Allocating workspace to transfer between CPU and GPU:  50.0 MiB

Remembering 1 class:
-> class #0 (Object) will use colour #FF00FF

 0 : compute_capability = 750, cudnn_half = 1, GPU: NVIDIA GeForce RTX 2070 SUPER
net.optimized_memory = 0
mini_batch = 1, batch = 40, time_steps = 1, train = 1
   layer   filters  size/strd(dil)      input                output
   0 conv     32       3 x 3/ 1    608 x 608 x   3 ->  608 x 608 x  32 0.639 BF
   1 conv     64       3 x 3/ 2    608 x 608 x  32 ->  304 x 304 x  64 3.407 BF
   2 conv     32       1 x 1/ 1    304 x 304 x  64 ->  304 x 304 x  32 0.379 BF
   3 conv     64       3 x 3/ 1    304 x 304 x  32 ->  304 x 304 x  64 3.407 BF
   4 Shortcut Layer: 1,  wt = 0, wn = 0, outputs: 304 x 304 x  64 0.006 BF
   5 conv    128       3 x 3/ 2    304 x 304 x  64 ->  152 x 152 x 128 3.407 BF
   6 conv     64       1 x 1/ 1    152 x 152 x 128 ->  152 x 152 x  64 0.379 BF
   7 conv    128       3 x 3/ 1    152 x 152 x  64 ->  152 x 152 x 128 3.407 BF
   8 Shortcut Layer: 5,  wt = 0, wn = 0, outputs: 152 x 152 x 128 0.003 BF
   9 conv     64       1 x 1/ 1    152 x 152 x 128 ->  152 x 152 x  64 0.379 BF
  10 conv    128       3 x 3/ 1    152 x 152 x  64 ->  152 x 152 x 128 3.407 BF
  11 Shortcut Layer: 8,  wt = 0, wn = 0, outputs: 152 x 152 x 128 0.003 BF
  12 conv    256       3 x 3/ 2    152 x 152 x 128 ->   76 x  76 x 256 3.407 BF
  13 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  14 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
  15 Shortcut Layer: 12,  wt = 0, wn = 0, outputs:  76 x  76 x 256 0.001 BF
  16 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  17 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
  18 Shortcut Layer: 15,  wt = 0, wn = 0, outputs:  76 x  76 x 256 0.001 BF
  19 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  20 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
  21 Shortcut Layer: 18,  wt = 0, wn = 0, outputs:  76 x  76 x 256 0.001 BF
  22 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  23 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
  24 Shortcut Layer: 21,  wt = 0, wn = 0, outputs:  76 x  76 x 256 0.001 BF
  25 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  26 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
  27 Shortcut Layer: 24,  wt = 0, wn = 0, outputs:  76 x  76 x 256 0.001 BF
  28 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  29 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
  30 Shortcut Layer: 27,  wt = 0, wn = 0, outputs:  76 x  76 x 256 0.001 BF
  31 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  32 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
  33 Shortcut Layer: 30,  wt = 0, wn = 0, outputs:  76 x  76 x 256 0.001 BF
  34 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  35 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
  36 Shortcut Layer: 33,  wt = 0, wn = 0, outputs:  76 x  76 x 256 0.001 BF
  37 conv    512       3 x 3/ 2     76 x  76 x 256 ->   38 x  38 x 512 3.407 BF
  38 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  39 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  40 Shortcut Layer: 37,  wt = 0, wn = 0, outputs:  38 x  38 x 512 0.001 BF
  41 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  42 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  43 Shortcut Layer: 40,  wt = 0, wn = 0, outputs:  38 x  38 x 512 0.001 BF
  44 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  45 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  46 Shortcut Layer: 43,  wt = 0, wn = 0, outputs:  38 x  38 x 512 0.001 BF
  47 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  48 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  49 Shortcut Layer: 46,  wt = 0, wn = 0, outputs:  38 x  38 x 512 0.001 BF
  50 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  51 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  52 Shortcut Layer: 49,  wt = 0, wn = 0, outputs:  38 x  38 x 512 0.001 BF
  53 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  54 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  55 Shortcut Layer: 52,  wt = 0, wn = 0, outputs:  38 x  38 x 512 0.001 BF
  56 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  57 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  58 Shortcut Layer: 55,  wt = 0, wn = 0, outputs:  38 x  38 x 512 0.001 BF
  59 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  60 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  61 Shortcut Layer: 58,  wt = 0, wn = 0, outputs:  38 x  38 x 512 0.001 BF
  62 conv   1024       3 x 3/ 2     38 x  38 x 512 ->   19 x  19 x1024 3.407 BF
  63 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
  64 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
  65 Shortcut Layer: 62,  wt = 0, wn = 0, outputs:  19 x  19 x1024 0.000 BF
  66 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
  67 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
  68 Shortcut Layer: 65,  wt = 0, wn = 0, outputs:  19 x  19 x1024 0.000 BF
  69 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
  70 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
  71 Shortcut Layer: 68,  wt = 0, wn = 0, outputs:  19 x  19 x1024 0.000 BF
  72 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
  73 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
  74 Shortcut Layer: 71,  wt = 0, wn = 0, outputs:  19 x  19 x1024 0.000 BF
  75 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
  76 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
  77 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
  78 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
  79 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
  80 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
  81 conv     18       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x  18 0.013 BF
  82 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
  83 route  79                                     ->   19 x  19 x 512
  84 conv    256       1 x 1/ 1     19 x  19 x 512 ->   19 x  19 x 256 0.095 BF
  85 upsample                 2x    19 x  19 x 256 ->   38 x  38 x 256
  86 route  85 61                                  ->   38 x  38 x 768
  87 conv    256       1 x 1/ 1     38 x  38 x 768 ->   38 x  38 x 256 0.568 BF
  88 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  89 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  90 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  91 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  92 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
  93 conv     18       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x  18 0.027 BF
  94 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
  95 route  91                                     ->   38 x  38 x 256
  96 conv    128       1 x 1/ 1     38 x  38 x 256 ->   38 x  38 x 128 0.095 BF
  97 upsample                 2x    38 x  38 x 128 ->   76 x  76 x 128
  98 route  97 36                                  ->   76 x  76 x 384
  99 conv    128       1 x 1/ 1     76 x  76 x 384 ->   76 x  76 x 128 0.568 BF
 100 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
 101 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
 102 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
 103 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
 104 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
 105 conv     18       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x  18 0.053 BF
 106 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
Total BFLOPS 139.496
avg_outputs = 1103769
Allocating workspace to transfer between CPU and GPU:  77.3 MiB
Learning Rate: 0.001, Momentum: 0.9, Decay: 0.0005
Detection layer #82 is type 28 (yolo)
Detection layer #94 is type 28 (yolo)
Detection layer #106 is type 28 (yolo)
mAP calculations will be every 100 iterations
weights will be saved every 1000 iterations

Resizing, random_coef = 1.40
Creating 6 permanent CPU threads to load images and bounding boxes.

 896 x 896
Allocating workspace:  165.7 MiB
Workspace begins at 0000000C04000000
loaded 40 images in 0.000 microseconds
calling Darknet's fatal error handler due to signal #11

* * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* Error location: C:\src\darknet\src-cli\darknet.cpp, darknet_signal_handler(), line #434
* Error message:  signal handler invoked for signal #11
* Version v2.0-143-g2d53a761 built on Apr 15 2024 15:17:56
* * * * * * * * * * * * * * * * * * * * * * * * * * * * *
backtrace (33 entries):
1/33: $c()
2/33: $c()
3/33: $c()
4/33: log2f()
5/33: log2f()
6/33: _C_specific_handler()
7/33: _chkstk()
8/33: RtlFindCharInUnicodeString()
9/33: KiUserExceptionDispatcher()
10/33: KiUserExceptionDispatcher()
11/33: KiUserExceptionDispatcher()
12/33: cuProfilerStop()
13/33: cuProfilerStop()
14/33: cuProfilerStop()
15/33: cuProfilerStop()
16/33: cuProfilerStop()
17/33: cuProfilerStop()
18/33: cuMemcpy()
19/33: cudaGetExportTable()
20/33: cudaGetExportTable()
21/33: cudaGetExportTable()
22/33: cudaMemcpyAsync()
23/33: cudaMemcpyAsync()
24/33: cudaMemcpyAsync()
25/33: cudaMemcpyAsync()
26/33: cudaMemcpyAsync()
27/33: cudaMemcpyAsync()
28/33: cudaMemcpyAsync()
29/33: cudaMemcpyAsync()
30/33: cudaMemcpyAsync()
31/33: cudaMemcpyAsync()
32/33: BaseThreadInitThunk()
33/33: RtlUserThreadStart()

Please check and support me. Thanks.

Which weights to use?

Hi, I am training YOLOv4 on my own objects. At the end of training (6000 iterations, 2 classes) I get 3 weights files: yolov4-custom_6000.weights, yolov4-custom_final.weights, and yolov4-custom_last.weights. Could you please tell me which one of these I should use to get the best results?

Add more per-class information during mAP calculations

In my scenario, to properly judge the quality of training I need to know precision, accuracy, and ideally "false negatives" on a per-class basis. This could be reported during any mAP calculation along with the TP & FP.

 detections_count = 4410, unique_truth_count = 2855
class_id = 0, name = Object, ap = 97.34%         (TP = 2783, FP = 125, FN = 72, Pr = 0.96, Rc = 0.97)  <===

 for conf_thresh = 0.25, precision = 0.96, recall = 0.97, F1-score = 0.97
 for conf_thresh = 0.25, TP = 2783, FP = 125, FN = 72, average IoU = 79.83 %
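The per-class figures shown above follow directly from the raw TP/FP/FN counts. A minimal sketch of the arithmetic (the function name is illustrative, not part of Darknet's API; the counts are taken from the sample output above):

```python
# Derive precision, recall, and F1-score from the per-class counts
# that Darknet prints during mAP calculations.

def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Compute precision, recall, and F1-score from raw detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

m = detection_metrics(tp=2783, fp=125, fn=72)
print(f"Pr = {m['precision']:.2f}, Rc = {m['recall']:.2f}, F1 = {m['f1']:.2f}")
# -> Pr = 0.96, Rc = 0.97, F1 = 0.97  (matches the log lines above)
```

This reproduces the Pr = 0.96 / Rc = 0.97 / F1 = 0.97 values in the sample output, so a per-class report only needs the TP, FP, and FN counters Darknet already tracks.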

Where to find the MSCOCO weights?

I'll use this issue to track the location of the MSCOCO pre-trained .weights files.

These are the same weights you can download directly from Joseph Redmon's or AlexeyAB's repos.

For the list of 80 classes used in MSCOCO, see the file cfg/coco.names which starts with "person", "bicycle", "car", etc...
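Once downloaded, the pre-trained weights can be used directly with the detector. A typical invocation as a CLI fragment (paths assume you run from the repo root; the image name is illustrative):

```
darknet detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights dog.jpg
```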

Training with multiple GPUs is not faster than 1 GPU???

I followed the guide to train on my dataset with multiple GPUs, but the speed in both cases is the same. I use the same config:

batch=64
subdivisions=32     # 16 OOM
width=512
height=512

I checked GPU usage and almost all of each GPU is used. Does Darknet support multiple GPUs?
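Darknet does support multi-GPU training via the `-gpus` flag. A typical invocation as a CLI fragment (file names are illustrative):

```
darknet detector train data/obj.data cfg/yolov4-custom.cfg yolov4.conv.137 -gpus 0,1,2,3
```

Note that the AlexeyAB README also suggests first training roughly 1000 iterations on a single GPU before switching to multiple GPUs, and adjusting `learning_rate` and `burn_in` in the cfg when doing so; without those adjustments, per-iteration wall-clock time may not improve noticeably.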

Can Darknet train a two-layer LSTM on R8 dataset?

I use Darknet for research purposes, and I need an executable to run a model training process. I see there is a source file called LSTM.c, so I think there may be some way to do LSTM training in Darknet. Can anyone tell me how to write the cfg file and the command? Something like ./darknet rnn train ...

Thanks very much!
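One caveat: Darknet's rnn tool trains character-level generative models on raw text, not classifiers, so it may not fit the R8 classification task directly. That said, stacked LSTM layers can be expressed in a cfg file. A hypothetical sketch as a config fragment (all layer parameters are illustrative, modeled on the rnn cfg files shipped with the original repo):

```
[net]
subdivisions=1
inputs=256
batch=128
time_steps=576

[lstm]
batch_normalize=1
output=1024

[lstm]
batch_normalize=1
output=1024

[connected]
output=256
activation=leaky

[softmax]

[cost]
type=sse
```

The matching command would look something like `./darknet rnn train cfg/lstm.train.cfg -file data/r8.txt` (the cfg and data file names here are assumptions, not files that ship with the repo).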

`illegal memory access was encountered` during first mAP calculations while training network

Training a new network. Once it has reached iteration 1000, this is logged:

 (next mAP calculation at 1000 iterations) 
 1000: 0.483266, 0.592546 avg loss, 0.002610 rate, 2.077195 seconds, 64000 images, 2.909745 hours left
* * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* A fatal error has been detected.  Darknet will now exit.
* Error location: ./src/network_kernels.cu, network_predict_gpu(), line #744
* Error message:  current CUDA error: status=700, an illegal memory access was encountered
* * * * * * * * * * * * * * * * * * * * * * * * * * * * *
backtrace (11 entries):
1/11: /home/stephane/src/darknet/darknet(log_backtrace+0x38) [0x55ab1d538e18]
2/11: /home/stephane/src/darknet/darknet(darknet_fatal_error+0x178) [0x55ab1d539038]
3/11: /home/stephane/src/darknet/darknet(check_error+0x5c) [0x55ab1d53becc]
4/11: /home/stephane/src/darknet/darknet(check_error_extended+0x7c) [0x55ab1d53bf8c]
5/11: /home/stephane/src/darknet/darknet(network_predict_gpu+0x15f) [0x55ab1d63bb5f]
6/11: /home/stephane/src/darknet/darknet(validate_detector_map+0x9af) [0x55ab1d5cbdcf]
7/11: /home/stephane/src/darknet/darknet(train_detector+0x1698) [0x55ab1d5ceaf8]
8/11: /home/stephane/src/darknet/darknet(run_detector+0x897) [0x55ab1d5d2ad7]
9/11: /home/stephane/src/darknet/darknet(main+0x375) [0x55ab1d4ed685]
10/11: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7f424cd09083]
11/11: /home/stephane/src/darknet/darknet(_start+0x2e) [0x55ab1d4ef8fe]

This error does not happen when I turn off CUDNN in Darknet's Makefile:

CUDNN=0
CUDNN_HALF=0
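After changing those Makefile flags, a full rebuild is needed for them to take effect. A sketch as a CLI fragment, assuming the classic Makefile build (make's command-line variables override the assignments at the top of the Makefile):

```
make clean
make GPU=1 CUDNN=0 CUDNN_HALF=0 -j"$(nproc)"
```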
