
acf's People

Contributors

headupinclouds, ruslo


acf's Issues

test halide

Halide is currently being used for DNN optimization in OpenCV. The OpenGL ES 2.0 ACF computation (currently using ogles_gpgpu) would be well suited to conversion to Halide:

https://github.com/halide/Halide
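
As a starting point, here is a minimal sketch of how one ACF stage (a separable triangle/box smoothing pass, cf. convTri) might be expressed with the Halide C++ API. The function, variable names, and schedule below are illustrative assumptions, not code from this repository, and boundary conditions are omitted:

#include "Halide.h"

// Sketch only: a separable 1-2-1 smoothing pass in Halide, roughly the kind
// of stage the ogles_gpgpu shaders implement today.
Halide::Func makeTriangleBlur(Halide::Func input)
{
    Halide::Var x("x"), y("y"), c("c");
    Halide::Func blurX("blurX"), blurY("blurY");
    blurX(x, y, c) = (input(x - 1, y, c) + 2.0f * input(x, y, c) + input(x + 1, y, c)) / 4.0f;
    blurY(x, y, c) = (blurX(x, y - 1, c) + 2.0f * blurX(x, y, c) + blurX(x, y + 1, c)) / 4.0f;
    // The schedule (CPU vectorization here, or a GPU schedule) is the part
    // Halide would let us tune per target, replacing the hand-written shaders.
    blurY.vectorize(x, 8).parallel(y);
    return blurY;
}

The appeal is that the same pipeline definition can then be scheduled for CPU (SIMD + threads) or GPU targets without rewriting the algorithm.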

promote GPUDetectionPipeline to public API

Hiding/scheduling the GPU->CPU transfer costs (>= 1 frame of latency) is important in order to see actual throughput gains when using the GPGPU ACF pyramid acceleration. Simply using the GPGPU acceleration sequentially for each frame may actually slow down the frame rate due to transfer overhead (glFlush(), etc.). Since one of the main advantages of this module is speed, it makes sense to add this functionality to the API. Currently src/app/pipeline/GPUDetectionPipeline.{h, cpp} provides some sample code for this. After a review/cleanup this can be added to src/lib/acf/acf/GPUDetectionPipeline.{h, cpp}.
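
For reference, a rough usage sketch of how a pipelined API could hide the transfer latency; the class name comes from the sample code above, but the operator() signature shown here is an assumption, not the final public API:

// #include "GPUDetectionPipeline.h" // from src/app/pipeline (per this issue)
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <cstddef>

// Sketch: frame N is uploaded and processed on the GPU while the detections
// for an earlier frame (N-1 or N-2) are read back, hiding the GPU->CPU
// transfer behind useful work.
void runPipeline(acf::GPUDetectionPipeline& pipeline, cv::VideoCapture& video)
{
    cv::Mat frame;
    std::size_t index = 0;
    while (video.read(frame))
    {
        auto delayed = pipeline(frame, index++); // hypothetical call: detections lag by >= 1 frame
        (void)delayed;                           // consume/display the delayed results here
    }
}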

cpb file

Hi,

Thanks for your work. I'm trying to port the pedestrian detector in Piotr's Matlab Toolbox to this package. However, I have no idea how to generate the .cpb file from the pretrained model. Would you mind sharing the steps to generate the .cpb file?

Thanks,
Xing

doxygen api docs

Document public API calls and make the cv::Mat column-major (i.e., I.t()) input pre-condition clear.
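
For example, the documented pre-condition could be illustrated along these lines; the cv::Mat call signature and header name are assumptions for illustration:

// assumes: #include <acf/ACF.h> (header name assumed) for acf::Detector
#include <opencv2/core.hpp>
#include <vector>

// The detector operates on a transposed (column-major) image, so callers
// transpose with I.t() before invoking it.
std::vector<cv::Rect> detect(acf::Detector& detector, const cv::Mat& I)
{
    std::vector<cv::Rect> objects;
    std::vector<double> scores;
    detector(I.t(), objects, &scores); // I.t(): satisfy the column-major pre-condition
    return objects;
}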

How to build libacf on iOS?

Hi,
Thanks for the great work. Could you show me the steps to build the libraries for iOS? I tried to run the command polly.py --toolchain ios --config RELEASE --fwd HUNTER_CONFIGURATION_TYPES=RELEASE --install --verbose
but I get the error:
Environment variable POLLY_IOS_DEVELOPMENT_TEAM is empty
(see details: http://polly.readthedocs.io/en/latest/toolchains/ios/errors/polly_ios_development_team.html)

CMake Error at /Users/mac/Documents/Workspace/acf/polly-master/utilities/polly_fatal_error.cmake:10 (message):
Call Stack (most recent call first):
/Users/mac/Documents/Workspace/acf/polly-master/utilities/polly_ios_development_team.cmake:15 (polly_fatal_error)
/Users/mac/Documents/Workspace/acf/polly-master/ios.cmake:47 (include)
/Users/mac/Documents/Workspace/acf/_ci/cmake/share/cmake-3.11/Modules/CMakeDetermineSystem.cmake:94 (include)
CMakeLists.txt:2 (project)

-- Configuring incomplete, errors occurred!

[hunter ** INTERNAL **] Configure project failed
[hunter ** INTERNAL **] [Directory:/Users/mac/Documents/Workspace/acf]

------------------------------ WIKI -------------------------------
https://github.com/ruslo/hunter/wiki/error.internal

CMake Error at cmake/HunterGate.cmake:83 (message):
Call Stack (most recent call first):
cmake/HunterGate.cmake:93 (hunter_gate_wiki)
cmake/HunterGate.cmake:333 (hunter_gate_internal_error)
cmake/HunterGate.cmake:513 (hunter_gate_download)
CMakeLists.txt:84 (HunterGate)

-- Configuring incomplete, errors occurred!

Could you help me solve this?
Thanks.

build issues for X86 Linux

I tried to build the libs for x86 Linux with the command: polly.py --toolchain gcc --install --verbose
But I can't get the tools needed for check_ci_tag corresponding to the Toolchain-ID.
Some info follows:
------------------------------------------
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] HUNTER_TOOLCHAIN_ID_PATH: /home/bourne/.hunter/_Base/e7fe3f0/427fd52
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] HUNTER_CONFIGURATION_TYPES: Release;Debug
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] HUNTER_BUILD_SHARED_LIBS:
-- [hunter] [ Hunter-ID: e7fe3f0 | Toolchain-ID: 427fd52 | Config-ID: a0274a8 ]
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] load: /home/bourne/.hunter/_Base/Download/Hunter/0.22.8/e7fe3f0/Unpacked/cmake/projects/check_ci_tag/hunter.cmake
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] check_ci_tag versions available: [1.0.0]
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] Package 'check_ci_tag' CONFIGURATION_TYPES: Release;Debug
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] Package 'check_ci_tag' is cacheable: YES
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] Install to: /home/bourne/.hunter/_Base/e7fe3f0/427fd52/a0274a8/Install
-- [hunter] CHECK_CI_TAG_ROOT: /home/bourne/.hunter/_Base/e7fe3f0/427fd52/a0274a8/Install (ver.: 1.0.0)
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] Locking directory: /home/bourne/.hunter/_Base/Download/check_ci_tag/1.0.0/f220960
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] Lock done
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] Already locked: /home/bourne/.hunter/_Base/Download/check_ci_tag/1.0.0/f220960
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] Locking directory: /home/bourne/.hunter/_Base/e7fe3f0/427fd52/a0274a8
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] Lock done
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] Locking directory: /home/bourne/.hunter/_Base/Cache
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] Lock done
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] Using CMake variable HUNTER_PASSWORDS_PATH
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] Downloading DONE metafile (try #0 of 10):
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] https://raw.githubusercontent.com/elucideye/hunter-cache/master/427fd52/check_ci_tag/1.0.0/f220960/da39a3e/a49b0e5/da39a3e/basic-deps.DONE
-- [hunter *** DEBUG *** 2019-04-19T18:18:41] -> /home/bourne/.hunter/_Base/Cache/meta/427fd52/check_ci_tag/1.0.0/f220960/da39a3e/a49b05/da39a3e/basic-deps.DONE
---------------------------------------
I look forward to your suggestions and tips.

ogles_gpgpu compile error

Hello, I'm trying to compile this project on my local Ubuntu 14.04, but I ran into a compile error.

I'm not very familiar with hunter either, so I'd appreciate help figuring out what causes these problems.

From the project root directory I first enter the following in the console:
source ./bin/acf/hunter_env.sh
and next:
polly.py --config Debug --install --verbose
to compile it with the default native compiler (gcc/g++ 4.8.4) on my Ubuntu.

Compilation proceeds through OpenCV and other subprojects, but it stops at ogles_gpgpu and fails with the following compiler error messages:

proc/base/filterprocbase.h:114:47: error: ‘nullptr’ was not declared in this scope
const char* fragShaderSrcForCompilation = nullptr; // used fragment shader source for shader compilation

proc/base/procbase.h:164:65: error: ISO C++ forbids declaration of ‘delegate’ with no type [-fpermissive]
virtual void getResultData(const FrameDelegate& delegate = {}, int index = 0) const;

proc/base/procbase.cpp:111:5: error: ‘fbo’ was not declared in this scope
fbo->readBuffer(data, index);
^
There are plenty of error messages like these. What should I do about them?

Calculated Scales for ACF-Pyramid are wrong

I noticed earlier that calculating the pyramid with the same parameters on the same image returns a different number of scales when using this library and when using the original matlab code. After some debugging I found that d0 and d1 are apparently the opposite of what they are in the original code:

My image is 500px in width and 375px in height, the original code is if(sz(1)<sz(2)), d0=sz(1); d1=sz(2); else d0=sz(2); d1=sz(1); end with sz = [375 500] which results in d0 = 375; d1 = 500 (or in general: d0 is the smaller dimension and d1 the larger).

However the code here is:

double d0 = sz.height, d1 = sz.width;
if (sz.height < sz.width)
{
    std::swap(d0, d1);
}

This results in the d0/d1 swap only happening when d0 is already the smaller value. In my case d0 = 500 and d1 = 375.

This resulted in 34 instead of 33 scales. Simply changing that if so that d0 is always the smaller value fixed the issue (either by changing the initialization or by using >=).
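
A sketch of the fix described above, mirroring the original snippet (d0 ends up as the smaller dimension, matching the Matlab reference):

double d0 = sz.height, d1 = sz.width;
if (d1 < d0)
{
    std::swap(d0, d1); // requires <utility>; now d0 <= d1 always holds
}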

GPUACF tuning

Resolve discrepancies between CPU and GPU (OpenGL ES shader) ACF features (the GPU output uses a texture "packing" and the layout/geometry is not relevant to the issue -- only the individual pixel values). Differences are most prominent in the normalized gradient output (4th channel), which is most visible at the highest resolution:

CPU: [acf_cpu image]

GPU: [acf_gpu image]

Calculating scales with nOctUp != 0 causes issues

Similar to #62: if nOctUp is not 0, the scaling doesn't work. This is caused by double s = std::pow(2.0, -double(i) / double(nPerOct + nOctUp)); which adds nOctUp to nPerOct before the division, while the Matlab code does scales = 2.^(-(0:nScales-1)/nPerOct+nOctUp); which adds it after the division. A simple fix would be to convert both variables separately: double s = std::pow(2.0, -double(i) / double(nPerOct) + double(nOctUp));
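
A sketch of the corrected computation over all scales:

#include <cmath>
#include <vector>

// Corrected scale computation as proposed above, matching the Matlab
// expression scales = 2.^(-(0:nScales-1)/nPerOct + nOctUp).
std::vector<double> computeScales(int nScales, int nPerOct, int nOctUp)
{
    std::vector<double> scales(nScales);
    for (int i = 0; i < nScales; i++)
    {
        scales[i] = std::pow(2.0, -double(i) / double(nPerOct) + double(nOctUp));
    }
    return scales;
}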

Ubuntu 18.04 w/ linux-gcc-armhf-neon

Migrated from (2) #102 (comment)

  1. As mentioned in the previous discussion, I have also tried to build with the linux-gcc-armhf-neon toolchain on x86 Ubuntu 18.04. Case 1: with GPGPU enabled, there was a build error as mentioned previously. Case 2: with GPGPU disabled via '--fwd ACF_BUILD_OGLES_GPGPU=OFF', there is a fresh build error as follows:
===========
[ 3%] Building CXX object src/lib/CMakeFiles/acf.dir/acf/acf/transfer.cpp.o
cd /home/bourne/workbase/app_tools/acf/_builds/linux-gcc-armhf-neon/src/lib && /usr/bin/arm-linux-gnueabihf-g++ -DACF_DO_HALF=1 -DACF_SERIALIZE_WITH_CVMATIO=1 -DHALF_ENABLE_CPP11_CMATH=1 -I/home/bourne/workbase/app_tools/acf/_builds/linux-gcc-armhf-neon -I/home/bourne/workbase/app_tools/acf/src/lib/acf -isystem /home/bourne/.hunter/_Base/6421d63/d018056/8948932/Install/include/opencv4 -isystem /home/bourne/.hunter/_Base/6421d63/d018056/8948932/Install/include -isystem /home/bourne/.hunter/_Base/6421d63/d018056/8948932/Install/include/cvmatio -mfpu=neon -mfloat-abi=hard -std=c++11 -std=c++11 -o CMakeFiles/acf.dir/acf/acf/transfer.cpp.o -c /home/bourne/workbase/app_tools/acf/src/lib/acf/acf/transfer.cpp
In file included from /home/bourne/workbase/app_tools/acf/src/lib/acf/acf/transfer.cpp:11:0:
/home/bourne/workbase/app_tools/acf/src/lib/acf/acf/transfer.h:15:10: fatal error: ogles_gpgpu/common/proc/base/procinterface.h: No such file or directory
#include <ogles_gpgpu/common/proc/base/procinterface.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
=============

Pipeline for ZED Camera on Tegra TX2

Hi, I ran the pipeline on a Tegra TX2 with a ZED Camera, but a problem occurred.
To make sure the ZED Camera can be used, I modified the Application structure as follows:

  1. I added a new VideoSource structure to initialize the ZED camera.
struct VideoSource
{
        sl::Mat frame_zed;
        sl::Camera zed_camera;

        VideoSource()
        {
          sl::InitParameters init_params;
          init_params.camera_resolution = sl::RESOLUTION_HD720;
          init_params.depth_mode = sl::DEPTH_MODE_PERFORMANCE;
          init_params.coordinate_units = sl::UNIT_METER;
          init_params.camera_fps = 30;

          sl::ERROR_CODE err = zed_camera.open(init_params);
          if (err != sl::SUCCESS) {
                  std::cout << sl::toString(err) << std::endl;
                  zed_camera.close();
                  //return; // Quit if an error occurred
          }
          else
            std::cout << "ZED Camera created!!" << std::endl;
        }

        // Convert the zed camera Mat to opencv Mat
        virtual cv::Mat slMat2cvMat(sl::Mat &input)
        {
                // Mapping between MAT_TYPE and CV_TYPE
                int cv_type = -1;
                switch (input.getDataType())
                {
                        case sl::MAT_TYPE_32F_C1: cv_type = CV_32FC1; break;
                        case sl::MAT_TYPE_32F_C2: cv_type = CV_32FC2; break;
                        case sl::MAT_TYPE_32F_C3: cv_type = CV_32FC3; break;
                        case sl::MAT_TYPE_32F_C4: cv_type = CV_32FC4; break;
                        case sl::MAT_TYPE_8U_C1: cv_type = CV_8UC1; break;
                        case sl::MAT_TYPE_8U_C2: cv_type = CV_8UC2; break;
                        case sl::MAT_TYPE_8U_C3: cv_type = CV_8UC3; break;
                        case sl::MAT_TYPE_8U_C4: cv_type = CV_8UC4; break;
                default: break;
                }
                return cv::Mat(input.getHeight(), input.getWidth(), cv_type, input.getPtr<sl::uchar1>(sl::MEM_CPU), input.getStepBytes(sl::MEM_CPU));
        }

        virtual void operator>>(cv::Mat &output)
        {
            // get image from zed camera by zed SDK
            zed_camera.retrieveImage(frame_zed, sl::VIEW_LEFT);
            output = slMat2cvMat(frame_zed);
        }

        virtual int getWidth(){return zed_camera.getResolution().width;}
        virtual int getHeight(){return zed_camera.getResolution().height;}
};
  2. I modified the Application structure constructor:
    // clang-format off
    Application
    (
        const std::string &input,
        const std::string &model,
        float acfCalibration,
        int minWidth,
        bool window,
        float resolution
    ) : resolution(resolution)
    // clang-format on
    {
        // Create a video source:
        // 1) integer == index to device camera
        // 2) filename == supported video formats
        // 3) "/fullpath/Image_%03d.png" == list of stills
        // http://answers.opencv.org/answers/761/revisions/
        //video = create(input);
        //zed_camera = create();

        // create zed camera
        zed_source = std::make_shared<VideoSource>();

        //video = create(0);

        // Create an OpenGL context:
        cv::Size size(zed_source->getWidth(),zed_source->getHeight());
        //const auto size = getSize(*video);

        context = aglet::GLContext::create(aglet::GLContext::kAuto, window ? "acf" : "", size.width, size.height);

        // Create an object detector:
        detector = std::make_shared<acf::Detector>(model);
        detector->setDoNonMaximaSuppression(true);

        if (acfCalibration != 0.f)
        {
            acf::Detector::Modify dflt;
            dflt.cascThr = { "cascThr", -1.0 };
            dflt.cascCal = { "cascCal", acfCalibration };
            detector->acfModify(dflt);
        }

        // Create the asynchronous scheduler:
        pipeline = std::make_shared<acf::GPUDetectionPipeline>(detector, size, 5, 0, minWidth);

        // Instantiate an ogles_gpgpu display class that will draw to the
        // default texture (0) which will be managed by aglet (typically glfw)
        if (window && context->hasDisplay())
        {
            display = std::make_shared<ogles_gpgpu::Disp>();
            display->init(size.width, size.height, TEXTURE_FORMAT);
            display->setOutputRenderOrientation(ogles_gpgpu::RenderOrientationFlipped);
        }
    }
  3. The update function is also modified correspondingly:
cv::Mat frame;
(*zed_source)  >>  frame;

The program compiles successfully, but there is no image in the window, only a black frame.
Does the code have any mistakes?
Thank you for helping me.

P.S. Another question: when I run the acf-detect project, I want to show the captured frame in real time, but an OpenCV error occurs, as follows:

OpenCV(3.4.1) Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvShowImage, file /home/nvidia/.hunter/_Base/8fee57e/c3fbf9e/a0ab86d/Build/OpenCV/Source/modules/highgui/src/window.cpp, line 636
Exception: OpenCV(3.4.1) /home/nvidia/.hunter/_Base/8fee57e/c3fbf9e/a0ab86d/Build/OpenCV/Source/modules/highgui/src/window.cpp:636: error: (-2) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function cvShowImage

How can I add these two packages during the hunter build? Thank you.

acf model parsing edge cases

@xsacha posted:

Even if you name it .cpb but the file doesn't exist, it will error out ... if you name it .dat and the file doesn't exist (or even if it does), it doesn't error

submit hunter package

This is already hunterized and the package config setup should be ready. It should be a matter of testing and sending the PR.

GLSL varying limit reached in triangle shader

On some OpenGL ES platforms the separable TriangleProcPass shader can exceed the varying array limit. In this case the error is reported in the Android QEMU SwiftShader emulator:

1: ERROR: 0:12: 'fragment shader' : Varyings packing failed: Too many varyings
1:
1:
1: precision highp float;
1:
1: uniform sampler2D inputImageTexture;
1: uniform float texelWidthOffset;
1: uniform float texelHeightOffset;
1:
1: varying vec2 blurCoordinates[11];
1:
1: void main()
1: {
1:    vec4 sum = vec4(0.0);
1:    vec4 center = texture2D(inputImageTexture, blurCoordinates[5]);
1:    sum += texture2D(inputImageTexture, blurCoordinates[0]) * 0.027778;
1:    sum += texture2D(inputImageTexture, blurCoordinates[1]) * 0.055556;
1:    sum += texture2D(inputImageTexture, blurCoordinates[2]) * 0.083333;
1:    sum += texture2D(inputImageTexture, blurCoordinates[3]) * 0.111111;
1:    sum += texture2D(inputImageTexture, blurCoordinates[4]) * 0.138889;
1:    sum += texture2D(inputImageTexture, blurCoordinates[5]) * 0.166667;
1:    sum += texture2D(inputImageTexture, blurCoordinates[6]) * 0.138889;
1:    sum += texture2D(inputImageTexture, blurCoordinates[7]) * 0.111111;
1:    sum += texture2D(inputImageTexture, blurCoordinates[8]) * 0.083333;
1:    sum += texture2D(inputImageTexture, blurCoordinates[9]) * 0.055556;
1:    sum += texture2D(inputImageTexture, blurCoordinates[10]) * 0.027778;
1:    gl_FragColor = vec4( center.r/(sum.a + 0.005000), center.gb, 1.0);
1: }

An SO post with OpenGL ES specification discussion can be found here:
https://stackoverflow.com/questions/26682631/webgl-shaders-maximum-number-of-varying-variables

In this case the varying array can be reduced (and optimized) by fixing the related optimization issue #48, which is a matter of leveraging free OpenGL texel interpolation as described here: https://www.buildaworld.net/forum/developers-corner/matts-shady-corner/super-fast-opengl-32-gaussian-blur. This is already implemented in the similar separable Gaussian filter in ogles_gpgpu, which was taken from GPUImage.

Failed to load AcfCaltech+Detector.mat

Hi David,
As I said before, I could successfully load AcfInriaDetector.mat and perform pedestrian detection. However, when I tried to load AcfCaltech+Detector.mat without making any change to the code, I got the following error:
"*** Error in './acf-detect': corrupted size vs. prev_size: 0x01a91098 ***"
It seems to me the code is using memory which was not allocated to it.

After further investigation, I found the error happened here:

 int Detector::operator()(const MatP& IpTranspose, std::vector<cv::Rect>& objects, std::vector<double>* scores)
{
    // Create features:
    Pyramid P;
    chnsPyramid(IpTranspose, &opts.pPyramid.get(), P, true);

For AcfInriaDetector.mat, the window size is 100x41 while it is 50x20 for AcfCaltech+Detector.mat. These parameters are loaded automatically from .mat files.

Do I need to make any changes to the code if I want to load AcfCaltech+Detector.mat?

support multiple detectors

The detector should support running multiple detectors on the same ACF pyramid, since most of the overhead is associated with feature computation.
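
A hypothetical sketch of how such an API could look; Pyramid, computePyramid, and the Pyramid-accepting operator() are assumptions about a possible interface (MatP and acf::Detector are the library types quoted elsewhere on this page), not existing calls:

#include <opencv2/core.hpp>
#include <memory>
#include <vector>

// Hypothetical: compute the ACF pyramid once, then evaluate each detector
// against the shared features (feature computation is the expensive part).
void detectAll(const MatP& IpTranspose,
               const std::vector<std::shared_ptr<acf::Detector>>& detectors,
               std::vector<std::vector<cv::Rect>>& allObjects)
{
    acf::Detector::Pyramid P;                           // shared feature pyramid (assumed type)
    detectors.front()->computePyramid(IpTranspose, P);  // done once for all detectors
    allObjects.resize(detectors.size());
    for (std::size_t i = 0; i < detectors.size(); i++)
    {
        (*detectors[i])(P, allObjects[i]);              // cheap per-detector sliding-window pass
    }
}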

OpenCV error on Raspberry Pi 3

Hi,

I installed the package on a Raspberry Pi 3 with TOOLCHAIN=linux-gcc-armf-neon-vfpv4. There was no error during compilation, and I can successfully install it.

However, when I tried to run it with "./acf-detect --input=lene512color.png --output=/tmp/ --model=drishti_face_gray_80x80.cpb --nms --annotate --calibration=0.00001", I got the following OpenCV error
"OpenCV Error: One of arguments' values is out of range (The total matrix size does not fit to "size_t" type) in setSize, file /home/ubuntu/.hunter/_Base/57d0748/f2f4970/0352f92/Build/OpenCV/Source/modules/core/src/matrix.cpp, line 309 Exception: /home/ubuntu/.hunter/_Base/57d0748/f2f4970/0352f92/Build/OpenCV/Source/modules/core/src/matrix.cpp:309: error: (-211) The total matrix size does not fit to "size_t" type in function setSize".

I didn't make any changes to the code. I thought the image size might be too large for the Pi, since the Pi only has 1 GB of memory, so I also tried Lena256. Unfortunately, I got the same error.

Have you ever encountered this problem before?

Thanks,
Xing

acf::DetectorAsync

Actually making use of the GPU pyramid computation requires properly scheduling GPU->CPU transfers of ACF pyramids with a designated detection thread. This currently has to be done at the application layer by users of the lib. We can implement this within the lib by adding a simple acf::AsyncDetector class, which receives input images or textures and a user-provided callback, and sends detection output to the provided callback with 1 or 2 frames of latency.
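
A rough interface sketch of what this could look like; names and signatures are illustrative only, not a committed design:

#include <opencv2/core.hpp>
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

namespace acf
{
class AsyncDetector
{
public:
    using Callback = std::function<void(std::size_t frameIndex, const std::vector<cv::Rect>& objects)>;

    explicit AsyncDetector(Callback callback) : m_callback(std::move(callback)) {}

    // Enqueue a frame; detections for frame N are delivered to the callback
    // while frame N+1 (or N+2) is being processed on the GPU.
    void push(const cv::Mat& frame, std::size_t frameIndex);

private:
    Callback m_callback;
};
} // namespace acf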

api tests vs unit tests

Review API tests (public/exported symbols) vs unit tests (private or hidden utility classes and functions). Most of the ACF tests fall under the API test category (they link against public symbols in the ACF_EXPORT classes and functions). We may also have hidden/private utility code that is tested only indirectly through the API calls and that warrants more rigorous direct testing elsewhere. Two examples of private code that warrants direct testing are the local ACF shader classes and the optimized SIMD RGBA/texture unpacking code. We don't want to add such functionality to the API just to support devops testing tasks, so we have a few options.

  1. Ideally, we would build this code in OBJECT libraries so it could be reused by a test app and by the main ACF library, but OBJECT libraries aren't portable in general.
  • pros: optimal
  • cons: OBJECT libraries aren't portable
  2. As an alternative, we can build that code as a support STATIC library and have the main ACF library link to it. The additional library should be more or less transparent to the user, whether they are using a SHARED ACF library (it is absorbed by the library) or a STATIC ACF library through CMake (find_package(acf CONFIG REQUIRED); target_link_libraries(foo PUBLIC acf::acf)), since the generated package configuration code provides the transitive linking transparently.
  • pros: supports reuse (avoids recompilation)
  • cons: adds another library (mostly an issue for STATIC builds and non-CMake users -- in practice, it would be fairly messy to use ACF as a STATIC lib without CMake/Hunter anyway, due to the STATIC 3rd-party dependencies, so that seems to weaken the argument against introducing support STATIC libraries)
  3. If we want to avoid exporting an additional private library (it wouldn't have public headers but would be exported in the installation step), then we could collect the required source (sugar, etc.) and just recompile the code directly into the test exe, or via a test-only static lib.
  • pros: avoids introducing an extra support lib in the ACF export set
  • cons: requires recompilation of the code (not a huge concern in ACF) and adds some additional build complexity

Since the ACF API tests will link to the compiled ACF library, and the ACF unit tests will link to some copy of private ACF code (already inside the ACF library), the two test sets should be managed in separate executables to avoid ODR conflicts.

ogles_gpgpu::TriangleProc efficiency

The separable triangle filter used for gradient magnitude normalization is better than the GaussianProc model (in terms of ACF similarity). This filter should be updated to use the interstitial texel weighting trick that the GaussianProc kernel uses (from GPUImage).

In this approach the following 13-tap triangle filter can be achieved with 7 texture lookups:

[1 2 3 4 5 6 7 6 5 4 3 2 1] = [{1 2} {3 4} {5 6} 7 {6 5} {4 3} {2 1}]
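
A small sketch of how the merged tap offsets/weights could be computed on the CPU side before being handed to the shader; the helper names here are illustrative, not existing ogles_gpgpu code:

#include <vector>

// Linear-sampling trick described above: adjacent taps of the centered
// triangle kernel are merged into single bilinear fetches, so
// [1 2 3 4 5 6 7 6 5 4 3 2 1] needs 7 texture lookups instead of 13.
struct Tap { float offset; float weight; };

std::vector<Tap> mergeTaps(const std::vector<float>& weights) // odd-length, centered kernel
{
    std::vector<Tap> taps;
    const int center = static_cast<int>(weights.size()) / 2;
    taps.push_back({ 0.0f, weights[center] }); // center tap sampled directly
    for (int i = center + 1; i < static_cast<int>(weights.size()); i += 2)
    {
        const float w0 = weights[i];
        const float w1 = (i + 1 < static_cast<int>(weights.size())) ? weights[i + 1] : 0.0f;
        const float w = w0 + w1;
        // One bilinear fetch between texels i and i+1 reproduces w0*t[i] + w1*t[i+1].
        const float offset = (w0 * (i - center) + w1 * (i + 1 - center)) / w;
        taps.push_back({ offset, w }); // mirrored on the negative side in the shader
    }
    return taps;
}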

acf::Detector can return cv::Rect with slightly out of bounds pixels

In some cases the acf::Detector can return a cv::Rect with out-of-bounds pixels. This could be happening in the scaling from the detection resolution back to the full-resolution image. Enforcing (detection_roi & frame_roi).area() == detection_roi.area() is a reasonable post-condition for the main API calls. A near-term workaround is simply to clip the output detection rectangles before any cropping is performed.
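
The near-term workaround amounts to something like the following sketch:

#include <opencv2/core.hpp>

// Clip a detection to the frame bounds before any cropping; after this,
// (clipped & frame_roi).area() == clipped.area() holds by construction.
cv::Rect clipToFrame(const cv::Rect& detection, const cv::Size& frame)
{
    return detection & cv::Rect(0, 0, frame.width, frame.height);
}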

Clarification for return type of chnsPyramid needed

While using chnsPyramid and looking at the return type of it I've noticed that you're using std::vector<std::vector<MatP>> for the data element. My first guess was that the first vector contains the different scales while the second contains the different channels. However after some tests I've noticed that the channels are saved in one MatP object and the second vector apparently always contains only one object. Am I missing something or could the type be changed to std::vector<MatP> for simplicity?

Build for NVIDIA Tegra TX2 Platform

Errors occurred when we ran it on the NVIDIA Tegra TX2 platform. Can it run on the Tegra TX2? Details are as follows. Thank you!

UNAME_MACHINE = aarch64
UNAME_RELEASE = 4.4.38-tegra
UNAME_SYSTEM = Linux
UNAME_VERSION = #1 SMP PREEMPT Thu Mar 1 20:49:20 PST 2018
configure: error: cannot guess build type; you must specify one
CMakeFiles/xproto.dir/build.make:108: recipe for target 'xproto-prefix/src/xproto-stamp/xproto-configure' failed
make[8]: Leaving directory '/home/nvidia/.hunter/_Base/8fee57e/3942445/a0ab86d/Build/xproto/Build'
make[8]: *** [xproto-prefix/src/xproto-stamp/xproto-configure] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/xproto.dir/all' failed
make[7]: Leaving directory '/home/nvidia/.hunter/_Base/8fee57e/3942445/a0ab86d/Build/xproto/Build'
make[7]: *** [CMakeFiles/xproto.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make[6]: Leaving directory '/home/nvidia/.hunter/_Base/8fee57e/3942445/a0ab86d/Build/xproto/Build'
make[6]: *** [all] Error 2

[hunter ** FATAL ERROR **] Build step failed (dir: /home/nvidia/.hunter/_Base/8fee57e/3942445/a0ab86d/Build/xproto
[hunter ** FATAL ERROR **] [Directory:/home/nvidia/.hunter/_Base/Download/Hunter/0.20.28/8fee57e/Unpacked/cmake/projects/xproto]

acf shared library link ogles_gpgpu errors

I tried to build the libs for x86 Linux with the commands:
export TOOLCHAIN=cxx11
polly.py --toolchain ${TOOLCHAIN} --install --verbose
and libacf.a builds successfully.
But, curiously, when I try to build ACF as a single shared library with the command
polly.py --toolchain ${TOOLCHAIN} --fwd ACF_BUILD_SHARED_SDK=ON --install --verbose

there are errors at link time:
/usr/bin/ld: /home/bourne/.hunter/_Base/e7fe3f0/91dc4f7/a0274a8/Install/lib/libogles_gpgpud.a(memtransfer.cpp.o): relocation R_X86_64_PC32 against symbol `_ZTVN11ogles_gpgpu11MemTransferE' can not be used when making a shared object; recompile with -fPIC

Does this mean -fPIC should be passed to ogles_gpgpu? And how?

acf: cpu vs gpu

  • the gradients in the textured region for the CPU pyramid get noticeably darker in column/level 6 -- presumably this is due to gaussian smoothing + decimation at the full octave
  • the gpu output is doing the scale space processing in a single batch multi-resolution texture, so the edges have discontinuities as expected -- these can be erased or softened somewhat by special padding or border alpha channel tricks in the ogles_gpgpu::PyramidProc render stage
  • the gpu output looks a little more consistent overall, possibly since there is no need for approximation heuristics that are used for speeding up the cpu version -- for the purpose of object detection these differences may not matter much

[pyramid_synth_gpu_cpu image]

Calculating the pyramid with colorSpace set to gray results in two black channels

For a project of mine I have to work with colorSpace set to gray. While chnsCompute seems to respect that and returns only one color channel, chnsPyramid calculates only one channel but returns three channels, where the second and third channels are simply black. The original Matlab toolbox seems to discard those two channels; it would be nice if this library could do the same.

ACF w/o SIMD

We have GPU and CPU options for ACF computation. The shader-based GPU implementation may need more work to achieve the desired accuracy. For the CPU path there is NEON and SSE support (thanks to the NEONvsSSE.h header). Some of this code will currently not run on a CPU without SIMD. Boost.SIMD might be worth a look to generalize some of that code.

Support OpenGL ES 2.0/3.0 on Desktop systems

Migrated from (3) (and other discussion) in #102 (comment)

The current code assumes OpenGL ES 2.0 or 3.0 on iOS and Android, but OpenGL for all other platforms. This is a limiting assumption, since it is possible to run OpenGL ES 2.0/3.0 on other platforms. Both the back end ogles_gpgpu lib and the aglet test lib support this configuration, so ACF can be updated to support it too.

terminated with 'std::regex_error'

Hi, thanks for sharing your work.

I got a problem when I just run the binary 'acf-detect' or 'acf-mat2cpb'.

$ ./acf-detect
terminate called after throwing an instance of 'std::regex_error'
what():  regex_error
Aborted (core dumped)

I just followed your instructions with the gcc-4.8 toolchain.
Any hints on this?

Documentation request: model files; use of library

Quick start | HOWTO is pretty self-explanatory and seems to find Lena :) but the following would be useful:

  1. minimal example or link on how to train your own models, ideally without Matlab
    • I assume that mat2cpb converts Matlab models to Cereal (?)
  2. minimal example on how to include the library (libacf?) in your own code

gradient hist channels are swapped

Hi, it seems that the (by default 6) histogram channels are swapped compared to the Matlab code. I'm not yet sure what exactly causes this or whether this even counts as an issue, but at least for #71 this needs to be addressed in some way. If using the default options in gray color space, which results in 8 channels (one color, one magnitude and the six hist channels), the following channels are swapped:

  • 2 and 5
  • 3 and 4
  • 6 and 7

Also, the calculated values are a bit off compared to the Matlab code, though I'm not yet sure whether that's caused by the library or by OpenCV (even calculating the pyramid in Matlab and importing the images with OpenCV causes a small change in the values). I'll keep searching^^
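
For reference, a diagnostic sketch of the reported reordering, assuming the indices listed above are zero-based over the 8 planar channels (that index base is an assumption); this is only a workaround for comparison, not a fix inside the library:

#include <opencv2/core.hpp>
#include <utility>
#include <vector>

// Swap the reported channel pairs of the 8-channel gray-mode output so it can
// be compared against the Matlab layout.
void reorderToMatlab(std::vector<cv::Mat>& channels)
{
    std::swap(channels[2], channels[5]);
    std::swap(channels[3], channels[4]);
    std::swap(channels[6], channels[7]);
}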

Execute issues in cross platform board

I compiled this project with linux-gcc-armhf-neon (binary name: arm-linux-gnueabihf-c++ 5.4.0) and the compilation succeeded.
When I execute it on the target board (an SoC with a quad-core Cortex-A9 and FPU support), it fails at glfwInit() in the aglet hunter package. Are there any dependent libraries that aglet & acf need to load, or do I have to install other packages (mesa-glx, etc.) when cross-compiling?

Thank you in advance for your support.

convTri is not correctly called when calculating the pyramid

This is the matlab variant:

% compute image pyramid [approximated scales]
for i=isA
  iR=isN(i); sz1=round(sz*scales(i)/shrink);
  for j=1:nTypes, ratio=(scales(i)/scales(iR)).^-lambdas(j);
    data{i,j}=imResampleMex(data{iR,j},sz1(1),sz1(2),ratio); end
end

% smooth channels, optionally pad and concatenate channels
for i=1:nScales*nTypes, data{i}=convTri(data{i},smooth); end

and this is yours:

    util::ParallelHomogeneousLambda harness = [&](int j) {
        const int i = isA[j];

        int iR = isN[i - 1];
        cv::Size sz1 = round(cv::Size2d(sz) * scales[i - 1] / double(shrink));
        for (int j = 0; j < nTypes; j++)
        {
            double ratio = std::pow(scales[i - 1] / scales[iR - 1], -lambdas[j]);
            imResample(data[iR - 1][j], data[i - 1][j], sz1, ratio);
        }
        for (auto& img : data[i - 1])
        {
            convTri(img, img, smooth, 1);
        }
    };

cv::parallel_for_({ 0, int(isA.size()) }, harness);

However, isA may be empty (it was in my case), which results in convTri not being called at all, whereas the Matlab code always applies convTri to all channels. So the convTri call needs to be outside of this parallel_for call, maybe in its own (see the sketch below).
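
A sketch of that fix, mirroring the existing code style: keep the approximate-scale resampling loop as is, but smooth every scale in a separate pass so convTri also runs when isA is empty.

util::ParallelHomogeneousLambda smoothAll = [&](int i) {
    for (auto& img : data[i])
    {
        convTri(img, img, smooth, 1); // 'smooth' is the radius from the pyramid options
    }
};
cv::parallel_for_({ 0, int(data.size()) }, smoothAll);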

acf performance on android

I analyzed the dependencies of the acf GLDetector with the Android Studio clang compiler, and succeeded in compiling GLDetector through JNI.

But when I test GLDetector's performance on lena512color.png, it takes 1750.45 ms.

My compile options are:

LOCAL_ARM_NEON := true
LOCAL_CFLAGS += -O3 -mfloat-abi=softfp -mfpu=neon -march=armv7

How can I improve performance to reach 30 fps (about 30 ms per frame)?

Inference time issue

Thanks for quick reply!

I tried to switch to a compiler which supports std::regex, such as gcc-4.9 or clang-3.5 & libcxx.
But polly.py does not seem to support gcc-4.9.
(I cannot find gcc-4-9 in the list when I type polly.py --help)

In the case of the libcxx toolchain, I failed to build with some error messages. Here is the log file.

Anyway, my first goal is to compare the running time to Piotr's Matlab implementation.
I commented out the cxxopts code in acf.cpp and measured the inference time using the gettimeofday function.

Even though the inference time of the classifier heavily depends on the image content and cascade threshold, something is wrong.
It takes 54 ms for lena512color.png using drishti_face_gray_80x80.cpb.
(As you know, it's ~100 ms in Piotr's MATLAB code for a 640x480 image.)

I expect <1ms with my GPU (Titan X Pascal).

I think I have turned on the flag to use the GPU:

option(ACF_BUILD_OGLES_GPGPU "Build with OGLES_GPGPU" ON)

How about the inference time on your machine?
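
For what it's worth, a simple way to time a single detector call with std::chrono, using the operator() signature quoted in the AcfCaltech+Detector issue above (the acf header include is assumed and omitted here):

#include <chrono>
#include <vector>
#include <opencv2/core.hpp>

// Time one detection pass in milliseconds.
double timeDetectionMs(acf::Detector& detector, const MatP& IpTranspose)
{
    std::vector<cv::Rect> objects;
    const auto t0 = std::chrono::steady_clock::now();
    detector(IpTranspose, objects, nullptr);
    const auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}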

iOS OpenCV 3.4.1-p0 compiler crash in protobuf

/Users/dhirvonen/.hunter/_Base/8fee57e/a6ab714/d4f624e/Build/OpenCV/Build/OpenCV-Release-prefix/src/OpenCV-Release-build/3rdparty/protobuf/OpenCV.build/Release-iphoneos/libprotobuf.build/Objects-normal/arm64/extension_set.o
fatal error: error in backend: Cannot select: 0x7f81d5336040: v2i64 = ctlz 0x7f81d5330200
  0x7f81d5330200: v2i64 = or 0x7f81d53357f0, 0x7f81d5335330
    0x7f81d53357f0: v2i64 = xor 0x7f81d629f8b0, 0x7f81d3ab4050
      0x7f81d629f8b0: v2i64 = AArch64ISD::VSHL 0x7f81d5335b80, Constant:i32<1>
        0x7f81d5335b80: v2i64,ch = load<LD16[%104](align=8)(tbaa=<0x7f81d37751d8>)> 0x7f81d5e493e0, 0x7f81d440a390, undef:i64
          0x7f81d440a390: i64 = add 0x7f81d3ab3f20, Constant:i64<-16>
            0x7f81d3ab3f20: i64,ch = CopyFromReg 0x7f81d5e493e0, Register:i64 %vreg25
              0x7f81d5330a50: i64 = Register %vreg25
            0x7f81d3ab4510: i64 = Constant<-16>
          0x7f81d5334f10: i64 = undef
        0x7f81d629ee00: i32 = Constant<1>
      0x7f81d3ab4050: v2i64 = AArch64ISD::VASHR 0x7f81d5335b80, Constant:i32<63>
        0x7f81d5335b80: v2i64,ch = load<LD16[%104](align=8)(tbaa=<0x7f81d37751d8>)> 0x7f81d5e493e0, 0x7f81d440a390, undef:i64
          0x7f81d440a390: i64 = add 0x7f81d3ab3f20, Constant:i64<-16>
            0x7f81d3ab3f20: i64,ch = CopyFromReg 0x7f81d5e493e0, Register:i64 %vreg25
              0x7f81d5330a50: i64 = Register %vreg25
            0x7f81d3ab4510: i64 = Constant<-16>
          0x7f81d5334f10: i64 = undef
        0x7f81d3ab3cc0: i32 = Constant<63>
    0x7f81d5335330: v2i64 = AArch64ISD::DUP Constant:i64<1>
      0x7f81d5335040: i64 = Constant<1>
In function: _ZNK6google8protobuf8internal12ExtensionSet9Extension8ByteSizeEi
clang: error: clang frontend command failed with exit code 70 (use -v to see invocation)
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Target: aarch64-apple-darwin16.0.0
Thread model: posix
InstalledDir: /Applications/develop/ide/xcode/8.1/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
clang: note: diagnostic msg: PLEASE submit a bug report to http://developer.apple.com/bugreporter/ and include the crash backtrace, preprocessed source, and associated run script.
clang: note: diagnostic msg:
********************

PLEASE ATTACH THE FOLLOWING FILES TO THE BUG REPORT:
Preprocessed source(s) and associated run script(s) are located at:
clang: note: diagnostic msg: /var/folders/03/f9zk5wl94437_7j7vcssvd5r0000gn/T/extension_set-a9851e.cpp
clang: note: diagnostic msg: /var/folders/03/f9zk5wl94437_7j7vcssvd5r0000gn/T/extension_set-a9851e.sh
clang: note: diagnostic msg:

********************
