
opencvblueprints's Introduction

OpenCVBlueprints Code Repository

OpenCV Blueprints: Expand your knowledge of computer vision by building amazing projects with OpenCV!

This GitHub repository contains the source code associated with the book "OpenCV Blueprints", published by Packt Publishing. It contains all the software and data used in each chapter of the book, as well as guidelines for setting up a stable OpenCV environment to use with this book. All the software is tested and working on Linux, Mac OS X and Windows systems. If you do encounter any problems, do not hesitate to open an issue so we can get back to you with help.

Topic of the book

Computer vision is becoming accessible to a large audience of software developers, who can leverage mature libraries such as OpenCV. However, as they move beyond their first experiments in computer vision, developers may struggle to ensure that their solutions are sufficiently well optimized, well trained, robust, and adaptive in real-world conditions. For readers who already know the basics of computer vision programming, this book shows how to capture high-quality data quickly, fuse multiple kinds of data, train reliable recognition and spatial models, and integrate everything into usable and responsive applications.

Description of the book

This book is great for programmers who already have some basic knowledge of OpenCV, and want to tackle increasingly challenging computer vision problems in their careers. Inside these pages, you will find practical and innovative approaches that are battle-tested in the authors’ industry experience and research. Each chapter covers the theory and practice of multiple, complementary approaches so that you will be able to choose wisely in your future projects. You will also gain insights into the architecture and algorithms that underpin OpenCV’s functionality.

We begin by taking a critical look at inputs, in order to decide which kinds of light, cameras, lenses, and image formats are best suited to a given purpose. We proceed to consider the finer aspects of computational photography as we build an automated camera to assist nature photographers. We gain a deep understanding of some of the most widely applicable and reliable techniques in object detection, feature selection, tracking, and even biometric recognition. We also build Android projects in which we explore complexities of camera motion: first in panoramic image stitching, and then in video stabilization.

By the end of the book, you will have a much richer understanding of imaging, motion, machine learning, and the architecture of computer vision libraries and applications!

What are the key features of this book?

  • Build computer vision projects to capture high-quality image data, detect and track objects, process the actions of humans or animals, and much more.
  • Discover practical and interesting innovations in computer vision while building atop a mature open-source library, OpenCV 3.
  • Familiarize yourself with multiple approaches and theories wherever critical decisions need to be made.

What will you learn in this book?

  1. Select and configure camera systems to see invisible light, fast motion, and distant objects.
  2. Build a “camera trap”, as used by nature photographers, and process photos to create beautiful effects.
  3. Build a facial recognition system with various feature extraction techniques and machine learning methods.
  4. Build a panorama Android application using the OpenCV stitching module in C++ with NDK support.
  5. Optimize your object detection model, make it rotation invariant, and apply scene-specific constraints to make it faster and more robust.
  6. Build a person identification and registration system based on biometric properties of that person, such as their fingerprint, iris, and face.
  7. Fuse data from videos and gyroscopes to stabilize videos shot from your mobile phone and create hyperlapse style videos.

Who is the target audience for this book?

This is an advanced book intended for readers who already have some experience in setting up an OpenCV development environment and building applications with OpenCV. The reader is generally comfortable with computer vision concepts, object-oriented programming, graphics programming, IDEs, and the command line. The reader aspires to build computer vision systems that are smarter, faster, more complex, and more practical than the competition.

This book covers a combination of theory and practice. We examine blueprints for specific projects and discuss the principles behind these blueprints so that you can become a better architect of computer vision systems. We consider many components, from optics to AI to UI, along with many requirements, from cost to speed to reliability.

Author biographies

Chapter 1 & 2: Joseph Howse

Joseph Howse lives in Canada. During the cold winters, he grows a beard and his four cats grow thick coats of fur. He combs the cats every day. Sometimes the cats pull his beard.

Joseph has been writing for Packt Publishing since 2012. His books include OpenCV for Secret Agents, OpenCV Blueprints, Android Application Programming with OpenCV 3, OpenCV Computer Vision with Python, and Python Game Programming by Example.

When he is not writing books or grooming cats, Joseph provides consulting, training, and software development services through his company, Nummist Media (http://nummist.com).

Chapter 3 & 4: Quan Hua

Quan Hua is a software engineer at Autonomous, a startup company in robotics, where he focuses on developing computer vision and machine learning applications for personal robots. He earned a Bachelor of Science degree at the University of Science, Vietnam, specializing in computer vision, and published a research paper at CISIM 2014. He also blogs about various computer vision techniques to share his experience with the community.

Chapter 5 & 6: Steven Puttemans

Steven Puttemans is a PhD research candidate at the Katholieke Universiteit Leuven. He is an enthusiastic researcher whose goal is to combine state-of-the-art computer vision algorithms with real-life industrial problems to provide robust and complete solutions for industry. His previous projects include TOBCAT, an open-source solution for industrial object detection problems using advanced object categorization techniques.

Steven is also an active participant in the OpenCV community. He is a moderator of the OpenCV Q&A Forum, and has submitted or reviewed many bugfixes and improvements for OpenCV 3.0.

More info about Steven’s research, projects and interests can be found at https://stevenputtemans.github.io!

Chapter 7: Utkarsh Sinha

Utkarsh Sinha lives in Bangalore, India and works as a Technical Director at DreamWorks Animation. He earned his Bachelor of Engineering in Computer Science and Master of Science in Mathematics from BITS-Pilani, Goa. He has been working in the field of computer vision for about 6 years, as a consultant and as a software engineer at startups.

He blogs at http://utkarshsinha.com/ about various topics in technology - most of which revolve around computer vision. He also publishes computer vision tutorials on the internet through his website AI Shack http://aishack.in/. His articles help thousands of people understand concepts in computer vision every day.

opencvblueprints's People

Contributors

bneiluj, dylanvanassche, er3do, joehowse, liquidmetal, quanhua92


opencvblueprints's Issues

installation issue revisited

Hello,
I installed Code::Blocks and I copied and pasted this code:

include "opencv2/opencv.hpp"

using namespace std;
using namespace cv;

int main( int argc, char* argv[] )
{
// Load in test image
Mat image = imread("/home/koj/123jpeg", IMREAD_GRAYSCALE);
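// note: imread returns an empty Mat if the path is wrong, and imshow on an empty Mat will fail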
imshow("original", image); waitKey(0);

return 0;

}

and went to" build option" as recommended by steven and "linker setting" ->"other linker options" ->I typed in the "pkg-config opencv --libs"
and started build and run
the result was hello world inthe console screen which I could close by entering some key.
ubuntu 64-bit 3 -2016-08-29-19-48-46

1. Is that right?
2. Where can I use the command $ g++ test.cpp -o test `pkg-config opencv --libs`?

It is very frustrating.
3. What project should I make to begin with: an OpenCV project or a console application project?
4. And why does the project file name become .cbp instead of .cpp?

Chapter 5 - precision_recall

Hi,

I'm reading the book and trying to implement the code in chapter 5 to plot the PRC curve for a classifier correctly.

I don't understand why, but after doing the annotation and generating the output of the classifier detection, I am obtaining values very similar to this link.

I cannot understand what is wrong, can you help me with this?

If needed I can provide the used annotation and output txt files.

Regards

Why does Code::Blocks not detect the g++ installed on Ubuntu?

[screenshots attached: ubuntu 64-bit 3 -2016-08-30-00-42-13, ubuntu 64-bit 3 -2016-08-30-00-43-25]
I installed g++, and then I went into Code::Blocks and opened a console application project.
I selected C++ as my language, and the next page shows the GNU GCC compiler and all the other options except g++.
I heard that I should have the g++ compiler to follow this book, so why does Code::Blocks not detect the installed g++?
I searched Google for hours, but everybody just talks about installing build-essential, which I already did two days ago.
Still struggling with installation.
Please give me some help.

Add OpenCV3 Blueprints source code to OpenCV

I am already using OpenCV on my Windows 64-bit PC. I installed it using CMake and Visual Studio. Now, I want to add the source code of this repo to my already existing OpenCV directory. How should I proceed?

Chapter 5 possible errata and doubt

Hi guys! As the title says, this is about Chapter 5 of the book, so this is mainly for you @StevenPuttemans

1. Reporting a possible trivial erratum. The cascade classification process in detail > Step 2 > First paragraph: [...] especially when knowing that a model of 24x24 pixels can yield more than 16,000 features -> Shouldn't it be 160,000 features? Not the most important erratum at all, but just letting you know.

2. A question about the explanation of the maxFalseAlarmRate param. I'm not sure if this is an erratum, a confusing explanation, or just that I'm plain wrong. I'll start by copying the book explanation:

- maxFalseAlarmRate: This is the threshold that defines how much of your negative samples need to be classified as negatives before the boosting process should stop adding weak classifiers to the current stage. The default value is 0.5 and ensures that a stage of weak classifier will only do slightly better than random guessing on the negative samples. Increasing this value too much could lead to a single stage that already filters out most of your given windows, resulting in a very slow model at detection time due to the vast amount of features that need to be validated for each window. This will simply remove the large advantage of the concept of early window rejection.

So, what's confusing me is the second half of the paragraph. You say increasing the param value results in more filtering and more features to validate per stage. As I understand it, increasing maxFalseAlarmRate means letting the stage misclassify more negative samples as positive ones. Therefore, fewer negative samples are rejected in that stage, and a higher number of accepted windows passes to the next stage. Also, I think (though I'm not too sure here) that the number of weak classifiers/features per stage is lower as the maxFalseAlarmRate is higher. In fact, as you say later on, "This process continues until the false acceptance ratio on the negative samples is lower than the maximum false alarm rate set." Therefore, with a higher maxFalseAlarmRate, the chance of getting below that acceptance ratio increases, leading to stages with fewer features.

To sum up, did you really want to say increasing this value instead of decreasing this value, or did I simply get it wrong?

3. An additional doubt (I can post it in the OpenCV forum if you prefer). Based on your experience, is there any con (at detection time) to using more stages but with less restrictive conditions? E.g. using 20 stages with minHit 0.990 instead of 10 stages with minHit 0.995 (supposing you have enough data to tackle training in both cases, and that the final minHit is the same; I haven't done the math, but see the sketch after this list).

4. Good job on the chapter ;)
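
For what it's worth, the cumulative effect behind point 3 can be sanity-checked numerically: in a cascade, the overall hit rate is roughly minHitRate raised to the number of stages, and the overall false alarm rate is roughly maxFalseAlarmRate raised to the number of stages. A minimal sketch in plain Python (an illustration of the standard cascade arithmetic, not code from the book):

# Rough cumulative rates of a cascade: each stage must keep at least
# min_hit_rate of the positives and pass at most max_false_alarm_rate of
# the negatives, so the whole cascade behaves roughly like the product.
def cascade_rates(min_hit_rate, max_false_alarm_rate, num_stages):
    overall_hit = min_hit_rate ** num_stages
    overall_false_alarm = max_false_alarm_rate ** num_stages
    return overall_hit, overall_false_alarm

# The two setups from point 3, with the default maxFalseAlarmRate of 0.5:
print(cascade_rates(0.990, 0.5, 20))  # approx. (0.818, 9.5e-07)
print(cascade_rates(0.995, 0.5, 10))  # approx. (0.951, 9.8e-04)

So the two configurations are not equivalent: 20 looser stages end up with a noticeably lower overall hit rate (and a much lower overall false alarm rate) than 10 stricter ones.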

how to modify capture_exposure_bracket.sh for Canon G9

Hi,

I am running through chapter 2 with my Canon PowerShot G9, which has the following configurations for exposurecompensation.

/main/capturesettings/exposurecompensation
Label: Exposure Compensation
Type: RADIO
Current: +1 2/3
Choice: 0 +2
Choice: 1 +1 2/3
Choice: 2 +1 1/3
Choice: 3 +1
Choice: 4 +2/3
Choice: 5 +1/3
Choice: 6 0
Choice: 7 -1/3
Choice: 8 -2/3
Choice: 9 -1
Choice: 10 -1 1/3
Choice: 11 -1 2/3
Choice: 12 -2

So I need to modify capture_exposure_bracket.sh, perhaps starting from somewhere below 6 and going smaller.

Could you suggest where I should start and end by default? Like 9, 6, 3?

Thanks!

ORB descriptor can't work

when running "fingerprint_process", I found orb_descriptor that's under "feature2d.hpp",
seems "xfeatures2d.hpp" does not use? (my opencv version is 3.1.0)
And "orb_descriptor->compute(input_thinned, keypoints, descriptors);"

this line will throw a exception :

C:\buildslave64\win64_amdocl\master_PackSlave-win64-vc14-shared\opencv\modules\features2d\src\orb.cpp:997: error: (-215) level >= 0 in function cv::ORB_Impl::detectAndCompute

I don't know what they mean...can't solve it.
Does I use the wrong library? This version's method has some changes?
Would anyone help me~Please!
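
Not a definitive answer, but that particular assertion in cv::ORB_Impl::detectAndCompute checks the octave field of the keypoints that are passed in, so it is worth verifying that every manually created KeyPoint carries a non-negative octave. A minimal sketch of that check in Python (cv2 bindings of OpenCV 3.x assumed; the synthetic image and keypoint here are made up for illustration and this is not the chapter's fingerprint code):

# Compute ORB descriptors for a manually created keypoint. The
# "(-215) level >= 0" assertion fires when a keypoint's octave is negative,
# so octave is set explicitly to 0 before calling compute().
import cv2
import numpy as np

image = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(image, (64, 64), 20, 255, 2)   # give the descriptor some structure

kp = cv2.KeyPoint(64.0, 64.0, 31.0)       # x, y, size; octave defaults to 0
kp.octave = 0                             # must be >= 0 for ORB
keypoints = [kp]

orb = cv2.ORB_create()
keypoints, descriptors = orb.compute(image, keypoints)
print(None if descriptors is None else descriptors.shape)   # expect (1, 32)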

./object_annotation does not write to the file specified

I tried using sudo too. The program works, but it does not write to the annotations file. I noticed in another issue that this program should no longer be used. Would be great if the updatable book said that!

If someone knows how to get it working, please let me know. I see there's an OpenCV provided tool, but haven't found the params for that yet.

Chapter 7 - Python Code error

Hi,

I'm running the python code given in chapter 7 after importing the video and gyroscope files on a mac. The application crashes in the meshwarp function with the following stack trace.

Traceback (most recent call last):
  File "/Users/jeetdholakia/PycharmProjects/HyperlapseTest/stable.py", line 601, in <module>
    stabilize_video(mp4path, csvpath)
  File "/Users/jeetdholakia/PycharmProjects/HyperlapseTest/stable.py", line 577, in stabilize_video
    prev, focal_length, gyro_delay, gyro_drift);
  File "/Users/jeetdholakia/PycharmProjects/HyperlapseTest/stable.py", line 265, in accumulateRotation
    o = utilities.meshwarp(src, pts_transformed)
  File "/Users/jeetdholakia/PycharmProjects/HyperlapseTest/utilities.py", line 120, in meshwarp
    mapx = np.append([], [ar[:,0] for ar in g_out]).reshape(mapsize).astype('float32')
ValueError: cannot reshape array of size 2332800 into shape (1080,1080,1)

The line which produces the error is:

utilities.py -> def meshwarp(src, distorted_grid)
mapx = np.append([], [ar[:,0] for ar in g_out]).reshape(mapsize).astype('float32')
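
For anyone hitting the same ValueError: numpy can only reshape an array into a shape whose total element count matches, and 2332800 is exactly 2 x 1080 x 1080, twice what a (1080, 1080, 1) map can hold. A minimal, self-contained illustration in plain numpy (not the book's utilities.py):

import numpy as np

# reshape() requires the flattened size to equal the product of the target
# dimensions; otherwise it raises exactly the ValueError from the traceback.
mapsize = (1080, 1080, 1)
flat = np.zeros(2332800, dtype=np.float32)     # 2 * 1080 * 1080 elements

print(flat.size, int(np.prod(mapsize)))        # 2332800 vs. 1166400

try:
    flat.reshape(mapsize)
except ValueError as e:
    print(e)  # cannot reshape array of size 2332800 into shape (1080,1080,1)

So the grid being flattened inside meshwarp() holds twice as many values as the target map expects; it is worth checking how g_out and mapsize are constructed, for example for non-square frames.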

P.S. - There are small errors in other classes as well, which required minor changes in code to get it up and running, they include :

  1. Removing calibration_worker import (module not found error!) and importing from calibration import GyroscopeDataFile to make the gyroscope data work.
  2. The gyroscope file generated does not have the extension .gyro.csv; it's just XYZgyro.csv, whereas the Python code looks for the aforementioned extension.
  3. The pts_transformed variable in def accumulateRotation is not initialized anywhere, hence I initialized it as an empty array, pts_transformed = [], at the beginning of that function.
  4. In the same function, the supplied code output = cv2.perspectiveTransform(np.array([[pixel_x, pixel_y]], transform) throws the numpy error "data type could not be understood". Hence, I changed it to output = cv2.perspectiveTransform(np.array([[pixel_x, pixel_y]], dtype="float32"), transform), and the code worked.

Please let me know how to fix the aforementioned issue, and if I have made any mistakes while correcting the code which threw some compile time and run time errors.

Thanks,
Jeet

Building opencv 3 and tesseract ocr.

Following the instructions, I

  1. created a directory in "C:" called "CVmodules", where I put the zips downloaded from the Itseez link (opencv-master and opencv_contrib-master).
  2. In the CMake GUI, selected the OpenCV source code folder (C:\CVmodules\opencv) and the folder where the binaries will be built (C:\CVmodules\contribBuild).
  3. pressed the Configure button.
  4. browsed the parameters in the interface and looked for the field called OPENCV_EXTRA_MODULES_PATH.
  5. filled in OPENCV_EXTRA_MODULES_PATH with the proper path to the <opencv_contrib>/modules value (C:\CVmodules\opencv_contrib\modules).
  6. then pressed the Configure button.

However, the picture below is all I see when I search for tesseract; there is no way to enable Tesseract.
I also realized that there seems to be no Tesseract library, in the Tesseract that I installed, to point it to. I downloaded it from
https://code.google.com/p/tesseract-ocr/downloads/detail?name=tesseract-ocr-setup-3.02.02.exe&can=2&q=

What am I doing wrong? Did I download the wrong version of Tesseract? This is for Visual Studio 2013 on Windows 8. I already have OpenCV 3 installed using the prebuilt libraries method. Should I uninstall it and build it instead?

[screenshot attached]

Some packages are not found in Linux Installation

Hi, thanks for those detailed instructions.

My env is Ubuntu 14.04.3.

apt-get install build-essentials

This is supposed to be:

apt-get install build-essential

And this is:

apt-get install cmake cmake-qt-gui git pkg-config libgtk2.0-dev libavcodec-dev libformat-dev libswscale-dev

supposed to be:

apt-get install cmake cmake-qt-gui git pkg-config libgtk2.0-dev libavcodec-dev libavformat-dev libswscale-dev

One more: libeigen-dev is not found.

apt-get install libeigen-dev libeigen2-dev libeigen3-dev

The alternative I came up with:

apt-get install libeigen2-dev libeigen3-dev

Also, I suggest adding sudo to the Ubuntu installation commands.

I cannot install OpenCV on Ubuntu 14.04

I read the step-by-step guide and I am stuck at the directory explanation.
My root folder is "/home/koj". I made a directory named "opencv" in that root directory, and inside that "build", and inside that "opencv_contrib", as explained in the guide.
Then, if I try
git clone https://github.com/Itseez/opencv.git opencv
I get an error that the opencv directory already exists and is not empty.
So I executed
git clone https://github.com/Itseez/opencv.git opencv
in the root directory ("/home/koj") without creating that directory chain (a directory named "opencv", inside that "build", and inside that "opencv_contrib"), and after the download finished I found that an "opencv" directory had been created by the clone itself.
So I added a "build" directory, and in that build directory I executed
git clone https://github.com/Itseez/opencv_contrib.git opencv_contrib
and a directory named "opencv_contrib" was automatically created in the "build" directory.

After that I went to the build directory and executed "cmake-gui ..",
but the result was: cmake-gui: cannot connect to X server

I cannot find any answer by googling.
Please help me to install OpenCV.
I am at Chapter 1 and cannot progress any more.

I have been struggling with installation for two days, please help me; the final paragraph of the installation guide needs help

Hello,
I cannot understand the last part of the installation guide.
I opened gedit and pasted the test code, with the image path replaced with "/media/koj/Ubuntu-Server 16.04.1 LTS amd64",
and I activated External Tools in gedit, so "Build" appeared under "External Tools".
When I pressed Build in gedit, I got:
Running tool: Build

No Makefile found!

Done
Can you explain why this happens? And what is meant below, where it says "configure your editor to include the build and OpenCV repositories"?

[screenshot attached: ubuntu 64-bit 3 -2016-08-29-16-41-21]

Make sure that you replace the image path with an actual image path that exists! Then, configure your editor to include the build and installed OpenCV repositories. Under Ubuntu, you can do that by pointing your linker to

pkg-config opencv --libs
which will auto grab all the dependencies. This can be done inside your IDE on the linker settings tab or by executing the following command

$ g++ test.cpp -o test `pkg-config opencv --libs`
using the command line. Since all libraries are installed into /usr/local/lib/, you do not need to explicitly tell that to the system, since Ubuntu can handle that itself.

Chapter 4 : undefined reference to `cv::Stitcher::createDefault(bool)'

I have added:
1. the "ndk.dir" variable in the local.properties file
2. changed the absolute path to the OpenCV SDK

In the jni folder we have the com_example_panorama_NativePanorama.cpp file.
In that we have
#include "opencv2/opencv.hpp"
#include "opencv2/stitching.hpp"

but in our jni folder we don't have OpenCV.
How can I solve this issue?

blinking eye program is not working

Hello,
I bought a PlayStation Eye and connected it to my notebook, which has VMware Pro 12 and Ubuntu 14.04 installed.
I cannot find in this book how to install the PlayStation Eye driver on my Ubuntu system, so I just connected it to a USB slot and I cannot figure out whether it is working or not.
I copied the cpp file of the blinking eye source code, made a console project in Code::Blocks,
and then built and ran it.
The first time it gave an error, so I added `pkg-config opencv --libs` to the linker settings,
and then there was no error.

When I ran it the first time, it opened a blank X terminal with no message; I waited but nothing appeared.
So I quit the terminal by clicking the X button on the terminal window, and when I went back to the project folder there was a slow-mo video file, but when I played it, it said it was empty.
So I thought there was a problem with the PlayStation Eye installation.
So I searched Google and found the following site:
http://oftw.wikidot.com/install-ps3-eye-camera-in-ubuntu
but it did not work at the second line:
"sudo patch -p1 < ps3eyeMT-2.6.31-10-generic.patch
only patch the driver in drivers/media/video/gspca/ov534.c"
I could not understand the meaning of "only patch the driver ...",
so it gave an error in the console window and I gave up on that method.
So I came back to Code::Blocks and ran it again; this time the attached screen appeared in the window.
I changed the frame rate in the source code to 120, but the same window appeared again.
How can I install the PlayStation Eye driver on my computer, and why is this slow-mo video empty?
How do I know my PlayStation Eye is working in Ubuntu?
What does the dial on the PlayStation Eye lens do? Is it a power button?
[photo attached: dsc_0225]

Chapter 5 object_annotation not working as expected

Hello, I am trying to follow along with Chapter 5 and am having problems with the object annotation code. I am using Ubuntu 14.04.05 and have installed OpenCV 3.1. I cloned the entire OpenCVBlueprints repo, and once inside the object annotation dir I compiled the code by doing

cmake .
make

Then I made sure that I had some images handy in the ~/Pictures dir.

When I execute the command

./object_annotation -images ~/Pictures/ -annotations ~/Pictures/annotation.txt

the terminal simply returns to a new prompt without providing any errors or feedback.

However, if I run the tool provided by OpenCV,

./opencv_annotation -images ~/Pictures/ -annotations ~/Pictures/annotation.txt

everything works as expected.

Any Ideas?

Thanks,

Ryan

building test.cpp

Hi,

Under Ubuntu, you can do that by pointing your linker to

pkg-config opencv --libs

which will auto grab all the dependencies.

Is this what you meant?

$ g++ test.cpp -o test `pkg-config opencv --libs`

If so, writing it that way would be more helpful, though it might be obvious.

Chapter 7 - UnboundLocalError: local variable 'rot_end' referenced before assignment

Good day,

I'm getting the following error in Chapter 7: UnboundLocalError: local variable 'rot_end' referenced before assignment

This seems to happen in stable.py in the following function:

def getAccumulatedRotation(w, h, theta_x, theta_y, theta_z, timestamps, prev, current, f, gyro_delay=None, gyro_drift=None, shutter_duration=None, doSub=False):

    if not gyro_delay:
        gyro_delay = 0

    if not gyro_drift:
        gyro_drift = (0, 0, 0)

    if not shutter_duration:
        shutter_duration = 0

    x = np.array([[1, 0, -w/2],
                      [0, 1, -h/2],
                      [0, 0, 0],
                      [0, 0, 1]])
    A1 = np.asmatrix(x)
    transform = A1.copy()

    prev = prev + gyro_delay
    current = current + gyro_delay

    if prev in timestamps and current in timestamps:
        start_timestamp = prev
        end_timestamp = current
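        # note: rot_start and rot_end are never assigned on this branch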
    else:
        (rot_start, start_timestamp, t_next) = fetch_closest_trio(theta_x, theta_y, theta_z, timestamps, prev)
        (rot_end, end_timestamp, t_next) = fetch_closest_trio(theta_x, theta_y, theta_z, timestamps, current)

    gyro_drifted = (float(rot_end[0] + gyro_drift[0]),
                    float(rot_end[1] + gyro_drift[1]),
                    float(rot_end[2] + gyro_drift[2]))
    if doSub:
        gyro_drifted = (gyro_drifted[0] - rot_start[0],
                        gyro_drifted[1] - rot_start[1],
                        gyro_drifted[2] - rot_start[2])
    R = getRodrigues(gyro_drifted[1], -gyro_drifted[0], -gyro_drifted[2])

    x = np.array([[1.0, 0, 0, 0],
                     [0, 1.0, 0, 0],
                     [0, 0, 1.0, f],
                     [0, 0, 0, 1.0]])
    T = np.asmatrix(x)
    x = np.array([[f, 0, w/2, 0],
                  [0, f, h/2, 0],
                  [0, 0, 1, 0]])
    transform = R*(T*transform)

    A2 = np.asmatrix(x)

    transform = A2 * transform

    return transform

Any ideas why rot_end would be empty or am I missing something related to Python3?
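
A possible culprit, judging only from the code quoted above: rot_start and rot_end are assigned only in the else branch, so whenever both prev and current are found in timestamps, the later rot_end / rot_start lookups hit an unbound local. One defensive rewrite, sketched below, is to fetch the trios unconditionally; this is an assumption about the intended behaviour (it presumes fetch_closest_trio returns the sample itself for an exact timestamp), not the author's fix:

    # Sketch only: always resolve the closest gyro samples, then keep the
    # exact timestamps when prev/current are direct hits.
    (rot_start, start_timestamp, _) = fetch_closest_trio(theta_x, theta_y, theta_z, timestamps, prev)
    (rot_end, end_timestamp, _) = fetch_closest_trio(theta_x, theta_y, theta_z, timestamps, current)

    if prev in timestamps and current in timestamps:
        start_timestamp = prev
        end_timestamp = current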

UnblinkingEye gives an error on Ubuntu 16.04 LTS installed without a virtual machine

Please try the unblinking eye code with Ubuntu 16.04 again.
Changing the camera index to 0 and 1 gives the same result,
and now the Cheese program is working with photo, video and burst mode.
[photos attached: dsc_0255, dsc_0256]

And another question, about the ASUS Xtion PRO running page:
I could not understand the following command. How should I type this command, in which folder, and what does the backslash at the end of each line mean?
[photo attached: dsc_0259]

ASUS Xtion PRO cannot be stopped in Chapter 1 Infravision

Hello again,
I installed the ASUS Xtion PRO.
One thing I noticed was that the command line from the book regarding the USB 3.0 / USB 2.0 problem did not work;
I tried wine cmd, running the usbupdate.bat file, and that worked.
Another issue was installing OpenNI2: the installation guide may be outdated, since openjdk-6-jdk no longer exists, and openjdk-8-jre is what I installed instead.
When I added all the library files in the linker settings, I added the 7 that were in the folder pointed to by the author in the book,
and included the two search directories, OpenNI2 and OpenCV,
and ran Infravision, and it ran fine.
But when I wanted to exit the program, I clicked the red button on the infrared camera screen,
but it reappeared soon and the console showed some error message about a USB priority problem.
I tried it both on the VM share and on the Ubuntu-only computer, and both had exactly the same problem.
When I closed the console screen, the infrared filming screen and console were gone, but the red light on the Xtion PRO, which I think indicates that the IR light is emitting, was still on.
So I had to pull the USB cord out of my computer to turn off the red light of the ASUS Xtion PRO.
Do you know why this happens?
I attached some photos.
[photos attached: dsc_0291, dsc_0292]

Required OpenCV version for fingerprint_process.cpp

Hello,

I am trying to get the fingerprint functionality up and running (OpenCVBlueprints/chapter_6/source_code/fingerprint/fingerprint_process/fingerprint_process.cpp). The file shows OpenCV2 is required.

I tried using OpenCV 2.4; however, it did not work with the fingerprint algorithm. Could someone point me to the correct OpenCV version I should use?

Thanks,
Mike

Chapter 5 Text and Code Don't Match

The book text on page 190 calls for an -images switch, but the code is looking for an -output switch.

Book:

First, parse the content of your positive image folder to a file (by using the supplied folder_listing software inside the object annotation folder), and then follow this by executing the annotation command:
./folder_listing -folder <folder> -images <images.txt>

Code:

Line 38 in folder_listing.cpp

        }else if( !strcmp( argv[i], "-output" ) )
        {
            output = argv[++i];
        }

Chapter 7 - Error in Python Code

Good day,

I'm currently working on Chapter 7. I've created an Android application to capture video and write to the gyro file.

When I try to use the captured video and gyro file with stable.py I get the following error:

  File "stable.py", line 572, in <module>
    stabilize_video(mp4path, csvpath)
  File "stable.py", line 548, in stabilize_video
    rot = accumulateRotation(frame, delta_theta[0], delta_theta[1], delta_theta[2], timestamps, previous_timestamp, prev, focal_length, gyro_delay, gyro_drift)
  File "stable.py", line 237, in accumulateRotation
    output = cv2.perspectiveTransform(np.array([[pixel_x, pixel_y]], dtype="float32"), transform)
cv2.error: /tmp/opencv-20170224-1869-10nlf6f/opencv-2.4.13.2/modules/core/src/matmul.cpp:1943: error: (-215) scn + 1 == m.cols && (depth == CV_32F || depth == CV_64F) in function perspectiveTransform

Any idea as to what the problem could be?
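
Not certain without seeing the full script, but that assertion means the number of channels of the point array plus one must equal the number of columns of the transform matrix, and the points must be float32 or float64. An array built as np.array([[pixel_x, pixel_y]], dtype="float32") has shape (1, 2) and is treated as a single-channel matrix, which cannot satisfy the check for a 3x3 transform. A minimal, self-contained sketch of the shape cv2.perspectiveTransform accepts for a 3x3 matrix (an illustration, not the book's stable.py):

import cv2
import numpy as np

# cv2.perspectiveTransform wants the points as an N x 1 x 2 float array
# (two channels) when the matrix is 3x3, so that scn + 1 == m.cols holds.
H = np.eye(3, dtype=np.float32)                     # any 3x3 homography
pts = np.array([[10.0, 20.0], [30.0, 40.0]], dtype=np.float32)

pts_ok = pts.reshape(-1, 1, 2)                      # shape (2, 1, 2): accepted
print(cv2.perspectiveTransform(pts_ok, H).reshape(-1, 2))

# Passing the raw (2, 2) array instead is read as a single-channel matrix and
# triggers the same "(-215) scn + 1 == m.cols" error as above.

If the transform passed in is not 3x3 (for example 3x4), the points would need a different number of channels, so it is also worth printing transform.shape at the failing call.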

camera stays busy after terminating set_motion_trap.py

Hi,

After terminating set_motion_trap.py by interrupting it with Ctrl+C, my camera stays busy and won't respond, giving the following error.

$ gphoto2 --list-all-config

*** Error ***              
An error occurred in the io-library ('Could not claim the USB device'): Could not claim interface 0 (Device or resource busy). Make sure no other program (gvfs-gphoto2-volume-monitor) or kernel module (such as sdc2xx, stv680, spca50x) is using the device and you have read/write access to the device.
*** Error (-53: 'Could not claim the USB device') ***       

For debugging messages, please use the --debug option.
Debugging messages may help finding a solution to your problem.
If you intend to send any error or debug messages to the gphoto
developer mailing list <[email protected]>, please run
gphoto2 as follows:

    env LANG=C gphoto2 --debug --debug-logfile=my-logfile.txt --list-all-config

Please make sure there is sufficient quoting around the arguments.

After manually unplugging it and plugging it back in, it starts responding again.

Is there any way to cleanly terminate the program? Or is something wrong?

Thanks.
