shadow1runner / qgroundcontrol

This project is forked from mavlink/qgroundcontrol

Repository size: 187.04 MB

QGroundControl Ground Control Station with Obstacle Detection

Home Page: https://bitbucket.org/shadow1runner/uavobstacledetection/

License: Other

Python 0.05% C++ 65.53% QML 9.69% QMake 0.71% Java 0.81% Shell 0.06% NSIS 0.04% CMake 0.05% C 22.35% CSS 0.18% Objective-C++ 0.02% PowerShell 0.01% Objective-C 0.13% Roff 0.40%

qgroundcontrol's People

Contributors

amolinap, billbonney, birchera, bkueng, crashmatt, dagar, dagoodma, dogmaphobic, dongfang, donlakeflyer, drton, gregd72002, hugovincent, jgoppert, johnflux, julianoes, lorenzmeier, malcom2073, malife, natergator, ndousse, oberion, pixhawk-students, pritamghanghas, rjehangir, tcanabrava, tecnosapiens, thomasgubler, treymarc, tstellanova


qgroundcontrol's Issues

UX Improvements: Resetting CA

  • resetting CA should never lead to deletion of the old CA frames
  • maybe an auto-resume functionality should be included?

Throttle FrameGrabber/OwnFlowHandler: the FrameGrabber provides too many frames and overwhelms OwnFlow

Multiple Options:

  • Try connecting the signal/slot using a blocking queued connection (Qt::BlockingQueuedConnection) - this forces the FrameGrabberThread to wait until the actual frame is consumed. This, however, also implies that any intermediate frames are not consumed from the capturing device (unless OpenCV buffers them?) - worth a try
  • Implement throttling according to the current fps by connecting to the already emitted OwnFlow/timingStatistics event
  • Hack: simply skip frames, e.g. 9 out of every 10

All of the aforementioned options influence the KF quite heavily, so it needs to be investigated how heavily (a wiring sketch for options 1 and 3 follows below).

The alternative option: move the quad very slowly :S
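A minimal sketch of options 1 and 3, assuming hypothetical signal/slot names (FrameGrabber::frameReady, OwnFlowHandler::processFrame) that may not match the actual project API:

```cpp
#include <QObject>
#include <opencv2/core.hpp>

// Option 3 ("hack"): a small relay that forwards only every n-th frame.
// Note: cv::Mat must be registered as a metatype (qRegisterMetaType<cv::Mat>())
// before it can cross a queued signal/slot connection.
class FrameSkipper : public QObject {
    Q_OBJECT
public:
    explicit FrameSkipper(int keepEveryNth, QObject* parent = nullptr)
        : QObject(parent), m_n(keepEveryNth) {}

signals:
    void frameForwarded(const cv::Mat& frame);

public slots:
    void onFrame(const cv::Mat& frame) {
        if (m_counter++ % m_n == 0)          // e.g. m_n == 10 keeps 1 out of 10 frames
            emit frameForwarded(frame);
    }

private:
    int m_n;
    quint64 m_counter = 0;
};

// Option 1 would instead wire the grabber directly to OwnFlow with a blocking
// queued connection, so the grabber thread waits until the frame is consumed:
//
//   QObject::connect(frameGrabber, &FrameGrabber::frameReady,
//                    ownFlowHandler, &OwnFlowHandler::processFrame,
//                    Qt::BlockingQueuedConnection);
//
// (Placeholder names; Qt::BlockingQueuedConnection must only be used across
// threads, otherwise it deadlocks.)
```

The FrameSkipper would sit between the grabber and OwnFlow with two ordinary connects, so the skipping logic stays out of both classes.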

Ideas

  • replacing Farneback with the filter-based approach should deliver superior performance; calculating the counterpart to the inlier ratio should not be necessary, as confidence values should be obtained by default
  • OF colliding: use the magnitude as a hint for generating the OF vote - so far it is not considered at all; alternatively, filtering the OF for low values should deliver ok-ish results as well (see the sketch below)
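A minimal sketch of the low-magnitude filtering idea for a dense flow field; the function name and threshold are placeholders, and the field is assumed to be CV_32FC2 as produced by cv::calcOpticalFlowFarneback:

```cpp
#include <vector>
#include <opencv2/core.hpp>

// Zero out dense optical-flow vectors whose magnitude is below a threshold,
// so that near-static regions do not contribute to the collision vote.
cv::Mat filterLowMagnitudeFlow(const cv::Mat& flow, float minMagnitude)
{
    std::vector<cv::Mat> channels(2);
    cv::split(flow, channels);                 // channels[0] = dx, channels[1] = dy

    cv::Mat magnitude;
    cv::magnitude(channels[0], channels[1], magnitude);

    cv::Mat mask = magnitude >= minMagnitude;  // keep only sufficiently large vectors

    cv::Mat filtered = cv::Mat::zeros(flow.size(), flow.type());
    flow.copyTo(filtered, mask);
    return filtered;
}
```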

UI is too slow wrt. frame display

While the OwnFlow processing seems to be ok-ish (speed-wise):

Frame # 176
Measured FoE:
    ( 86 ,  81 )
    Inliers:  472  /  200000    0.236 %
Frame # 177
Measured FoE:
    ( 88 ,  86 )
    Inliers:  552  /  200000    0.276 %
Frame # 178
Measured FoE:
    ( 99 ,  82 )
    Inliers:  780  /  200000    0.39 %
Collision level lowered by 1 to  6
Emitting collisionLevelChanged(), new collision level:  6
Frame # 179
Unknown command 11
Measured FoE:
    ( 99 ,  82 )
    Inliers:  663  /  200000    0.3315 %
Collision level raised by 1 to  7
Emitting collisionLevelChanged(), new collision level:  7
Frame # 180
Unknown command 11
Measured FoE:
    ( 104 ,  82 )
    Inliers:  728  /  200000    0.364 %
Collision level raised by 1 to  8
Emitting collisionLevelChanged(), new collision level:  8
Frame # 181
Unknown command 11
Measured FoE:
    ( 103 ,  85 )
    Inliers:  732  /  200000    0.366 %
Collision level raised by 1 to  9
Frame # 182
Unknown command 11
Measured FoE:
    ( 97 ,  85 )
    Inliers:  849  /  200000    0.4245 %
Collision level lowered by 1 to  7
Emitting collisionLevelChanged(), new collision level:  7

This is not the case in the UI, however (nor for the frames persisted on the SSD, which contain the same number of frames):
only 139 images out of the 287 reported in the log file are displayed.
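One possible way to keep the display from falling behind, sketched under the assumption that frames currently queue up faster than the UI can paint them: a one-slot buffer that always holds only the newest frame, so the UI drops stale frames instead of stalling (all names are hypothetical, not existing project API):

```cpp
#include <QMutex>
#include <QMutexLocker>
#include <opencv2/core.hpp>

// One-slot buffer: the processing side overwrites, the UI side takes the
// newest frame whenever it is ready to paint. Intermediate frames are dropped
// instead of queueing up and stalling the UI thread.
class LatestFrameBuffer {
public:
    void publish(const cv::Mat& frame) {
        QMutexLocker lock(&m_mutex);
        frame.copyTo(m_latest);   // deep copy so the producer may reuse its buffer
        m_hasFrame = true;
    }

    bool take(cv::Mat& out) {
        QMutexLocker lock(&m_mutex);
        if (!m_hasFrame)
            return false;
        out = m_latest;
        m_hasFrame = false;
        return true;
    }

private:
    QMutex m_mutex;
    cv::Mat m_latest;
    bool m_hasFrame = false;
};
```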

FOE might lie outside of the image -- these regions are discarded though

When moving along a corridor with the camera pointing e.g. to the left, the OF vectors originate in an FOE outside the image (to the right). This method, however, might consider a different 'hot' spot as the FOE, which might confuse the Kalman filter, even though the inlier ratio is used as a confidence value.

Just keep this in mind

Filter Optical Flow for outliers

@inproceedings{zingg2010mav,
title={MAV navigation through indoor corridors using optical flow},
author={Zingg, Simon and Scaramuzza, Davide and Weiss, Stephan and Siegwart, Roland},
booktitle={Robotics and Automation (ICRA), 2010 IEEE International Conference on},
pages={3361--3368},
year={2010},
organization={IEEE}
}

The authors used a two-step filtering approach:

A simple threshold filter removes too large optical flow
amplitudes. In a further step, an angular criterion is checked.
Optical flow has to be tangential to a circle with its center at
the center of the image with a deviation of up to 50°. This
threshold seems to be large, it has been chosen experimentally
and proved to work though. It is necessary, since the
optical flow has not an exact circular shape. If the threshold
is chosen smaller, optical flow close to the direction of travel
might be filtered out, even if it was not wrongly matched.
A remarkable amount of wrong matches could be extracted
using these two criteria.

Todo:

  • Is filtering necessary with a dense optical flow field?
  • Analyze OF with and without filtering
  • how do they determine the center of their circle? voting?
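A minimal sketch of the two-step filter described in the quote, applied to a dense flow field; the 50° tolerance and the image-center assumption are taken directly from the paper excerpt, while all names and the magnitude threshold are placeholders:

```cpp
#include <algorithm>
#include <cmath>
#include <opencv2/core.hpp>

// Two-step outlier filter after Zingg et al.:
// 1) drop flow vectors whose magnitude exceeds a threshold,
// 2) drop vectors deviating more than ~50 degrees from the tangent of a
//    circle centred at the image centre.
cv::Mat filterFlowOutliers(const cv::Mat& flow, float maxMagnitude,
                           float maxAngleDeviationDeg = 50.0f)
{
    const cv::Point2f center(flow.cols / 2.0f, flow.rows / 2.0f);
    const float maxDevRad = maxAngleDeviationDeg * static_cast<float>(CV_PI) / 180.0f;

    cv::Mat filtered = cv::Mat::zeros(flow.size(), flow.type());
    for (int y = 0; y < flow.rows; ++y) {
        for (int x = 0; x < flow.cols; ++x) {
            const cv::Point2f f = flow.at<cv::Point2f>(y, x);
            const float mag = std::hypot(f.x, f.y);
            if (mag <= 0.0f || mag > maxMagnitude)
                continue;                                   // step 1: magnitude threshold

            // The tangent of the circle through (x, y) is perpendicular to the
            // radius vector from the image centre.
            const cv::Point2f radius(x - center.x, y - center.y);
            const cv::Point2f tangent(-radius.y, radius.x);
            const float tangentNorm = std::hypot(tangent.x, tangent.y);
            if (tangentNorm < 1e-6f)
                continue;                                   // undefined at the centre

            const float cosAngle = std::abs(f.x * tangent.x + f.y * tangent.y)
                                   / (mag * tangentNorm);
            const float deviation = std::acos(std::min(1.0f, cosAngle));
            if (deviation <= maxDevRad)
                filtered.at<cv::Point2f>(y, x) = f;         // step 2: angular criterion
        }
    }
    return filtered;
}
```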

Remove rotational information generated by the MAV

It influences the optical flow with 'false' information; this needs to be counteracted.

A very promising approach is found in section 'B. Depth Estimation in Straight Flight' of:

Zingg, Scaramuzza, Weiss, Siegwart: "MAV navigation through indoor corridors using optical flow", ICRA 2010 (full BibTeX entry cited above).

Notice, however, that the authors state that:

If the IMU data are noisy, and therefore not precise enough, the
compensation for rotational effects cannot work properly and
produces wrong results.
Especially, inaccurate information
of the yaw angle can cause wrong sideways depth estimations,
since its effect is a decrease of the optical flow
magnitude on one side of the MAV and an increase of the
optical flow on the other side.
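A minimal sketch of rotational compensation using IMU angular rates, based on the standard small-rotation motion-field model rather than the paper's exact formulation; the focal length, the principal-point-at-centre assumption, and the rate sign conventions are assumptions that must be matched to the actual camera/IMU mounting:

```cpp
#include <opencv2/core.hpp>

// Subtract the flow component induced purely by the camera's rotation, using
// the standard motion-field model (Longuet-Higgins/Prazdny). omegaX/omegaY/
// omegaZ are angular rates in radians per frame interval.
cv::Mat derotateFlow(const cv::Mat& flow, float focalLengthPx,
                     float omegaX, float omegaY, float omegaZ)
{
    const float f = focalLengthPx;
    const cv::Point2f c(flow.cols / 2.0f, flow.rows / 2.0f);  // assumed principal point

    cv::Mat derotated = flow.clone();
    for (int row = 0; row < flow.rows; ++row) {
        for (int col = 0; col < flow.cols; ++col) {
            const float x = col - c.x;
            const float y = row - c.y;

            // Rotational part of the motion field at (x, y).
            const float uRot = (x * y / f) * omegaX - (f + x * x / f) * omegaY + y * omegaZ;
            const float vRot = (f + y * y / f) * omegaX - (x * y / f) * omegaY - x * omegaZ;

            cv::Point2f& d = derotated.at<cv::Point2f>(row, col);
            d.x -= uRot;   // what remains is (approximately) translational flow
            d.y -= vRot;
        }
    }
    return derotated;
}
```

As the quote warns, the result is only as good as the IMU rates: noisy yaw in particular would leave an asymmetric residual in the derotated field.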

Create QML Overlay for displaying the optical flow information

The optical flow information should be displayed analogously to the VideoStreaming functionality already present in QGC, that is, it should be possible to:

  • click it to make it focusable while the flight display moves to the lower left corner (and vice versa)
  • display all the important information (divergence, inliers, FOE in pixels (with and without Kalman filtering))
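A minimal sketch of how these values could be exposed to such a QML overlay via Q_PROPERTY, following the pattern QGC uses elsewhere; all class and property names are placeholders, not existing QGC API:

```cpp
#include <QObject>
#include <QPointF>

// Placeholder backend object for the QML overlay; once registered with the
// QML engine, the overlay can bind to divergence, inlier ratio and the FOE
// (raw and Kalman-filtered).
class OpticalFlowDisplayModel : public QObject {
    Q_OBJECT
    Q_PROPERTY(double  divergence   READ divergence   NOTIFY statsChanged)
    Q_PROPERTY(double  inlierRatio  READ inlierRatio  NOTIFY statsChanged)
    Q_PROPERTY(QPointF foe          READ foe          NOTIFY statsChanged)
    Q_PROPERTY(QPointF foeFiltered  READ foeFiltered  NOTIFY statsChanged)

public:
    double  divergence()  const { return m_divergence; }
    double  inlierRatio() const { return m_inlierRatio; }
    QPointF foe()         const { return m_foe; }
    QPointF foeFiltered() const { return m_foeFiltered; }

public slots:
    void update(double divergence, double inlierRatio,
                QPointF foe, QPointF foeFiltered) {
        m_divergence  = divergence;
        m_inlierRatio = inlierRatio;
        m_foe         = foe;
        m_foeFiltered = foeFiltered;
        emit statsChanged();
    }

signals:
    void statsChanged();

private:
    double  m_divergence  = 0.0;
    double  m_inlierRatio = 0.0;
    QPointF m_foe;
    QPointF m_foeFiltered;
};
```

After exposing an instance with QQmlContext::setContextProperty(), the overlay could bind e.g. `text: flowModel.inlierRatio` and update automatically on statsChanged().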
