Comments (12)
I was actually able to make a reasonable improvement on that test case you provided, @jchennales. Using skimage.exposure.match_histograms
with just the first frame of the video as the reference, here is what the result looked like: issue53-corrected.zip (no more false positives, either)
The color space conversion just uses cv2.COLOR_BGR2GRAY right now, so using HSV or CIE LAB would probably help a bit too. This proves the idea can work in certain cases. I still need to figure out how to update the reference over time, as abrupt changes to the matching could themselves cause false positives.
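For reference, the matching step described above can be sketched in plain NumPy (a minimal CDF-based implementation; the actual experiment used skimage.exposure.match_histograms with the first frame as the reference):

```python
import numpy as np

def match_histogram(frame, reference):
    # Classic CDF matching: remap each gray level in `frame` to the level in
    # `reference` whose cumulative frequency is closest.
    src_hist = np.bincount(frame.ravel(), minlength=256).astype(np.float64)
    ref_hist = np.bincount(reference.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    # 256-entry lookup table mapping source levels to reference levels.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[frame]

# A uniformly brightened copy is pulled back toward the reference frame,
# which is what suppresses brightness-only "motion".
rng = np.random.default_rng(0)
reference = rng.integers(0, 200, size=(64, 64), dtype=np.uint8)
brightened = np.clip(reference.astype(np.int16) + 40, 0, 255).astype(np.uint8)
corrected = match_histogram(brightened, reference)
```

Because the reference here is fixed, this sketch has the same open problem noted above: the reference must eventually be updated as the scene changes.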
from dvr-scan.
Hey @bossjl,
Hoping some others more familiar with methods to resolve this can chime in. Ideally the brightness changes could be smoothed out with another method (e.g. histogram matching), but what you suggest also seems feasible. It would be the same thing as setting both a minimum and a maximum threshold (versus what happens now, which is just a minimum). I'm unsure how robust a solution that would be, so I would definitely like to do some more research and hear some other ideas on the topic first.
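One way such a maximum threshold could reject frame-wide brightness changes is sketched below. This is not DVR-Scan's actual scoring; the function name and default fractions are made up for illustration, and the gate here acts on the fraction of foreground pixels:

```python
import numpy as np

def event_triggered(fg_mask, min_frac=0.0015, max_frac=0.5):
    # Hypothetical gate: require the foreground mask to cover more than a
    # minimum but less than a maximum fraction of the frame. A frame-wide
    # brightness change lights up nearly every pixel and fails the upper bound.
    frac = np.count_nonzero(fg_mask) / fg_mask.size
    return min_frac < frac < max_frac

# A small moving object passes the gate; a global exposure shift does not.
motion_mask = np.zeros((240, 320), dtype=bool)
motion_mask[100:140, 150:200] = True              # ~2.6% of the frame
exposure_mask = np.ones((240, 320), dtype=bool)   # whole frame "changed"
```

The robustness concern above applies directly: large but legitimate motion (e.g. a vehicle filling the region of interest) could also exceed the upper bound and be missed.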
As-is, I think anyone should be able to add a maximum threshold argument, so I'll tag this as help wanted for now.
Thanks for the suggestion!
@bossjl do you happen to have any sample videos exhibiting this that I could use for testing? Thanks!
Did some quick research into this; it may be worth investigating whether OpenCV's exposure compensation will work here. It's primarily intended for stitching different images together, so it might be overkill - I need to do some performance checks.
Other than that, we could consider histogram matching as a first pass, occasionally updating the reference histogram when some measure - say, the average brightness of the frame - changes by a certain threshold. The downside is that this might be a relatively large performance hit, so it would likely need to be an option rather than always enabled.
Edit: A relatively simple solution might be to average each frame and use that to maintain a rolling average, then multiply all pixels in the frame by the amount required to make its average match the rolling average. Too simple, as it turns out - this causes even worse results, since it also affects light areas that may not be underexposed. I need to consider some kind of exposure compensation algorithm that can apply different corrections to different parts of the frame.
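The rolling-average idea above might look something like this (class and parameter names are hypothetical):

```python
import numpy as np

class RollingBrightness:
    # Hypothetical global-gain compensator: scale each frame so its mean
    # matches an exponential rolling average of recent frame means.
    def __init__(self, smoothing=0.05):
        self.smoothing = smoothing
        self.rolling_mean = None

    def compensate(self, frame):
        mean = float(frame.mean())
        if self.rolling_mean is None:
            self.rolling_mean = mean
        gain = self.rolling_mean / max(mean, 1e-6)
        out = np.clip(np.rint(frame * gain), 0, 255).astype(np.uint8)
        # Update after compensating, so a sudden jump gets corrected rather
        # than immediately absorbed into the average.
        self.rolling_mean += self.smoothing * (mean - self.rolling_mean)
        return out
```

The flaw noted above is visible in the single gain value: it multiplies already-bright pixels just as much as dark ones, washing out highlights, which is why a spatially varying correction would be needed.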
Hi @Breakthrough, I'm interested in this too and looking for solutions. If you want, I can provide you with a sample video for testing (it contains both real motion and brightness changes caused by sun and clouds).
Hello @gch1p, yes if you could provide a sample that would be very useful. Thank you!
Alright, can you give me your email, or some other contact where I can send it privately?
Is it possible to post it publicly here? If so, I would like to add it to the repo for use as a future development/test case. If not, that's understandable. Thanks!
Hi, hoping to revive this. Attaching a false positive from clouds passing by; you can use it at will.
May I suggest applying the exposure compensation technique not on every frame, but only on the ones that have triggered the "normal" detection method? Like a second-phase check.
BrightnessFalsePositive.zip
Great program! Keep at it :)
btw, this was generated with --threshold 0.85 -a 144 911 445 907 474 1080 141 1080
(a very small region on the bottom left driveway)
Thanks for the sample! Exposure compensation is definitely the right way to go; however, it might be very expensive if it needs to be done as a second pass. This is because you would need to run background subtraction twice on each frame: once to detect motion without updating the model, and again just to update the model. I'm not opposed to pursuing this solution, but I would like to think about alternatives that might provide better performance too.
That being said, it would probably not be too difficult to try something like you suggested; DVR-Scan's new internal API is quite hackable in this regard:
https://github.com/Breakthrough/DVR-Scan/blob/main/dvr_scan/subtractor.py
I'd be happy to see any PRs adding support for this, even if it isn't that efficient.
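The two-pass concern can be illustrated with a toy background model (NumPy only, not DVR-Scan's actual subtractor; all names here are made up): scoring a frame against a frozen model and folding the frame into the model are separate passes, so a second-phase check forces both on every frame:

```python
import numpy as np

class RunningAverageSubtractor:
    # Toy background model: the background is an exponential running average
    # of past frames, and the foreground mask is a simple absolute-difference
    # threshold against it.
    def __init__(self, alpha=0.05, threshold=25):
        self.alpha = alpha
        self.threshold = threshold
        self.background = None

    def mask(self, frame):
        # First pass: score the frame against the *frozen* background.
        if self.background is None:
            self.background = frame.astype(np.float64)
        return np.abs(frame - self.background) > self.threshold

    def update(self, frame):
        # Second pass: fold the frame into the background model. A
        # second-phase exposure check would run between these two calls.
        self.background += self.alpha * (frame - self.background)
```

With OpenCV's real subtractors the split is similar, at the cost of evaluating the model twice per frame rather than once.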
I wonder if a better solution might lie in histogram correction, or in keeping a running average of the current exposure level and using it to compensate frames as they are fed into the pipeline. Thoughts?
Edit: I tried some of the methods outlined in this answer, but had some difficulty making them consistent:
https://stackoverflow.com/questions/56905592/automatic-contrast-and-brightness-adjustment-of-a-color-photo-of-a-sheet-of-pape/56909036
It might also be worth looking into how OpenCV does exposure compensation for image stitching:
https://github.com/opencv/opencv/blob/ae347ab493110eb774189fa6e533838ad498da5d/modules/stitching/src/stitcher.cpp#L204
There are a few other parameters the background models can set which I should add config file options for. In particular, I suspect the history size is quite relevant, as it likely needs to be adjusted based on framerate. I tried lowering the history to 200 (default 500) and increasing the variance threshold to 100 (default 16), and had some success reducing false positives. These parameters are described in more detail here:
https://docs.opencv.org/3.4/d7/d7b/classcv_1_1BackgroundSubtractorMOG2.html#ab8bdfc9c318650aed53ecc836667b56a
Adding config file options for these has long been on my TODO list, but doing so likely won't fix all cases like this one, where there are rapid brightness changes. After giving it some more thought, I suspect histogram matching might be the way to go. In the processing pipeline, the input to the subtractor model must be a single-channel image. To filter out brightness changes across frames, a histogram could be calculated for each frame and used to maintain an average histogram over the past N frames. Each frame could then be corrected before subtraction by shifting its pixel values so that the resulting histogram matches the calculated average.
This should make things more robust to sudden brightness changes covering a large portion of the frame, while still preserving enough local contrast for areas with motion to still be distinguishable. I haven't had much time to prototype this yet, but it should be doable with reasonable performance.
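A sketch of that pipeline stage, assuming a grayscale frame and a simple CDF-based remap toward the average histogram of the last N frames (class and method names are made up, not DVR-Scan's API):

```python
from collections import deque

import numpy as np

class HistogramStabilizer:
    # Hypothetical pipeline stage: remap each single-channel frame so its
    # histogram matches the average histogram of the last N frames, before
    # the frame reaches the background subtractor.
    def __init__(self, history=30):
        self.histograms = deque(maxlen=history)

    def stabilize(self, frame):
        hist = np.bincount(frame.ravel(), minlength=256).astype(np.float64)
        self.histograms.append(hist)
        avg_hist = np.mean(self.histograms, axis=0)
        # Shift each pixel value so the frame's cumulative distribution
        # matches that of the rolling-average histogram.
        src_cdf = np.cumsum(hist) / hist.sum()
        ref_cdf = np.cumsum(avg_hist) / avg_hist.sum()
        lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
        return lut[frame]
```

Because the reference is a rolling average rather than a fixed frame, a sudden cloud-induced brightness jump is mostly cancelled, while gradual scene changes are slowly absorbed into the reference.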