Comments (7)
I kind of expected this, because it always arises when discussing codecs. The settings were taken from previous suggestions by x264 proponents.
Please add the contact you wish to be reached at for consultation.
from compare-codecs.
I agree, kierank — all settings need to be validated and subject to adjustment and re-run. There are also problems with the VP9 settings, though.
For VC / real time, any use of frame parallelism should not be allowed. Threading is fine provided it is within a frame and not across multiple frames. Also, in all the tests for a given condition, command-line parameters need to be consistent for all the clips and all points in a graph. This is not the case at the moment.
I notice some cases where two points in the same graph have different command lines and this needs to be addressed.
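That kind of inconsistency could be caught mechanically. Below is a minimal sketch, assuming each rate-distortion point is recorded as a dict with `codec`, `clip`, and `command_line` keys — a hypothetical format for illustration, not the actual compare-codecs storage layout:

```python
# Hypothetical sketch: flag graphs whose points were produced with
# differing command lines. The point format is an assumption.
from collections import defaultdict

def find_inconsistent_graphs(results):
    """Return {(codec, clip): command_lines} for every graph whose
    points do not all share a single command line."""
    lines_per_graph = defaultdict(set)
    for point in results:
        key = (point["codec"], point["clip"])
        lines_per_graph[key].add(point["command_line"])
    return {key: cmds for key, cmds in lines_per_graph.items()
            if len(cmds) > 1}

results = [
    {"codec": "x264", "clip": "foreman", "bitrate": 500,
     "command_line": "--rc-lookahead 30"},
    {"codec": "x264", "clip": "foreman", "bitrate": 1000,
     "command_line": "--rc-lookahead 60"},
]
# Flags ('x264', 'foreman') as mixing two different command lines.
print(find_inconsistent_graphs(results))
```

A check like this could run over the stored results and fail loudly before any graph is published.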
As things stand, I think this can only be seen as a placeholder for future work and comment, and I don't think the results (as of yet) are that meaningful.
from compare-codecs.
The current numbers and graphs are pointed at "what are the best numbers that can be achieved by this codec for these test conditions".
It makes sense to do other graphs that go after "what are the numbers that result from holding a certain configuration largely constant across multiple tests", but that's not what the current graphs are looking for.
from compare-codecs.
I certainly think it makes sense to note if a codec could not achieve the test criteria (i.e. for a given point it was too slow or did not comply with the required buffering constraints), but otherwise I think point-to-point tweaks of the basic parameters are problematic.
For example, changes to the real-time cpu-speed setting in VP9 will only be valid for the particular machine you ran on, on a particular day. A few days later, with a new optimization patch, or run on a slightly faster or slower machine, the results will change RADICALLY. Also, I noticed a couple of cases where more basic differences were evident, such as one point using VBR and the other CBR.
In the x264 case, for example, I noticed some real-time points using lookahead 30 and others in the same graph using lookahead 60. I am not sure of the exact impact of this, other than that lookahead is not sensible for live / VC (though fine for a near-live stream).
I would prefer to see an agreed command line per test case for each codec, and then the best-case results (with failure points highlighted if need be for real time — but in that case the detailed hardware specification of the test machine needs to be given).
In any real-world case (like YouTube) you have to come up with a set of settings to use. You can't change them on a point-by-point and clip-by-clip basis.
If we have a command line for a particular use case interested parties can comment or suggest improvements, and then hopefully reach consensus. If every point on every clip is hand tuned I think this will be very hard to achieve.
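One way to make the agreed-command-line idea concrete is a single lookup table keyed by codec and use case, which every point in every graph must draw from. A minimal sketch; the flag values below are illustrative placeholders, not validated or agreed settings:

```python
# Hypothetical agreed configurations per (codec, use case).
# The flag strings are placeholders for whatever the participants
# eventually reach consensus on, not recommended values.
USE_CASE_CONFIGS = {
    ("x264", "vod"): "--preset veryslow --tune psnr",
    ("x264", "realtime"): "--preset ultrafast --tune zerolatency",
    ("vp9", "vod"): "--good --cpu-used=0",
    ("vp9", "realtime"): "--rt --cpu-used=5 --error-resilient=1",
}

def command_for(codec, use_case):
    """Return the single agreed command line, or fail loudly if no
    configuration has been agreed for this codec / use case yet."""
    try:
        return USE_CASE_CONFIGS[(codec, use_case)]
    except KeyError:
        raise ValueError(
            "no agreed configuration for %s / %s" % (codec, use_case))
```

With a table like this, per-point tuning becomes impossible by construction: the only way to change a point is to change the agreed entry, which is exactly where review and consensus would happen.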
I also think we should extend the test set, and perhaps have a different test set for the real-time VC case. Just my $0.02 worth.
from compare-codecs.
I have now implemented (not yet committed) an "information" sub-page that will give you one graph for each configuration used in the overall comparison, plus some visualization help (the choice to graph by score or CPU time used, rather than by PSNR) that lets the viewer understand why a particular configuration was considered "best" at a given rate target. This might help.
from compare-codecs.
I have now added a single-configuration graph set as the first result in the lists - PR #26.
Will close this bug once that PR lands.
from compare-codecs.
PR landed. Closing bug.
from compare-codecs.
Related Issues (20)
- Make Numpy warnings into errors
- Need to detect and delete illegal configurations from storage
- Consider encoding speed in comparison
- RT mode configurations should disable multi-pass modes
- ChangeValue should be CreateVariant
- Share results from different sites
- AllEncoderFilenames returns bare filenames, not paths
- MJPEG issue at certain settings
- Clean out or ignore illegal parameters from shared repo
- Encoder version should be stored in results
- Database should store encodes with same parameter, differing versions
- Pages should offer a way to display different encoder versions' results
- The run_all_tests script doesn't pick up all tests
- Support .y4m
- test_*BlackFrame* in various codecs can be centralized
- Verify_scores needs to use old DB to find "best"
- Split encoder.py
- Change "raise encoder.Error" into more specific error classes
- Use aq-mode=3 as VP9 parameter in RT
- Investigate performance comparisons between VP9 in 1.3 and 1.6