
Introduction

Teaser image

ꟻLIP: A Tool for Visualizing and Communicating Errors in Rendered Images (v1.4)

By Pontus Ebelin and Tomas Akenine-Möller, with Jim Nilsson, Magnus Oskarsson, Kalle Åström, Mark D. Fairchild, and Peter Shirley.

This repository holds implementations of the LDR-ꟻLIP and HDR-ꟻLIP image error metrics. It also holds code for the ꟻLIP tool, presented in Ray Tracing Gems II.

The changes made for the different versions of ꟻLIP are summarized in the version list.

A list of papers that use/cite ꟻLIP.

A note about the precision of ꟻLIP.

An image gallery displaying a large quantity of reference/test images and corresponding error maps from different metrics.

Note: in v1.3, we switched to a single header (FLIP.h) for C++/CUDA for easier integration.

License

Copyright © 2020-2024, NVIDIA Corporation & Affiliates. All rights reserved.

This work is made available under a BSD 3-Clause License.

The repository distributes code for tinyexr, which is subject to a BSD 3-Clause License,
and stb_image, which is subject to an MIT License.

For individual contributions to the project, please refer to the Individual Contributor License Agreement.

For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing.

Python (API and Tool)

Setup (with pip):

cd python
pip install -r requirements.txt .

Usage:

API:
See the example script python/api_example.py. Note that the script requires matplotlib.

Tool:

python flip.py --reference reference.{exr|png} --test test.{exr|png} [--options]

See the README in the python folder and run python flip.py -h for further information and usage instructions.

C++ and CUDA (API and Tool)

Setup:

The FLIP.sln solution contains one CUDA backend project and one pure C++ backend project.

Compiling the CUDA project requires a CUDA-compatible GPU. Instructions on how to install CUDA can be found here.

Alternatively, you can build with CMake by creating a build directory and invoking CMake on the source directory (add --config Release to build the release configuration on Windows):

mkdir build
cd build
cmake ..
cmake --build . [--config Release]

CUDA support is enabled via the FLIP_ENABLE_CUDA option, which can be passed to CMake on the command line with -DFLIP_ENABLE_CUDA=ON or set interactively with ccmake or cmake-gui. The FLIP_LIBRARY option builds a library rather than an executable.

Usage:

API:
See the README.

Tool:

flip[-cuda].exe --reference reference.{exr|png} --test test.{exr|png} [options]

See the README in the cpp folder and run flip[-cuda].exe -h for further information and usage instructions.

PyTorch (Loss Function)

Setup (with Anaconda3):

conda create -n flip_dl python numpy matplotlib
conda activate flip_dl
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge
conda install -c conda-forge openexr-python

Usage:

Remember to activate the flip_dl environment through conda activate flip_dl before using the loss function.

LDR- and HDR-ꟻLIP are implemented as loss modules in flip_loss.py. An example where the loss function is used to train a simple autoencoder is provided in train.py.
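The training-loop shape that train.py follows can be sketched without PyTorch. The example below is NOT the FLIP loss itself (that requires the modules in flip_loss.py); a plain mean-squared-error stand-in, with a hand-derived gradient, marks the spot where an LDR- or HDR-ꟻLIP loss module would plug in:

```python
# Minimal sketch of a training loop, assuming a stand-in MSE loss in place
# of the FLIP loss modules. The one-weight "model" is purely illustrative.
def mse_loss(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def train_step(weights, inputs, targets, lr=0.1):
    preds = [weights[0] * x for x in inputs]   # forward pass
    loss = mse_loss(preds, targets)            # a FLIP loss would go here
    # backward pass: analytic gradient of the stand-in loss w.r.t. the weight
    grad = sum(2 * (p - t) * x
               for p, t, x in zip(preds, targets, inputs)) / len(inputs)
    weights[0] -= lr * grad                    # gradient-descent update
    return loss

weights = [0.0]
# targets follow t = 2x, so the weight should converge toward 2
losses = [train_step(weights, [1.0, 2.0], [2.0, 4.0]) for _ in range(50)]
```

In the real setup, the forward pass is the autoencoder from train.py, the loss is a FLIP module, and the backward pass is handled by autograd.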

See the README in the pytorch folder for further information and usage instructions.

Citation

If your work uses the ꟻLIP tool to find the errors between low dynamic range images, please cite the LDR-ꟻLIP paper:
Paper | BibTeX

If it uses the ꟻLIP tool to find the errors between high dynamic range images, instead cite the HDR-ꟻLIP paper:
Paper | BibTeX

Should your work use the ꟻLIP tool in a more general fashion, please cite the Ray Tracing Gems II article:
Chapter | BibTeX

Acknowledgements

We appreciate the following people's contributions to this repository: Jonathan Granskog, Jacob Munkberg, Jon Hasselgren, Jefferson Amstutz, Alan Wolfe, Killian Herveau, Vinh Truong, Philippe Dagobert, Hannes Hergeth, Matt Pharr, Tizian Zeltner, Jan Honsbrok, and Chris Zhang.


Issues

Publish to PyPI

Would it be possible to publish the Python implementations of ꟻLIP to PyPI? It would be really convenient to pip install the package directly from the command line.

Different results when giving multiple test images

Hello and thank you for this great tool. I was trying to generate FLIP metrics and images for a big set of images and thought I could speed things up by computing multiple comparisons at once. So instead of running flip-cuda.exe -r Reference.png -t TestX.png for every image in my set, I tried flip-cuda.exe -r Reference.png -t Test1.png -t Test2.png [...].

I attached a screenshot of the result:

issue

While the pairwise execution produces the result I expected (hard to see in the thumbnail), the combined execution produces completely unexpected results. Did I misunderstand the purpose of supplying multiple test images or is it a bug in the tool?

Thanks for your help and the great software!

Ideas for future improvements to the code (mostly a reminder to ourselves)

  • Make solve2degree() a bit more robust by using a better 2nd-degree solver.
  • Rename, e.g., float2color3() to floatToColor3(), etc., because the 2 is a bit confusing in this case. When doing this, we should rename consistently so that no function uses a 2 instead of To.
  • Remove image() constructors that are not needed?
  • Test if LDR images are in [0,1] and return warning or bool or something.
  • Overlapping histograms are missing in the C++ version.
  • Fix constructor for FLIP::filename("tmp.png"); so that we do not need tmp but can set the extension (png).
  • Rename saveHDROutputLDRImages() to something better, perhaps saveIntermediateLDRImages(). Same goes for corresponding variable names: returnLDRFLIPImages, hdrOutputFlipLDRImages, returnLDRImages, and hdrOutputLDRImages.
  • Add more test/ref images + test of histograms.
  • Handle the case when the median = 0.
  • Publish to PyPI
  • Make it possible for Python version to run on GPU.
  • Make a fast CUDA version of std::nth_element() (would make the HDR-version faster)
  • Better handling of the number of channels and dimensions (if they differ). Do it in a single place for Python/C++. If the file does not exist, we should never write any image.
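The "[0,1] test" bullet above is easy to prototype. This is only a sketch with a hypothetical ldr_in_range helper, using a nested-list image layout (rows of RGB tuples) for illustration; a real implementation would operate on the tool's image class and emit a warning rather than just returning a bool:

```python
# Sketch: verify that every channel value of an LDR image lies in [0,1]
# before computing LDR-FLIP. The helper name and image layout are
# illustrative, not part of the FLIP codebase.
def ldr_in_range(image, lo=0.0, hi=1.0):
    """Return True if all channel values lie in [lo, hi]."""
    return all(lo <= value <= hi
               for row in image
               for pixel in row
               for value in pixel)
```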

Crash on OpenEXR images with more channels than RGB

Hi,

I noticed that FLIP currently crashes when loading OpenEXR images that contain more channels than just RGB, for example AOVs.

Steps to reproduce:

  1. Download the attached example reference and test images images.zip
    The reference image has the following channels: R, G, B, sampleCount.R, sampleCount.G, sampleCount.B
    The test image has the following channels: R, G, B, absoluteRenderTime, relativeRenderTime.R, relativeRenderTime.G, relativeRenderTime.B, sampleCount.R, sampleCount.G, sampleCount.B
  2. Run flip.exe --reference reference.exr --test test.exr
    FLIP outputs the following and crashes:
Undefined EXR channel name: absoluteRenderTime
Undefined EXR channel name: relativeRenderTime.B
Undefined EXR channel name: relativeRenderTime.G
Undefined EXR channel name: relativeRenderTime.R
Undefined EXR channel name: sampleCount.B
Undefined EXR channel name: sampleCount.G
Undefined EXR channel name: sampleCount.R
EXR channels may be loaded in the wrong order.
Insufficient target channels when loading EXR: need 10
Undefined EXR channel name: sampleCount.B
Undefined EXR channel name: sampleCount.G
Undefined EXR channel name: sampleCount.R
EXR channels may be loaded in the wrong order.
Insufficient target channels when loading EXR: need 6

The differing additional channels do not matter; FLIP also crashes when running flip.exe --reference reference.exr --test reference.exr

Expected behavior:
FLIP should not crash. It should load the OpenEXR images, recognize the RGB channels and compute the metrics, ignoring the other channels. I think it is not uncommon to have other channels than RGB in an OpenEXR image.

I tested on Windows 10.

I filed a PR with a proposed fix: #31
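The expected behavior can be illustrated in a few lines. This sketch is not the fix from the PR; it just shows the idea of picking the top-level R, G, and B channels out of an EXR channel list and ignoring AOVs, using a hypothetical rgb_channel_indices helper:

```python
# Sketch: given an EXR file's channel names, return the indices of the
# plain R, G, B channels and ignore AOVs such as sampleCount.R.
def rgb_channel_indices(channel_names):
    indices = {name: i for i, name in enumerate(channel_names)
               if name in ("R", "G", "B")}
    if set(indices) != {"R", "G", "B"}:
        raise ValueError("EXR image lacks plain R/G/B channels")
    return [indices["R"], indices["G"], indices["B"]]
```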

No error maps generated when non-existent save directory is given

I stumbled across this very minor nuisance when using flip:
When specifying an output directory for the error maps using the -d parameter, flip fails silently if the directory does not exist. This results in no error maps being saved.

Example:
Running flip-cuda.exe -d output -r reference.png -t test.png will not create the output directory, but instead flip will print the metrics and then exit.

Expected behavior:
Either create the directory or print a warning that the error maps will not be saved.
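The "create the directory" option is a one-liner in most languages. The tool itself is C++, so this Python sketch (with a hypothetical ensure_output_dir helper) only illustrates the idea:

```python
from pathlib import Path

def ensure_output_dir(directory):
    """Create the output directory (and any missing parents) if needed;
    an existing directory is not an error."""
    path = Path(directory)
    path.mkdir(parents=True, exist_ok=True)
    return path
```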

Separable filters?

Hey, forgive me for a newbie question; image filters are not my strongest skill :). I have been evaluating your work a bit and it does seem to fit my case really well! I am going to continue evaluating, but at the moment my main concern is performance, as I plan to run this for hundreds of thousands of image pairs. At least in the C++ implementation, most of the time is spent in convolution, so I wonder if there is a way to separate the "spatial", "point", and "edge" filters? Unfortunately, GPU acceleration is not available on the instances I plan to run this on :(.
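For context on the question, the general separable-filter technique looks like this. A Gaussian kernel factors into the outer product of a 1-D Gaussian with itself, so one N×N convolution can be replaced by two 1-D passes (O(N) instead of O(N²) work per pixel). Whether ꟻLIP's spatial, point, and edge filters are exactly separable depends on their definitions; this only demonstrates the technique, with a deliberately brute-force conv2d written for clarity rather than speed:

```python
import numpy as np

def gauss1d(sigma, radius):
    # normalized 1-D Gaussian kernel of length 2*radius + 1
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    g = np.exp(-x * x / (2.0 * sigma * sigma))
    return g / g.sum()

def conv2d(img, kernel):
    # brute-force 'same'-size correlation with zero padding (the kernel is
    # symmetric here, so correlation equals convolution)
    ry, rx = kernel.shape[0] // 2, kernel.shape[1] // 2
    padded = np.pad(img, ((ry, ry), (rx, rx)))
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = (padded[y:y + kernel.shape[0],
                                x:x + kernel.shape[1]] * kernel).sum()
    return out

g = gauss1d(sigma=1.5, radius=4)
img = np.random.default_rng(0).random((16, 16))

full = conv2d(img, np.outer(g, g))                       # one 2-D pass
separable = conv2d(conv2d(img, g[None, :]), g[:, None])  # two 1-D passes
```

The two results agree to floating-point precision, which is the property that makes the two-pass version a drop-in replacement wherever the kernel factors this way.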

Installation Guide For Python `venv`

Hello, I tried to get this repository to work with Python venv on macOS, but I am having issues installing OpenEXR. Is there a way to get this repository working with Python venv on macOS?

3D images

Hello and thank you for sharing your work!
Do you think it would be possible to apply your algorithm to a 3D image? Let's say an MRI, for example.
Would you recommend any other tool for this if it weren't possible?
Thank you in advance!

Python Binding: inputParameters dict is shared between multiple calls to evaluate

Hi,

I noticed that if no inputParameters dict is supplied to flip.evaluate via the Python bindings, the default argument is used. Since the default argument exists only once in memory and is mutated, this leads to unexpected results for subsequent calls.

Steps to reproduce:

  1. Download the attached images and Python Script flip-reproduce.zip
  2. Execute python reproduce.py. The script calls flip.evaluate from Python with no custom input parameters. For the first call, FLIP will calculate the start/stop exposures (in this case nan) and store them in inputParameters before returning them. The second call does not supply its own inputParameters, but since the default argument was mutated in the previous call, the values from that call are reused. This leads to different results:

Two flip calls:

python reproduce.py
0.0 {'ppd': 67.02064514160156, 'startExposure': nan, 'stopExposure': nan, 'numExposures': 2, 'tonemapper': 'aces'}
0.0 {'ppd': 67.02064514160156, 'startExposure': nan, 'stopExposure': nan, 'numExposures': 2, 'tonemapper': 'aces'}

Skipping the first flip call:

python reproduce.py 1
0.07767405360937119 {'ppd': 67.02064514160156, 'startExposure': -4.310622692108154, 'stopExposure': 8.824731826782227, 'numExposures': 14, 'tonemapper': 'aces'}

Expected behavior
When not providing its own inputParameters, every call to flip.evaluate should use the defaults.
FLIP should not reuse the computed start/stop exposures and other parameters.

Workaround
There is a workaround by passing an empty parameters dict to flip.evaluate in Python (see also workaround.py). This works as expected:

python reproduce_with_workaround.py
0.0 {'ppd': 67.02064514160156, 'startExposure': nan, 'stopExposure': nan, 'numExposures': 2, 'tonemapper': 'aces'}
0.07767405360937119 {'ppd': 67.02064514160156, 'startExposure': -4.310622692108154, 'stopExposure': 8.824731826782227, 'numExposures': 14, 'tonemapper': 'aces'}

I filed a PR with a proposed fix: #33
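The pitfall behind this issue is a well-known Python one: a mutable default argument is created once, at function definition time, and then shared by every call that omits the parameter. A self-contained illustration (the function names are made up; this is not the flip binding itself):

```python
def evaluate_buggy(params={}):      # BUG: one dict shared by all calls
    params["calls"] = params.get("calls", 0) + 1
    return params["calls"]

def evaluate_fixed(params=None):    # idiomatic fix: default to None
    if params is None:
        params = {}                 # fresh dict on every call
    params["calls"] = params.get("calls", 0) + 1
    return params["calls"]

first, second = evaluate_buggy(), evaluate_buggy()              # 1, then 2
fixed_first, fixed_second = evaluate_fixed(), evaluate_fixed()  # 1, then 1
```

The buggy version leaks state between calls exactly as described above, which is why passing an explicit empty dict works around it.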

Where is pbflip in python wrapper

Hi, I'm trying to get the Python wrapper of this working, and everything is fine except that in flip/python/flip/main.py the first import, import pbflip, errors as there's no module by that name available.

This is clearly from pybind11 in main.cpp. I've installed pybind with conda and added the directory to CMakeLists.txt but it still can't resolve the imports.

Sorry if this is a dumb question, but it's not clear from the documentation in the READMEs where this module is supposed to come from.
