
Comments (16)

aurelienpierre avatar aurelienpierre commented on September 24, 2024

I think @rawfiner has something like that in the pipe, using the downsampling to denoise.


spaceChRiS avatar spaceChRiS commented on September 24, 2024

Hm, I think you are referring to this: master...rawfiner:rawfiner-denoise-profile-coarse-noise-reduction3, which seems to deal with noise reduction “only”. As a side effect, it may partially implement what I am suggesting above, but it lacks some of the aspects. Noise may not be the only reason to limit the output size, so the possibility to give the output size as the (only) input parameter may be crucial. It would be great if we could pull @rawfiner into the discussion; maybe the approaches could be combined …


rawfiner avatar rawfiner commented on September 24, 2024

Hi
What I tried is a bit different: the purpose was to move the downscaling used for the preview earlier in the pipe. Right now, the image is downscaled during demosaic for the purpose of the darkroom preview, and I wanted to downscale it before demosaic, as all modules before demosaic currently work on the full image even for the darkroom preview, which makes it impossible to implement some noise reduction filters before demosaic. But I failed to modify the demosaic code to do that (it assumes that it gets the full-size image as input): it gave weird behaviors when zooming in and out. I did not find the cause at that time, and it has been a while since I last tried to understand where the problem is (even if I think I probably should; my understanding of roi_in and roi_out is better now than it was).

I also have in mind the idea of rescaling before demosaic, not only for the preview but also to get a pixel binning effect, but I have not taken time for this yet, and I don't know how good the results would be. This idea is closer to yours; the only difference is the pipe order (downsampling before demosaic in my case, after demosaic in yours).
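To make the binning idea concrete, here is a toy sketch (plain Lua, not darktable code; the RGGB pattern, flat row-major array, and 1-based indexing are assumptions for illustration). Each output CFA sample averages the four same-color sites of the corresponding 4×4 input tile, so the result is again a valid Bayer mosaic at half the resolution:

```lua
-- Toy 2x binning of a Bayer mosaic (assumed RGGB, row-major flat table,
-- 1-based indexing, w and h multiples of 4). Averaging the four
-- same-color sites of each 4x4 input tile yields a half-size image
-- that is still a valid Bayer mosaic.
local function bin_bayer_2x(raw, w, h)
  local ow, oh = w // 2, h // 2
  local out = {}
  for oy = 0, oh - 1 do
    for ox = 0, ow - 1 do
      local px, py = ox % 2, oy % 2      -- CFA parity of this output site
      local ibx = 2 * (ox - px) + px     -- leftmost same-color input column
      local iby = 2 * (oy - py) + py     -- topmost same-color input row
      local sum = 0
      for b = 0, 1 do
        for a = 0, 1 do
          sum = sum + raw[(iby + 2 * b) * w + (ibx + 2 * a) + 1]
        end
      end
      out[oy * ow + ox + 1] = sum / 4
    end
  end
  return out, ow, oh
end
```

Averaging four samples per site is what gives the noise benefit: for independent noise it roughly halves the standard deviation.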


spaceChRiS avatar spaceChRiS commented on September 24, 2024

Thanks @rawfiner, now I understand better. Thinking about the pixel pipe position, the two ideas are really very different, but maybe some communication between them would become necessary if they both get implemented, i.e., enabling one locks the other and vice versa.

The position of my idea in the pixel pipe is very late; it would not even require an additional image operation at all. It's just that the limited output size is

  1. communicated to the export module such that the maximum output size follows this limitation (hq downsampling should still use the original image size), and
  2. communicated to everything affecting the display of the image in the darkroom such that the 100% view represents the calculated downscaling.

So far, only the available image operations are used, but their control “metadata” is altered.

I am not sure what to do about zoom levels above 100%. For these, an additional fast downsampling at the end of the pixel pipe could be required.
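A minimal sketch of the bookkeeping this implies (plain Lua; all names are made up for illustration, the real zoom handling lives in darktable's C core):

```lua
-- Hypothetical mapping between the user-visible zoom and the real pipe
-- scale once a per-image maximum output size is in effect. "100 %" then
-- means "1 display pixel per *output* pixel", not per sensor pixel.
local function effective_scale(full_w, full_h, max_w, max_h, user_zoom)
  -- downscale factor imposed by the size limit (never upscale)
  local limit = math.min(max_w / full_w, max_h / full_h, 1.0)
  -- scale actually requested from the pixel pipe at this zoom level
  return user_zoom * limit
end

-- e.g. a 6000x4000 raw limited to 3000x2000: "100 %" zoom runs the pipe
-- at scale 0.5, and zooming beyond 100 % would need the extra fast
-- downsampling step mentioned above.
print(effective_scale(6000, 4000, 3000, 2000, 1.0))  --> 0.5
```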


rawfiner avatar rawfiner commented on September 24, 2024

You are welcome @spaceChRiS :-)

Concerning the position in the pixel pipe, I think it should be right after demosaic, and not at the end of the pipe.

As I explained above, the darkroom preview is computed on a downscaled image, where the downscaling is adapted to the zoom level.
The problem is that the preview computed on the downscaled image is not always perfectly accurate. For noise reduction, for instance, you can get very different results after export than what you were seeing in the darkroom preview at the "zoom to fit" level, which forces us to look at the image at 100% zoom to be sure of what we are doing to the image.
We should guarantee that the 100% view remains exactly what we get after export.

In addition, downsizing just after demosaic would allow later modules to run faster at export, as they would have a smaller image to process.


spaceChRiS avatar spaceChRiS commented on September 24, 2024

I have a reason to see it, in my case, at the end of the pipe: the output of every iop that uses convolution in one way or another would require different input parameters for a scaled image. Take sharpening as an example. For the same kernel size, the result will be very different if I apply it to the full-scale image or to a scaled version. If I adjust sharpen first and then activate scaling early in the pixel pipe, the parameters (e.g. radius) will not change accordingly, such that the result is different afterwards.

To avoid all such complications, my feature request should only work at the end of the pixel pipeline, and as I wrote, it would not even require an iop at all, just overwritten/scaled parameters for the zoom level and the export module, plus some special treatment for zoom levels >100%. For the latter, I think even a little overlay text stating that, due to the activated feature, the preview is not accurate would be sufficient.
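To illustrate the parameter problem (a hedged sketch of the reasoning, not of how darktable stores parameters): if a module keeps its radius in pixels of its own input, putting a downscale in front of it silently changes what that radius means relative to the scene:

```lua
-- If "radius" is expressed in pixels of the module's input, a 0.5x
-- resize placed before the module makes the same 2 px cover twice as
-- much of the original scene.
local function scene_equivalent_radius(radius_px, scale)
  return radius_px / scale
end

print(scene_equivalent_radius(2, 1.0))  --> 2  (full-size image)
print(scene_equivalent_radius(2, 0.5))  --> 4  (after 50 % downscale)
```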


aurelienpierre avatar aurelienpierre commented on September 24, 2024

If I adjust sharpen first and then activate scaling early in the pixel pipe, the parameters (e.g. radius) will not change accordingly, such that the result is different afterwards.

Actually, it's a good thing. Say you sharpen with a radius of 2 px or less and then resize to half the original dimensions: your sharpening will be useless. Worse, depending on the interpolation algorithm, it can create staircasing effects (interpolation artifacts); that's why it's customary to apply a slight blur before resizing and a slight sharpening after.


spaceChRiS avatar spaceChRiS commented on September 24, 2024

Hm, not sure what you refer to as a good thing. Yes, the sharpening would be useless in that case, but assume resizing sits early in the pixel pipe and you set your parameters before activating rescaling:

  1. You set sharpen to radius 2 px on full size image, and it results in a good result
  2. Then, you activate the rescaling to e.g. 50 %

The result will be sharpening of the rescaled picture with a radius of 2 px, which, compared to the original image, means a radius of 4 px, and probably not a good resulting picture.

If resizing comes after, the result would be equivalent to what happens when I output the image at full size and rescale afterwards, or to what happens when I limit the output size in the export module with the hq option on.

What am I missing?


aurelienpierre avatar aurelienpierre commented on September 24, 2024

The good thing is the resizing happening first. Scaling means interpolation, which means sharpness loss. So you should sharpen a bit after scaling (that's what Photoshop does). Sharpening before can be at best useless, at worst damaging.

The result will be sharpening of the rescaled picture with a radius of 2 px, which, compared to the original image, means a radius of 4 px, and probably not a good resulting picture.

Not exactly. Don't forget we are in a discretized space, so scales are not completely proportional because of the pixel gaps. Also, sharpening is a frequency-domain thing, not a spatial one (it is a highpass filter on which you increase the contrast). Downscaling can also be described as a discrete lowpass filter in the frequency domain. So increasing the radius of the deblurring by the inverse of the scaling ratio is at best a rough approximation (especially with Gaussian blur).

Bottom line: using the same radius at full and resized scales might not be as bad as it sounds overall. Generally speaking, I see the effects of sharpening in dt only when zoomed to more than 50 %, and I don't use it at more than 2 px, if I use it at all. 50 % means at least 6K resolution on 24 Mpx, while I export at 2K max (HD 1080). So, overall, I suspect any 1-2 px sharpening at full resolution becomes completely useless in practice.
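A back-of-the-envelope way to formalize this (my sketch, assuming idealized filters): sharpening with radius $r$ mostly boosts spatial frequencies around $f \approx 1/(2r)$ cycles per input pixel, while a downscale by factor $s < 1$ discards everything above the new Nyquist limit of $s/2$ cycles per input pixel, so the boost survives only if

$$
\frac{1}{2r} \lesssim \frac{s}{2}
\quad\Longleftrightarrow\quad
r \gtrsim \frac{1}{s}.
$$

For the 24 Mpx to 2K export above, $s \approx 1/3$, so sharpening with $r \lesssim 3$ px is essentially thrown away, consistent with the observation.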


spaceChRiS avatar spaceChRiS commented on September 24, 2024

I totally understand your point, and it tells me that sharpening was the worst example I could have picked. I think I found better ones:

  • What about masks? If I e.g. draw a mask around a face and then activate early-pipe scaling, the mask position and size will be wrong (at least if the masks are stored in absolute pixel coordinates).
  • There are other modules that may be affected as well, e.g. lens correction, perspective correction, …

My feature request is just about setting a maximum output size for an image in the export module and storing this decision in the xmp file and database. For convenience this should happen in the darkroom, and for even more convenience the image preview in the darkroom should reflect this setting. But even only the first part would be great already, since one can also preview manually by setting the corresponding zoom level (e.g., if I limit the export size to 50%, I can use the 50% zoom level to represent my exported 100% view).


junkyardsparkle avatar junkyardsparkle commented on September 24, 2024

You may be able to hack something workable with Lua, using one of the available forms of metadata (a color label, star rating, tag, etc.) to indicate a certain maximum output size, and then conditionally checking for it on export. I do something vaguely similar by using the copyright field in my camera to store a numeric value that gets mapped to a lens name on import (for old, "dumb" lenses).

Since this lens field doesn't get applied on export when set this way (actually, I haven't checked in 2.6 yet, but I think it's the same), the script also has to do the same check and mapping on export, and use exiftool to add the lens name to the output file. Yes, it's ugly... but the part that's relevant to your use case isn't, so much. :)
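A rough sketch of the tag-driven variant of this hack. The event name and callback signature follow darktable's Lua API as documented for `intermediate-export-image`, but treat the details as assumptions to verify against your version; the `maxsize|...` tag convention and the ImageMagick call are made up for illustration:

```lua
-- Rough sketch: cap export size per image via a tag like "maxsize|1920".
-- Verify the register_event signature for your darktable version
-- (newer Lua APIs want an extra name argument as the first parameter).
local dt = require "darktable"

-- look for a "maxsize|<pixels>" tag on the image (hypothetical convention)
local function max_size_for(image)
  for _, tag in ipairs(dt.tags.get_tags(image)) do
    local size = tag.name:match("^maxsize|(%d+)$")
    if size then return tonumber(size) end
  end
  return nil
end

dt.register_event("intermediate-export-image",
  function(event, image, filename, format, storage)
    local size = max_size_for(image)
    if size then
      -- hand the already-written file to ImageMagick; "\>" = only shrink
      os.execute(string.format("mogrify -resize %dx%d\\> '%s'",
                               size, size, filename))
      dt.print("capped " .. filename .. " at " .. size .. " px")
    end
  end)
```

Note this only covers the export side; the "full" solution with an accurate darkroom preview is indeed out of reach from Lua, as noted below.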


github-actions avatar github-actions commented on September 24, 2024

This issue did not get any activity in the past 30 days and will be closed in 7 days if no update occurs. Please check if the master branch has fixed it since then.


spaceChRiS avatar spaceChRiS commented on September 24, 2024

Bump. Am I the only one who is annoyed by GitHub's recent issue-closing activity?


johnny-bit avatar johnny-bit commented on September 24, 2024

Am I the only one who is annoyed by GitHub's recent issue-closing activity?

Nah, but it's a good housekeeping measure to keep things "alive". I myself am guilty of leaving issues/PRs open for a LOOONG time and not returning to them.

Your issue is an interesting albeit niche one, and I do believe that what @junkyardsparkle suggests might be the best overall solution: have a Lua script handle the issue for you.


spaceChRiS avatar spaceChRiS commented on September 24, 2024

Am I the only one who is annoyed by GitHub's recent issue-closing activity?

Nah, but it's a good housekeeping measure to keep things "alive". I myself am guilty of leaving issues/PRs open for a LOOONG time and not returning to them.


spaceChRiS avatar spaceChRiS commented on September 24, 2024

Hm, bitten by a slow computer and pressed the wrong button …

Am I the only one who is annoyed by GitHub's recent issue-closing activity?

Nah, but it's a good housekeeping measure to keep things "alive". I myself am guilty of leaving issues/PRs open for a LOOONG time and not returning to them.

I would be perfectly fine if the bot just added tags, but some bug reports do contain some work, and having them closed by the bot does not seem appropriate. Some topics are not in focus now, but that does not mean they are not valid.

Your issue is an interesting albeit niche one, and I do believe that what @junkyardsparkle suggests might be the best overall solution: have a Lua script handle the issue for you.

I am not sure that Lua is already capable of that, at least not for the “full” solution.

