
Comments (3)

ThorstenFalk commented on June 20, 2024

The F1 measures are computed as follows:
F1 segmentation:

  • Connected component labeling
  • Label co-occurrence matrix computation
  • Application of an IoU threshold of 0.5, which should remove almost all predicted segments in the case of massively cluttered segmentations
  • From the remaining GT/pred pairs with IoU > 0.5, the distance matrix is computed and then the Hungarian algorithm is applied for the 1:1 mapping (a sketch follows below). This is approximately O(k^3), where k is the number of ground truth segments.
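
A minimal Python/scipy sketch of this pipeline (the plugin itself is Java; scipy.ndimage.label stands in for the CCL and scipy.optimize.linear_sum_assignment for the Hungarian step, so this mirrors the described algorithm, not the actual implementation):

```python
import numpy as np
from scipy import ndimage
from scipy.optimize import linear_sum_assignment

def f1_segmentation(gt_mask, pred_mask, iou_threshold=0.5):
    """Sketch of the segmentation F1 pipeline described above."""
    # 1. Connected component labeling
    gt_labels, n_gt = ndimage.label(gt_mask)
    pred_labels, n_pred = ndimage.label(pred_mask)

    # 2. Label co-occurrence matrix: joint histogram of (GT, pred)
    #    labels, filled in one pass over the image
    cooc = np.zeros((n_gt + 1, n_pred + 1), dtype=np.int64)
    np.add.at(cooc, (gt_labels.ravel(), pred_labels.ravel()), 1)

    # 3. IoU per (GT, pred) segment pair from the co-occurrence matrix
    gt_sizes = cooc.sum(axis=1)    # pixels per GT segment
    pred_sizes = cooc.sum(axis=0)  # pixels per predicted segment
    inter = cooc[1:, 1:].astype(float)  # drop background (label 0)
    union = gt_sizes[1:, None] + pred_sizes[None, 1:] - inter
    iou = np.where(union > 0, inter / union, 0.0)

    # 4. IoU threshold, then Hungarian algorithm (O(k^3)) for the
    #    1:1 mapping of the surviving pairs
    iou[iou <= iou_threshold] = 0.0
    rows, cols = linear_sum_assignment(-iou)  # maximize total IoU
    tp = int(np.sum(iou[rows, cols] > iou_threshold))

    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gt if n_gt else 0.0
    return 2 * precision * recall / (precision + recall) if tp else 0.0
```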

F1 detection:

  • CCL
  • CoG computation per segment in one O(N) run
  • Distance threshold application (3 pixels). This step is indeed in O(nGT * nPred), but nGT * nPred should normally be << N. If your prediction consists of isolated foreground pixels in a pathological pattern like

    0 1 0 1 0 1
    0 0 0 0 0 0
    0 1 0 1 0 1

    this threshold is quite ineffective and leaves many false positives, which can be expensive in the next step. In practice this pattern can occur due to the up-convolutions, but I only observed it in very early iterations of from-scratch training, or when labels were completely inconsistent (basically random or from completely different images), or when the number of output channels was set up for fewer classes than there are labels (which cannot be the case here, since it is set programmatically by the plugin).

  • From the remaining GT/pred pairs with distance < 3, the distance matrix is computed and then the Hungarian algorithm is applied for the 1:1 mapping (see the sketch after this list).
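
Under the same caveats, a Python/scipy sketch of the detection matching (the function name is illustrative; center_of_mass stands in for the per-segment CoG computation, and the large finite sentinel cost simply gates pairs beyond the distance threshold):

```python
import numpy as np
from scipy import ndimage
from scipy.optimize import linear_sum_assignment

def match_detections(gt_labels, n_gt, pred_labels, n_pred, max_dist=3.0):
    """Sketch of the detection F1 matching: CoG per segment,
    3-pixel distance gate, then Hungarian 1:1 mapping."""
    # Center of gravity per segment in one pass over the labeled image
    gt_cog = np.array(ndimage.center_of_mass(
        gt_labels > 0, gt_labels, range(1, n_gt + 1)))
    pred_cog = np.array(ndimage.center_of_mass(
        pred_labels > 0, pred_labels, range(1, n_pred + 1)))

    # Pairwise CoG distances, O(nGT * nPred)
    dist = np.linalg.norm(gt_cog[:, None, :] - pred_cog[None, :, :], axis=-1)

    # Gate at 3 pixels: pairs beyond the threshold get a cost so large
    # that they can never become matches
    cost = np.where(dist < max_dist, dist, 1e9)

    # Hungarian algorithm for the 1:1 mapping of the remaining pairs
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if dist[r, c] < max_dist]
```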

For non-pathological cases, both should scale linearly with image size and approximately cubically with the number of ground truth segments. Since your curves look reasonable (F1 scores > 0.1), the imbalance between the number of GT segments and the number of predicted segments cannot be that high.
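
To make that scaling concrete, a hypothetical cost model for one validation pass (the function and constants are illustrative, not taken from the plugin):

```python
# Hypothetical cost model for one validation pass; c_linear and c_cubic
# are illustrative constants, not measured values.
def validation_cost(n_pixels, k_segments, c_linear=1.0, c_cubic=1.0):
    # CCL, CoG, and co-occurrence computation are linear in image size;
    # the Hungarian 1:1 mapping is roughly cubic in the segment count.
    return c_linear * n_pixels + c_cubic * k_segments ** 3
```

Under this model, doubling the image size roughly doubles the first term, while doubling the number of segments multiplies the matching term by eight.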


ThorstenFalk commented on June 20, 2024

Does this only happen with the validation set or does training also slow down without it? Training and validation run in a separate process, so ImageJ memory usage cannot affect training and validation speed (unless you run into swapping).

So I can say quite confidently: no, the slowdown is not related to ImageJ memory usage.

A virtual memory increase during finetuning is possible due to the plot updates and because all output of the caffe process is appended to a String (so a full copy of the string is generated on every append), which may cause virtual memory overhead and a slowdown in the progress display for longer runs. Replacing the String with an extensible structure to avoid excessive re-allocations and copies is on my list.
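
A rough Python analogue of that append pattern (the plugin itself uses a Java String, where the fix would be a StringBuilder-like structure; the functions here are purely illustrative):

```python
# Quadratic pattern: each += on an immutable string can copy everything
# accumulated so far, so n appends cost O(n^2) characters in total.
def log_with_string(chunks):
    log = ""
    for chunk in chunks:  # e.g. one chunk per line of caffe output
        log += chunk      # full copy of the growing log
    return log

# Extensible structure: collect chunks and join once at the end,
# which is O(total length).
def log_with_buffer(chunks):
    parts = []
    for chunk in chunks:
        parts.append(chunk)  # amortized O(1) per append
    return "".join(parts)
```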


jlause commented on June 20, 2024

Hey, thanks for the fast reply.

For replication, and to answer your question, I timed three more finetuning runs: two without validation, one with.

Regarding the RAM issue, I can confirm that the heavy load on the client PC disappears for runs without validation.

> Does this only happen with the validation set or does training also slow down without it?

It seems that it does not happen without a validation set; I timed how long 100 iterations took at the beginning and towards the end of 5000-iteration runs without a validation set, and they always took around 14 s with my current settings.

In the finetuning replication run with validation, I again ran 5000 iterations with a validation every 100 iterations. I observed the following validation durations over the course of the run:

[Plot: validation durations over the course of the run]

This does not look like the clear monotonic increase that I described before...
I wondered whether the 1:1 mapping required for computing the IoU / F-measure takes more or less time depending on how many segments the network finds. If that number changes over the course of training, could it explain the observed changes in duration?

Validation plot for this run:
[Plot: test_RAM_usage3_val_metrics]

