Comments (3)
F1 measures do the following:
F1 segmentation:
- Connected component labeling
- Label-cooccurrence matrix computation
- Application of an IoU threshold of 0.5, which should remove almost all predicted segments in the case of massively cluttered segmentations
- From the remaining GT/pred pairs with IoU > 0.5, the distance matrix is computed and the Hungarian algorithm is applied for the 1:1 mapping. This is approximately O(k^3), where k is the number of ground truth segments
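The segmentation pipeline above can be sketched as follows. This is a minimal Python re-implementation for illustration, not the plugin's (Java) code; the function name and the use of scipy are my own choices:

```python
import numpy as np
from scipy.ndimage import label
from scipy.optimize import linear_sum_assignment

def f1_segmentation(gt_mask, pred_mask, iou_thresh=0.5):
    """Sketch of the described pipeline: connected component labeling,
    label co-occurrence matrix, IoU threshold, Hungarian 1:1 matching."""
    gt_lab, n_gt = label(gt_mask)          # connected component labeling
    pr_lab, n_pr = label(pred_mask)
    if n_gt == 0 or n_pr == 0:
        return 0.0
    # label co-occurrence matrix: overlapping pixel counts per (GT, pred) pair
    cooc = np.zeros((n_gt, n_pr), dtype=np.int64)
    both = (gt_lab > 0) & (pr_lab > 0)
    np.add.at(cooc, (gt_lab[both] - 1, pr_lab[both] - 1), 1)
    gt_sizes = np.bincount(gt_lab.ravel())[1:]
    pr_sizes = np.bincount(pr_lab.ravel())[1:]
    union = gt_sizes[:, None] + pr_sizes[None, :] - cooc
    iou = cooc / union
    iou[iou <= iou_thresh] = 0.0           # drop candidate pairs with IoU <= 0.5
    # Hungarian algorithm (approximately O(k^3)) for the 1:1 mapping
    rows, cols = linear_sum_assignment(-iou)
    tp = int(np.sum(iou[rows, cols] > iou_thresh))
    precision, recall = tp / n_pr, tp / n_gt
    return 2 * precision * recall / (precision + recall) if tp else 0.0
```

Here `linear_sum_assignment` on the negated IoU matrix maximizes total IoU over all 1:1 pairings, which is the matching step described above.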
F1 detection:
- CCL
- CoG computation per segment in one O(N) run
- Distance threshold application (3 pixels). This is indeed O(nGT * nPred), but both are normally << N. If your prediction consists of isolated foreground pixels in a pathological pattern like
0 1 0 1 0 1
0 0 0 0 0 0
0 1 0 1 0 1
this threshold is quite ineffective and leaves many false positives, which can be expensive in the next step. In reality such a pattern can occur due to the up-convolutions, but I only observed it in very early iterations of from-scratch training, when labels were completely inconsistent (essentially random or from entirely different images), or when the number of output channels was set to fewer classes than there are labels (which cannot be the case here, since it is set programmatically by the plugin).
- From the remaining GT/pred pairs with distance < 3, the distance matrix is computed and the Hungarian algorithm is applied for the 1:1 mapping
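The detection pipeline can be sketched analogously. Again an illustrative Python version, not the plugin's code; names and the scipy calls are assumptions:

```python
import numpy as np
from scipy.ndimage import label, center_of_mass
from scipy.optimize import linear_sum_assignment

def f1_detection(gt_mask, pred_mask, dist_thresh=3.0):
    """Sketch: CCL, center of gravity per segment, pairwise distance
    matrix, 3-pixel threshold, Hungarian 1:1 matching."""
    gt_lab, n_gt = label(gt_mask)
    pr_lab, n_pr = label(pred_mask)
    if n_gt == 0 or n_pr == 0:
        return 0.0
    # center of gravity per segment (one pass over the image)
    gt_cog = np.array(center_of_mass(gt_mask, gt_lab, range(1, n_gt + 1)))
    pr_cog = np.array(center_of_mass(pred_mask, pr_lab, range(1, n_pr + 1)))
    # pairwise distance matrix, O(nGT * nPred)
    dist = np.linalg.norm(gt_cog[:, None, :] - pr_cog[None, :, :], axis=-1)
    big = dist.max() + dist_thresh + 1.0
    dist[dist >= dist_thresh] = big        # disallow pairs beyond 3 px
    rows, cols = linear_sum_assignment(dist)
    tp = int(np.sum(dist[rows, cols] < dist_thresh))
    precision, recall = tp / n_pr, tp / n_gt
    return 2 * precision * recall / (precision + recall) if tp else 0.0
```

The "pathological pattern" case above corresponds to n_pr exploding, which makes both the O(nGT * nPred) distance matrix and the cubic matching step expensive.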
For non-pathological cases, both should scale linearly with image size and approximately cubically with the number of ground truth segments. Since your curves look reasonable, with F1 scores > 0.1, the imbalance between the number of GT segments and the number of predicted segments cannot be that high.
from unet-segmentation.
Does this only happen with a validation set, or does training also slow down without one? Training and validation run in a separate process, so ImageJ memory usage cannot affect training and validation speed (except if you run into swapping).
So I can say quite confidently: no, the slowdown is not related to ImageJ memory usage.
An increase in virtual memory during finetuning is possible due to the plot updates and because all output of the caffe process is appended to a String (so a full copy of the string is generated on every append). This may cause virtual memory overhead and slow down the progress display for long runs. Replacing the String with an extensible structure to avoid excessive re-allocations and copies is on my list.
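The cost of appending to an immutable string can be illustrated with a small counting model (Python for illustration; the plugin itself is Java, where the fix would be a StringBuilder-like buffer):

```python
def chars_copied_naive(chunk_sizes):
    """Model of appending each caffe output chunk to an immutable
    string: every append copies the whole accumulated string."""
    total_len, copied = 0, 0
    for n in chunk_sizes:
        copied += total_len + n   # old contents plus new chunk are copied
        total_len += n
    return copied

def chars_copied_buffer(chunk_sizes):
    """Model of an extensible buffer: each chunk is copied once
    (plus amortized O(total) for occasional regrowth, ignored here)."""
    return sum(chunk_sizes)
```

For k appends of equal size c, the naive variant copies about k^2 * c / 2 characters in total, i.e. quadratic in run length, which matches the observed slowdown for longer runs.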
Hey, thanks for the fast reply.
For replication and to answer your question, I timed three more finetuning runs: two without validation and one with.
Regarding the RAM issue, I can confirm that the heavy load on the client PC disappears for runs without validation.
Does this only happen with validation set or does training also slow down without?
It seems that it does not happen without a validation set; I timed how long 100 iterations took at the beginning and towards the end of 5000-iteration runs without a validation set, and they always took around 14 seconds with my current settings.
In the finetuning replication run with validation, I again ran 5000 iterations with a validation every 100 iterations. I observed the following validation durations over the course of the run:
This does not look like the clear monotonic increase that I described before...
I wondered whether the 1:1 mapping required for computing the IoU / F-measure takes more or less time depending on how many segments the network finds. If that changes over the course of training, could it explain the observed changes in duration?
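One way to probe this question is to time the Hungarian matching for varying segment counts (an illustrative Python experiment using scipy, not the plugin's code; k stands in for the number of segments):

```python
import time
import numpy as np
from scipy.optimize import linear_sum_assignment

def time_matching(k, repeats=3):
    """Time the Hungarian 1:1 mapping on a random k x k cost matrix,
    mimicking k ground-truth segments vs. k predicted segments."""
    rng = np.random.default_rng(0)
    cost = rng.random((k, k))
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        linear_sum_assignment(cost)       # the 1:1 mapping step
        best = min(best, time.perf_counter() - t0)
    return best

# roughly cubic growth in k would show up across these sizes
for k in (50, 100, 200, 400):
    print(f"k={k:4d}  {time_matching(k) * 1e3:8.3f} ms")
```

If the network's segment count varies a lot between validation passes, the cubic matching cost could indeed make individual validations faster or slower.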