spoco's People

Contributors

constantinpape, wolny


spoco's Issues

Objective for push force

Hi all,

I'm a bit confused by the push force in your paper (see eq. 2).

Specifically, I'm wondering why the push force is calculated between "mu_k" and "mu_l" even in the case where the index "k" is equal to the index "l".

Intuitively, a mean embedding cannot be pushed away from itself. Thus, in my opinion this term should not be computed when k == l. This also follows the original paper by De Brabandere et al.

I'd appreciate any comments on this.
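For reference, here is a minimal sketch of the variant I have in mind (NumPy, the function name is mine, not the repo's code): the inter-cluster push term of Eq. 2 with the k == l pairs masked out, following De Brabandere et al.

```python
import numpy as np

def push_term(mu, delta_dist):
    """Inter-cluster push term with the k == l diagonal excluded.

    mu: (C, E) array of cluster mean embeddings;
    delta_dist: the inter-cluster margin from Eq. 2.
    Hypothetical sketch, not the repository's implementation.
    """
    C = len(mu)
    if C < 2:
        return 0.0
    diff = mu[:, None, :] - mu[None, :, :]            # (C, C, E) pairwise differences
    dist = np.linalg.norm(diff, axis=-1)              # ||mu_k - mu_l||
    hinge = np.clip(2 * delta_dist - dist, 0, None) ** 2
    off_diag = ~np.eye(C, dtype=bool)                 # drop the k == l pairs
    return hinge[off_diag].sum() / (C * (C - 1))
```

With the diagonal included, each k == l pair would contribute a constant (2 * delta_dist)**2 that never vanishes, which is why excluding it seems right to me.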

about use EM datasets

Hi, thanks for sharing your impressive code.
I've been working on EM image segmentation using sparse instance labels lately, and I want to follow your work. Could you share the code for MitoEM-R with me?
@wolny

Training on a custom dataset

Hi all,

I'd like to run your approach on a custom dataset that contains images (1024 x 1024) of agricultural fields captured by a UAV. Our task is to detect all plant instances in the field, which might be difficult due to overlapping instances.

We implemented a custom dataset parser and trained multiple models based on your approach. However, in TensorBoard the results do not look very promising so far, since the instances are not well separated.

Currently I am trying to overfit to a set of 24 images (the same images for train, val, and test), where we provide all instances (but no background). I think this model should work first, before we try to reduce the number of annotated instances.

We trained our first model based on the following setup:

python spoco_train.py \
    --spoco \
    --ds-name custom \
    --ds-path /export/data/SSIS/datasets/mydataset \
    --instance-ratio 0.4 \
    --batch-size 8  \
    --model-name UNet2D \
    --model-feature-maps 16 32 64 128 256 512 \
    --model-out-channels 8 \
    --learning-rate 0.001 \
    --weight-decay 0.00001 \
    --cos \
    --loss-delta-var 0.5 \
    --loss-delta-dist 2.0 \
    --loss-unlabeled-push 1.0 \
    --loss-instance-weight 1.0 \
    --loss-consistency-weight 1.0 \
    --kernel-threshold 0.5 \
    --checkpoint-dir /export/data/ckpts \
    --log-after-iters 500  --max-num-iterations 90000 

Please note that the instance-ratio argument is ignored in our parser. Here are some results from TensorBoard:

01-setup

Since the results did not look great so far, we trained another model with the following setup:

python spoco_train.py \
    --spoco \
    --ds-name custom \
    --ds-path /export/data/SSIS/datasets/mydataset \
    --instance-ratio 0.1 \
    --batch-size 6 \
    --model-name UNet2D \
    --model-feature-maps 16 32 64 128 256 512 \
    --learning-rate 0.0002 \
    --weight-decay 0.00001 \
    --cos \
    --loss-delta-var 0.5 \
    --loss-delta-dist 2.0 \
    --loss-unlabeled-push 1.0 \
    --loss-instance-weight 1.0 \
    --loss-consistency-weight 1.0 \
    --kernel-threshold 0.5 \
    --checkpoint-dir /export/data/ckpts \
    --log-after-iters 256 --max-num-iterations 80000

Again, here are some results from TensorBoard:
02-setup

Unfortunately, in the RGB visualization the instances are not well detected.

Thus, we gave it a try with a slightly different setup, setting the kernel threshold to 0.9 as follows:

python spoco_train.py \
    --spoco \
    --ds-name custom \
    --ds-path /export/data/SSIS/datasets/mydataset \
    --instance-ratio 0.1 \
    --batch-size 4 \
    --model-name UNet2D \
    --model-feature-maps 16 32 64 128 256 512 \
    --learning-rate 0.0002 \
    --weight-decay 0.00001 \
    --cos \
    --loss-delta-var 0.5 \
    --loss-delta-dist 2.0 \
    --loss-unlabeled-push 1.0 \
    --loss-instance-weight 1.0 \
    --loss-consistency-weight 1.0 \
    --kernel-threshold 0.9 \
    --checkpoint-dir /export/data/ckpts \
    --log-after-iters 256 --max-num-iterations 80000

Here the results look already better but there are still some artifacts in the background:
03-setup

Initially, we thought that something might be wrong in our dataset parser. Thus, we converted the CVPPP dataset into our custom format, passed it to our custom parser, and trained a model based on your approach. However, here the results look quite good. Consequently, the custom dataset parser should be fine.
custom-cvpp

Based on your experience with the model architecture: is there any hyperparameter you would suggest changing to improve the overall performance? I'd appreciate any comments on this.

Problems running spoco_predict.py scripts

Sorry to bother you. As a newbie in CV, when I run the spoco_predict.py script I get the error: module 'collections' has no attribute 'Sequence'. I haven't been able to find a solution for a long time. I would appreciate it a lot if you could give me some suggestions.
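In case it helps while waiting for an answer: this error typically comes from library code written for Python < 3.10, where ABC aliases such as Sequence still lived directly in the collections namespace. Python 3.10 removed those aliases, so any code doing `collections.Sequence` now fails; the supported spelling lives in collections.abc:

```python
# On Python 3.10+ the old alias is gone:
#   collections.Sequence  ->  AttributeError
# The supported spelling has existed since Python 3.3:
from collections.abc import Sequence

assert isinstance([1, 2, 3], Sequence)      # lists are Sequences
assert isinstance("abc", Sequence)          # so are strings
assert not isinstance({1, 2, 3}, Sequence)  # sets are not
```

If the call comes from a dependency (old torchvision releases are a common source), upgrading that dependency or running on Python 3.9 is the clean fix; monkey-patching `collections.Sequence = collections.abc.Sequence` before importing it is only a stopgap.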

test results

Good job! I want to follow your work. Could you please provide the final test file for the CVPPP A1 dataset that was submitted to the challenge website?

Value of kernel threshold

Hi all,

I have a very short question regarding the kernel threshold t used to set the variance of the Gaussian kernel (Eq. 3).

In your paper, you mention that you set t equal to 0.9.
However, in the readme of this repo you define the argument for the training script as

kernel-threshold 0.5

So should we instead set this argument to 0.9 to reproduce the results?

I'd appreciate any comments on this.
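For context, my reading of Eq. 3 (an assumption on my part, not verified against the code) is that t fixes the kernel bandwidth: the variance is chosen so that the Gaussian drops to exactly t at distance delta_var from the cluster mean, so a larger t gives a wider, flatter kernel. A small sketch with hypothetical helper names:

```python
import math

def kernel_bandwidth(delta_var, t):
    # Solve exp(-delta_var**2 / (2 * sigma**2)) == t for 2 * sigma**2.
    # (My reading of Eq. 3; hypothetical helper, not the repo's code.)
    return -delta_var ** 2 / math.log(t)

def gaussian_kernel(dist, delta_var, t):
    # Kernel value at a given embedding distance from the cluster mean.
    return math.exp(-dist ** 2 / kernel_bandwidth(delta_var, t))
```

By construction gaussian_kernel(delta_var, delta_var, t) == t, and with delta_var = 0.5 the bandwidth 2*sigma**2 is about 0.36 for t = 0.5 versus about 2.37 for t = 0.9, so the two settings really are different and the question matters for reproduction.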

The issue about dataset

Hi, I am a master's student studying computer vision. I opened this issue because I found a problem while trying to run spoco_train.py for SPOCO.

I debugged the code because a CUDA memory error kept occurring in the loss-calculation part, and I found a problem in datasets/cvppp.py.

When self.train_label_transform is applied, the mask is not converted into a form suitable for training, due to transforms.RandomResizedCrop(448, scale=(0.7, 1.), interpolation=0). In my case the torchvision version was 0.13.1.

For example, even if there are five instances in the mask returned by the dataloader, applying torch.unique() did not count the instances properly, as shown below. When I replaced all transforms.RandomResizedCrop() calls with transforms.Resize, training worked well.

image
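I can't tell what exactly goes wrong inside torchvision 0.13.1, but as a general illustration of why the interpolation mode matters for instance masks (a self-contained NumPy sketch, not torchvision's implementation): any averaging interpolation mixes neighboring label IDs and invents values that belong to no instance, while nearest-neighbor only ever copies existing IDs.

```python
import numpy as np

def resize_nearest(mask, out_h, out_w):
    # Nearest-neighbor resize: copies existing label IDs, never invents new ones.
    rows = np.arange(out_h) * mask.shape[0] // out_h
    cols = np.arange(out_w) * mask.shape[1] // out_w
    return mask[rows][:, cols]

def resize_bilinear(mask, out_h, out_w):
    # Naive bilinear resize applied to raw label values: averages IDs at
    # instance boundaries, producing labels that belong to no instance.
    y = np.linspace(0, mask.shape[0] - 1, out_h)
    x = np.linspace(0, mask.shape[1] - 1, out_w)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1 = np.minimum(y0 + 1, mask.shape[0] - 1)
    x1 = np.minimum(x0 + 1, mask.shape[1] - 1)
    wy, wx = (y - y0)[:, None], (x - x0)[None, :]
    m = mask.astype(float)
    top = m[y0][:, x0] * (1 - wx) + m[y0][:, x1] * wx
    bot = m[y1][:, x0] * (1 - wx) + m[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# Two instances: left half has ID 0, right half ID 5.
mask = np.zeros((4, 4), dtype=np.int64)
mask[:, 2:] = 5

print(np.unique(resize_nearest(mask, 3, 3)))   # only the original IDs survive
print(np.unique(resize_bilinear(mask, 3, 3)))  # spurious fractional "instances" appear
```

This is why label transforms need nearest-neighbor interpolation (InterpolationMode.NEAREST in current torchvision); swapping in transforms.Resize, as above, also sidesteps the problem at the cost of losing the crop augmentation.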

I would appreciate it if you could consider these points if you modify the code later. Also, I recommend that those who use this code take this into consideration.

Finally, thank you very much for your research and hard work.
