
Comments (11)

jianlong-yuan commented on July 19, 2024

"name": "CCT",
"experim_name": "CCT",
"n_gpu": 2,
"n_labeled_examples": 1464,
"diff_lrs": true,
"ramp_up": 0.1,
"unsupervised_w": 30,
"ignore_index": 255,
"lr_scheduler": "Poly",
"use_weak_lables":false,
"weakly_loss_w": 0.4,
"pretrained": true,


yassouali commented on July 19, 2024

Hi, thank you for your interest in our work,

Did you try it without sync batch norm?
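
For reference, a minimal generic PyTorch sketch of what the sync batch norm question is about (this is not CCT's own implementation): with "n_gpu": 2 and "batch_size": 10, a plain BatchNorm2d layer normalizes each GPU's 5 images separately, whereas a synchronized BatchNorm shares statistics across both GPUs, which can noticeably shift the final mIoU.

    import torch.nn as nn

    # Toy model standing in for the segmentation network (assumption: the real
    # model is built elsewhere in the repo).
    model = nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1),
        nn.BatchNorm2d(64),
        nn.ReLU(inplace=True),
    )

    # Replace every BatchNorm2d with SyncBatchNorm; under DistributedDataParallel
    # the converted layers use global batch statistics instead of per-GPU ones.
    model = nn.SyncBatchNorm.convert_sync_batchnorm(model)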


jianlong-yuan commented on July 19, 2024

Yes, I have tried. I got 0.6579999923706055 (about 65.8 mIoU).


yassouali commented on July 19, 2024

Ok, thanks,

I did not really test it on multiple GPUs, let me run some tests and get back to you.

Just to be sure, are you training with the 1.5K labeled examples in semi mode? @jianlong-yuan


jianlong-yuan commented on July 19, 2024

No, I just modified the config: supervised false ==> true
{
"name": "CCT",
"experim_name": "CCT",
"n_gpu": 2,
"n_labeled_examples": 1464,
"diff_lrs": true,
"ramp_up": 0.1,
"unsupervised_w": 30,
"ignore_index": 255,
"lr_scheduler": "Poly",
"use_weak_lables":false,
"weakly_loss_w": 0.4,
"pretrained": true,

"model":{
    "supervised": true,
    "semi": false,
    "supervised_w": 1,

    "sup_loss": "CE",
    "un_loss": "MSE",

    "softmax_temp": 1,
    "aux_constraint": false,
    "aux_constraint_w": 1,
    "confidence_masking": false,
    "confidence_th": 0.5,

    "drop": 6,
    "drop_rate": 0.5,
    "spatial": true,

    "cutout": 6,
    "erase": 0.4,

    "vat": 2,
    "xi": 1e-6,
    "eps": 2.0,

    "context_masking": 2,
    "object_masking": 2,
    "feature_drop": 6,

    "feature_noise": 6,
    "uniform_range": 0.3
},


"optimizer": {
    "type": "SGD",
    "args":{
        "lr": 1e-2,
        "weight_decay": 1e-4,
        "momentum": 0.9
    }
},


"train_supervised": {
    "data_dir": "/home/data/segmentation/pascal_voc/",
    "batch_size": 10,
    "crop_size": 320,
    "shuffle": true,
    "base_size": 400,
    "scale": true,
    "augment": true,
    "flip": true,
    "rotate": false,
    "blur": false,
    "split": "train_supervised",
    "num_workers": 8
},

"train_unsupervised": {
    "data_dir": "/home/gongyuan.yjl/data/segmentation/pascal_voc/",
    "weak_labels_output": "pseudo_labels/result/pseudo_labels",
    "batch_size": 10,
    "crop_size": 320,
    "shuffle": true,
    "base_size": 400,
    "scale": true,
    "augment": true,
    "flip": true,
    "rotate": false,
    "blur": false,
    "split": "train_unsupervised",
    "num_workers": 8
},

"val_loader": {
    "data_dir": "/home/data/segmentation/pascal_voc/",
    "batch_size": 1,
    "val": true,
    "split": "val",
    "shuffle": false,
    "num_workers": 4
},

"trainer": {
    "epochs": 80,
    "save_dir": "saved/",
    "save_period": 5,

    "monitor": "max Mean_IoU",
    "early_stop": 10,
    
    "tensorboardX": true,
    "log_dir": "saved/",
    "log_per_iter": 20,

    "val": true,
    "val_per_epochs": 5
}

}
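
For reference, a minimal sketch of switching between the two modes by flipping the "model" flags in this config (the file paths below are assumptions, not taken from the repo):

    import json

    # Load the config pasted above (path assumed; adjust to your setup).
    with open("configs/config.json") as f:
        config = json.load(f)

    # Supervised baseline: cross-entropy on the 1.5K labeled images only.
    config["model"]["supervised"] = True
    config["model"]["semi"] = False

    # Semi-supervised CCT (the paper's "CCT 1.5K" setting) would instead use:
    #   config["model"]["supervised"] = False
    #   config["model"]["semi"] = True

    with open("configs/config_supervised.json", "w") as f:
        json.dump(config, f, indent=4)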


yassouali commented on July 19, 2024

Hi, I just ran the semi mode on 2 GPUs with 1.5K labels and got 68.9 at 60 of the 80 epochs (it would go higher if training continued), using the provided config. But I see that you want the supervised mode: are you training in supervised mode using only the 1.5K labels, or did you also add all the image IDs to the .txt file?

If you are training on the 1.5K labeled examples only, the results look correct to me, but if you are using the full labeled set (10K), for how many epochs are you training? The full 80 epochs?
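
A quick way to check which labeled set a run actually uses is to count the IDs in the split file the config points to; a sketch (the path below is hypothetical, substitute whatever "split": "train_supervised" resolves to in your setup):

    # Hypothetical path; substitute the split file your config actually uses.
    split_file = "dataloaders/voc_splits/train_supervised.txt"

    with open(split_file) as f:
        ids = [line.strip() for line in f if line.strip()]

    # ~1464 IDs -> the 1.5K labeled setting; ~10582 IDs -> the full labeled set.
    print(len(ids))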


jianlong-yuan commented on July 19, 2024

Got it. I used the 1.5K labeled examples and the full 80 epochs; I got the best model at about epoch 60.
I found the table in the paper: is <CCT 1.5k - 69.4> trained with 1.5K supervised and 9K for semi-supervised?


yassouali commented on July 19, 2024

Yes - CCT always refers to the semi-supervised mode; the supervised mode is only there as a baseline, to check that CCT performs better than purely supervised training.

So CCT - 1.5K refers to using 1.5K labeled examples (with the cross-entropy loss), and the remaining 9K are used in the semi-supervised loss (Lu in the paper). To get the exact results, simply run the provided config file (note that one GPU works better due to batch norm). When you use the supervised mode, you only train with the cross-entropy loss, so the result will be around 64-65, because you are not using the Lu loss, only normal supervised training.
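
In loss terms, a sketch of the difference between the two modes under the config above (assuming an exponential ramp-up in the style of Laine & Aila; the repo's exact ramp-up function may differ):

    import math

    def unsup_weight(cur_iter, total_iters, final_w=30.0, rampup_frac=0.1):
        # Consistency weight grows from 0 to final_w over the first rampup_frac
        # of training, matching "unsupervised_w": 30 and "ramp_up": 0.1 above.
        rampup_iters = max(1.0, rampup_frac * total_iters)
        if cur_iter >= rampup_iters:
            return final_w
        phase = 1.0 - cur_iter / rampup_iters
        return final_w * math.exp(-5.0 * phase * phase)

    # Per batch:
    #   supervised mode:      loss = L_sup                              (CE on labeled pixels)
    #   semi-supervised mode: loss = L_sup + unsup_weight(it, T) * L_u  (L_u = MSE consistency)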

As for the 73.2 result - see Section 3, where we also use weak labels: 1.5K images with segmentation maps and 9K with image-level class labels. Hope things are clear now.


jianlong-yuan commented on July 19, 2024

Thank you. Got it.


chuxiang93 commented on July 19, 2024

Hi @yassouali,
Did you run the experiment with only 1.5K labeled images for the supervised training? I didn't see the corresponding result in the table.
Thank you,


yassouali commented on July 19, 2024

@mjq93 Hi, thank you for your interest,
Yes, you are right, the supervised case with 1.5K labels was not reported. I don't quite remember the exact value, but I think it was around 66.

