
jigendg's People

Contributors

dependabot[bot], enoonit, fmcarlucci


jigendg's Issues

The DeepAll performance reported in the paper

When I test the performance of DeepAll using CaffeNet, I find it is noticeably higher than the value reported in your paper. In my tests it is also close to the performance of your method.
I run each experiment three times and average the best step of each run. Have I done something wrong? Thanks very much!

Paper Question

I am confused about the "Deep All" setting. Could you explain the setting clearly? Also, does the code cover the "Deep All" case?

Thank you very much.

Why is the input data composed of a 9-patch grid?

[attached image: plot of the input data]

I tried to plot the input data to understand it, expecting a single 222 x 222 x 3 original or jigsaw (permuted) image.
However, the result is an image composed of a 3 x 3 grid of patches.
I don't understand why the input data looks like this, because I cannot find this information in the paper.
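
For reference, here is a minimal sketch of how an image can be cut into a 3 x 3 grid of tiles and reassembled under a permutation; the function name and tile handling are illustrative, not the repository's actual data-loading code:

```python
import torch

def jigsaw_permute(img, grid=3, permutation=None):
    """Split a (C, H, W) image into grid x grid tiles and paste them back
    in permuted order. Illustrative sketch, not the repository's code."""
    _, h, w = img.shape
    th, tw = h // grid, w // grid
    # Cut the image into grid*grid tiles in row-major order.
    tiles = [img[:, r * th:(r + 1) * th, col * tw:(col + 1) * tw]
             for r in range(grid) for col in range(grid)]
    if permutation is None:
        permutation = torch.randperm(grid * grid).tolist()
    # Rebuild each row from the permuted tiles, then stack the rows.
    rows = [torch.cat([tiles[permutation[r * grid + col]] for col in range(grid)], dim=2)
            for r in range(grid)]
    return torch.cat(rows, dim=1)

# A 222 x 222 RGB image becomes a mosaic of nine 74 x 74 tiles, which is why
# a plotted input batch looks like a 3 x 3 patched grid.
x = torch.rand(3, 222, 222)
print(jigsaw_permute(x).shape)  # torch.Size([3, 222, 222])
```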

evaluate on VLCS dataset

Hi,
According to your code, do you train the model on the 3 source domains (90% of the full data as the training set), choose the model with the highest accuracy on those 3 source domains (the remaining 10% as the validation set), and then report that model's result on the target domain (used only as a test set)?
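
For concreteness, a minimal sketch of that protocol as I read it (the split helper and the accuracy numbers are purely illustrative):

```python
import random

def split_source(samples, val_size=0.1, seed=0):
    """Split the pooled source-domain samples 90/10 (illustrative helper)."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    n_val = int(len(samples) * val_size)
    return [samples[i] for i in idx[n_val:]], [samples[i] for i in idx[:n_val]]

# Model selection: keep the epoch with the best accuracy on the 10% source
# validation split, then report that same checkpoint's accuracy on the target
# domain, which is never used for selection. Numbers below are made up.
history = [{"val_acc": 0.71, "target_acc": 0.63},
           {"val_acc": 0.78, "target_acc": 0.66},
           {"val_acc": 0.75, "target_acc": 0.68}]
best = max(history, key=lambda h: h["val_acc"])
print("reported target accuracy:", best["target_acc"])  # 0.66
```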

A question about the classifier

Hi, I'm confused about how the code makes the model distinguish between an input image for the object-classification task and one for the jigsaw task, so that images for different tasks are routed to different classifiers during training; that is, an image from the classification task is never passed through the classifier of the puzzle task.
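
For illustration, a minimal sketch of a shared backbone with two separate heads, one for object classification and one for jigsaw-permutation classification; this is an assumed simplification of the general design, not the repository's exact model:

```python
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    """Shared feature extractor with an object head and a jigsaw head."""
    def __init__(self, n_classes=7, jigsaw_classes=31, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.class_head = nn.Linear(feat_dim, n_classes)        # object labels
        self.jigsaw_head = nn.Linear(feat_dim, jigsaw_classes)  # permutation index

    def forward(self, x):
        feats = self.backbone(x)
        # Both heads are always computed; the training loop decides which loss
        # to apply, e.g. the object loss only on ordered images and the jigsaw
        # loss only on images that carry a permutation label.
        return self.class_head(feats), self.jigsaw_head(feats)

model = TwoHeadNet()
class_logits, jigsaw_logits = model(torch.rand(4, 3, 222, 222))
```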

TypeError: alexnet() got an unexpected keyword argument 'jigsaw_classes'

I run `python train_jigsaw.py --network alexnet` and get:

Traceback (most recent call last):
  File "train_jigsaw.py", line 198, in <module>
    main()
  File "train_jigsaw.py", line 192, in main
    trainer = Trainer(args, device)
  File "train_jigsaw.py", line 59, in __init__
    model = model_factory.get_network(args.network)(jigsaw_classes=args.jigsaw_n_classes + 1, classes=args.n_classes)
  File "JigenDG/models/model_factory.py", line 21, in get_network_fn
    return nets_map[name](**kwargs)
TypeError: alexnet() got an unexpected keyword argument 'jigsaw_classes'
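
For context, the traceback shows a factory pattern: get_network looks up a constructor by name and forwards keyword arguments to it, so the constructor registered under "alexnet" must accept jigsaw_classes and classes. The toy sketch below only mimics that pattern; the real models/model_factory.py may differ:

```python
import torch.nn as nn

# Toy stand-in: the constructor's signature must accept every keyword the
# trainer forwards, otherwise Python raises the "unexpected keyword argument"
# TypeError shown above.
def alexnet(jigsaw_classes, classes):
    return nn.ModuleDict({
        "class_head": nn.Linear(256, classes),          # object labels
        "jigsaw_head": nn.Linear(256, jigsaw_classes),  # permutation index
    })

nets_map = {"alexnet": alexnet}

def get_network(name):
    def get_network_fn(**kwargs):
        return nets_map[name](**kwargs)
    return get_network_fn

model = get_network("alexnet")(jigsaw_classes=31, classes=7)
```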

Train on different datasets

Hi,
Do you train your model on different datasets using the same parameter settings, or do you adjust them for each dataset?

Error with argparse

I am unable to run the code using the commands from the README.

I run the following command to get results on the PACS dataset:

python train_jigsaw.py --batch_size 128 --n_classes 7 --learning_rate 0.001 --network resnet18 --val_size 0.1 --folder_name test --jigsaw_n_classes 30 --train_all True --TTA False --nesterov False --min_scale 0.8 --max_scale 1.0 --random_horiz_flip 0.5 --jitter 0.4 --tile_random_grayscale 0.1 --source photo cartoon sketch --target art_painting --jig_weight 0.7 --bias_whole_image 0.9 --image_size 222

However, it results in the following error:

Traceback (most recent call last):
  File "train_jigsaw.py", line 199, in <module>
    main()
  File "train_jigsaw.py", line 191, in main
    args = get_args()
  File "train_jigsaw.py", line 43, in get_args
    parser.add_argument("--TTA", type=bool, action='store_true', help="Activate test time data augmentation")
  File "/anaconda/envs/py37_default/lib/python3.7/argparse.py", line 1359, in add_argument
    action = action_class(**kwargs)
TypeError: __init__() got an unexpected keyword argument 'type'
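
The immediate cause is that argparse's store_true action does not accept a type keyword, so combining type=bool with action='store_true' raises this TypeError. A minimal sketch of one possible local fix is to drop type=bool; whether that matches the flag semantics the authors intended (the README passes values like "--TTA False") is an assumption on my part:

```python
import argparse

parser = argparse.ArgumentParser()
# 'store_true' already defines a boolean flag with default False and takes no
# value; adding type=bool to it is what triggers the TypeError above.
parser.add_argument("--TTA", action='store_true',
                    help="Activate test time data augmentation")

args = parser.parse_args(["--TTA"])  # the flag is now present/absent, not True/False
print(args.TTA)  # True
```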

Comparison on PACS and VLCS

In Table 1 and Table 2 of your paper, you show the performance of DeepAll alongside the performance of related work. DeepAll is the baseline number that should have been the same across the different methods had the dataset and implementation been standardized, is that correct? My question is: how are you comfortable making comparisons with methods from different implementations when they have such diverging baseline numbers? In other words, how can you be sure the improvements come from better generalization rather than a better implementation? One could treat the improvement over DeepAll as indicative of domain generalization, but deltas over the baseline need not be linear; it may be harder to improve on DeepAll when it is already doing well. I am at a loss trying to make sense of the PACS and VLCS evaluations. What am I missing?

Thanks

Python Version

Hi, can you please state the version of Python used in the code?

Thanks,
Supritam

PyTorch version of this code?

Hi, thanks for sharing this code as a great reference for your interesting paper.
Just a note: could you please advise which PyTorch version the code targets? I tried running it under the latest PyTorch 1.4 and got a series of bugs, including a failure of the forward pass (tensor dimension mismatch). Unfortunately, the provided requirement.txt does not help. Thanks!

Office-Home dataset

Hi,
I am trying to run your code to reproduce the results for the Office-Home dataset, which you refer to in Table 3. I found that the dataset txt files included in the code folders are for the Office dataset (30 classes, 3 domains) instead of the Office-Home dataset (65 classes, 4 domains) reported in Table 3. So I set up the Office-Home dataset, generated its txt files, and loaded them in the code. However, while running I get the following runtime error:

JigenDG-master/train_DA_jigsaw.py in _do_epoch(self)
    107     loss = class_loss + jigsaw_loss * self.jig_weight + target_jigsaw_loss * self.target_weight + target_entropy_loss * self.target_entropy
    108
--> 109     loss.backward()

RuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THC/generic/THCTensorMath.cu:29

I used the following initial parameters:

%run train_jigsaw.py --batch_size 128 --n_classes 65 --learning_rate 0.001 --network resnet18 --val_size 0.1 --folder_name test --jigsaw_n_classes 100 --train_all True --TTA False --nesterov False --min_scale 0.8 --max_scale 1.0 --random_horiz_flip 0.5 --jitter 0.4 --tile_random_grayscale 0.1 --source Art Clipart Real_World --target Product --jig_weight 0.7 --bias_whole_image 0.9 --image_size 222

Could you please tell me whether I am making a mistake in selecting the initial parameters?

Thank you.
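
For what it's worth, a device-side assert raised from loss.backward() with a cross-entropy loss is often caused by a class label outside the range [0, n_classes - 1]. A hypothetical sanity check over the generated Office-Home txt files is sketched below; the file names and the "path label" line format are assumptions, not the repository's guaranteed layout:

```python
n_classes = 65  # the value passed as --n_classes

labels = []
for txt in ["Art.txt", "Clipart.txt", "Real_World.txt", "Product.txt"]:  # assumed names
    with open(txt) as f:
        for line in f:
            if line.strip():
                _, label = line.rsplit(" ", 1)
                labels.append(int(label))

print("label range:", min(labels), "to", max(labels))
# Labels numbered 1..65 overflow a 65-way classifier that expects 0..64 and
# commonly trigger exactly this kind of device-side assert.
assert 0 <= min(labels) and max(labels) <= n_classes - 1
```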

Doubt about experimental results

Dear authors,
Thank you for contributing a good baseline for DG. However, I have two questions about your reported results.
First, the dataset publishers warn that the AGG results differ if you do not keep their train-val split meta-files (http://www.eecs.qmul.ac.uk/~dl307/project_iccv2017). In your code, the training data does not seem to follow this instruction; could this make your results look better than those of other papers?
Second, I cannot see that the jigsaw classifier is effective in this task (when I set its lambda to 0, the results did not change at all). Does this mean that the self-supervised learning contributes nothing here, and that your positive results come only from the data augmentation with permutation?
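
For concreteness, an illustrative sketch of the weighted objective being referred to, with the jigsaw term scaled by a lambda weight (simplified, not the repository's exact training code):

```python
import torch.nn.functional as F

def total_loss(class_logits, class_labels, jigsaw_logits, jigsaw_labels, jig_weight):
    # Supervised object loss plus the self-supervised jigsaw loss scaled by
    # lambda (jig_weight). With jig_weight = 0 the jigsaw head no longer
    # contributes to the gradient, but any permuted images still present in
    # the batch can keep acting as plain data augmentation for the class head.
    class_loss = F.cross_entropy(class_logits, class_labels)
    jigsaw_loss = F.cross_entropy(jigsaw_logits, jigsaw_labels)
    return class_loss + jig_weight * jigsaw_loss
```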

About the performance on VLCS dataset

I've found that the default parameter settings produce results about 2 points higher on the VLCS dataset than the result reported in your original paper. Did you find a better parameter setting, or is this due to the particular train-test split of VLCS used in the code?
