
Comments (10)

akshitac8 commented on May 26, 2024

Question 1: For each epoch during training, the model is trained on seen data and then evaluated (at that epoch) on the test set, which gives the accuracy for that epoch. After 30 epochs, you pick the best accuracy as the final result. Am I right?

Ans - Yes, your understanding is correct.
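The loop described in the question can be sketched as follows; `train_one_epoch` and `evaluate` are hypothetical stand-ins for the repository's training and evaluation routines, not its actual API:

```python
# Sketch of the per-epoch train/evaluate loop described above.
# train_one_epoch and evaluate are hypothetical callbacks, not tfvaegan's API.

def run_training(train_one_epoch, evaluate, n_epochs=30):
    """Train for n_epochs, evaluate after each one, and keep the best accuracy."""
    best_acc, best_epoch = 0.0, -1
    for epoch in range(n_epochs):
        train_one_epoch(epoch)          # update the model on seen-class data
        acc = evaluate(epoch)           # accuracy of this epoch's model on the test set
        if acc > best_acc:              # the best epoch's accuracy is reported
            best_acc, best_epoch = acc, epoch
    return best_acc, best_epoch
```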

Question 2: How about the argparse parameter --syn_num? I can see that the default is 600 (meaning 600 visual representations are generated for each action class?). Is there any experiment indicating that this parameter significantly influences model performance? Any suggestions for it?

Ans - The syn_num parameter controls how many unseen samples per class are synthesized from the generator. Yes, this parameter influences model performance because, after synthesis, you also train a classifier, and the classifier's training data is a combination of these synthesized unseen-class samples and the real samples.
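As a sketch of the role syn_num plays in assembling that classifier training set, assuming a hypothetical `generate(cls, noise)` callable standing in for the trained generator:

```python
import random

# Illustration of how syn_num shapes the final classifier's training set.
# generate(cls, noise) is a hypothetical stand-in for the trained generator.

def build_classifier_data(generate, unseen_classes, real_seen_data, syn_num=600, nz=64):
    """Synthesize syn_num features per unseen class and mix them with real seen features."""
    synthesized = []
    for cls in unseen_classes:
        for _ in range(syn_num):
            noise = [random.gauss(0.0, 1.0) for _ in range(nz)]  # one noise draw per sample
            synthesized.append((generate(cls, noise), cls))
    return real_seen_data + synthesized   # combined input for classifier training
```

Raising syn_num grows the synthetic share of this combined set, which is why the value affects final accuracy.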

from tfvaegan.

kaiqiangh commented on May 26, 2024

In addition, I tried a different class embedding (dimension 2048) but got the error `RuntimeError: CUDA error: device-side assert triggered`. I set the argparse flags as --nz 2048 --attSize 2048.

Also, this error comes from optimizerE.step() in train_action.py. Is this error about a tensor size mismatch? I have no idea right now. Thanks.


kaiqiangh commented on May 26, 2024


For the issue above, here are more details:

```
/pytorch/aten/src/ATen/native/cuda/Loss.cu:111: operator(): block: [1267,0,0], thread: [31,0,0] Assertion `input_val >= zero && input_val <= one` failed.

File "/content/kg_gnn_gan/train_tfvaegan.py", line 274, in <module>
    vae_loss_seen = loss_fn(recon_x, input_resv, means, log_var)
File "/content/kg_gnn_gan/train_tfvaegan.py", line 72, in loss_fn
    BCE = torch.nn.functional.binary_cross_entropy(recon_x + 1e-12, x.detach(), size_average=False)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 2893, in binary_cross_entropy
    return torch._C._nn.binary_cross_entropy(input, target, weight, reduction_enum)
RuntimeError: CUDA error: device-side assert triggered
```

It seems the issue is in the loss-function computation.
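The assertion in the log says what went wrong: `binary_cross_entropy` requires every input value to lie in [0, 1], and the `recon_x + 1e-12` in `loss_fn` can itself push a value that is exactly 1.0 above the allowed range. A plain-Python sketch of the constraint and a clamping workaround (not the repository's actual `loss_fn`):

```python
import math

# BCE is only defined for predictions in [0, 1]; out-of-range inputs trigger
# the CUDA device-side assert seen above. Plain-Python sketch, not torch code.

def bce(pred, target, eps=1e-12):
    """Binary cross-entropy for a single prediction/target pair."""
    assert 0.0 <= pred <= 1.0, "BCE input must lie in [0, 1]"
    pred = min(max(pred, eps), 1.0 - eps)   # keep log() finite
    return -(target * math.log(pred) + (1.0 - target) * math.log(1.0 - pred))

def safe_bce(pred, target):
    """Clamp an unbounded reconstruction into [0, 1] before taking the loss."""
    return bce(min(max(pred, 0.0), 1.0), target)
```

In PyTorch, the usual fixes are passing the reconstruction through `torch.sigmoid` before `binary_cross_entropy`, or switching to `BCEWithLogitsLoss`, which accepts unbounded inputs.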


kaiqiangh commented on May 26, 2024


Some updates on the issue above.

I tested another semantic embedding, with 1024 dimensions, and it works.

But the case using a 2048-dimensional vector as the semantic embedding still fails.

Notes: when I apply a different semantic embedding, I set the size parameters in the script according to the embedding's dimension, and I change nothing else in the code or settings (only the semantic embedding).

Looking forward to hearing from you. Thanks.

Kind regards.
Kai
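Since swapping embeddings means keeping the size flags in sync with the data, a cheap sanity check before training can catch a mismatch early. `opt` here mimics the parsed argparse namespace; the names are illustrative, not the script's actual variables:

```python
# Fail fast if the semantic-embedding width disagrees with the size flag.
# opt is a plain dict mimicking the argparse namespace; names are illustrative.

def check_embedding_dims(opt, class_embeddings):
    """Return the embedding width, or raise if it contradicts --attSize."""
    emb_dim = len(class_embeddings[0])   # width of one class embedding
    if emb_dim != opt["attSize"]:
        raise ValueError(
            f"--attSize is {opt['attSize']} but embeddings are {emb_dim}-dimensional"
        )
    return emb_dim
```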


in-my-heart commented on May 26, 2024


Based on my understanding of this paper: your understanding is correct. For the syn_num parameter, please refer to FREE: Feature Refinement for Generalized Zero-Shot Learning.


kaiqiangh commented on May 26, 2024


Thank you so much. Nice paper recommendation.


in-my-heart commented on May 26, 2024


You bet! Do you know this question? #24 (comment)


kaiqiangh commented on May 26, 2024


Sorry, I did not test on image ZSL; I focus only on action recognition. I guess the issue may be caused by the hyper-parameter settings.


in-my-heart commented on May 26, 2024


Oh, no problem at all.


kaiqiangh commented on May 26, 2024


Thanks for your clarification. Much appreciated.

from tfvaegan.
