
fewshot-egnn's People

Contributors

jmkim0309


fewshot-egnn's Issues

About the generalization of the Transductive Setting

Hi, this is nice work! But I have a question about the training and testing settings.

During training, each task has one query example per class (1 * 5 = 5 queries per task). At test time, performance should not change much with the number of queries per task. However, my experiments show that performance drops sharply when there is more than one query per class (> 1 * 5 queries per task) at test time. Specifically, accuracy falls to 11% when each test episode is formed by sampling 15 queries for each of the 5 classes.

I am wondering whether the model has overfit to the specific setting of 1 query for each of the 5 classes per task.
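For concreteness, the larger test episode described above (15 queries for each of 5 classes) can be sketched as follows. This is an illustration, not the repo's actual loader: the pool of 600 image indices per class and the tuple layout are assumptions.

```python
import random

def sample_episode(class_pool, num_ways=5, num_shots=1, num_queries=15, seed=0):
    """Sample one few-shot episode: num_ways classes, each with num_shots
    support examples and num_queries query examples."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(class_pool), num_ways)
    support, query = [], []
    for way, cls in enumerate(classes):
        # image indices stand in for data; 600 images per class is assumed
        pool = rng.sample(range(600), num_shots + num_queries)
        support += [(way, cls, idx) for idx in pool[:num_shots]]
        query += [(way, cls, idx) for idx in pool[num_shots:]]
    return support, query

support, query = sample_episode(range(20))
print(len(support), len(query))  # 5 75
```

With `num_queries=1` this reduces to the 1 * 5 training setting the issue describes.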

wget dataset

How to use the wget command to download ‘mini_imagenet_test.pickle’?

a question about BCE loss

Hello, I found this line:

full_edge_loss_layers = [self.edge_loss((1 - full_logit_layer[:, 0]), (1 - full_edge[:, 0])) for full_logit_layer in full_logit_layers]

Why is it self.edge_loss((1 - full_logit_layer[:, 0]), (1 - full_edge[:, 0])) rather than self.edge_loss(full_logit_layer[:, 0], full_edge[:, 0])?
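One relevant observation: binary cross-entropy is symmetric under flipping both the prediction and the target, so on a single channel the two calls give the same value; which channel of the logits holds the "edge" vs "non-edge" probability is a convention of the repo. A minimal numeric check in plain Python (not the repo's torch code):

```python
import math

def bce(p, y):
    """Binary cross-entropy between a predicted probability p and target y."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# Flipping both prediction and target swaps the two log terms, so the
# loss value is unchanged: bce(1 - p, 1 - y) == bce(p, y).
p, y = 0.8, 1.0
print(bce(p, y), bce(1 - p, 1 - y))  # both ~0.2231
```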

the node and edge feature

Hi! Thanks for your amazing work. I am trying to load my own features, which are extracted from skeleton data. What should I set as the node and edge features in that case?

How to open the events.out.tfevents log file?

First, thanks for your great work!
I am having trouble opening the log file to see the printed information.
For example: events.out.tfevents.1566895592.DA-DL-02
I have already searched on Google but cannot find a solution.
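The file above is a TensorBoard event log, not a custom format: its name follows the pattern events.out.tfevents.&lt;unix_time&gt;.&lt;hostname&gt;, so "DA-DL-02" is just the machine's hostname. A small sketch decoding the name (the standard way to view the file itself is noted in the comment):

```python
import datetime

# TensorBoard event files are read with `tensorboard --logdir <log_dir>`,
# not with a text editor; the filename only encodes when and where it was
# written.
name = "events.out.tfevents.1566895592.DA-DL-02"
_, _, _, stamp, hostname = name.split(".", 4)
written = datetime.datetime.fromtimestamp(int(stamp), tz=datetime.timezone.utc)
print(hostname, written.date())  # DA-DL-02 2019-08-27
```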

5-way 5-shot experiment on miniImageNet

Hello,
your work is really good.
But when I tried a 5-way 5-shot experiment on miniImageNet (transductive), my result did not reach the 76.37% you reported. I ran exactly the command from the README: 'python3 trainer.py --dataset mini --num_ways 5 --num_shots 5 --trainsductive True'. Since there is no file named 'trainer.py', I changed 'trainer.py' to 'train.py'. Did you make any other adjustments in your experiments?
Thank you very much!

About the evaluation setup

Hello, I'm just a little confused about the evaluation setup.

In Section 4.2 of your paper, it is said that 'For evaluation, each test episode was formed by randomly sampling 15 queries for each of 5 classes, and the performance is averaged over 600 randomly generated episodes from the test set.' I think it means every test episode has (5 * 15 =) 75 queries and 75 graphs are formed from each test episode under the non-transductive setting.

However, according to your released code, it seems that every val/test episode only has (5 * 1 =) 5 queries. And you randomly sample 10,000 episodes for validating/testing.

I'm just wondering which evaluation setup you use when obtaining the results in your paper. And have you tried them both? If so, is there any difference between the results obtained by following these two evaluation setups? Thank you!
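For what it's worth, the two protocols evaluate a similar total number of queries, which may explain why either could be used; the per-episode variance (and hence the reported confidence interval) would differ. A quick tally of the numbers quoted above:

```python
# Total query count under the two evaluation protocols described above
# (numbers taken from the paper text and the released code, respectively).
protocols = {
    "paper (600 episodes x 5 ways x 15 queries)": 600 * 5 * 15,
    "code (10000 episodes x 5 ways x 1 query)": 10_000 * 5 * 1,
}
for name, total in protocols.items():
    print(name, "->", total, "queries")  # 45000 vs 50000
```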

How are the pickle files generated from the original images?

Hi, this is good work and very helpful. But I wonder how you generated the pickle files from the original images. When I use my own pickles (generated from the original images with Resize(84) and CenterCrop(84) from torchvision.transforms), the performance decreases significantly, from 77% to 73%.
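One thing to note about the preprocessing mentioned above: Resize(84) scales the shorter side to 84 and CenterCrop(84) then discards the sides of the longer dimension, so image content is lost. How the authors actually produced the pickles is unknown; a direct 84x84 resize without cropping, for instance, would keep different content and could explain part of the gap. A plain-Python sketch of the crop geometry (no torchvision dependency):

```python
def resize_then_center_crop(w, h, size=84):
    """Geometry of torchvision's Resize(size) followed by CenterCrop(size):
    Resize scales the shorter side to `size` keeping aspect ratio, then
    CenterCrop cuts a size x size box from the center of the result."""
    if w <= h:
        new_w, new_h = size, round(h * size / w)
    else:
        new_w, new_h = round(w * size / h), size
    left, top = (new_w - size) // 2, (new_h - size) // 2
    crop_box = (left, top, left + size, top + size)  # PIL-style (l, t, r, b)
    return (new_w, new_h), crop_box

# A 500x375 image is resized to 112x84, then 14 px are cut from each side.
print(resize_then_center_crop(500, 375))  # ((112, 84), (14, 0, 98, 84))
```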

root question

Hi, your model is quite amazing. But since I use Windows, my root path is always invalid. What else am I supposed to change besides tt.arg.dataset_root in train.py? Changing only that, even to the absolute path of mini_imagenet_train.pickle, always raises FileNotFoundError.
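A likely cause, sketched below under two assumptions: that tt.arg.dataset_root is meant to be the directory containing the pickle files (not the path of mini_imagenet_train.pickle itself), and that the loader joins the filename onto it. On Windows, a raw string also avoids backslash-escape problems:

```python
from pathlib import Path

# Hypothetical Windows dataset directory; the r-prefix keeps backslashes
# literal so "\d" and "\m" are not treated as escape sequences.
dataset_root = Path(r"C:\data\mini_imagenet")

# The loader is assumed to append the split filename itself, so the root
# must point at the folder, not at the pickle file.
train_pickle = dataset_root / "mini_imagenet_train.pickle"
print(train_pickle)
```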

Node label sequence

Hello,
In the 5-way 1-shot setting, the query-set labels of each batch during training and testing are [0,1,2,3,4], with no shuffling. Could this let the network memorize the ordering and inflate the accuracy?

In other few-shot learning papers I have read (GNN, Relation Network), the query-set labels are shuffled, so I followed that idea. Using your source code, I randomly shuffled the query-set labels of each test batch, e.g. [1,4,2,0,3], [1,2,4,0,3], [2,0,4,3,1], and built init_edge from the modified labels; the result is still a 10*10 symmetric matrix. The accuracy is only about 43%, far from the 66.27% I obtain with the unmodified source code. I also tried shuffling during both training and testing, and the result was still about 43%. My initial understanding of graph networks was that the order of node labels should not affect accuracy, because the data is structured as a graph and everything is relative, but this huge accuracy gap confuses me. Did I set something up incorrectly?
thank you very much!
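To make the shuffling experiment above concrete, here is a toy version of a label-agreement init_edge matrix. It is a simplification: the actual repo initializes unknown query edges to a neutral value (e.g. 0.5) rather than from ground-truth labels, so this sketch only illustrates how a consistent label permutation rearranges which node pairs agree.

```python
def init_edge(labels):
    """Label-agreement matrix: edge[i][j] = 1.0 iff labels[i] == labels[j]."""
    n = len(labels)
    return [[1.0 if labels[i] == labels[j] else 0.0 for j in range(n)]
            for i in range(n)]

support = [0, 1, 2, 3, 4]        # one support example per class (5-way 1-shot)
query_fixed = [0, 1, 2, 3, 4]    # fixed order, as in the released code
query_perm = [1, 4, 2, 0, 3]     # one shuffled order from the issue

A = init_edge(support + query_fixed)
B = init_edge(support + query_perm)

# Both are 10x10 symmetric 0/1 matrices; shuffling moves which support-query
# pairs agree, but the graphs are identical up to a node permutation, so a
# truly order-invariant model should score both orderings equally.
print(len(A), A == B)  # 10 False
```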

download tiered-imagenet images

Hi! Your work is really good!
But could you please provide the .csv file and the link to download tiered-imagenet images?
Thanks.

Wrong labels of selected classes passed in batch

While going through your data loader, I noticed that in fewshot-egnn/data.py you randomly pick class labels with task_class_list = random.sample(full_class_list, num_ways), but never really pass these labels in your batch. Instead we have

https://github.com/khy0809/fewshot-egnn/blob/205fa80ec7cb12550f7b52a63f921171f92dac4c/data.py#L106

https://github.com/khy0809/fewshot-egnn/blob/205fa80ec7cb12550f7b52a63f921171f92dac4c/data.py#L111

which assigns the wrong class labels ([0, ..., #ways]) to the picked data points. Shouldn't the support as well as the query label be assigned task_class_list[c_idx]?
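A toy sketch of the two labeling schemes discussed above (plain Python, not the repo's loader). Whether way indices suffice rests on the assumption, typical for episodic training, that the loss only compares labels within one episode; anything interpreting labels globally would indeed need the real class IDs:

```python
import random

full_class_list = list(range(64))   # e.g. a mini-ImageNet-style train split
num_ways = 5
task_class_list = random.sample(full_class_list, num_ways)

# What the loader assigns (per the issue): the way indices 0..num_ways-1.
way_labels = list(range(num_ways))

# What the issue suggests instead: map each way index to the sampled class.
real_labels = [task_class_list[c_idx] for c_idx in way_labels]

# Within an episode the two are equivalent up to a relabeling, since support
# and query labels are only ever compared to each other inside that episode.
print(way_labels, sorted(real_labels) == sorted(task_class_list))
```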

