Comments (16)
Hi @ShuoYang-1998,
I am fairly sure the discrepancy is just a matter of some hyper-parameter. The code tip is the one I used to run a lot of ablations towards the end of the paper and for the rebuttal; I just need to find the rogue hyper-parameter that is causing the issue. One other point to note from your reported results: ORE is better than your reproduced Faster-RCNN+FT in most cases.
As @salman-h-khan said, I had international travel on Saturday. Unfortunately, I tested positive for COVID today. I have developed minor complications and am now admitted to a hospital; I am typing this message from the hospital room. Kindly give me some time to regain my health, if possible.
Thanks,
Joseph
from owod.
Wishing you good health.
Has anyone successfully reproduced the results? I ran the code several times, but my results are far from the author's. My results are attached.
Hi Yang,
I have a question about the EBM experiments.
Does the validation step that fits the EBM distributions use both known and unknown labels? I wonder whether the open-set setting degenerates into a few-shot setting after validation, because unknown classes (UUCs) with ground-truth labels shouldn't appear anywhere except in testing.
> Has anyone successfully reproduced the results? I ran the code several times, but my results are far from the author's. My results are attached.
Hi Yang,
You reproduced the results so quickly; could you give me some advice? I can't send messages to your email address. If it is convenient for you, could you give me some guidance? Thank you, looking forward to your reply.
> Has anyone successfully reproduced the results? I ran the code several times, but my results are far from the author's.
>
> Hi Yang, I have a question about the EBM experiments. Does the validation step that fits the EBM distributions use both known and unknown labels? I wonder whether the open-set setting degenerates into a few-shot setting after validation, because unknown classes (UUCs) with ground-truth labels shouldn't appear anywhere except in testing.
The EBUI does use all known and unknown data to learn a distribution, but it doesn't access the labels.
> Has anyone successfully reproduced the results? I ran the code several times, but my results are far from the author's.
>
> Hi Yang, you reproduced the results so quickly; could you give me some advice? I can't send messages to your email address. If it is convenient for you, could you give me some guidance?
I have uploaded my run.sh in issue 18; please refer to it.
> Has anyone successfully reproduced the results? I ran the code several times, but my results are far from the author's.
>
> Hi Yang, I have a question about the EBM experiments. Does the validation step that fits the EBM distributions use both known and unknown labels? I wonder whether the open-set setting degenerates into a few-shot setting after validation, because unknown classes (UUCs) with ground-truth labels shouldn't appear anywhere except in testing.
>
> The EBUI does use all known and unknown data to learn a distribution, but it doesn't access the labels.
Hi Yang,
Thank you for your reply. I have checked train_loop.py and modeling/roi_heads/roi_heads.py, and I find that EBUI does use the unknown annotations, as shown in the code below. The ground-truth labels of the unknown instances are allocated to the region proposals in roi_heads.py and are saved to fit the unknown Weibull distribution. Is that right?
```python
wb_unk = Fit_Weibull_3P(failures=unk, show_probability_plot=False, print_results=False)

def compute_energy(self, predictions, proposals):
    gt_classes = torch.cat([p.gt_classes for p in proposals])
    logits = predictions[0]
    data = (logits, gt_classes)
    location = os.path.join(self.energy_save_path, shortuuid.uuid() + '.pkl')
    torch.save(data, location)
```
I also find that EBUI can work alone, which means the unknown labels come not from ALU but from the ground truth.
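To make the pipeline under discussion concrete, here is a minimal standalone sketch (not the repository's actual code) of the energy-then-Weibull idea: compute a free energy from each proposal's classification logits, split the energies by the known/unknown ground-truth assignment, and fit a separate Weibull distribution to each group. The synthetic logits, the temperature `T`, and the use of `scipy.stats.weibull_min` in place of the `reliability` package's `Fit_Weibull_3P` are all assumptions for illustration.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import weibull_min

def free_energy(logits, T=1.0):
    # Helmholtz free energy of the logits: E(x) = -T * log(sum_k exp(l_k / T))
    return -T * logsumexp(np.asarray(logits) / T, axis=-1)

# Hypothetical logits for proposals matched to known vs. unknown ground truth.
rng = np.random.default_rng(0)
known_logits = rng.normal(loc=4.0, scale=1.0, size=(100, 21))
unknown_logits = rng.normal(loc=1.0, scale=1.0, size=(100, 21))

e_known = free_energy(known_logits)
e_unknown = free_energy(unknown_logits)

# Fit a Weibull distribution to each group of energies, shifted to be
# strictly positive and with the location parameter pinned at zero.
wb_known = weibull_min.fit(e_known - e_known.min() + 1e-6, floc=0.0)
wb_unknown = weibull_min.fit(e_unknown - e_unknown.min() + 1e-6, floc=0.0)
print("known Weibull (shape, loc, scale):", wb_known)
print("unknown Weibull (shape, loc, scale):", wb_unknown)
```

At test time, the two fitted densities can be compared at a proposal's energy value to decide whether it looks more like a known or an unknown instance.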
> The EBUI does use all known and unknown data to learn a distribution, but it doesn't access the labels.
Hi Yang,
Do you mean the specific labels?
I think that in the open-set setting we shouldn't use unknown samples for training. In this validation we learn (fit) the distribution and save its parameters, which should be seen as training with some tricks.
For MNIST, we cannot give 0-6 real labels and label the rest as unknown. In OWOD, I am not sure whether using known and unknown labelled samples to fit the distribution is OK. Do you think it is?
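The relabelling scheme being debated above can be written down in a few lines. This is only an illustrative sketch of the protocol (the `UNKNOWN` sentinel value and the choice of digits 0-6 as "known" are made up for the example), not anyone's actual code:

```python
import numpy as np

UNKNOWN = -1                    # hypothetical sentinel id for all unseen classes
KNOWN_CLASSES = set(range(7))   # e.g. MNIST digits 0-6 are "known"

def to_open_set(labels):
    """Map every label outside the known set to the single UNKNOWN id."""
    labels = np.asarray(labels)
    return np.where(np.isin(labels, list(KNOWN_CLASSES)), labels, UNKNOWN)

print(to_open_set([0, 3, 6, 7, 9]))  # -> [ 0  3  6 -1 -1]
```

The open-set concern is exactly about when this mapping is allowed: applying it to held-out validation data still tells the model *which* samples are unknown, even though their specific class ids are hidden.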
> The EBUI does use all known and unknown data to learn a distribution, but it doesn't access the labels.
> Hi Yang, do you mean the specific labels?
>
> I think that in the open-set setting we shouldn't use unknown samples for training. In this validation we learn (fit) the distribution and save its parameters, which should be seen as training with some tricks.
>
> For MNIST, we cannot give 0-6 real labels and label the rest as unknown. In OWOD, I am not sure whether using known and unknown labelled samples to fit the distribution is OK. Do you think it is?
I have the same concern; other people also raised this question in https://github.com//issues/16 and https://github.com/JosephKJ/OWOD/issues/8, but the author didn't respond.
@JosephKJ @ShuoYang-1998 @Wyman123 @Hrqingqing @salman-h-khan In what order should I use the configuration files for the T1-T4 experiments? Looking forward to your reply!
I would appreciate your detailed analysis of the configuration files!
@ShuoYang-1998: Thank you very much for helping out others. I have added replicate.py to replicate results from the pretrained models shared before. You can find the binaries and logs here, if you want to verify the authenticity of the results.
@Wyman123: We are using 4,000 kept-aside validation data points for learning the Weibull distribution. This is a tiny fraction compared to the 414,412 training data points.
@LoveIsAGame: Please refer to run.sh.
Regarding my late response: #35
@Wyman123: Thanks for the question. We use the validation set to fit the Weibull distribution. The validation set for each task consists of 1k images, hence a total of 4k.
Our problem setting demands a sequential supervision model in which unannotated unknown classes are initially observed without labels and are labelled by the annotator in subsequent tasks.
You can understand this as a transductive mode of supervision for the small held-out validation set, i.e., a small portion of the "unseen" classes' data is available as a bag with a single unknown label for the collection of instances.
> @ShuoYang-1998: Thank you very much for helping out others. I have added replicate.py to replicate results from the pretrained models shared before. You can find the binaries and logs here, if you want to verify the authenticity of the results.
>
> @Wyman123: We are using 4,000 kept-aside validation data points for learning the Weibull distribution. This is a tiny fraction compared to the 414,412 training data points.
>
> @LoveIsAGame: Please refer to run.sh. Regarding my late response: #35
The results from the pretrained models in your figure are not consistent with the results in your paper.
In addition, we still cannot reproduce the results using the training schedule.
I think you should take this problem seriously.
> The results from the pretrained models in your figure are not consistent with the results in your paper.

Kindly let me know why. Most of the numbers are in the same ballpark; some are even better.

> In addition, we still cannot reproduce the results using the training schedule.

Kindly see #37. I have fixed it now; please try again from the latest tip. Thanks.
Closing this issue due to inactivity. @dyabel is able to reproduce mAP and A-OSE with the latest code. Kindly reopen for more discussions.
@ShuoYang-1998 Hi, friend! I also tried to reproduce the results but failed; I attached my results in #77. If you successfully reproduced them, could you help me? Thank you!