Comments (8)
To be 100% honest, I asked the paper's authors whether they had a draft of their code in PyTorch, and I implemented BatchEnsemble based on that. So I need to thank the authors of BatchEnsemble for their help. It is not an official implementation, but I checked that the results are consistent with their paper.
from lp_bnn.
Thank you for your quick response, the information is very helpful!
Sorry to disturb you again; I have some questions regarding the repeating operation.
There seems to be a difference between the repeat patterns used in the training and testing stages.
In training, a tile-style repeat is used, producing [x1, x1, x1, x2, x2, x2, ...] (suppose n_models=3);
In testing, torch.cat is used instead, with the repeated pattern [x1, x2, ..., x1, x2, ..., x1, x2, ...] (e.g. https://github.com/giannifranchi/LP_BNN/blob/751d1499eb6f794885d050c92ba06d34816bdbda/networks/batchensemble_layers.py#:~:text=A%2CB%2CC%5D%5D-,x%20%3D%20torch.cat(%5Bx%20for%20i%20in%20range(self.num_models)%5D%2C%20dim%3D0),-num_examples_per_model%20%3D%20int(x))
Besides that, in the ensemble layers the pattern [A, A, ..., B, B, ..., C, C, ...] is used to repeat the alpha and gamma parameters.
Won't this lead to a misalignment in the calculation? I would appreciate an explanation of the details. Please let me know if my understanding is incorrect. Thank you.
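For concreteness, the two repeat patterns can be reproduced in a few lines of PyTorch. This is only a toy sketch: `repeat_interleave` is used here to mimic the interleaved training pattern described above, not necessarily the repository's exact call.

```python
import torch

x = torch.arange(4)  # a toy batch: samples [0, 1, 2, 3]
num_models = 2

# Training-style repeat (interleaved, as described above): [x1, x1, x2, x2, ...]
train_style = x.repeat_interleave(num_models, dim=0)
# tensor([0, 0, 1, 1, 2, 2, 3, 3])

# Testing-style repeat: whole batch concatenated num_models times,
# [x1, x2, x3, x4, x1, x2, x3, x4]
test_style = torch.cat([x for _ in range(num_models)], dim=0)
# tensor([0, 1, 2, 3, 0, 1, 2, 3])

# In the ensemble layers alpha/gamma are expanded as [A, A, ..., B, B, ...]:
# member 0 owns the first contiguous chunk of the batch, member 1 the next.
member = torch.arange(num_models).repeat_interleave(len(x))
# tensor([0, 0, 0, 0, 1, 1, 1, 1])

for name, batch in [("train", train_style), ("test", test_style)]:
    for m in range(num_models):
        print(name, "member", m, "sees samples", batch[member == m].tolist())
```

Under the interleaved pattern, member 0 is paired with samples [0, 0, 1, 1] while member 1 gets [2, 2, 3, 3]; under the `torch.cat` pattern, every member is paired with the full batch [0, 1, 2, 3].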
Dear Milliema
Sorry for the late reply. I think there is no problem with the inference phase.
If you read the paper carefully, the authors do not mention that the batch images need to be repeated n times in the training phase. However, I noticed that the rank-one vectors alpha and gamma do not train perfectly without it. After multiple experiments and interactions with the authors, I found that repeating the data during training also improves performance.
Regarding the training phase, each set of vectors alpha and gamma randomly selects data from the batch, which might lead to seeing the same images multiple times. That is probably linked to sampling theory and was out of the scope of my research. As I answered the first time, most of the BatchEnsemble code does not come from me.
Can you explain what you mean by "misalignment in the calculation"?
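For readers following along, a minimal rank-one BatchEnsemble linear layer might look like the sketch below. This is an illustrative re-implementation, not the repository's exact code; the class name, initialization, and shapes are assumptions.

```python
import torch
import torch.nn as nn

class BatchEnsembleLinear(nn.Module):
    """Sketch of a rank-one BatchEnsemble linear layer.

    A single shared weight W is modulated per ensemble member by
    rank-one vectors alpha (input side) and gamma (output side):
        y = ((x * alpha) @ W.T) * gamma + bias
    """

    def __init__(self, in_features, out_features, num_models):
        super().__init__()
        self.num_models = num_models
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        # alpha/gamma initialized to ones here; the paper discusses
        # random-sign initialization, omitted in this sketch.
        self.alpha = nn.Parameter(torch.ones(num_models, in_features))
        self.gamma = nn.Parameter(torch.ones(num_models, out_features))
        self.bias = nn.Parameter(torch.zeros(num_models, out_features))

    def forward(self, x):
        # x: (num_models * examples_per_model, in_features), with member 0
        # owning the first contiguous chunk of the batch, member 1 the next, etc.
        examples_per_model = x.shape[0] // self.num_models
        alpha = self.alpha.repeat_interleave(examples_per_model, dim=0)
        gamma = self.gamma.repeat_interleave(examples_per_model, dim=0)
        bias = self.bias.repeat_interleave(examples_per_model, dim=0)
        return (x * alpha) @ self.weight.T * gamma + bias

# Usage: repeat a batch of 5 with torch.cat so all 3 members see every sample.
layer = BatchEnsembleLinear(in_features=8, out_features=4, num_models=3)
x = torch.cat([torch.randn(5, 8)] * 3, dim=0)  # shape (15, 8)
y = layer(x)                                   # shape (15, 4)
```

Because the batch is split into per-member chunks inside `forward`, the way the input is repeated determines which samples each member's alpha/gamma actually see.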
Hi, thanks for the great work you have done.
Following your discussion with Milliema, you mentioned:

> Yet I realized that the rank one vectors: alpha and gamma, do not train perfectly. After multiple experiments and interactions with the authors, I realized that it also improves the performance of repeating the data during training.
Could you please explain why repeating the data during training improves the model's performance? I found in this repository that by repeating the data, the model just receives multiple copies of the same samples, i.e. the model is trained on [x1, x2, x3, x4] when the data is not repeated, and on [x1, x1, x1, x1, x2, x2, x2, x2, x3, x3, x3, x3, x4, x4, x4, x4] when it is. I am a bit confused about why this can help the model's performance.
Thanks again for your contribution, and I look forward to your reply!
If you do not repeat the data during training, the weights alpha and gamma will never see all the data. So while the shared weights complete an entire epoch, each alpha/gamma only covers a fraction of an epoch. That is why repeating the data helps training. I hope I am clear.
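The coverage argument can be checked numerically: without repetition each alpha/gamma covers only its own contiguous chunk of the batch, while `torch.cat` repetition gives every member the full batch (a toy sketch):

```python
import torch

num_models = 2
batch = torch.arange(4)  # one mini-batch: samples [0, 1, 2, 3]

def samples_seen(x):
    # Member m is applied to the m-th contiguous chunk of the batch,
    # matching the [A..A, B..B] expansion of alpha/gamma.
    return [chunk.tolist() for chunk in x.chunk(num_models, dim=0)]

# Without repetition, each alpha/gamma covers only half the batch ...
no_repeat = samples_seen(batch)        # [[0, 1], [2, 3]]

# ... whereas after torch.cat repetition every member covers the full batch.
repeated = torch.cat([batch for _ in range(num_models)], dim=0)
with_repeat = samples_seen(repeated)   # [[0, 1, 2, 3], [0, 1, 2, 3]]

print("no repeat:   ", no_repeat)
print("with cat repeat:", with_repeat)
```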
Thanks for your immediate response.
When the data is not repeated, assume we have training samples [$x_1, x_2, x_3, x_4$] and num_models=2; then the expanded weights are [$\alpha_1, \alpha_1, \alpha_2, \alpha_2$], so $\alpha_1$ only sees $x_1, x_2$ and $\alpha_2$ only sees $x_3, x_4$.
However, my question is that even with the repeated pattern in the code, alpha and gamma still only see a fraction of an epoch. E.g. if your training data is [$x_1, x_2, x_3, x_4$], it becomes [$x_1, x_1, x_2, x_2, x_3, x_3, x_4, x_4$] after repeating, and the expanded alpha weights become [$\alpha_1, \alpha_1, \alpha_1, \alpha_1, \alpha_2, \alpha_2, \alpha_2, \alpha_2$]. With this pattern, $\alpha_1$ still only sees $x_1, x_2$ and $\alpha_2$ still only sees $x_3, x_4$.
I hope I have stated my confusion clearly. Thanks again for your reply.
I agree with you that this is not the intended behavior. I will correct the code by the end of this month and apply

```python
x = torch.cat([x for i in range(num_models)], dim=0)
```
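As a side note on why the `torch.cat` layout is also convenient at test time: since whole copies of the batch are stacked, a single reshape recovers the per-sample ensemble average (a sketch; the shapes are illustrative):

```python
import torch

num_models, batch_size, num_classes = 3, 4, 5

# Stand-in for the network output on the torch.cat-repeated batch:
# rows [0:4] are member 0's logits, [4:8] member 1's, [8:12] member 2's.
logits = torch.randn(num_models * batch_size, num_classes)

# Because copies of the batch are stacked whole, reshaping to
# (num_models, batch_size, num_classes) lines members up per sample,
# and the ensemble prediction is the mean over the first axis.
mean_logits = logits.view(num_models, batch_size, num_classes).mean(dim=0)
assert mean_logits.shape == (batch_size, num_classes)

# Sanity check against the manual average of the three chunks.
manual = (logits[0:4] + logits[4:8] + logits[8:12]) / num_models
assert torch.allclose(mean_logits, manual)
```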
Thanks for the correction!
Related Issues (3)
- DeepLabv3+ Release?
- LPBNN_layers bug?