ensta-u2is-ai / torch-uncertainty
Open-source framework for uncertainty and deep learning models in PyTorch :seedling:
Home Page: https://torch-uncertainty.github.io
License: Apache License 2.0
To close this issue, we should at least:
The regression routines should allow the user to choose their preferred distribution. For now, we cannot distinguish between Laplace and Gaussian distributions, as we rely on the number of parameters to select the corresponding distribution.
A potential solution was highlighted in this discussion:
"""
I had the same kind of problem with another project of mine, I would suggest creating a function like this:
def get_distribution(dist_name: str, dist_params: Tensor) -> Distribution:
...
I think it could be interesting to use the Distribution class from PyTorch, but it might be too much. A dictionary with the parameters should do it nicely too.
"""
Originally posted by @alafage in #46 (comment)
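The suggested helper could be sketched as follows. This is a hypothetical illustration, not the repository's actual code: it dispatches on an explicit distribution name instead of inferring the family from the number of parameters. Real code would return `torch.distributions` objects (e.g. `Normal`, `Laplace`); plain dicts stand in here to keep the sketch self-contained.

```python
# Hypothetical sketch: map a user-chosen name to a distribution factory.
# In the real routine these would build torch.distributions instances.

def make_normal(loc, scale):
    return {"family": "normal", "loc": loc, "scale": scale}

def make_laplace(loc, scale):
    return {"family": "laplace", "loc": loc, "scale": scale}

_FACTORIES = {"normal": make_normal, "laplace": make_laplace}

def get_distribution(dist_name, dist_params):
    # Explicit dispatch: Laplace and Gaussian both take (loc, scale),
    # so the name, not the parameter count, disambiguates them.
    try:
        factory = _FACTORIES[dist_name]
    except KeyError:
        raise ValueError(f"Unknown distribution: {dist_name!r}") from None
    return factory(**dist_params)
```

With this design, adding a new family only requires registering one more factory in the dictionary.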
In one of my latest trainings, I had the following error in plotting_utils.py, line 95:
val_oh = torch.nn.functional.one_hot(val.long(), num_classes=10)
RuntimeError: Class values must be non-negative.
I don't have much more information about this, but we should check if some cases lead to such an error.
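Until the root cause is found, a defensive check before one-hot encoding would turn the opaque RuntimeError into an actionable message. A minimal pure-Python sketch (the helper name is hypothetical; `torch.nn.functional.one_hot` would replace the manual encoding in practice):

```python
def safe_one_hot(values, num_classes):
    # torch.nn.functional.one_hot raises "Class values must be non-negative."
    # when a label is < 0 (e.g. an ignore/void label that slipped through).
    # Validate first so the offending value is reported explicitly.
    out = []
    for v in values:
        if not 0 <= v < num_classes:
            raise ValueError(
                f"class value {v} is outside [0, {num_classes}); "
                "check for ignore/void labels before one-hot encoding"
            )
        row = [0] * num_classes
        row[v] = 1
        out.append(row)
    return out
```

Logging which batch triggered the check would help narrow down the cases leading to the error.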
I have added dropout to the standard ResNet in de31452. It could be interesting to add an MC-Dropout ensemble baseline. To do this, the following steps could be considered:
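The inference side of such a baseline can be sketched as follows. This is a schematic, not the repository's implementation: with dropout kept active at test time, predictions from several stochastic forward passes are averaged, and their spread serves as a simple uncertainty proxy. A toy callable stands in for a ResNet with dropout enabled.

```python
# Schematic MC-Dropout inference: run num_samples stochastic forward
# passes (dropout active) and aggregate. "model" is any callable whose
# output varies under dropout.

def mc_dropout_predict(model, x, num_samples=16):
    preds = [model(x) for _ in range(num_samples)]
    mean = sum(preds) / num_samples
    var = sum((p - mean) ** 2 for p in preds) / num_samples
    return mean, var  # predictive mean and a simple uncertainty proxy
```

With a deterministic model the variance collapses to zero, which is a useful sanity check for the wrapper.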
This goes for all tutorials.
I'm running the following command on CIFAR-10:
python3 resnet.py --version mimo --arch 18 --accelerator gpu --device 1 --benchmark True --max_epochs 75 --precision 16 --root "./data/" --num_estimators 4
but it raises RuntimeError: The size of tensor a (512) must match the size of tensor b (128) at non-singleton dimension 0. I dived into the code and found that it may be related to this line. Should we remove it? @o-laurent Could you help with this?
Edit by maintainer:
Paper: https://arxiv.org/abs/2108.00968
Implementation: https://github.com/giannifranchi/deeplabv3-superpixelmix
More particularly, the transform is here: https://github.com/giannifranchi/deeplabv3-superpixelmix/blob/master/datasets/cityscapes_mix.py#L13
Some tests that fail locally pass in the CI/CD. This seems related to the pull-request target synchronization.
Hi, thanks for your excellent work and for sharing the code. I have a small question: if I apply a packed ensemble, the pre-trained parameters of a ResNet trained on ImageNet can't be used. Is that right?
Currently, validation may fail even for relatively small models such as DeepLabV3 when using large image crops. Calibration Error is the worst offender, but other metrics also seem to have a huge impact on VRAM.
Now that we support regression, it could be interesting to implement simple methods, such as Deep Evidential Regression (DER).
This method would be relatively straightforward to implement. Here is a possible roadmap:
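As a starting point, the core of DER is the negative log-likelihood of the Normal-Inverse-Gamma evidential distribution. The sketch below follows the formula from Amini et al. (2020, the paper introducing DER) as I recall it; the constants should be verified against the paper before relying on it.

```python
import math

def der_nll(y, gamma, nu, alpha, beta):
    # NLL of the Normal-Inverse-Gamma evidential distribution with
    # predicted parameters (gamma, nu, alpha, beta), written for scalars.
    # Sketch after Amini et al., "Deep Evidential Regression" (2020).
    omega = 2.0 * beta * (1.0 + nu)
    return (
        0.5 * math.log(math.pi / nu)
        - alpha * math.log(omega)
        + (alpha + 0.5) * math.log(nu * (y - gamma) ** 2 + omega)
        + math.lgamma(alpha)
        - math.lgamma(alpha + 0.5)
    )
```

A tensorized version of this loss, plus the paper's evidence regularizer, would slot into the existing regression routine once the distribution choice is configurable.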
Hello, your paper has been very inspiring for my work. I still have some questions that I would like to ask you.
Regarding the learning rate during training: your code computes the loss as the average of the losses of all experts. With four experts, each expert's loss is effectively divided by four, so the gradient computed during backpropagation is also divided by four. Does the learning rate then need to be set four times larger than for a single model?
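The effect being asked about can be checked with a toy scalar example (hypothetical quadratic per-expert losses with per-expert weights, not the repository's actual model): averaging over n experts scales each expert's gradient by 1/n, so a learning rate n times larger restores the single-model step size.

```python
# Toy per-expert loss: loss_i(w_i) = (w_i - t_i) ** 2, gradient 2 * (w_i - t_i).

def grad_single(w, t):
    return 2.0 * (w - t)

def grad_wrt_expert(i, weights, targets):
    # d/dw_i of mean_j loss_j: only loss_i depends on w_i,
    # so the gradient is grad_single(w_i, t_i) / n.
    return grad_single(weights[i], targets[i]) / len(weights)

weights = [0.0, 0.0, 0.0, 0.0]
targets = [1.0, 2.0, 3.0, 4.0]
lr, n = 0.1, len(weights)

# Scaling the learning rate by n makes the averaged-loss update on expert 0
# coincide with the update a standalone single model would take.
step_avg = (n * lr) * grad_wrt_expert(0, weights, targets)
step_alone = lr * grad_single(weights[0], targets[0])
```

Whether the authors actually scale the learning rate this way is a question for them; the arithmetic above only confirms that the 1/n factor is real.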
See https://github.com/ENSTA-U2IS-AI/torch-uncertainty/actions/runs/8408560029/job/23024874240
Remove randomness.
More information on this dataset is available on PapersWithCode.
This dataset is straightforward to implement. Here is a possible roadmap:
TinyImageNet-C can be downloaded here: https://berkeley.app.box.com/s/6zt1qzwm34hgdzcvi45svsb10zspop8a
Hello, thank you for your open-source code; it has been a great inspiration and help to my research. One small question: PackedLinear and PackedConv2d in packed_layers.py don't seem to match the latest GitHub repository, and the associated files don't seem to be updated in the pip release. For example, the pip-installed version of PackedLinear has no alpha parameter. Could you please update the pip package to make it easier to use?
This metric could be added to the default group in the classification routine.
Original code: https://github.com/giannifranchi/LP_BNN
Original paper: https://arxiv.org/abs/2012.02818
A single test (across local tests and GitHub Actions) has failed due to the Bayesian layers. Let's look into this.
Source:
https://github.com/ENSTA-U2IS/torch-uncertainty/actions/runs/5358435126/attempts/1
There is currently no way to enforce a CPU-only installation of torch.
Implement this feature.
Display the distance to the optimal calibration bins, for instance with dashed red histograms.
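The quantity to display is just the per-bin gap to perfect calibration, i.e. |accuracy − confidence| for each bin of the reliability diagram. A sketch (hypothetical helper name; plotting code would overlay these values as the dashed red bars):

```python
def calibration_gaps(bin_confidences, bin_accuracies):
    # Distance from each bin to the diagonal of the reliability diagram,
    # where perfect calibration means accuracy == confidence.
    return [
        abs(acc - conf)
        for conf, acc in zip(bin_confidences, bin_accuracies)
    ]
```

These gaps are also the ingredients of the Expected Calibration Error, so the routine computing ECE likely already has them available.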
The token is not valid, and we cannot upload the coverage; cf. this GitHub action.