
hub's Introduction

PyTorch Hub


Logistics

We accept submissions to PyTorch Hub through PRs in the hub repo. Once a PR is merged into master here, it will show up on the PyTorch website within 24 hours.

Steps to submit to PyTorch hub

  1. Add a hubconf.py in your repo, following the instructions in the torch.hub docs. Verify it's working correctly by running torch.hub.load(...) locally. (A minimal sketch is shown after this list.)
  2. Create a PR in pytorch/hub repo. For each new model you have, create a <repo_owner>_<repo_name>_<title>.md file using this template.
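
For reference, a minimal hubconf.py might look like the sketch below. The entrypoint name and the use of a torchvision model are placeholders; the general shape (a dependencies list plus one function per model) follows the torch.hub documentation.

# hubconf.py -- minimal sketch of a hub entrypoint (names are placeholders).
dependencies = ['torch', 'torchvision']

def my_resnet18(pretrained=False, **kwargs):
    """Example entrypoint: returns a torchvision ResNet-18."""
    from torchvision.models import resnet18
    return resnet18(pretrained=pretrained, **kwargs)

You can then verify it locally with torch.hub.load('<repo_owner>/<repo_name>', 'my_resnet18', pretrained=True).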

Notes

  • Currently we don't support hosting pretrained weights; users with pretrained weights need to host them themselves.
  • In general we recommend one model per markdown file; models with similar structures, like resnet18 and resnet50, can be placed in the same file.
  • If you have images, place them in images/ folder and link them correctly in the [images/featured_image_1/featured_image_2] fields above.
  • We only support a pre-defined set of tags, currently they are listed in scripts/tags.py. We accept PRs to expand this set as needed.
  • To test your PR locally, run the tests below.
python scripts/sanity_check.py
./scripts/run_pytorch.sh
  • Our CI concatenates all Python code blocks in a markdown file and runs the result against the latest PyTorch release.
    • Remember to mark your python code using ```python in your model's markdown file.
    • If your dependencies are not installed on our CI machine, add them in install.sh.
    • If it fails, you can find a new temp.py file left in the repo to reproduce the failure.
  • We also provide a way to preview your model webpage through netlify bot. This bot builds your PR with the latest pytorch.github.io repo and comments on your PR with a preview link. The preview will be updated as you push more commits to the PR. Example netlify bot comment

hub's People

Contributors

ailzhang, ak391, bertmaher, brianjo, cpprhtn, datumbox, faroit, fbzekiyalniz, glenn-jocher, hyunkyunghan, jspisak, lyken17, malfet, myleott, nicolalandro, nicolashug, nikhilaravi, nikithamalgifb, nv-kkudrynski, parmeet, progamergov, ranftlr, snakers4, soumith, tylersuard, victorsanh, vmoens, wconstab, zdevito, zheng-xq


hub's Issues

Module collision when loading more than one model with Torch Hub

Hi, I'm having an issue with loading multiple specific models with torch.hub.load

Originally posted the description here, and @glenn-jocher suggested posting it here too.

I'm pasting the original issue below:

Hi, I think I have a similar issue to 2414
It prevents loading more than one model via torch.hub when using certain models.
If I'm not mistaken from reading the thread, loading a model with torch.hub shadows some module names, which then become unusable within torch.

I'm using torch '1.9.1+cu102' on an Ubuntu 20.04 machine; to reproduce, I run:

import torch
model = torch.hub.load('facebookresearch/detr', 'detr_resnet50', pretrained=True)
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

that ends up in

~/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py in _create(name, pretrained, channels, classes, autoshape, verbose, device)
     28     from pathlib import Path
     29 
---> 30     from models.yolo import Model
     31     from models.experimental import attempt_load
     32     from utils.general import check_requirements, set_logging

ModuleNotFoundError: No module named 'models.yolo'

reversing the load order obviously ends up with:

~/.cache/torch/hub/facebookresearch_detr_master/hubconf.py in <module>
      2 import torch
      3 
----> 4 from models.backbone import Backbone, Joiner
      5 from models.detr import DETR, PostProcess
      6 from models.position_encoding import PositionEmbeddingSine

ModuleNotFoundError: No module named 'models.backbone'

Is there some workaround which I'm not seeing?

Thank you very much, also for the awesome project itself
cb
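
One possible workaround, sketched below under the assumption that the collision comes from the first repo's top-level models/utils packages staying cached in sys.modules: drop those entries before loading the second model so its hubconf resolves the names against its own checkout. This is a hack around the shared namespace, not an official torch.hub feature.

import sys
import torch

detr = torch.hub.load('facebookresearch/detr', 'detr_resnet50', pretrained=True)

# Forget the first repo's top-level packages so the second hubconf can
# import its own "models"/"utils" modules instead of the cached ones.
for name in list(sys.modules):
    if name.split('.')[0] in ('models', 'utils'):
        del sys.modules[name]

yolo = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)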

404 for pre-trained ResNeXt models

I'm currently getting a 404 when following the instructions here for loading the Instagram pre-trained ResNeXt models. It looks like the source repo changed its master branch to main, so instead of

model = torch.hub.load('facebookresearch/WSL-Images', 'resnext101_32x8d_wsl')

one simply needs

model = torch.hub.load('facebookresearch/WSL-Images:main', 'resnext101_32x8d_wsl')

and all is well.

ntsnet example not working

Hi,
I'm trying to run the ntsnet example:

    import urllib
    import torch
    from PIL import Image
    from torchvision import transforms

    transform_test = transforms.Compose([
        transforms.Resize((600, 600), Image.BILINEAR),
        transforms.CenterCrop((448, 448)),
        # transforms.RandomHorizontalFlip(),  # only if train
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
    ])

    model = torch.hub.load("nicolalandro/ntsnet-cub200", "ntsnet", pretrained=True,
                        **{"topN": 6, "device": "cpu", "num_classes": 200})
    model.eval()

    url = "https://raw.githubusercontent.com/nicolalandro/ntsnet-cub200/master/images/nts-net.png"
    img = Image.open(urllib.request.urlopen(url))
    scaled_img = transform_test(img)
    torch_images = scaled_img.unsqueeze(0)

    with torch.no_grad():
        top_n_coordinates, concat_out, raw_logits, concat_logits, part_logits, top_n_index, top_n_prob = model(torch_images)

        _, predict = torch.max(concat_logits, 1)
        pred_id = predict.item()
        print("bird class:", model.bird_classes[pred_id])

but I'm getting the following error:

Exception has occurred: RuntimeError
index 1623497638210 is out of bounds for dimension 1 with size 1614
  File "C:\Users\xxx\.cache\torch\hub\nicolalandro_ntsnet-cub200_master\nts_net\model.py", line 61, in forward
    top_n_prob = torch.gather(rpn_score, dim=1, index=top_n_index)
  File "D:\xxx\test_ntsnet.py", line 25, in <module>
    top_n_coordinates, concat_out, raw_logits, concat_logits, part_logits, top_n_index, top_n_prob = model(torch_images)

SSD does not work

Just following the tutorial:

import torch
precision = 'fp32'
ssd_model = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_ssd', model_math=precision)

A runtime error occurs:
RuntimeError: unexpected EOF, expected 1022700 more bytes. The file might be corrupted.

version:
torch.version.cuda: 10.0
torch.version.git_version: 004e3ab79172dda0008a251fda8a65634ea129c3
torch.version: 1.5.0.dev20200109+cu100
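
A hedged guess at a fix: the "unexpected EOF" message usually means the cached checkpoint file is only partially downloaded, so deleting it and downloading again tends to resolve it. The sketch below clears the whole local checkpoint cache (heavy-handed, but it avoids guessing the exact file name) and re-runs the load; the cache paths are assumptions that vary by torch version.

import glob
import os
import torch

# Remove any cached (possibly truncated) checkpoints, then re-download.
for pattern in ('~/.cache/torch/checkpoints/*.pth',
                '~/.cache/torch/hub/checkpoints/*.pth'):
    for path in glob.glob(os.path.expanduser(pattern)):
        os.remove(path)

precision = 'fp32'
ssd_model = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub',
                           'nvidia_ssd', model_math=precision)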

LICENSE

CONTRIBUTING.md says:

By contributing to hub, you agree that your contributions will be licensed under the LICENSE file in the root directory of this source tree.

but there is no LICENSE file in the root directory

problem about SSD model in ecosystem running with Google Colab

when running this part in SSD|PyTorch
inputs = [utils.prepare_input(uri) for uri in uris]
tensor = utils.prepare_tensor(inputs, precision == 'fp16')

on Google Colab, it gives this error:

---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in ()
----> 1 inputs = [utils.prepare_input(uri) for uri in uris]
2 tensor = utils.prepare_tensor(inputs, precision == 'fp16')

2 frames
/root/.cache/torch/hub/NVIDIA_DeepLearningExamples_torchhub/hubconf.py in load_image(image_path)
235 def load_image(image_path):
236 """Code from Loading_Pretrained_Models.ipynb - a Caffe2 tutorial"""
--> 237 img = skimage.img_as_float(skimage.io.imread(image_path))
238 if len(img.shape) == 2:
239 img = np.array([img, img, img]).swapaxes(0, 2)

AttributeError: module 'skimage' has no attribute 'io'

How do I solve it?
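
A hedged workaround: scikit-image does not import its submodules automatically, so skimage.io only exists as an attribute once "import skimage.io" has run somewhere in the process. Importing it explicitly before preparing the inputs may be enough; uris, utils and precision below are the names from the snippet above, not new definitions.

# Importing the submodule binds it onto the skimage package, after which
# the hubconf's skimage.io.imread call can resolve.
import skimage.io

inputs = [utils.prepare_input(uri) for uri in uris]
tensor = utils.prepare_tensor(inputs, precision == 'fp16')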

cannot load the pre-trained model

  • windows

  • pytorch 1.1

  • torchvision 0.3

  • cuda 9.0

When I tried torch.hub.list('pytorch/vision'), it failed with:
[WinError 32] The process cannot access the file because it is being used by another process: 'C:\Users\****/.cache\torch\hub\master.zip'

resize image using transforms in image segmentation examples

This is a proposal rather than an issue, as the Image Segmentation models in the Hub, which in this case are: Deeplabv3-ResNet101 and FCN-ResNet101, have a restriction on the input image dimensions as mentioned in the docs "H and W are expected to be at least 224px".

To make the "copy-paste" use that the average user will make of the example easier, I think it is a good idea to use the following transforms.Compose() instead of the current one, noting that the extra lines are optional if the image height and width are already above 224px, since the segmentation will work almost the same way as in the current example.

The following piece of code:

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

would be replaced by:

preprocess = transforms.Compose([
    transforms.Resize(256), # Optional: Resize the input PIL Image to the given size.
    transforms.CenterCrop(224), # Optional: Crops the given PIL Image at the center.
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

As specified for example on the PyTorch Transfer Learning Beginner Tutorials

Note: I don't know if it's in the scope of the example, but it may be worth mentioning that reducing the image size makes inference faster at the cost of a coarser segmentation, since there are fewer pixels.

None of the Colab notebooks listed on pytorch.org/hub work out of the box

There are three highlighted models in https://pytorch.org/hub with links to "Open on Google Colab". None of them quite work out of the box:

DEEPLABV3-RESNET101: This gives an "AttributeError: module 'torch.jit' has no attribute 'unused'", probably because the default PyTorch version is 1.1 while the attribute is only in the master branch. This is pulling the vision model from the master branch.

TRANSFORMER (NMT): The colab notebook is completely blank.

WAVEGLOW: Almost works, but you have to change the runtime to GPU. Why isn't this the default for the notebook if GPU support is required? As far as I can tell this is just a matter of adding "accelerator": "GPU" to the "metadata" dict in the ipynb.

Adding modules to pytorch hub

Hi,

I built a library called pytorch_zoo with a collection of pytorch modules, losses, schedulers, and utilities that I use often. I'd like to add some of these modules to pytorch hub.

Are you interested in contributions like this for pytorch hub, or is this repository only for pretrained models?

Thanks,

Bilal

example resnet50 not working

When you run this example code: https://pytorch.org/hub/pytorch_vision_resnet/

For resnet50, the example code for the suggested "dog.jpg" gives this:

bucket 0.005972536746412516
tennis ball 0.00567108066752553
hook 0.004871048964560032
plunger 0.004716242663562298
paper towel 0.004389984533190727

A hint: there is no dog in there. :)

Here is the complete code (created by copy-pasting from https://pytorch.org/hub/pytorch_vision_resnet/):

# as per: https://pytorch.org/hub/pytorch_vision_resnet/

# sample execution (requires torchvision)
from PIL import Image
import torch
from torchvision import transforms
from torchvision import models

model = models.resnet50()
pth_file = "../tmp/resnet50-19c8e357.pth" # https://download.pytorch.org/models/resnet50-19c8e357.pth
image_file="data/dog.jpg" # https://github.com/pytorch/hub/raw/master/images/dog.jpg

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.load_state_dict(
    torch.load(pth_file, map_location = device)
)

input_image = Image.open(image_file)
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model

# move the input and model to GPU for speed if available
if torch.cuda.is_available():
    input_batch = input_batch.to('cuda')
    model.to('cuda')

with torch.no_grad():
    output = model(input_batch)
# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
#print(output[0])
# The output has unnormalized scores. To get probabilities, you can run a softmax on it.
probabilities = torch.nn.functional.softmax(output[0], dim=0)
# print(probabilities)
print("max>", probabilities.max())
# Read the categories
with open("data/imagenet_classes.txt", "r") as f: # https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt
    categories = [s.strip() for s in f.readlines()]
# Show top categories per image
top5_prob, top5_catid = torch.topk(probabilities, 5)
for i in range(top5_prob.size(0)):
    print(categories[top5_catid[i]], top5_prob[i].item())
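
A likely explanation, though not confirmed in the issue: the pasted script never calls model.eval(), so batch norm runs in training mode on a batch of one image and the scores come out garbled. A hedged sketch of the adjustment (the weights path is the one used above):

import torch
from torchvision import models

model = models.resnet50()
model.load_state_dict(torch.load("../tmp/resnet50-19c8e357.pth", map_location="cpu"))
model.eval()  # the official hub example includes this call; it is missing above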

ImportError: cannot import name 'wide_resnet50_2'

When I use model = torch.hub.load('pytorch/vision', 'resnext50_32x4d', pretrained=True) I get this error:

---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
in
----> 1 model = torch.hub.load('pytorch/vision', 'densenet121', pretrained=True)

/opt/conda/lib/python3.6/site-packages/torch/hub.py in load(github, model, *args, **kwargs)
334 sys.path.insert(0, repo_dir)
335
--> 336 hub_module = import_module(MODULE_HUBCONF, repo_dir + '/' + MODULE_HUBCONF)
337
338 entry = _load_entry_from_hubconf(hub_module, model)

/opt/conda/lib/python3.6/site-packages/torch/hub.py in import_module(name, path)
68 spec = importlib.util.spec_from_file_location(name, path)
69 module = importlib.util.module_from_spec(spec)
---> 70 spec.loader.exec_module(module)
71 return module
72 elif sys.version_info >= (3, 0):

/opt/conda/lib/python3.6/importlib/_bootstrap_external.py in exec_module(self, module)

/opt/conda/lib/python3.6/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)

~/.cache/torch/hub/pytorch_vision_master/hubconf.py in
5 from torchvision.models.densenet import densenet121, densenet169, densenet201, densenet161
6 from torchvision.models.inception import inception_v3
----> 7 from torchvision.models.resnet import resnet18, resnet34, resnet50, resnet101, resnet152,
8 resnext50_32x4d, resnext101_32x8d, wide_resnet50_2, wide_resnet101_2
9 from torchvision.models.squeezenet import squeezenet1_0, squeezenet1_1

ImportError: cannot import name 'wide_resnet50_2'

PyTorch Hub models and dependencies should be versioned

We are using the PyTorch deeplabv3_resnet101 model on AWS (on pytorch 1.1), and it suddenly stopped working.

It appears that a recent upgrade to torchvision/inception (!) broke this model:

Downloading: "https://github.com/pytorch/vision/archive/master.zip" to /home/ec2-user/.cache/torch/hub/master.zip

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-8-00653a6a902b> in <module>()
      4     'pytorch/vision',
      5     'deeplabv3_resnet101',
----> 6     pretrained=True)
      7 model_deeplab.eval()
      8 

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/hub.py in load(github, model, *args, **kwargs)
    334     sys.path.insert(0, repo_dir)
    335 
--> 336     hub_module = import_module(MODULE_HUBCONF, repo_dir + '/' + MODULE_HUBCONF)
    337 
    338     entry = _load_entry_from_hubconf(hub_module, model)

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/hub.py in import_module(name, path)
     68         spec = importlib.util.spec_from_file_location(name, path)
     69         module = importlib.util.module_from_spec(spec)
---> 70         spec.loader.exec_module(module)
     71         return module
     72     elif sys.version_info >= (3, 0):

~/anaconda3/envs/pytorch_p36/lib/python3.6/importlib/_bootstrap_external.py in exec_module(self, module)

~/anaconda3/envs/pytorch_p36/lib/python3.6/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)

~/.cache/torch/hub/pytorch_vision_master/hubconf.py in <module>()
      3 
      4 from torchvision.models.alexnet import alexnet
----> 5 from torchvision.models.densenet import densenet121, densenet169, densenet201, densenet161
      6 from torchvision.models.inception import inception_v3
      7 from torchvision.models.resnet import resnet18, resnet34, resnet50, resnet101, resnet152,\

~/.cache/torch/hub/pytorch_vision_master/torchvision/__init__.py in <module>()
      1 import warnings
      2 
----> 3 from torchvision import models
      4 from torchvision import datasets
      5 from torchvision import ops

~/.cache/torch/hub/pytorch_vision_master/torchvision/models/__init__.py in <module>()
      3 from .vgg import *
      4 from .squeezenet import *
----> 5 from .inception import *
      6 from .densenet import *
      7 from .googlenet import *

~/.cache/torch/hub/pytorch_vision_master/torchvision/models/inception.py in <module>()
      6 import torch.nn as nn
      7 import torch.nn.functional as F
----> 8 from torch.jit.annotations import Optional
      9 from torch import Tensor
     10 from .utils import load_state_dict_from_url

ImportError: cannot import name 'Optional'

Is there a way to avoid this?
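
Until hub entries are versioned, a hedged workaround is to pin the hub repo to a tag or branch that matches the installed torchvision instead of tracking master. The exact tag below is an assumption; pick one compatible with your environment.

import torch

# Pin the vision repo to a fixed tag rather than master (tag name assumed).
model = torch.hub.load('pytorch/vision:v0.3.0',
                       'deeplabv3_resnet101', pretrained=True)
model.eval()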

HARDNET cannot load the model according to documentation

I ran the code according to this:

import torch
model = torch.hub.load('PingoLH/Pytorch-HarDNet', 'hardnet68', pretrained=True)
# or any of these variants
# model = torch.hub.load('PingoLH/Pytorch-HarDNet', 'hardnet85', pretrained=True)
# model = torch.hub.load('PingoLH/Pytorch-HarDNet', 'hardnet68ds', pretrained=True)
# model = torch.hub.load('PingoLH/Pytorch-HarDNet', 'hardnet39ds', pretrained=True)
model.eval()

But I get this error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-8-f03545ea41ea> in <module>()
     17     return
     18 
---> 19 test()

<ipython-input-8-f03545ea41ea> in test()
      5     model_name = "HarDNet"
      6     ##
----> 7     model1 = torch.hub.load('PingoLH/Pytorch-HarDNet', 'hardnet68', pretrained=True)
      8 #     model2 = torch.hub.load('PingoLH/Pytorch-HarDNet', 'hardnet85', pretrained=True)
      9 #     model3 = torch.hub.load('PingoLH/Pytorch-HarDNet', 'hardnet68ds', pretrained=True)

~\Anaconda3\lib\site-packages\torch\hub.py in load(github, model, *args, **kwargs)
    339     entry = _load_entry_from_hubconf(hub_module, model)
    340 
--> 341     model = entry(*args, **kwargs)
    342 
    343     sys.path.remove(repo_dir)

~/.cache\torch\hub\PingoLH_Pytorch-HarDNet_master/hubconf.py in hardnet68(pretrained, **kwargs)
      8     """
      9     # Call the model, load pretrained weights
---> 10     model = hardnet.HarDNet(depth_wise=False, arch=68, pretrained=pretrained)
     11     return model
     12 

~/.cache\torch\hub\PingoLH_Pytorch-HarDNet_master\hardnet.py in __init__(self, depth_wise, arch, pretrained, weight_path)
    215               checkpoint = 'https://ping-chao.com/hardnet/hardnet39ds-0e6c6fa9.pth'
    216 
--> 217             self.load_state_dict(torch.hub.load_state_dict_from_url(checkpoint, progress=False))
    218           else:
    219             postfix = 'ds' if depth_wise else ''

~\Anaconda3\lib\site-packages\torch\hub.py in load_state_dict_from_url(url, model_dir, map_location, progress)
    433         hash_prefix = HASH_REGEX.search(filename).group(1)
    434         _download_url_to_file(url, cached_file, hash_prefix, progress=progress)
--> 435     return torch.load(cached_file, map_location=map_location)

~\Anaconda3\lib\site-packages\torch\serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
    385         f = f.open('rb')
    386     try:
--> 387         return _load(f, map_location, pickle_module, **pickle_load_args)
    388     finally:
    389         if new_fd:

~\Anaconda3\lib\site-packages\torch\serialization.py in _load(f, map_location, pickle_module, **pickle_load_args)
    572     unpickler = pickle_module.Unpickler(f, **pickle_load_args)
    573     unpickler.persistent_load = persistent_load
--> 574     result = unpickler.load()
    575 
    576     deserialized_storage_keys = pickle_module.load(f, **pickle_load_args)

~\Anaconda3\lib\site-packages\torch\serialization.py in persistent_load(saved_id)
    535                 obj = data_type(size)
    536                 obj._torch_load_uninitialized = True
--> 537                 deserialized_objects[root_key] = restore_location(obj, location)
    538             storage = deserialized_objects[root_key]
    539             if view_metadata is not None:

~\Anaconda3\lib\site-packages\torch\serialization.py in default_restore_location(storage, location)
    117 def default_restore_location(storage, location):
    118     for _, _, fn in _package_registry:
--> 119         result = fn(storage, location)
    120         if result is not None:
    121             return result

~\Anaconda3\lib\site-packages\torch\serialization.py in _cuda_deserialize(obj, location)
     93 def _cuda_deserialize(obj, location):
     94     if location.startswith('cuda'):
---> 95         device = validate_cuda_device(location)
     96         if getattr(obj, "_torch_load_uninitialized", False):
     97             storage_type = getattr(torch.cuda, type(obj).__name__)

~\Anaconda3\lib\site-packages\torch\serialization.py in validate_cuda_device(location)
     77 
     78     if not torch.cuda.is_available():
---> 79         raise RuntimeError('Attempting to deserialize object on a CUDA '
     80                            'device but torch.cuda.is_available() is False. '
     81                            'If you are running on a CPU-only machine, '

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.

The code and model can be downloaded but cannot be loaded. So what is the problem here? Thanks.
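
A hedged workaround sketch: the repo's hubconf downloads a CUDA-saved checkpoint without a map_location, so on a CPU-only machine deserialization fails. Wrapping load_state_dict_from_url to force map_location='cpu' before the hubconf runs is a generic patch, not an official fix.

import functools
import torch

# Force every hub checkpoint download to be mapped onto the CPU.
_orig = torch.hub.load_state_dict_from_url
torch.hub.load_state_dict_from_url = functools.partial(_orig, map_location='cpu')

model = torch.hub.load('PingoLH/Pytorch-HarDNet', 'hardnet68', pretrained=True)
model.eval()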

Netlify Trusty Thar Ubuntu Image will no longer work

From the netlify log of PR #225

https://app.netlify.com/sites/pytorch-hub-preview/deploys/613f117bd5823400072ef4e5?utm_source=github&utm_campaign=bot_dl

2:23:15 PM: ---------------------------------------------------------------------
DEPRECATION NOTICE: Builds using the Trusty build image will fail after September 19, 2021

The build image for this site uses Ubuntu 14.04 Trusty Tahr, which is no longer supported.
All Netlify builds using the Trusty build image will begin failing in the week of September 19.

To avoid service disruption, please select a newer build image at the following link:
https://app.netlify.com/sites/pytorch-hub-preview/settings/deploys#build-image-selection

For more details, visit the build image migration guide:
https://answers.netlify.com/t/end-of-support-for-trusty-build-image-everything-you-need-to-know/39004
---------------------------------------------------------------------

Please upgrade the build image.

cc @NicolasHug

[benchmark] nits for running benchmark models locally

I am working on python packaging for PyTorch and just used the benchmark models to verify that the packaging approach would work and not get caught up in the complexity of existing models. I used the ability to loop through the models to get a handle on the torch.nn.Module for each one, saved it in the package, and reloaded it. It illustrated a lot of shortcomings of my initial code and I was able to quickly try fixes and see how they would work. Pretty cool! I think the benchmark suite is going to be really useful for these type of design questions in addition to purely perf improvements. Thanks for helping to put it together.

As part of the process of using the benchmarks, I ran into a few nits in some of the benchmarks that only got uncovered when trying to use things locally.

Couldn't easily work around

  • Background-Matting - does not work on my local machine
    • has hard-coded circleci paths,
    • expects local directory to be a specific value
  • tacotron2
    • requires a GPU even if you ask for a cpu model, because it calls .cuda in load_model()

Require workarounds

  • Overall

    • the use of sys.path modifications to load files means that error messages are confusing:
      e.g. File "hubconf.py", line 74, in __init__ (which hubconf.py is that?). A better approach would be to treat models/ as part of the path and load the submodules from there.
  • BERT-pytorch

    • expects local directory to be a specific value
  • attention-is-all-you-need-pytorch

    • expects local directory to be a specific value
  • fastNLP

    • expects local directory to be a specific value
  • demucs

    • get_module does not return a torch.nn.Module (returns a lambda)
    • doesn't do anything with the jit flag (should throw if it is not supported)
    • puts ScriptModule annotation on model, but doesn't actually script the model
  • moco

    • default device is set to 'cuda' but the runbook specifies the default device is 'cpu',
      causes model to fail in unexpected way when cuda is not installed

RuntimeError: invalid hash value

I keep getting an error when loading a state dict from a URL:

Downloading: "https://github.com/mateuszbuda/brain-segmentation-pytorch/archive/master.zip" to /root/.cache/torch/hub/master.zip
Downloading: "https://github.com/mateuszbuda/brain-segmentation-pytorch/blob/master/weights/unet-e012d006.pt" to /root/.cache/torch/checkpoints/unet-e012d006.pt
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/torch/hub.py", line 340, in load
    model = entry(*args, **kwargs)
  File "/root/.cache/torch/hub/mateuszbuda_brain-segmentation-pytorch_master/hubconf.py", line 25, in unet
    checkpoint, progress=False, map_location=device
  File "/usr/local/lib/python3.6/dist-packages/torch/hub.py", line 433, in load_state_dict_from_url
    _download_url_to_file(url, cached_file, hash_prefix, progress=progress)
  File "/usr/local/lib/python3.6/dist-packages/torch/hub.py", line 377, in _download_url_to_file
    .format(hash_prefix, digest))
RuntimeError: invalid hash value (expected "e012d006", got "7f29ab803d816cd5e7036edba138876058fe5b3ce89ad58f48eca48d05013c1e")

And every time I try, I get a different hash value

RuntimeError: invalid hash value (expected "e012d006", got "72a1672284d86564ba1d7bc42b0adfdc79d20fee5b474e920afb67618fc2db35")
RuntimeError: invalid hash value (expected "e012d006", got "6aac34dc3952bc3f033e0d6051f6058c15e5f2877773c852c32063d9213e37a8")

What is the recommended way to load weights that are stored on github?
I followed this guide: https://pytorch.org/docs/stable/hub.html#publishing-models
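
A hedged reading of the log: the checkpoint URL points at the GitHub /blob/ page, which returns an HTML document rather than the .pt file, so the downloaded bytes (and therefore the hash) differ from the expected weights every time. The usual remedy is to serve the raw file, for example a /raw/ URL or a release asset; the sketch below only swaps /blob/ for /raw/ and is an assumption about what the hubconf should reference.

import torch

# Download the raw weights file, not the GitHub HTML page that wraps it.
url = ('https://github.com/mateuszbuda/brain-segmentation-pytorch/'
       'raw/master/weights/unet-e012d006.pt')
state_dict = torch.hub.load_state_dict_from_url(url, map_location='cpu', progress=False)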

Example not working in Jupyter notebooks

Hi there,

I wanted to play around with your SSD300 - thanks again for creating that, btw - as it is probably better than the implementation I've been using. However, just trying to get the easy examples to run in Jupyter Notebooks has been a struggle. I used conda to install all the packages and correct versions as posted but when I try to predict the sample images, I get an error (screenshot):

[screenshot: Annotation 2020-05-25 095057]

Full code, copy-pasted from https://pytorch.org/hub/nvidia_deeplearningexamples_ssd/:
[screenshot: Annotation 2020-05-25 095205]

Any ideas if this is a bug or something I need to look into deeper on my end?

Thanks again for putting out great stuff!

Contribute a model to PyTorch Hub

Hi, thanks for providing so many amazing AI models for us AI researchers!
Currently, I want to contribute our model (https://github.com/hustvl/YOLOP) to PyTorch Hub. I have added the hubconf.py to my repository and filled in the project information in hub/docs/template.md, but there has been no news since. Did I miss anything? I'd appreciate it very much if you can answer!

Regards

Accuracy issues with PyTorch's "AlexNet" pre-trained model

When I ran the latest version of AlexNet's pre-trained model on the ImageNet validation set, I found that the top-1 and top-5 error rates were higher than documented. Here are my test details:

Test: [  0/391] Time  3.516 ( 3.516)    Loss 1.0981 (1.0981)    Acc@1  73.44 ( 73.44)   Acc@5  91.41 ( 91.41)
Test: [ 10/391] Time  1.555 ( 1.372)    Loss 1.8061 (1.2305)    Acc@1  61.72 ( 70.88)   Acc@5  79.69 ( 88.07)
......
Test: [380/391] Time  1.443 ( 1.605)    Loss 1.1601 (1.9751)    Acc@1  69.53 ( 55.10)   Acc@5  91.41 ( 78.20)
Test: [390/391] Time  1.158 ( 1.604)    Loss 3.5698 (1.9602)    Acc@1  23.75 ( 55.41)   Acc@5  61.25 ( 78.40)
 * Acc@1 55.412 Acc@5 78.402

The numbers documented in the article are:

Model structure  Top-1 error  Top-5 error
alexnet          43.45        20.91

but I measured:

Model structure  Top-1 error  Top-5 error
alexnet          44.59        21.60

For the training details, I followed the source paper and the code in pytorch/examples/imagenet.

Can you tell me some training tricks? Thank you!

demo not working for Translation: TypeError: cannot unpack non-iterable NoneType object

The example that I am trying to execute is: https://pytorch.org/hub/pytorch_fairseq_translation/

Whether it be in a local environment or Google Colab, I get the error:

----> 1 en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model', tokenizer='moses', bpe='fastbpe')


7 frames
/root/.cache/torch/hub/pytorch_fairseq_master/fairseq/criterions/__init__.py in <module>()
     22     CRITERION_DATACLASS_REGISTRY,
     23 ) = registry.setup_registry(
---> 24     "--criterion", base_class=FairseqCriterion, default="cross_entropy"
     25 )
     26 

TypeError: cannot unpack non-iterable NoneType object

Steps to replicate:
open the google colab page linked to the tutorial at https://pytorch.org/hub/pytorch_fairseq_translation/ and execute the code.

What should be here?

I think this could be an amazing resource once it gets going. I do have one rather broad and open-ended question: what, exactly, is supposed to be published here, and where is the bar intended to be in terms of admission?

Is the intent that anyone and everyone publishes their models here? Five copies of EfficientNet, a ResNet-50 with better-trained weights than Torchvision? Or is the intent to curate a list of the best, unique models from the research community?

What is the intended difference between the 'research' class and the as-of-yet-unused 'development' ... ResNet-50 from Torchvision is labeled as 'research'; where is that line supposed to be?

Is there intended to be any standard on the performance of models and weights? E.g., must they meet or exceed published paper results in top-1 accuracy, mAP, etc.?

error running pytorch_vision in colab

I ran the sample code on colab

from PIL import Image
from torchvision import transforms

input_image = Image.open(filename)
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0)  # create a mini-batch as expected by the model

# move the input and model to GPU for speed if available
if torch.cuda.is_available():
    input_batch = input_batch.to('cuda')
    model.to('cuda')

with torch.no_grad():
    output = model(input_batch)

# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
print(output[0])

# The output has unnormalized scores. To get probabilities, you can run a softmax on it.
print(torch.nn.functional.softmax(output[0], dim=0))

and I get the error:

RuntimeError Traceback (most recent call last)
in ()
1 from PIL import Image
----> 2 from torchvision import transforms
3 input_image = Image.open(filename)
4 preprocess = transforms.Compose([
5 transforms.Resize(256),

7 frames
/usr/local/lib/python3.6/dist-packages/torch/jit/__init__.py in _compile_and_register_class(obj, rcb, qualified_name)
   1074 def _compile_and_register_class(obj, rcb, qualified_name):
   1075     ast = get_jit_class_def(obj, obj.__name__)
-> 1076     _jit_script_class_compile(qualified_name, ast, rcb)
   1077     _add_script_class(obj, qualified_name)
   1078

RuntimeError: class 'torch.torchvision.models.detection._utils.BalancedPositiveNegativeSampler' already defined. (register_type at /pytorch/torch/csrc/jit/script/compilation_unit.h:166)

Any idea what I did wrong?
I am working on a Windows 10 machine, if it matters

[AttributeError] in TRANSFORMER (NMT): pytorch_fairseq_translation.ipynb

AttributeError                            Traceback (most recent call last)
<ipython-input-2-5863cdfe7acc> in <module>()
      2 
      3 # Load an En-Fr Transformer model trained on WMT'14 data :
----> 4 en2fr = torch.hub.load('pytorch/fairseq', 'transformer.wmt14.en-fr', tokenizer='moses', bpe='subword_nmt')
      5 
      6 # Use the GPU (optional):

5 frames
/usr/local/lib/python3.6/dist-packages/subword_nmt/subword_nmt.py in <module>()
     93     sys.stdin = codecs.getreader('UTF-8')(sys.stdin)
     94 else:
---> 95     sys.stderr = codecs.getwriter('UTF-8')(sys.stderr.buffer)
     96     sys.stdout = codecs.getwriter('UTF-8')(sys.stdout.buffer)
     97     sys.stdin = codecs.getreader('UTF-8')(sys.stdin.buffer)

AttributeError: 'OutStream' object has no attribute 'buffer'

Inconvenience Loading Several Models With Colliding Namespaces

If you load 2+ models from torch.hub at the same time in the same process, they are treated like one namespace, i.e. if you have a utils.py module in both packages, it raises cryptic errors.

This comment best describes this behavior - snakers4/silero-vad#28 (comment) - a user tried loading 2 models at the same time (I just renamed the utils module in one of them to avoid this).

This is not really a problem and I am not sure whether this is intentional, but very many torch.hub packages have a utils.py module.

I understand that this can be solved with more proper packaging / containerization / CI, but since the ideology of torch.hub is to keep things minimal, this may become an issue in the future.
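
A sketch of one way to sidestep this, assuming the models do not need to live in the same process: run each hub model in its own process so every repo gets a private module namespace. The repo/entrypoint pairs below are just the colliding examples reported earlier.

import torch
from multiprocessing import Process

def run_model(repo, entrypoint):
    # Each process has its own sys.modules, so "utils"/"models" cannot collide.
    model = torch.hub.load(repo, entrypoint, pretrained=True)
    model.eval()
    # ... run inference here and hand results back via a file or a queue ...

if __name__ == '__main__':
    for repo, entry in [('facebookresearch/detr', 'detr_resnet50'),
                        ('ultralytics/yolov5', 'yolov5s')]:
        worker = Process(target=run_model, args=(repo, entry))
        worker.start()
        worker.join()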

Document error (sigsep_open-unmix-pytorch_umx.md)

It seems there is a typo in sigsep_open-unmix-pytorch_umx.md
If it is ok, let me send a pull request regarding this.

umx is trained on the regular MUSDB18 which is bandwidth limited to 16 kHz do to AAC compression.
--> umx is trained on the regular MUSDB18 which is bandwidth limited to 16 kHz due to AAC compression.

Furthermore, the models and all utility function to preprocess, read and save audio stems, are available in a python package that can be intalled via
--> Furthermore, the models and all utility function to preprocess, read and save audio stems, are available in a python package that can be installed via

torch.hub.list error for pytorch/vision

Following the documentation example for listing available models, I tried torch.hub.list('pytorch/vision'), but was met with a RuntimeError: builtin cannot be used as a value that appears to originate from the zeros_like() function. I've included the full traceback below

Downloading: "https://github.com/pytorch/vision/archive/master.zip" to /home/addison/.cache/torch/hub/master.zip
Traceback (most recent call last):
  File "/home/addison/miniconda3/envs/tcn/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3319, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-4-7cd719e1efcb>", line 1, in <module>
    torch.hub.list('pytorch/vision')
  File "/home/addison/miniconda3/envs/tcn/lib/python3.6/site-packages/torch/hub.py", line 276, in list
    hub_module = import_module(MODULE_HUBCONF, repo_dir + '/' + MODULE_HUBCONF)
  File "/home/addison/miniconda3/envs/tcn/lib/python3.6/site-packages/torch/hub.py", line 72, in import_module
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/addison/.cache/torch/hub/pytorch_vision_master/hubconf.py", line 4, in <module>
    from torchvision.models.alexnet import alexnet
  File "/snap/pycharm-community/144/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "/home/addison/.cache/torch/hub/pytorch_vision_master/torchvision/__init__.py", line 3, in <module>
    from torchvision import models
  File "/snap/pycharm-community/144/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "/home/addison/.cache/torch/hub/pytorch_vision_master/torchvision/models/__init__.py", line 12, in <module>
    from . import detection
  File "/snap/pycharm-community/144/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "/home/addison/.cache/torch/hub/pytorch_vision_master/torchvision/models/detection/__init__.py", line 1, in <module>
    from .faster_rcnn import *
  File "/snap/pycharm-community/144/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "/home/addison/.cache/torch/hub/pytorch_vision_master/torchvision/models/detection/faster_rcnn.py", line 13, in <module>
    from .rpn import AnchorGenerator, RPNHead, RegionProposalNetwork
  File "/snap/pycharm-community/144/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "/home/addison/.cache/torch/hub/pytorch_vision_master/torchvision/models/detection/rpn.py", line 11, in <module>
    from . import _utils as det_utils
  File "/snap/pycharm-community/144/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "/home/addison/.cache/torch/hub/pytorch_vision_master/torchvision/models/detection/_utils.py", line 19, in <module>
    class BalancedPositiveNegativeSampler(object):
  File "/home/addison/miniconda3/envs/tcn/lib/python3.6/site-packages/torch/jit/__init__.py", line 1219, in script
    _compile_and_register_class(obj, _rcb, qualified_name)
  File "/home/addison/miniconda3/envs/tcn/lib/python3.6/site-packages/torch/jit/__init__.py", line 1076, in _compile_and_register_class
    _jit_script_class_compile(qualified_name, ast, rcb)
  File "/home/addison/miniconda3/envs/tcn/lib/python3.6/site-packages/torch/jit/_recursive.py", line 222, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/addison/miniconda3/envs/tcn/lib/python3.6/site-packages/torch/jit/__init__.py", line 1226, in script
    fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
RuntimeError: 
builtin cannot be used as a value:
at /home/addison/.cache/torch/hub/pytorch_vision_master/torchvision/models/detection/_utils.py:14:56
def zeros_like(tensor, dtype):
    # type: (Tensor, int) -> Tensor
    return torch.zeros_like(tensor, dtype=dtype, layout=tensor.layout,
                                                        ~~~~~~~~~~~~~ <--- HERE
                            device=tensor.device, pin_memory=tensor.is_pinned())
'zeros_like' is being compiled since it was called from '__torch__.torchvision.models.detection._utils.BalancedPositiveNegativeSampler.__call__'
at /home/addison/.cache/torch/hub/pytorch_vision_master/torchvision/models/detection/_utils.py:72:12
            # randomly select positive and negative examples
            perm1 = torch.randperm(positive.numel(), device=positive.device)[:num_pos]
            perm2 = torch.randperm(negative.numel(), device=negative.device)[:num_neg]
            pos_idx_per_image = positive[perm1]
            neg_idx_per_image = negative[perm2]
            # create binary mask from indices
            pos_idx_per_image_mask = zeros_like(
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~...  <--- HERE
                matched_idxs_per_image, dtype=torch.uint8
            )
            neg_idx_per_image_mask = zeros_like(
                matched_idxs_per_image, dtype=torch.uint8
            )
            pos_idx_per_image_mask[pos_idx_per_image] = torch.tensor(1, dtype=torch.uint8)
            neg_idx_per_image_mask[neg_idx_per_image] = torch.tensor(1, dtype=torch.uint8)

Colab and other demos do not work

I have tried to run what is in Colab, but I get the following error:


/usr/local/lib/python3.7/dist-packages/torch/hub.py in load(repo_or_dir, model, *args, **kwargs)
    360 
    361     if source == 'github':
--> 362         repo_or_dir = _get_cache_or_reload(repo_or_dir, force_reload, verbose)
    363 
    364     model = _load_local(repo_or_dir, model, *args, **kwargs)

/usr/local/lib/python3.7/dist-packages/torch/hub.py in _get_cache_or_reload(github, force_reload, verbose)
    160     else:
    161         # Validate the tag/branch is from the original repo instead of a forked repo
--> 162         _validate_not_a_forked_repo(repo_owner, repo_name, branch)
    163 
    164         cached_file = os.path.join(hub_dir, normalized_br + '.zip')

/usr/local/lib/python3.7/dist-packages/torch/hub.py in _validate_not_a_forked_repo(repo_owner, repo_name, branch)
    122         while True:
    123             url = url_prefix + '?per_page=100&page=' + str(page)
--> 124             with urlopen(url) as r:
    125                 response = json.loads(r.read().decode(r.headers.get_content_charset('utf-8')))
    126                 if not response:

/usr/lib/python3.7/urllib/request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context)
    220     else:
    221         opener = _opener
--> 222     return opener.open(url, data, timeout)
    223 
    224 def install_opener(opener):

/usr/lib/python3.7/urllib/request.py in open(self, fullurl, data, timeout)
    529         for processor in self.process_response.get(protocol, []):
    530             meth = getattr(processor, meth_name)
--> 531             response = meth(req, response)
    532 
    533         return response

/usr/lib/python3.7/urllib/request.py in http_response(self, request, response)
    639         if not (200 <= code < 300):
    640             response = self.parent.error(
--> 641                 'http', request, response, code, msg, hdrs)
    642 
    643         return response

/usr/lib/python3.7/urllib/request.py in error(self, proto, *args)
    567         if http_err:
    568             args = (dict, 'default', 'http_error_default') + orig_args
--> 569             return self._call_chain(*args)
    570 
    571 # XXX probably also want an abstract factory that knows when it makes

/usr/lib/python3.7/urllib/request.py in _call_chain(self, chain, kind, meth_name, *args)
    501         for handler in handlers:
    502             func = getattr(handler, meth_name)
--> 503             result = func(*args)
    504             if result is not None:
    505                 return result

/usr/lib/python3.7/urllib/request.py in http_error_default(self, req, fp, code, msg, hdrs)
    647 class HTTPDefaultErrorHandler(BaseHandler):
    648     def http_error_default(self, req, fp, code, msg, hdrs):
--> 649         raise HTTPError(req.full_url, code, msg, hdrs, fp)
    650 
    651 class HTTPRedirectHandler(BaseHandler):

HTTPError: HTTP Error 403: rate limit exceeded

I have the same problem when I try pytorch demos on my own machine: https://stackoverflow.com/questions/68071913/demos-for-pytorch-and-ios-are-not-working-on-macbook-m1
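
A hedged workaround until the rate-limited validation call is fixed or authenticated: clone the repo yourself and load it with source='local', which skips the GitHub API request entirely (the traceback above shows the validation only happens on the 'github' path). The clone path and entrypoint below are placeholders.

import torch

# git clone https://github.com/pytorch/vision /tmp/vision   (run once beforehand)
model = torch.hub.load('/tmp/vision', 'resnet18', source='local', pretrained=True)
model.eval()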

Publishing trained fairseq model on pytorch hub

Hello,

I trained my RoBERTa model following this tutorial; now I have the checkpoint directory with all the checkpoint .pt files (including best_checkpoint.pt). How do I make use of these to publish on PyTorch Hub?
The documentation says to create a hubconf.py with an entrypoint, but I don't see how to write this entrypoint from the checkpoint files I get from training RoBERTa with fairseq.
Any suggestions?
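
A hedged sketch of what such an entrypoint could look like, assuming the checkpoints were produced by the standard fairseq RoBERTa training recipe; the directory layout, file name and entrypoint name here are assumptions to adapt to your repo.

# hubconf.py -- sketch of an entrypoint for a custom fairseq RoBERTa checkpoint.
dependencies = ['torch', 'fairseq']

def my_roberta(checkpoint_dir='checkpoints',
               checkpoint_file='checkpoint_best.pt', **kwargs):
    """Load a RoBERTa model trained with fairseq from a checkpoint directory."""
    from fairseq.models.roberta import RobertaModel
    return RobertaModel.from_pretrained(checkpoint_dir,
                                        checkpoint_file=checkpoint_file,
                                        **kwargs)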

When running waveglow.infer, getting 'RuntimeError: CUDA error: invalid device function'

When running:
with torch.no_grad():
    _, mel, _, _ = tacotron2.infer(sequence)
    audio = waveglow.infer(mel)

I get the following error:

RuntimeError Traceback (most recent call last)

in ()
5 with torch.no_grad():
6 _, mel, _, _ = tacotron2.infer(sequence)
----> 7 audio = waveglow.infer(mel)
8 audio_numpy = audio[0].data.cpu().numpy()
9 rate = 22050

2 frames

/root/.cache/torch/hub/nvidia_DeepLearningExamples_torchhub/PyTorch/SpeechSynthesis/Tacotron2/waveglow/model.py in forward(self, z, reverse)
75 W_inverse = W_inverse.half()
76 self.W_inverse = W_inverse
---> 77 z = F.conv1d(z, self.W_inverse, bias=None, stride=1, padding=0)
78 return z
79 else:

RuntimeError: CUDA error: invalid device function

Circle-CI build failing due to ModuleNotFoundError on hydra

As spotted while creating PR #155, the Circle-CI build is failing with the following error while trying to load the RoBERTa-Large model from pytorch/fairseq:

Traceback (most recent call last):
  File "temp.py", line 2, in <module>
    roberta = torch.hub.load('pytorch/fairseq', 'roberta.large')
  File "/home/circleci/miniconda3/lib/python3.7/site-packages/torch/hub.py", line 349, in load
    hub_module = import_module(MODULE_HUBCONF, repo_dir + '/' + MODULE_HUBCONF)
  File "/home/circleci/miniconda3/lib/python3.7/site-packages/torch/hub.py", line 71, in import_module
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/circleci/.cache/torch/hub/pytorch_fairseq_master/hubconf.py", line 8, in <module>
    from fairseq.hub_utils import BPEHubInterface as bpe  # noqa
  File "/home/circleci/.cache/torch/hub/pytorch_fairseq_master/fairseq/__init__.py", line 17, in <module>
    import fairseq.criterions  # noqa
  File "/home/circleci/.cache/torch/hub/pytorch_fairseq_master/fairseq/criterions/__init__.py", line 26, in <module>
    importlib.import_module('fairseq.criterions.' + module)
  File "/home/circleci/miniconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/home/circleci/.cache/torch/hub/pytorch_fairseq_master/fairseq/criterions/adaptive_loss.py", line 12, in <module>
    from fairseq.dataclass.data_class import DDP_BACKEND_CHOICES
  File "/home/circleci/.cache/torch/hub/pytorch_fairseq_master/fairseq/dataclass/data_class.py", line 18, in <module>
    from hydra.core.config_store import ConfigStore
ModuleNotFoundError: No module named 'hydra'

Exited with code exit status 1

I guess this has something to do with hydra-core, which does not appear to be installed in:

hub/scripts/install.sh

Lines 13 to 15 in 5eac8f0

conda install -y regex pillow tqdm boto3 requests numpy\
h5py scipy matplotlib unidecode ipython pyyaml opencv
conda install -y -c conda-forge librosa inflect

P.S.: Let me know if this is going in the right direction. Thank you!

file naming of cached models for `load_state_dict_from_url`

  • I am currently using a torch hub entrypoint for an unreleased model which is in a private github repository. Therefore the entrypoint is manually called using hubconf.entrypoint().

The entrypoint then calls load_state_dict_from_url, which takes care of caching a model by (by default) storing downloaded models in ~/.cache/torch/checkpoints/.

Now I noticed that the filenames are directly the model-<sha256>.pth files taken from the URL. I think it would be nicer if we could add a prefix (such as the entrypoint) to the downloaded checkpoints for better file organization.

maybe something like load_state_dict_from_url(..., prefix="XYZGAN-") would work?

Feel free to move this to pytorch core, if this issue is better suited there.
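
For what it's worth, later torch releases appear to expose a file_name argument on load_state_dict_from_url that covers roughly this use case; whether your version has it is an assumption worth checking. A sketch with a placeholder URL:

import torch

state_dict = torch.hub.load_state_dict_from_url(
    'https://example.com/weights/model-0123abcd.pth',  # placeholder URL
    file_name='XYZGAN-model-0123abcd.pth',              # name used in the cache dir
    map_location='cpu')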

The example code for u-net brain MRI may not include normalization pre-step

When I follow the example code in https://pytorch.org/hub/mateuszbuda_brain-segmentation-pytorch_unet/, the result is a mess. Then I discovered that by removing the normalization step, everything goes well. I checked the repo for the original code, and it seems the training procedure did not have a normalization step, so it may be better to fix this. The model is good.
What I mean is, changing
preprocess = transforms.Compose([ transforms.ToTensor(), transforms.Normalize(mean=m, std=s), ])
to
preprocess = transforms.Compose([ transforms.ToTensor(), ])
I am new to this area; tell me if I am wrong. Thanks.

Inception v3 online docs input explanation

Hi,
this is not a real issue, but just a note regarding the online docs of the Inception_v3 at this link.
I think it is better to insert a note in the page about the input shape as stated in the code documentation as:

.. note::
        **Important**: In contrast to the other models the inception_v3 expects tensors with a size of
        N x 3 x 299 x 299, so ensure your images are sized accordingly.

N must be greater than 1, otherwise it will throw an error; the strange thing is that it is reported as a training error even in pretrained mode (pretrained=True).

This is not stated clearly in the online docs.

Best regards.

hub/pytorch_vision_deeplabv3_resnet101 plot error

# create a color pallette, selecting a color for each class
palette = torch.tensor([2 ** 25 - 1, 2 ** 15 - 1, 2 ** 21 - 1])
colors = torch.as_tensor([i for i in range(21)])[:, None] * palette
colors = (colors % 255).numpy().astype("uint8")

# plot the semantic segmentation predictions of 21 classes in each color
r = Image.fromarray(output_predictions.byte().cpu().numpy()).resize(input_image.size)
r.putpalette(colors)

import matplotlib.pyplot as plt
plt.imshow(r)
# plt.show()

The above code raises TypeError: only size-1 arrays can be converted to Python scalars.
The call r.putpalette(colors) appears to expect a flat sequence of values rather than the 2-D colors array. How can this code be fixed?
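
A hedged fix sketch: PIL's putpalette expects a flat sequence of integers, so flattening the (21, 3) colour array avoids the TypeError. Image, output_predictions, input_image and colors below are the names from the example above, and whether this matches the hub example's intent is an assumption.

# Flatten the per-class colour table before handing it to putpalette.
r = Image.fromarray(output_predictions.byte().cpu().numpy()).resize(input_image.size)
r.putpalette(colors.flatten().tolist())

import matplotlib.pyplot as plt
plt.imshow(r)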

torch.hub shouldn't assume model dependencies have __spec__ defined

Problem
I'm using torch.hub to load a model that has the transformers library as a dependency; however, the last few versions of transformers haven't had __spec__ defined. Currently, this gives an error with torch.hub when trying to load the model and check that the dependencies exist with importlib.util.find_spec(name) inside _check_module_exists() (source code).

Solution
Don't check for __spec__ when checking that a module exists.
