
torch2coreml's Introduction

Convert Torch7 models into Apple CoreML format.

Short tutorial

This tool helps convert Torch7 models into Apple CoreML format which can then be run on Apple devices.

fast-neural-style example app screenshot

Installation

pip install -U torch2coreml

In order to use this tool you need to have these installed:

  • Xcode 9
  • Python 2.7

If you want to run the tests, you need macOS High Sierra 10.13.

Dependencies

  • coremltools (0.6.2+)
  • PyTorch

How to use

Using this library you can implement a converter for your own model types. An example of such a converter is located at "example/fast-neural-style/convert-fast-neural-style.py". To implement a converter you use the single function "convert" from torch2coreml:

from torch2coreml import convert

This function is simple enough to be self-describing:

def convert(model,
            input_shapes,
            input_names=['input'],
            output_names=['output'],
            mode=None,
            image_input_names=[],
            preprocessing_args={},
            image_output_names=[],
            deprocessing_args={},
            class_labels=None,
            predicted_feature_name='classLabel',
            unknown_layer_converter_fn=None)

Parameters

model: Torch7 model (loaded with PyTorch) | str
A trained Torch7 model loaded in Python using PyTorch, or a path to the model file (*.t7).

input_shapes: list of tuples
Shapes of the input tensors.

mode: str ('classifier', 'regressor' or None)
Mode of the converted CoreML model:
'classifier': a NeuralNetworkClassifier spec will be constructed.
'regressor': a NeuralNetworkRegressor spec will be constructed.

preprocessing_args: dict
'is_bgr', 'red_bias', 'green_bias', 'blue_bias', 'gray_bias', 'image_scale' keys with the same meaning as https://apple.github.io/coremltools/generated/coremltools.models.neural_network.html#coremltools.models.neural_network.NeuralNetworkBuilder.set_pre_processing_parameters

deprocessing_args: dict
Same as 'preprocessing_args' but for deprocessing.

class_labels: A string or list of strings.
As a string it represents the name of the file which contains the classification labels (one per line). As a list of strings it represents a list of categories that map the index of the output of a neural network to labels in a classifier.

predicted_feature_name: str
Name of the output feature for the class labels exposed in the Core ML model (applies to classifiers only). Defaults to 'classLabel'

unknown_layer_converter_fn: function with signature:
(builder, name, layer, input_names, output_names)
builder: object - instance of NeuralNetworkBuilder class
name: str - generated layer name
layer: object - PyTorch (Python) object for the corresponding layer
input_names: list of strings
output_names: list of strings
Returns: list of strings for layer output names
Callback function to handle layers that are unknown to torch2coreml.
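
A minimal sketch of such a callback (illustrative only, it does not ship with torch2coreml), which assumes the unknown layer can simply be passed through; a real converter would instead add the CoreML layers that reproduce the Torch layer's computation:

    def passthrough_converter(builder, name, layer, input_names, output_names):
        # Hypothetical example: map the unknown Torch layer to a LINEAR
        # activation y = 1.0 * x + 0.0, i.e. an identity pass-through.
        builder.add_activation(name=name,
                               non_linearity='LINEAR',
                               input_name=input_names[0],
                               output_name=output_names[0],
                               params=[1.0, 0.0])
        return output_names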

Returns

model: A CoreML model.
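
For example, a minimal conversion call for an image-to-image model might look like this (the file names, input shape and feature names are illustrative, not part of the library; the callback is the pass-through sketched above):

    from torch2coreml import convert

    coreml_model = convert(
        'model.t7',                          # placeholder path to a Torch7 nn model
        [(3, 720, 720)],                     # one input: a 3-channel 720x720 image
        input_names=['inputImage'],
        output_names=['outputImage'],
        image_input_names=['inputImage'],    # expose the input as an image
        image_output_names=['outputImage'],  # expose the output as an image
        preprocessing_args={'is_bgr': False, 'image_scale': 1.0},
        deprocessing_args={'is_bgr': False},
        unknown_layer_converter_fn=passthrough_converter
    )
    coreml_model.save('model.mlmodel')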

Currently supported

Models

Only the Torch7 "nn" module is currently supported.

Layers

List of Torch7 layers that can be converted into their CoreML equivalents:

  1. Sequential
  2. ConcatTable
  3. SpatialConvolution
  4. ELU
  5. ReLU
  6. SpatialBatchNormalization
  7. Identity
  8. CAddTable
  9. SpatialFullConvolution
  10. SpatialSoftMax
  11. SpatialMaxPooling
  12. SpatialAveragePooling
  13. View
  14. Linear
  15. Tanh
  16. MulConstant
  17. SpatialZeroPadding
  18. SpatialReflectionPadding
  19. Narrow
  20. SpatialUpSamplingNearest
  21. SplitTable

License

Copyright (c) 2017 Prisma Labs, Inc. All rights reserved.

Use of this source code is governed by the MIT License that can be found in the LICENSE.txt file.

torch2coreml's People

Contributors

gregoriol


torch2coreml's Issues

Output Stylized Image looks different on torch and coreml

I have used the same image on both models. The Torch model produces a good output, while the CoreML model changes the colors of the main picture.

I used this image for testing:
images

The output from the Torch model:
baby

The CoreML model produces this:
output_corml

The model was trained with these options:

percep_loss_weight: 1	
padding_type: reflect-start	
batch_size: 4	
arch: c9s1-32,d64,d128,R128,R128,R128,R128,R128,u64,u32,c9s1-3	
resume_from_checkpoint: 	
style_image: images/styles/home.jpeg	
max_train: -1	
style_image_size: 700	
tv_strength: 0.001	
lr_decay_factor: 0.5	
checkpoint_name: checkpoint	
loss_network: models/vgg16.t7	
gpu: 0	
content_layers: 16	
task: style	
use_instance_norm: 1	
tanh_constant: 150	
preprocessing: vgg	
style_weights: 5,5,5,5	
checkpoint_every: 1000	
num_val_batches: 10	
num_iterations: 10000	
use_cudnn: 1	
pixel_loss_weight: 0	
content_weights: 1	
style_target_type: gram	
weight_decay: 0	
pixel_loss_type: L2	
lr_decay_every: -1	
learning_rate: 0.001	
style_layers: 4,9,16,23	
backend: cuda	
upsample_factor: 4	
h5_file: /home/Desktop/coreml/models/file.h5	

Commercial use of models

Hi! First of all, congratulations on this project.

I have a simple question about using it. Considering that it is an open source project, and the short tutorial mentions that all models are open source as well, I'm a little confused: all of Justin Johnson's pre-trained models are marked as free for personal or research use. If the models are not open source, would you suggest any other open source Torch7 models that are compatible?
Thank you in advance.

Applying CoreML conversion to other style transfer Torch models

I have tried the CoreML conversion on another repository.
Here is the link to the repository: link
It provides a pre-trained model: model.t7

Running perpare_model.lua on this model throws an error. I added these requires to make it load the model:

require 'InstanceNormalization'
require 'src/utils'
require 'src/descriptor_net'
require 'src/preprocess_criterion'

The error:

index local 'x' (a nil value)
stack traceback:
	perpare_model.lua:12: in function 'replaceModule'
	perpare_model.lua:32: in function 'main'
	perpare_model.lua:46: in main chunk
	[C]: in function 'dofile'
	.../src/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
	[C]: at 0x00406670

Running convert-fast-neural-style.py, I get this error:

Traceback (most recent call last):
  File "convert-fast-neural-style.py", line 176, in <module>
    main()
  File "convert-fast-neural-style.py", line 162, in main
    unknown_layer_converter_fn=convert_instance_norm
  File "/usr/local/lib/python2.7/dist-packages/torch2coreml/_torch_converter.py", line 195, in convert
    torch_model.evaluate()
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Container.py", line 39, in evaluate
    self.applyToModules(lambda m: m.evaluate())
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Container.py", line 26, in applyToModules
    func(module)
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Container.py", line 39, in <lambda>
    self.applyToModules(lambda m: m.evaluate())
TypeError: 'NoneType' object is not callable

setup.sh fails when converting to CoreML.

Hi, I am trying to set up your project, and the setup.sh script fails with the following error for every model:

Preparing models for conversion
/Users/teologov/torch/install/bin/lua: /Users/teologov/torch/install/share/lua/5.2/torch/File.lua:375: unknown object
stack traceback:
        [C]: in function 'error'
        /Users/teologov/torch/install/share/lua/5.2/torch/File.lua:375: in function 'readObject'
        /Users/teologov/torch/install/share/lua/5.2/torch/File.lua:409: in function 'load'
        prepare_model.lua:29: in function 'main'
        prepare_model.lua:46: in main chunk
        [C]: in function 'dofile'
        ...ogov/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
        [C]: in ?

I tried to load the model manually with th> torch.load('./fast-neural-style/models/instance_norm/la_muse.t7').model and it still fails:

/Users/teologov/torch/install/share/lua/5.2/torch/File.lua:375: unknown object
stack traceback:
        /Users/teologov/torch/install/share/lua/5.2/trepl/init.lua:506: in function </Users/teologov/torch/install/share/lua/5.2/trepl/init.lua:499>
        [C]: in function 'error'
        /Users/teologov/torch/install/share/lua/5.2/torch/File.lua:375: in function 'readObject'
        /Users/teologov/torch/install/share/lua/5.2/torch/File.lua:409: in function 'load'
        [string "_RESULT={torch.load('./fast-neural-style/mode..."]:1: in main chunk
        [C]: in function 'xpcall'
        /Users/teologov/torch/install/share/lua/5.2/trepl/init.lua:661: in function 'repl'
        ...ogov/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:204: in main chunk
        [C]: in ?

Loading model error: torch.utils.serialization.read_lua_file.T7ReaderException

After training a model from the example and trying to convert it to CoreML, this error appears:

File "convert-fast-neural-style.py", line 53, in <module>
   main()
 File "convert-fast-neural-style.py", line 39, in main
   'blue_bias': 103.939
 File "/usr/local/lib/python2.7/dist-packages/torch2coreml/_torch_converter.py", line 171, in convert
   torch_model = load_lua(model)
 File "/usr/local/lib/python2.7/dist-packages/torch/utils/serialization/read_lua_file.py", line 599, in load_lua
   return reader.read()
 File "/usr/local/lib/python2.7/dist-packages/torch/utils/serialization/read_lua_file.py", line 586, in read
   return self.read_table()
 File "/usr/local/lib/python2.7/dist-packages/torch/utils/serialization/read_lua_file.py", line 514, in wrapper
   result = fn(self, *args, **kwargs)
 File "/usr/local/lib/python2.7/dist-packages/torch/utils/serialization/read_lua_file.py", line 563, in read_table
   v = self.read()
 File "/usr/local/lib/python2.7/dist-packages/torch/utils/serialization/read_lua_file.py", line 584, in read
   return self.read_object()
 File "/usr/local/lib/python2.7/dist-packages/torch/utils/serialization/read_lua_file.py", line 514, in wrapper
   result = fn(self, *args, **kwargs)
 File "/usr/local/lib/python2.7/dist-packages/torch/utils/serialization/read_lua_file.py", line 537, in read_object
   return reader_registry[cls_name](self, version)
 File "/usr/local/lib/python2.7/dist-packages/torch/utils/serialization/read_lua_file.py", line 242, in read_nn_class
   attributes = reader.read()
 File "/usr/local/lib/python2.7/dist-packages/torch/utils/serialization/read_lua_file.py", line 586, in read
   return self.read_table()
 File "/usr/local/lib/python2.7/dist-packages/torch/utils/serialization/read_lua_file.py", line 514, in wrapper
   result = fn(self, *args, **kwargs)
 File "/usr/local/lib/python2.7/dist-packages/torch/utils/serialization/read_lua_file.py", line 563, in read_table
   v = self.read()
 File "/usr/local/lib/python2.7/dist-packages/torch/utils/serialization/read_lua_file.py", line 586, in read
   return self.read_table()
 File "/usr/local/lib/python2.7/dist-packages/torch/utils/serialization/read_lua_file.py", line 514, in wrapper
   result = fn(self, *args, **kwargs)
 File "/usr/local/lib/python2.7/dist-packages/torch/utils/serialization/read_lua_file.py", line 563, in read_table
   v = self.read()
 File "/usr/local/lib/python2.7/dist-packages/torch/utils/serialization/read_lua_file.py", line 584, in read
   return self.read_object()
 File "/usr/local/lib/python2.7/dist-packages/torch/utils/serialization/read_lua_file.py", line 514, in wrapper
   result = fn(self, *args, **kwargs)
 File "/usr/local/lib/python2.7/dist-packages/torch/utils/serialization/read_lua_file.py", line 543, in read_object
   "constructor").format(cls_name))
torch.utils.serialization.read_lua_file.T7ReaderException: don't know how to deserialize Lua class nn.InstanceNormalization. If you want to ignore this error and load this object as a dict, specify unknown_classes=True in reader's constructor

How to use a trained model in PyTorch?

Hi, I have trained a model using images. Now I want to test it with a single image. How do I use my model weights? They are saved as a .pth file. I'm new to Python and deep learning; I used to work in R. In R, once the model is trained, the model.predict / predict command gives us the predictions. How is this done in PyTorch?
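
For reference, a minimal PyTorch inference sketch ('model.pth' and 'test.jpg' are placeholder file names; it assumes the whole model object was saved with torch.save(model, path), otherwise you need to construct your model class first and call load_state_dict on it):

    import torch
    from PIL import Image
    from torchvision import transforms

    model = torch.load('model.pth')  # placeholder path; returns the saved model object
    model.eval()                     # inference mode: disables dropout, uses running BN stats

    image = transforms.ToTensor()(Image.open('test.jpg')).unsqueeze(0)  # 1 x C x H x W batch
    output = model(image)            # the forward pass is the "predict" step
    print(output)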

Preprocess an image before passing to a ResNet model

I noticed that the preprocessing and deprocessing only work for the VGG model, and the same also seems true for Apple's coremltools. In the original fast-neural-style code, the ResNet preprocessing is done as

function M.resnet.preprocess(img)
  check_input(img)
  local mean = img.new(resnet_mean):view(1, 3, 1, 1):expandAs(img)
  local std = img.new(resnet_std):view(1, 3, 1, 1):expandAs(img)
  return (img - mean):cdiv(std)
end

while for VGG it is done through

function M.vgg.preprocess(img)
  check_input(img)
  local mean = img.new(vgg_mean):view(1, 3, 1, 1):expandAs(img)
  local perm = torch.LongTensor{3, 2, 1}
  return img:index(2, perm):mul(255):add(-1, mean)
end

The difference is that for the ResNet model the image is in the range [0, 1] while for the VGG it is in the range [0, 255].

In your example, to use coremltools' API you defined the preprocessing and deprocessing as

    coreml_model = convert(
        model,
        [input_shape],
        input_names=['inputImage'],
        output_names=['outputImage'],
        image_input_names=['inputImage'],
        preprocessing_args={
            'is_bgr': True,
            'red_bias': -123.68,
            'green_bias': -116.779,
            'blue_bias': -103.939
        },
        image_output_names=['outputImage'],
        deprocessing_args={
            'is_bgr': True,
            'red_bias': 123.68,
            'green_bias': 116.779,
            'blue_bias': 103.939
        },
        unknown_layer_converter_fn=convert_instance_norm
    )

which is natural for the VGG model.

I wonder if there is a way to do the same thing for the ResNet model.

Thanks.
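
One possible workaround, sketched here as an assumption rather than something torch2coreml provides: CoreML preprocessing only offers a single image_scale plus per-channel biases, so the per-channel division by std cannot be expressed there directly, but it can be folded into the weights of the network's first convolution. The sketch below uses a recent PyTorch API and assumes the first layer is a Conv2d that receives the image already scaled to [0, 1] (e.g. via image_scale=1.0/255.0 in preprocessing_args); the output side would need a similar folding for deprocessing.

    import torch

    # Standard ImageNet statistics used by the ResNet preprocess.
    resnet_mean = torch.tensor([0.485, 0.456, 0.406])
    resnet_std = torch.tensor([0.229, 0.224, 0.225])

    def fold_resnet_normalization(conv):
        """Modify conv in place so that conv(x) equals the original
        conv((x - mean) / std) when x is already in the [0, 1] range."""
        w = conv.weight.data  # shape: (out_channels, 3, kH, kW)
        if conv.bias is None:
            conv.bias = torch.nn.Parameter(torch.zeros(w.size(0)))
        # new_bias[o] = bias[o] - sum_{c,h,w} W[o,c,h,w] * mean[c] / std[c]
        shift = w * (resnet_mean / resnet_std).view(1, 3, 1, 1)
        conv.bias.data -= shift.view(w.size(0), -1).sum(1)
        # new_weight = W / std, applied per input channel
        w.div_(resnet_std.view(1, 3, 1, 1))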

Cannot use convert-fast-neural-style.py to convert models to CoreML

Hello, I am using the script given in the fast-neural-style example. Preparing the models worked as expected, but when it comes to the conversion, the error below occurs. I don't know how to figure this out.

input:
python convert-fast-neural-style.py -input prepared_models/candy.t7 -output coreml_models/candy.mlmodel

output:
2018-01-20 12:58:11.787 python[23363:3203919] +[MLModel compileModelAtURL:error:]: unrecognized selector sent to class 0x7fff460e58e8
Traceback (most recent call last):
  File "convert-fast-neural-style.py", line 176, in <module>
    main()
  File "convert-fast-neural-style.py", line 162, in main
    unknown_layer_converter_fn=convert_instance_norm
  File "/Users/vega/workspace/virtualenv/lib/python2.7/site-packages/torch2coreml/_torch_converter.py", line 294, in convert
    return MLModel(builder.spec)
  File "/Users/vega/workspace/virtualenv/lib/python2.7/site-packages/coremltools/models/model.py", line 153, in __init__
    self.__proxy__ = _get_proxy_from_spec(filename)
  File "/Users/vega/workspace/virtualenv/lib/python2.7/site-packages/coremltools/models/model.py", line 77, in _get_proxy_from_spec
    return _MLModelProxy(filename)
RuntimeError: Caught an unknown exception!

Error while trying to create coreml model

I have followed all the instructions to create a model from "http://cs.stanford.edu/people/jcjohns/fast-neural-style/models/instance_norm/mosaic.t7".
I created my own style model, but when I tried to convert it to CoreML, I got this error.
I thought my model file was wrong, so I downloaded this file "http://cs.stanford.edu/people/jcjohns/fast-neural-style/models/instance_norm/mosaic.t7" and tried with that. The result is the same. No luck.
Could you provide me with a solution to fix this?

Traceback (most recent call last):
  File "convert-fast-neural-style.py", line 175, in <module>
    main()
  File "convert-fast-neural-style.py", line 161, in main
    unknown_layer_converter_fn=convert_instance_norm
  File "/usr/local/lib/python2.7/dist-packages/torch2coreml/_torch_converter.py", line 192, in convert
    with torch.legacy.nn.Sequential module as root"
TypeError: Model must be file path to .t7 file or pytorch loaded model with torch.legacy.nn.Sequential module as root

Exception: Model prediction is only supported on macOS version 10.13

Is there any way to run these models on Linux?

Traceback (most recent call last):
  File "stylize-image.py", line 28, in <module>
    main()
  File "stylize-image.py", line 21, in main
    stylized_image = net.predict({'inputImage': image})['outputImage']
  File "/usr/local/lib/python2.7/dist-packages/coremltools/models/model.py", line 244, in predict
    raise Exception('Model prediction is only supported on macOS version 10.13.')
Exception: Model prediction is only supported on macOS version 10.13.

ValueError: expected 5D input (got 4D input)

Hello! How can I solve this problem? Thank you very much!

Preparing models for conversion
Converting models to CoreML
Converting prepared_models/candy.t7
Traceback (most recent call last):
  File "convert-fast-neural-style.py", line 176, in <module>
    main()
  File "convert-fast-neural-style.py", line 162, in main
    unknown_layer_converter_fn=convert_instance_norm
  File "/home/sharp/.local/lib/python2.7/site-packages/torch2coreml/_torch_converter.py", line 211, in convert
    input_shapes
  File "/home/sharp/.local/lib/python2.7/site-packages/torch2coreml/_torch_converter.py", line 67, in _infer_torch_output_shapes
    is_batch=True
  File "/home/sharp/.local/lib/python2.7/site-packages/torch2coreml/_torch_converter.py", line 30, in _forward_torch_random_input
    result = torch_model.forward(input_tensors[0])
  File "/home/sharp/.local/lib/python2.7/site-packages/torch/legacy/nn/Module.py", line 33, in forward
    return self.updateOutput(input)
  File "/home/sharp/.local/lib/python2.7/site-packages/torch/legacy/nn/Sequential.py", line 36, in updateOutput
    currentOutput = module.updateOutput(currentOutput)
  File "convert-fast-neural-style.py", line 45, in updateOutput
    return self._instance_norm.forward(Variable(input)).data
  File "/home/sharp/.local/lib/python2.7/site-packages/torch/nn/modules/instancenorm.py", line 46, in forward
    self._check_input_dim(input)
  File "/home/sharp/.local/lib/python2.7/site-packages/torch/nn/modules/instancenorm.py", line 242, in _check_input_dim
    .format(input.dim()))
ValueError: expected 5D input (got 4D input)

Missing style transfer models in the iOS test application

There are no coreml_models in the StyleTransfer iOS application. I tried to download the mosaic and other models. For instance, I downloaded the FNS-Mosaic model and included it in the code, but while running it I get the error "unexpectedly found nil while unwrapping an Optional value" at line 81 of FNS-Mosaic.
Many thanks.

TypeError: convert() got an unexpected keyword argument 'input_name'

When running setup.sh, an error appears:

Traceback (most recent call last):
  File "convert-fast-neural-style.py", line 54, in <module>
    main()
  File "convert-fast-neural-style.py", line 39, in main
    'blue_bias': 103.939
TypeError: convert() got an unexpected keyword argument 'input_name'

(The same traceback is printed for each model being converted.)

My environment contains:

adium-theme-ubuntu (0.3.4)
coremltools (0.6.3)
decorator (4.0.6)
h5py (2.7.0)
ipython (2.4.1)
numpy (1.13.1)
pexpect (4.0.1)
pip (9.0.1)
protobuf (3.4.0)
ptyprocess (0.5)
pycurl (7.43.0)
python-apt (1.1.0b1)
PyYAML (3.12)
setuptools (36.3.0)
simplegeneric (0.8.1)
six (1.10.0)
torch (0.2.0.post3)
torch2coreml (0.1.0)
unity-lens-photos (1.0)
virtualenv (15.1.0)
wheel (0.29.0)

unable to import mlmodel to ios

When I try to import an mlmodel, for example candy.mlmodel, into Xcode, I get a validation error saying:

"There was a problem in decoding this coreml document"
validator error: In layer instancenormalization_5: incorrect mean size 0

Vision framework compatibility

I am using the Vision framework but the program crashes as soon as I access the pixelBuffer property of the result observation. This is my code:

    VNCoreMLModel *model = [VNCoreMLModel modelForMLModel:fns.model error:&error];
    VNCoreMLRequest *request = [[VNCoreMLRequest alloc] initWithModel:model completionHandler:^(VNRequest * _Nonnull request, NSError * _Nullable error) {
        if (error) {
            NSLog(@"completionHandler Error %@", error);
            return;
        }
        for (VNObservation *observation in request.results) {
            if (observation.class == [VNPixelBufferObservation class]) {
                VNPixelBufferObservation *pxbufobs = (VNPixelBufferObservation *)observation;

                CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pxbufobs.pixelBuffer]; // <<---EXC_BAD_ACCESS here!!!

                CIContext *temporaryContext = [CIContext contextWithOptions:nil];
                CGImageRef videoImage = [temporaryContext
                                         createCGImage:ciImage
                                         fromRect:CGRectMake(0, 0,
                                                             CVPixelBufferGetWidth(pxbufobs.pixelBuffer),
                                                             CVPixelBufferGetHeight(pxbufobs.pixelBuffer))];

                // resulting UIImage
                UIImage *image = [UIImage imageWithCGImage:videoImage];
                CGImageRelease(videoImage);
            }
        }
    }];

fns here is an .mlmodel converted with this project and dragged into my Xcode project.

Interestingly, the models offered here do not work in Xcode 9.1: it complains about a missing header file (although it is there). So I was not able to test with mlmodels from alternative sources.

Install error: PyTorch does not currently provide packages for PyPI

I'm just getting into machine learning, and I wanted to try converting something into the CoreML format. I wrote a very simple Torch program which (not complete, I don't have access to the latest version right now) was something like this:

import numpy as np
import torch
from torch.autograd import Variable
model = torch.nn.Linear(1, 1)
loss_fn = torch.nn.MSELoss(size_average=False)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for t in range(10000):
    x = Variable(torch.from_numpy(np.random.random((1,1)).astype(np.float32)))
    y = x * 3
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print loss.data[0]

torch.save... # some more code here

Then I installed Anaconda and PyTorch, along with coremltools, but when I run pip2 install -U torch2coreml I get the error:

Collecting torch2coreml
  Using cached torch2coreml-0.2.0-py2.7-none-any.whl
Collecting torch (from torch2coreml)
  Using cached torch-0.1.2.post1.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/private/var/folders/_5/xy3fwjvx6h9fj8my9t_lnl_c0000gn/T/pip-build-EpKgGE/torch/setup.py", line 11, in <module>
        raise RuntimeError(README)
    RuntimeError: PyTorch does not currently provide packages for PyPI (see status at https://github.com/pytorch/pytorch/issues/566).

    Please follow the instructions at http://pytorch.org/ to install with miniconda instead.


    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/_5/xy3fwjvx6h9fj8my9t_lnl_c0000gn/T/pip-build-EpKgGE/torch/

I'm probably making some silly mistake here, so any help is appreciated. Thanks!
