pytorch / pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration

Home Page: https://pytorch.org

License: Other

neural-network autograd gpu numpy deep-learning tensor python machine-learning

pytorch's Introduction

PyTorch Logo


PyTorch is a Python package that provides two high-level features:

  • Tensor computation (like NumPy) with strong GPU acceleration
  • Deep neural networks built on a tape-based autograd system

You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
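
As a minimal sketch of both features together (assuming a CUDA build; the code falls back to CPU otherwise):

import torch

# Tensor computation on CPU or GPU, NumPy-style
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(3, 3, device=device, requires_grad=True)

# The tape-based autograd records operations as they run and replays
# them backward to compute gradients
y = (x * x).sum()
y.backward()
print(x.grad)  # equals 2 * x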

Our trunk health (Continuous Integration signals) can be found at hud.pytorch.org.

More About PyTorch

Learn the basics of PyTorch

At a granular level, PyTorch is a library that consists of the following components:

Component — Description
torch — A Tensor library like NumPy, with strong GPU support
torch.autograd — A tape-based automatic differentiation library that supports all differentiable Tensor operations in torch
torch.jit — A compilation stack (TorchScript) to create serializable and optimizable models from PyTorch code
torch.nn — A neural networks library deeply integrated with autograd designed for maximum flexibility
torch.multiprocessing — Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training
torch.utils — DataLoader and other utility functions for convenience

Usually, PyTorch is used either as:

  • A replacement for NumPy to use the power of GPUs.
  • A deep learning research platform that provides maximum flexibility and speed.

Elaborating Further:

A GPU-Ready Tensor Library

If you use NumPy, then you have used Tensors (a.k.a. ndarray).

Tensor illustration

PyTorch provides Tensors that can live either on the CPU or the GPU and accelerates the computation by a huge amount.

We provide a wide variety of tensor routines to accelerate and fit your scientific computation needs, such as slicing, indexing, mathematical operations, linear algebra, and reductions. And they are fast!
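
For example (a small sketch of a few of those routines):

import torch

a = torch.arange(12, dtype=torch.float32).reshape(3, 4)

b = a[1:, ::2]    # slicing and indexing
c = a @ a.t()     # linear algebra: 3x4 times 4x3 matrix product
d = a.sum(dim=1)  # reduction along a dimension
e = a.sqrt()      # elementwise mathematical operation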

Dynamic Neural Networks: Tape-Based Autograd

PyTorch has a unique way of building neural networks: using and replaying a tape recorder.

Most frameworks such as TensorFlow, Theano, Caffe, and CNTK have a static view of the world. One has to build a neural network and reuse the same structure again and again. Changing the way the network behaves means that one has to start from scratch.

With PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you to change the way your network behaves arbitrarily with zero lag or overhead. Our inspiration comes from several research papers on this topic, as well as current and past work such as torch-autograd, autograd, Chainer, etc.

While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date. You get the best of speed and flexibility for your crazy research.
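
A small sketch of what this buys you in practice: ordinary Python control flow can change the graph on every forward pass, and backward() follows whatever actually ran:

import torch

x = torch.randn(4, requires_grad=True)

# The graph is rebuilt each run, so the network's depth can depend on data
h = x
steps = int(torch.randint(1, 4, (1,)))  # 1 to 3 tanh layers this run
for _ in range(steps):
    h = torch.tanh(h)

out = h.sum() if h.sum() > 0 else (h * 2).sum()
out.backward()  # gradients follow the path that actually executed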

Dynamic graph

Python First

PyTorch is not a Python binding into a monolithic C++ framework. It is built to be deeply integrated into Python. You can use it naturally like you would use NumPy / SciPy / scikit-learn etc. You can write your new neural network layers in Python itself, using your favorite libraries and use packages such as Cython and Numba. Our goal is to not reinvent the wheel where appropriate.

Imperative Experiences

PyTorch is designed to be intuitive, linear in thought, and easy to use. When you execute a line of code, it gets executed. There isn't an asynchronous view of the world. When you drop into a debugger or receive error messages and stack traces, understanding them is straightforward. The stack trace points to exactly where your code was defined. We hope you never spend hours debugging your code because of bad stack traces or asynchronous and opaque execution engines.

Fast and Lean

PyTorch has minimal framework overhead. We integrate acceleration libraries such as Intel MKL and NVIDIA (cuDNN, NCCL) to maximize speed. At the core, its CPU and GPU Tensor and neural network backends are mature and have been tested for years.

Hence, PyTorch is quite fast — whether you run small or large neural networks.

The memory usage in PyTorch is extremely efficient compared to Torch or some of the alternatives. We've written custom memory allocators for the GPU to make sure that your deep learning models are maximally memory efficient. This enables you to train bigger deep learning models than before.

Extensions Without Pain

Writing new neural network modules or interfacing with PyTorch's Tensor API is designed to be straightforward, with minimal abstractions.

You can write new neural network layers in Python using the torch API or your favorite NumPy-based libraries such as SciPy.
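
For instance, a custom layer is just a Python class; the layer below is a hypothetical example, not an API from the repo:

import torch
import torch.nn as nn

class ScaledResidual(nn.Module):
    # y = x + scale * linear(x), with scale itself a learnable parameter
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return x + self.scale * self.linear(x)

layer = ScaledResidual(8)
out = layer(torch.randn(2, 8))
out.sum().backward()  # autograd differentiates through the custom layer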

If you want to write your layers in C/C++, we provide a convenient extension API that is efficient and with minimal boilerplate. No wrapper code needs to be written. You can see a tutorial here and an example here.

Installation

Binaries

Commands to install binaries via Conda or pip wheels are on our website: https://pytorch.org/get-started/locally/

NVIDIA Jetson Platforms

Python wheels for NVIDIA's Jetson Nano, Jetson TX1/TX2, Jetson Xavier NX/AGX, and Jetson AGX Orin are provided here, and the L4T container is published here.

They require JetPack 4.2 and above, and @dusty-nv and @ptrblck are maintaining them.

From Source

Prerequisites

If you are installing from source, you will need:

  • Python 3.8 or later (for Linux, Python 3.8.1+ is needed)
  • A compiler that fully supports C++17, such as clang or gcc (gcc 9.4.0 or newer is required)

We highly recommend installing an Anaconda environment. You will get a high-quality BLAS library (MKL) and you get controlled dependency versions regardless of your Linux distro.

If you want to compile with CUDA support, select a supported version of CUDA from our support matrix, then install NVIDIA CUDA and cuDNN.

Note: You can refer to the cuDNN Support Matrix for the cuDNN versions compatible with the various supported CUDA versions, CUDA drivers, and NVIDIA hardware.

If you want to disable CUDA support, export the environment variable USE_CUDA=0. Other potentially useful environment variables may be found in setup.py.

If you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1, TX2, AGX Xavier), instructions to install PyTorch for Jetson Nano are available here.

If you want to compile with ROCm support, install AMD ROCm 4.0 or above. Note that ROCm is currently supported only for Linux systems.

If you want to disable ROCm support, export the environment variable USE_ROCM=0. Other potentially useful environment variables may be found in setup.py.

Install Dependencies

Common

conda install cmake ninja
# Run this command from the PyTorch directory after cloning the source code using the "Get the PyTorch Source" section below
pip install -r requirements.txt

On Linux

conda install intel::mkl-static intel::mkl-include
# CUDA only: Add LAPACK support for the GPU if needed
conda install -c pytorch magma-cuda110  # or the magma-cuda* that matches your CUDA version from https://anaconda.org/pytorch/repo

# (optional) If using torch.compile with inductor/triton, install the matching version of triton
# Run from the pytorch directory after cloning
make triton

On macOS

# Add this package on intel x86 processor machines only
conda install intel::mkl-static intel::mkl-include
# Add these packages if torch.distributed is needed
conda install pkg-config libuv

On Windows

conda install intel::mkl-static intel::mkl-include
# Add these packages if torch.distributed is needed.
# Distributed package support on Windows is a prototype feature and is subject to changes.
conda install -c conda-forge libuv=1.39

Get the PyTorch Source

git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive

Install PyTorch

On Linux

If you would like to compile PyTorch with new C++ ABI enabled, then first run this command:

export _GLIBCXX_USE_CXX11_ABI=1

If you're compiling for AMD ROCm then first run this command:

# Only run this if you're compiling for ROCm
python tools/amd_build/build_amd.py

Install PyTorch

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py develop

Aside: If you are using Anaconda, you may experience an error caused by the linker:

build/temp.linux-x86_64-3.7/torch/csrc/stub.o: file not recognized: file format not recognized
collect2: error: ld returned 1 exit status
error: command 'g++' failed with exit status 1

This is caused by ld from the Conda environment shadowing the system ld. You should use a newer version of Python that fixes this issue. The recommended Python version is 3.8.1+.

On macOS

python3 setup.py develop

On Windows

Choose Correct Visual Studio Version.

PyTorch CI uses Visual C++ BuildTools, which come with Visual Studio Enterprise, Professional, or Community Editions. You can also install the build tools from https://visualstudio.microsoft.com/visual-cpp-build-tools/. The build tools do not come with Visual Studio Code by default.

If you want to build legacy Python code, please refer to Building on legacy code and CUDA.

CPU-only builds

In this mode PyTorch computations will run on your CPU, not your GPU

conda activate
python setup.py develop

Note on OpenMP: The desired OpenMP implementation is Intel OpenMP (iomp). In order to link against iomp, you'll need to manually download the library and set up the building environment by tweaking CMAKE_INCLUDE_PATH and LIB. The instruction here is an example for setting up both MKL and Intel OpenMP. Without these configurations for CMake, Microsoft Visual C OpenMP runtime (vcomp) will be used.

CUDA based build

In this mode PyTorch computations will leverage your GPU via CUDA for faster number crunching

NVTX is needed to build PyTorch with CUDA. NVTX is part of the CUDA distribution, where it is called "Nsight Compute". To install it onto already installed CUDA, run the CUDA installation again and check the corresponding checkbox. Make sure that CUDA with Nsight Compute is installed after Visual Studio.

Currently, VS 2017 / 2019 and Ninja are supported as CMake generators. If ninja.exe is detected in PATH, Ninja will be used as the default generator; otherwise, VS 2017 / 2019 will be used.
If Ninja is selected as the generator, the latest MSVC will be selected as the underlying toolchain.

Additional libraries such as Magma, oneDNN, a.k.a. MKLDNN or DNNL, and Sccache are often needed. Please refer to the installation-helper to install them.

You can refer to the build_pytorch.bat script for some other environment variable configurations.

:: Set the environment variables after you have downloaded and unzipped the mkl package,
:: else CMake would throw an error as `Could NOT find OpenMP`.
set CMAKE_INCLUDE_PATH={Your directory}\mkl\include
set LIB={Your directory}\mkl\lib;%LIB%

:: Read the content in the previous section carefully before you proceed.
:: [Optional] If you want to override the underlying toolset used by Ninja and Visual Studio with CUDA, please run the following script block.
:: "Visual Studio 2019 Developer Command Prompt" will be run automatically.
:: Make sure you have CMake >= 3.12 before you do this when you use the Visual Studio generator.
set CMAKE_GENERATOR_TOOLSET_VERSION=14.27
set DISTUTILS_USE_SDK=1
for /f "usebackq tokens=*" %i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -version [15^,17^) -products * -latest -property installationPath`) do call "%i\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION%

:: [Optional] If you want to override the CUDA host compiler
set CUDAHOSTCXX=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\HostX64\x64\cl.exe

python setup.py develop

Adjust Build Options (Optional)

You can optionally adjust the configuration of CMake variables (without building first) by doing the following. For example, adjusting the pre-detected directories for cuDNN or BLAS can be done with such a step.

On Linux

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py build --cmake-only
ccmake build  # or cmake-gui build

On macOS

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build --cmake-only
ccmake build  # or cmake-gui build

Docker Image

Using pre-built images

You can also pull a pre-built docker image from Docker Hub and run it with docker v19.03+:

docker run --gpus all --rm -ti --ipc=host pytorch/pytorch:latest

Please note that PyTorch uses shared memory to share data between processes, so if torch multiprocessing is used (e.g. for multithreaded data loaders), the default shared memory segment size that the container runs with may not be enough; you should increase the shared memory size with either the --ipc=host or --shm-size command line option to nvidia-docker run.

Building the image yourself

NOTE: Must be built with a docker version > 18.06

The Dockerfile is supplied to build images with CUDA 11.1 support and cuDNN v8. You can pass the PYTHON_VERSION=x.y make variable to specify which Python version is to be used by Miniconda, or leave it unset to use the default.

make -f docker.Makefile
# images are tagged as docker.io/${your_docker_username}/pytorch

You can also pass the CMAKE_VARS="..." environment variable to specify additional CMake variables to be passed to CMake during the build. See setup.py for the list of available variables.

CMAKE_VARS="BUILD_CAFFE2=ON BUILD_CAFFE2_OPS=ON" make -f docker.Makefile

Building the Documentation

To build documentation in various formats, you will need Sphinx and the readthedocs theme.

cd docs/
pip install -r requirements.txt

You can then build the documentation by running make <format> from the docs/ folder. Run make to get a list of all available output formats.

If you get a katex error, run npm install katex. If it persists, try npm install -g katex.

Note: if you installed nodejs with a different package manager (e.g., conda), then npm will probably install a version of katex that is not compatible with your version of nodejs, and doc builds will fail. A combination of versions that is known to work is node@6.13.1 and katex@0.13.18. To install the latter with npm, you can run npm install -g katex@0.13.18.

Previous Versions

Installation instructions and binaries for previous PyTorch versions may be found on our website.

Getting Started

Three pointers to get you started:

Resources

Communication

Releases and Contributing

Typically, PyTorch has three minor releases a year. Please let us know if you encounter a bug by filing an issue.

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.

If you plan to contribute new features, utility functions, or extensions to the core, please first open an issue and discuss the feature with us. Sending a PR without discussion might end up resulting in a rejected PR because we might be taking the core in a different direction than you might be aware of.

To learn more about making a contribution to PyTorch, please see our Contribution page. For more information about PyTorch releases, see the Release page.

The Team

PyTorch is a community-driven project with several skillful engineers and researchers contributing to it.

PyTorch is currently maintained by Soumith Chintala, Gregory Chanan, Dmytro Dzhulgakov, Edward Yang, and Nikita Shulga with major contributions coming from hundreds of talented individuals in various forms and means. A non-exhaustive but growing list needs to mention: Trevor Killeen, Sasank Chilamkurthy, Sergey Zagoruyko, Adam Lerer, Francisco Massa, Alykhan Tejani, Luca Antiga, Alban Desmaison, Andreas Koepf, James Bradbury, Zeming Lin, Yuandong Tian, Guillaume Lample, Marat Dukhan, Natalia Gimelshein, Christian Sarofeen, Martin Raison, Edward Yang, Zachary DeVito.

Note: This project is unrelated to hughperkins/pytorch of the same name. Hugh is a valuable contributor to the Torch community and has helped with many things Torch and PyTorch related.

License

PyTorch has a BSD-style license, as found in the LICENSE file.

pytorch's People

Contributors

alband, apaszke, awgu, bdhirsh, bowenbao, chillee, colesbury, ezyang, gchanan, janeyx99, jerryzh168, kshitij12345, malfet, mrshenli, peterbell10, pietern, pytorchmergebot, rohan-varma, seemethere, smessmer, soumith, ssnl, suo, swolchok, vkuzo, wanchaol, yangqing, zasdfgbnm, zdevito, zou3519


pytorch's Issues

Matrix multiplication operator

Instead of overloading the multiplication operator to do both element-wise and matrix multiplication, it would be nicer and much safer to just support Python's matrix multiplication operator (see PEP 465: A @ B is the matrix product, A * B the element-wise product).
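
For reference, PyTorch did end up implementing PEP 465, so the two products are now spelled differently (a minimal sketch):

import torch

A = torch.randn(2, 3)
B = torch.randn(3, 4)

C = A @ B  # matrix product (equivalent to torch.matmul(A, B))
D = A * A  # element-wise product; operand shapes must broadcast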

Figure out and fix Tensor(Storage) constructor

Sometimes constructing a Tensor with a storage interprets the storage as the backing storage:

a = torch.IntTensor(torch.IntStorage([1,2,3]))

 1
 2
 3
[torch.IntTensor of size 3]

But not if it's a LongStorage

a = torch.LongTensor(torch.LongStorage([1,2,3]))

(0,.,.) =
  1.4059e+14  1.4059e+14  1.4059e+14
  0.0000e+00  0.0000e+00  1.4059e+14
[torch.LongTensor of size 1x2x3]

This is because we want to allow constructions like:

a = torch.IntTensor([1,2,3])
b = torch.FloatTensor(a.size())

But we also want to allow things like:

a = torch.IntTensor([1,2,3])
b = torch.IntTensor(a.storage())

We should resolve this ambiguity, probably using keyword arguments. We probably need to require something like:

a = torch.XTensor(size=b.size())
a = torch.XTensor(storage=b.storage())

cpu builds broken due to cudnn and dataparallel

both cudnn and dataparallel pushes do:

import torch.cuda

On OSX, test_nn.py crashes.
Also, dataparallel should smoothly load and work even when you have no CUDA, so this should be fixed.

import torch works in ipython but not in python (_THRefcountedMapAllocator)

On OS X with Anaconda 2.7:

>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/szagoruyko/anaconda/lib/python2.7/site-packages/torch/__init__.py", line 1, in <module>
    from torch._C import *
ImportError: dlopen(/Users/szagoruyko/anaconda/lib/python2.7/site-packages/torch/_C.so, 2): Symbol not found: _THRefcountedMapAllocator
  Referenced from: /Users/szagoruyko/anaconda/lib/python2.7/site-packages/torch/lib/libshm.dylib
  Expected in: /Users/szagoruyko/torch/install/lib/libTH.dylib
 in /Users/szagoruyko/anaconda/lib/python2.7/site-packages/torch/lib/libshm.dylib

ipython works fine

rethink checkpointing

Right now, our checkpointing exists / works, but we've been thinking of rethinking it and having particular use-cases / goals to be covered.

Here are some things that need to be covered by the new checkpointing:

  • save from GPU and load onto CPU, i.e. separate the type and location of the saved Tensors and allow remapping locations at load time
  • If one saves a model, changes the call operator of the class, and loads the model back, the model does not do the same thing as before. Use Python's inspect API to save the class's current source code with the model, and warn if the loaded source differs from the class source.
  • Make the endianness and long-size of the checkpoint consistent and working across all platforms
  • Allow one to get the parameters of a model as a super simple name / tensor dictionary. This decouples the problem of versioning the Container class to the parameters. This also allows one to save the weights from a model and load it into another model, as the keys here are simply named-strings of each parameter. For example:
    { 'conv1.weight' : torch.FloatTensor(...), 'resblock1.conv3.bias' : torch.FloatTensor(...), ...}
  • dumping trainer / optimizer state
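
The name/tensor dictionary described in the list above is essentially what later shipped as state_dict; a minimal sketch with that API:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Keys are dotted parameter names, values are plain tensors
sd = model.state_dict()  # e.g. {'0.weight': tensor(...), '0.bias': ...}
torch.save(sd, "weights.pt")

# Load into another instance with matching parameter names
model2 = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model2.load_state_dict(torch.load("weights.pt"))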

Error messages to improve

  • THNN errors (say exactly which function has failed) - depends on the C API
  • double backward without save_variables=True
  • accept int as real for float types
  • constructing variables with non-tensor objects
  • torch.cat prints TODO: torch.cat(torch.Tensor(128), torch.Tensor(128))
  • CUDA OOM when constructing tensors says the arguments were invalid
  • out of range errors
  • inconsistent tensor size
  • use of variables in torch.xxx APIs expecting tensors
  • Convolution with invalid input sizes
  • torch.cat (on variables) when given a.cat((b), 1) # a and b are 2D
  • torch.cat when given different tensor types. torch.cat((a,b)) where a is LongTensor and b is FloatTensor for example.
  • when autograd.Function.forward returns something other than a tensor or tuple(tensor)
  • when MyFunction.backward() takes the incorrect number of arguments
  • addbmm output tensor of incorrect size at /Users/soumith/code/pytorch/torch/lib/TH/generic/THTensorMath.c:1040
RuntimeError: out of range at /Users/soumith/anaconda/conda-bld/pytorch-0.1.4_1475478983079/work/torch/lib/TH/generic/THTensor.c:350

Tensors don't print sometimes

print(net.output.numpy())
print(net.output)

outputs:

[[ 0.13239434 -0.29563415 -1.65602779 ...,  0.40573671  2.0148921
   2.12263751]
 [ 0.40269312 -0.46252817 -1.35247242 ...,  0.53116792  1.76924741
   1.93715036]]
Traceback (most recent call last):
  File "loader.py", line 94, in <module>
    print(net.output)
  File "/home/zagoruys/anaconda2/lib/python2.7/site-packages/torch/Tensor.py", line 87, in __str__
    return TensorPrinting.printTensor(self)
  File "/home/zagoruys/anaconda2/lib/python2.7/site-packages/torch/TensorPrinting.py", line 110, in printTensor
    strt = _printMatrix(self)
  File "/home/zagoruys/anaconda2/lib/python2.7/site-packages/torch/TensorPrinting.py", line 69, in _printMatrix
    strt += ' '.join(fmt.format(val/scale) for val in self.select(0, l).narrow(0, firstColumn, lastColumn-firstColumn+1)) + '\n'
ValueError: Invalid arguments! Got (int, int, float), but expected (long dimension, long start, long length)

tbh I would rely on numpy for all printing

Don't support legacy Python

There is really no reason to support Python 2. Python 3 has been out for 8 years now. There are plenty of good articles written about this. Maintaining a dual codebase is going to be a major pain, and it prevents you from using a whole bunch of new Python 3 features (six only gets you so far).

Multiprocessing doesn't preserve data sharing of storage slices

x = torch.Storage(10)
y = x[1:-1]

#1
with open('file.t7', 'w+b') as f:
    torch.save([x, y], f)
    f.seek(0)
    a, b = torch.load(f)
# a and b no longer share the same storage

#2
q = multiprocessing.Queue()
q.put([x, y])
a, b = q.get()
# a and b no longer share the same data

indexing bug on Variable

RuntimeError: indexing a tensor with an object of type tuple. The only supported types are integers, slices and torch.ByteTensor.

from torch.autograd import Variable
b = Variable(torch.randn(10, 20))
b[:,:5]
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-9-039a3f3ae91a> in <module>()
----> 1 b[:,:5]

/Users/soumith/anaconda/lib/python2.7/site-packages/torch/autograd/variable.pyc in __getitem__(self, key)
     55         if isinstance(key, Variable) and isinstance(key.data, torch.ByteTensor):
     56             return MaskedSelect()(self, key)
---> 57         return Index(key)(self)
     58 
     59     # TODO: setitem

/Users/soumith/anaconda/lib/python2.7/site-packages/torch/autograd/function.pyc in __call__(self, *input)
     16 
     17     def __call__(self, *input):
---> 18         return self._do_forward(*input)
     19 
     20     def save_for_backward(self, *tensors):

/Users/soumith/anaconda/lib/python2.7/site-packages/torch/autograd/function.pyc in _do_forward(self, *input)
     48             self.previous_functions = [(arg.creator or arg, id(arg)) for arg in input]
     49 
---> 50         raw_output = self.forward(*unpacked_input)
     51         if not isinstance(raw_output, tuple):
     52             raw_output = (raw_output,)

/Users/soumith/anaconda/lib/python2.7/site-packages/torch/autograd/functions/tensor.pyc in forward(self, i)
     15     def forward(self, i):
     16         self.input_size = i.size()
---> 17         return i[self.index]
     18 
     19     def backward(self, grad_output):

RuntimeError: indexing a tensor with an object of type tuple. The only supported types are integers, slices and torch.ByteTensor

module.parameters() should only return unique parameters ?

In [141]: L = nn.Linear(10,10)
     ...: S = nn.Sequential()
     ...: S.add_module('a', L)
     ...: S.add_module('b', L)
     ...: len(list(S.parameters()))
     ...:
Out[141]: 4

Listing shared parameters multiple times seems wrong... because for example SGD will end up accumulating the gradients twice (I think).

Also from a theoretical point of view I would say this model only has 110 parameters, not 220.

fix bad error message

m = nn.Conv2d(16, 32, (3, 3))
input = autograd.Variable(torch.randn(3, 16, 10, 10))
m(input)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-57-ad2f7eaad9e5> in <module>()
----> 1 m(input)

/home/soumith/local/miniconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input)
     64
     65     def __call__(self, *input):
---> 66         result = self.forward(*input)
     67         for hook in self.forward_hooks.values():
     68             hook(self, input, result)

/home/soumith/local/miniconda2/lib/python2.7/site-packages/torch/nn/modules/conv.pyc in forward(self, input)
    140             return func(input, self.weight)
    141         else:
--> 142             return func(input, self.weight, self.bias)
    143
    144

/home/soumith/local/miniconda2/lib/python2.7/site-packages/torch/autograd/function.pyc in __call__(self, *input)
     16
     17     def __call__(self, *input):
---> 18         return self._do_forward(*input)
     19
     20     def save_for_backward(self, *tensors):

/home/soumith/local/miniconda2/lib/python2.7/site-packages/torch/autograd/function.pyc in _do_forward(self, *input)
     44             self.previous_functions = [(arg.creator, id(arg)) for arg in input]
     45
---> 46         raw_output = self.forward(*unpacked_input)
     47         if not isinstance(raw_output, tuple):
     48             raw_output = (raw_output,)

/home/soumith/local/miniconda2/lib/python2.7/site-packages/torch/nn/functions/thnn/auto.pyc in forward(self, input, *params)
    134             self.save_for_backward(input, *params)
    135
--> 136         getattr(self._backend, update_output.name)(self._backend.library_state, input, output, *args)
    137         return output
    138

ValueError: Invalid arguments! Got (int, FloatTensor, FloatTensor, DoubleTensor, DoubleTensor, FloatTensor, FloatTensor, int, int, int, int, int, int), but expected (int state, torch.FloatTensor input, torch.FloatTensor output, torch.FloatTensor weight, [torch.FloatTensor bias or None], torch.FloatTensor finput, torch.FloatTensor fgradInput, int kW, int kH, int dW, int dH, int padW, int padH)

extension API broken in python 2.7

  1. FileExistsError is not present in 2.7. Workaround: http://stackoverflow.com/questions/20790580/python-specifically-handle-file-exists-exception
  2. This occurs after fixing 1:
Traceback (most recent call last):
  File "build.py", line 7, in <module>
    with_cuda=False
  File "/Users/soumith/anaconda/lib/python2.7/site-packages/torch/utils/ffi/__init__.py", line 127, in compile_extension
    ffi.cdef(_typedefs + header_source);
  File "/Users/soumith/anaconda/lib/python2.7/site-packages/cffi/api.py", line 105, in cdef
    self._cdef(csource, override=override, packed=packed)
  File "/Users/soumith/anaconda/lib/python2.7/site-packages/cffi/api.py", line 119, in _cdef
    self._parser.parse(csource, override=override, **options)
  File "/Users/soumith/anaconda/lib/python2.7/site-packages/cffi/cparser.py", line 299, in parse
    self._internal_parse(csource)
  File "/Users/soumith/anaconda/lib/python2.7/site-packages/cffi/cparser.py", line 304, in _internal_parse
    ast, macros, csource = self._parse(csource)
  File "/Users/soumith/anaconda/lib/python2.7/site-packages/cffi/cparser.py", line 260, in _parse
    ast = _get_parser().parse(csource)
  File "/Users/soumith/anaconda/lib/python2.7/site-packages/cffi/cparser.py", line 40, in _get_parser
    _parser_cache = pycparser.CParser()
  File "/Users/soumith/anaconda/lib/python2.7/site-packages/pycparser/c_parser.py", line 87, in __init__
    outputdir=taboutputdir)
  File "/Users/soumith/anaconda/lib/python2.7/site-packages/pycparser/c_lexer.py", line 66, in build
    self.lexer = lex.lex(object=self, **kwargs)
  File "/Users/soumith/anaconda/lib/python2.7/site-packages/pycparser/ply/lex.py", line 911, in lex
    lexobj.readtab(lextab, ldict)
  File "/Users/soumith/anaconda/lib/python2.7/site-packages/pycparser/ply/lex.py", line 233, in readtab
    titem.append((re.compile(pat, lextab._lexreflags | re.VERBOSE), _names_to_funcs(func_name, fdict)))
  File "/Users/soumith/anaconda/lib/python2.7/re.py", line 194, in compile
    return _compile(pattern, flags)
  File "/Users/soumith/anaconda/lib/python2.7/re.py", line 249, in _compile
    p = sre_compile.compile(pattern, flags)
  File "/Users/soumith/anaconda/lib/python2.7/sre_compile.py", line 583, in compile
    "sorry, but this version only supports 100 named groups"
AssertionError: sorry, but this version only supports 100 named groups

OSX Multiprocessing errors out

OSX 10.11.5

Running multiprocessing tests
F..slibc++abi.dylib: terminating with uncaught exception of type std::__1::system_error: Protocol not supported
$ Error: std::exception at /Users/soumith/anaconda/conda-bld/pytorch-0.1.3_1473720667243/work/torch/lib/libshm/core.cpp:112
TESTS FAILED: pytorch-0.1.3-py27_9

PEP8

I have an unhealthy obsession with PEP8... Could viewAs, expandAs be renamed to view_as, expand_as, etc.?

I might even volunteer to make everything pass flake8 if you guys are okay with accepting a PR that does that.

Add information about non-differentiable points to grad tests

FAIL: test_ReLU (__main__.TestNN)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_nn.py", line 287, in <lambda>
    setattr(TestNN, test_name, lambda self,test=test: test(self))
  File "/data/users/soumith/pytorch/test/common_nn.py", line 533, in __call__
    self._do_test(test_case, module, input)
  File "test_nn.py", line 33, in _do_test
    test_case.check_jacobian(module, input, self.jacobian_input)
  File "/data/users/soumith/pytorch/test/common_nn.py", line 433, in check_jacobian
    PRECISION
AssertionError: 0.20518362972049153 not less than or equal to 1e-05

ERROR: test_maskedSelect (__main__.TestTorch)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_torch.py", line 1887, in test_maskedSelect
    self.assertEqual(dst, torch.Tensor(dst2), 0)
  File "/data/users/soumith/pytorch/test/common.py", line 66, in assertEqual
    max_err = max(max_err, abs(x[index] - y[index]))
TypeError: bad operand type for abs(): 'DoubleTensor'

torch.set_num_threads broken

Two problems:

  1. The Python extension is compiled without OpenMP support even though TH is built with it, so set_num_threads is incorrectly a no-op / warning
  2. Python is complaining about the return value
>>> torch.set_num_threads(1)
__main__:1: RuntimeWarning: set_num_threads is a no-op - torch was compiled without OpenMP support
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
SystemError: <built-in function set_num_threads> returned NULL without setting an error

OS X build issue in THP_decodeInt64Buffer

with gcc-6:

cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
In file included from /Users/szagoruyko/research/rocks/pytorch/torch/csrc/generic/Storage.cpp:297:0,
                 from generic/Storage.cpp:1,
                 from torch/csrc/Storage.cpp:11:
/Users/szagoruyko/research/rocks/pytorch/torch/csrc/generic/StorageMethods.cpp: In function 'PyObject* THPLongStorage_fromBuffer(PyObject*, PyObject*, PyObject*)':
/Users/szagoruyko/research/rocks/pytorch/torch/csrc/generic/StorageMethods.cpp:138:34: error: invalid conversion from 'long int*' to 'int64_t* {aka long long int*}' [-fpermissive]
   THP_decodeInt64Buffer(storage->data, src + offset, byte_order, count);
                         ~~~~~~~~~^~~~
In file included from torch/csrc/Storage.cpp:8:0:
torch/csrc/byte_order.h:12:6: note:   initializing argument 1 of 'void THP_decodeInt64Buffer(int64_t*, const uint8_t*, THPByteOrder, size_t)'
 void THP_decodeInt64Buffer(int64_t* dst, const uint8_t* src, THPByteOrder order, size_t len);
      ^~~~~~~~~~~~~~~~~~~~~

freezing part / parts of the graph for gradient updates

Request from Ross Girshick and quite a common use-case:

Have an example showcasing (and figure out the changes needed in the API to support) the following:
Train a model, but with, say, the first K layers frozen.

tl;dr: how to freeze part of the graph
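
For reference, the pattern that eventually became standard: mark the frozen parameters with requires_grad=False and hand the optimizer only the rest (a sketch with a toy two-layer model):

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 10),  # the "first K layers" to freeze
    nn.Linear(10, 2),   # the part we still want to train
)

# Exclude the frozen layers from gradient computation
for p in model[0].parameters():
    p.requires_grad = False

# Only pass the still-trainable parameters to the optimizer
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.01
)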

Install Error, OSX 10.11.6, fresh miniconda install

pip install -r requirements.txt
pip install .
cmake is installed via homebrew

Processing /Users/awiltschko/Code/pytorch
Requirement already satisfied (use --upgrade to upgrade): pyyaml in /Users/awiltschko/anaconda/lib/python2.7/site-packages (from torch==0.1)
Installing collected packages: torch
  Running setup.py install for torch ... error
    Complete output from command /Users/awiltschko/anaconda/bin/python -u -c "import setuptools, tokenize;__file__='/var/folders/h2/_l96jjy96xb86z690kzcqmzh0000gn/T/pip-fURm_1-build/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/h2/_l96jjy96xb86z690kzcqmzh0000gn/T/pip-435VCI-record/install-record.txt --single-version-externally-managed --compile:
    running install
    running build_deps
    /private/var/folders/h2/_l96jjy96xb86z690kzcqmzh0000gn/T/pip-fURm_1-build/torch/_thnn/utils.py:1: RuntimeWarning: Parent module 'torch._thnn' not found while handling absolute import
      import os
    /private/var/folders/h2/_l96jjy96xb86z690kzcqmzh0000gn/T/pip-fURm_1-build/torch/_thnn/utils.py:2: RuntimeWarning: Parent module 'torch._thnn' not found while handling absolute import
      import itertools
    CMake Error at /usr/local/Cellar/cmake/3.2.3/share/cmake/Modules/CMakeDetermineCCompiler.cmake:57 (message):
      Could not find compiler set in environment variable CC:

      gcc-4.9.
    Call Stack (most recent call first):



    CMake Error: Error required internal CMake variable not set, cmake may be not be built correctly.
    Missing variable is:
    CMAKE_C_COMPILER_ENV_VAR
    CMake Error: Error required internal CMake variable not set, cmake may be not be built correctly.
    Missing variable is:
    CMAKE_C_COMPILER
    CMake Error: Could not find cmake module file: /private/var/folders/h2/_l96jjy96xb86z690kzcqmzh0000gn/T/pip-fURm_1-build/torch/lib/build/TH/CMakeFiles/3.2.3/CMakeCCompiler.cmake
    CMake Error: Error required internal CMake variable not set, cmake may be not be built correctly.
    Missing variable is:
    CMAKE_CXX_COMPILER_ENV_VAR
    CMake Error: Error required internal CMake variable not set, cmake may be not be built correctly.
    Missing variable is:
    CMAKE_CXX_COMPILER
    CMake Error: Could not find cmake module file: /private/var/folders/h2/_l96jjy96xb86z690kzcqmzh0000gn/T/pip-fURm_1-build/torch/lib/build/TH/CMakeFiles/3.2.3/CMakeCXXCompiler.cmake
    CMake Error in :
      No CMAKE_C_COMPILER could be found.

      Tell CMake where to find the compiler by setting the CMake cache entry
      CMAKE_C_COMPILER to the full path to the compiler, or to the compiler name
      if it is in the PATH.


    CMake Error in :
      No CMAKE_CXX_COMPILER could be found.

      Tell CMake where to find the compiler by setting the CMake cache entry
      CMAKE_CXX_COMPILER to the full path to the compiler, or to the compiler
      name if it is in the PATH.


    CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
    CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
    -- Configuring incomplete, errors occurred!
    See also "/private/var/folders/h2/_l96jjy96xb86z690kzcqmzh0000gn/T/pip-fURm_1-build/torch/lib/build/TH/CMakeFiles/CMakeOutput.log".

    ----------------------------------------
Command "/Users/awiltschko/anaconda/bin/python -u -c "import setuptools, tokenize;__file__='/var/folders/h2/_l96jjy96xb86z690kzcqmzh0000gn/T/pip-fURm_1-build/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/h2/_l96jjy96xb86z690kzcqmzh0000gn/T/pip-435VCI-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /var/folders/h2/_l96jjy96xb86z690kzcqmzh0000gn/T/pip-fURm_1-build/

MaxUnpool2d segfaults for some configurations

Having the output of MaxUnpool2d computed as follows is error-prone.

out_height = (input.size(2) - 1) * self.dh + self.kh - 2*self.padh
out_width = (input.size(3) - 1) * self.dw + self.kw - 2*self.padw

For some configurations of input size and stride/kernel size, pixels are lost at the boundaries due to integer division, so when unpooling we might end up with accesses that are outside the output size defined by the above equations, or get completely wrong results.
Here is a test case:

import torch
import torch.nn as nn
import torch.autograd as autograd

m = nn.MaxPool2d(3, stride=2, return_indices = True)
mu = nn.MaxUnpool2d(3, stride=2)
input_tensor = torch.rand(1, 1, 6, 6)
input_tensor[0][0][5][5] = 2
input = autograd.Variable(input_tensor)
output, indices = m.forward(input)
unpooled_output = mu.forward(output, indices)

Curiously, the segfault doesn't happen all the time; sometimes it just raises an out-of-range error, or the output maxima are not in the right positions.

Optim API: per-layer learning rates etc.

Right now, apart from figuring out API changes around freezing parts of the graph, another problem is:

  • specifying per-layer learning rates optionally.

This is a huge pain point in Torch, and is actually a common use case for many.
Cover this use-case properly and provide an example.
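
For reference, the optim API that eventually shipped covers this with per-parameter groups; a minimal sketch:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.Linear(10, 2))

# Each dict is a parameter group with its own options; anything left
# unspecified falls back to the defaults given after the list
optimizer = torch.optim.SGD(
    [
        {"params": model[0].parameters(), "lr": 1e-3},  # per-layer lr
        {"params": model[1].parameters()},              # default lr
    ],
    lr=1e-2,
)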

Error on legacy.nn serialization

repro:

import torch
import torch.legacy.nn as nn

net = nn.Sequential()
net.add(nn.SpatialConvolution(3,3,3,3,1,1))
torch.save(net, open('model.pt7', 'wb'))

Checklist for Release

Core

Core Framework code

  • optim + trainer + dataset objects
  • Sharing CPU tensors
  • Add all operations to autograd
  • Free GIL when processing big tensors
  • Custom CUDA memory allocator (Sam Gross)
  • multi-GPU functions
  • nccl integration
  • finish legacy.nn
  • refactor C API for extensions
  • create an example extension with TH/pytorch C API
  • Checkpointing and improving torch.save / torch.load to use the same byte order
  • implement keyword arguments in cwrap
  • go over TH and try to make error messages more descriptive (e.g. if the sizes don't match)
  • Sparse Tensors on CPU and GPU (Zeming)
  • improve tensor printing
  • functional API for autograd variables
  • Finish multiple CUDA types (Soumith)
  • Add stochastic nodes
  • Add all modules to nn
  • Improved error messages #39
  • sync legacy.nn with Lua nn
  • move Trainer to torch.experimental

Operations

  • Integrate CuDNN
  • Write nn.LSTM*, nn.GRU* etc. to integrate CuDNN RNNs
  • Rewrite LookupTable and SparseLinear to use SparseTensors

FBCode stuff

  • Import into FBCode

Open Source stuff

  • Binary builds
  • Continuous builds for CUDA
  • MNIST and ResNet18 Contbuilds
  • pip wheels

Backward Compatibility

Lua Bridge

  • integrate lutorpy into pytorch (either as optional package or by default)
    • change TH indexing to 0-based and add to cwrap the 1-subtraction and addition

Model Loading

Framework Integration

  • Caffe2 Integration
    • Modify TH / THC / THNN / THCUNN to integrate them
    • Have a converter that takes in a (Module and input) or (output) and auto-converts it to caffe model
      • vice versa. take a caffe protobuf and codegen a python class with loading weights
  • Keras Integration
    • Have a keras backend. Send in a Pull Request to fchollet/keras
  • Converting models between TF and Pytorch
    • Torch2TF: Same as the caffe converter, pretty much!
    • TF2Torch: same as caffe, but cover ops like tf.if and tf.while

Website

  • Find someone to design and code it
  • Getting Started
    • Binary installs
      • anaconda-based which links automatically with MKL
      • Each of them for different CUDA versions. 7.0, 7.5, 8.0
    • Source-based installs
  • Showcase Demos / Examples / ModelZoo elegantly
  • Tutorials
  • Look at gym.openai.com (http://gym.openai.com/)
  • Developer docs

Documentation, Demos, Examples, Tutorials, ModelZoo

Demos / Examples / ModelZoo

  • Pre-trained models for each demo (in the model zoo)
    • Create a python wrapper that allows searching for and downloading models (like nltk)
  • Simple API for retraining / using pre-trained models on custom datasets
  • Documentation on how to modify the example for one's own experiments
  • Most or all of them should be multi-GPU ready

Demos + Examples

  • Basic
  • Vision
    • Supervised
      • fb.resnet.torch / googlenet for image classification (sam)
      • fastrcnn (francisco)
      • Video Classification
      • NeuralTalk2 (paszke)
      • Visual Q&A (paszke)
    • Unsupervised
      • Image super-resolution (waifu2x) (soumith)
      • DCGANs + Improved Training for GANs + InfoGAN
      • Text 2 Image (soumith)
      • Pixel RNNs (soumith)
      • Variational AutoEncoders (joost)
  • Games / RL (ludc)
  • NLP / Text
  • Metalearning
    • Neural Turing Machine
    • Learning to Learn by Gradient Descent by Gradient Descent
    • Decoupled Neural Interfaces using Synthetic Gradients https://arxiv.org/abs/1608.05343
  • ConvNet-Benchmarks / DeepMark scripts

Tutorials

Documentation

  • Auto-generate from source / docstrings

Links

Postponed for next release

  • lazy forward execution engine
  • double backprop
  • Sharing CUDA tensors
  • look into Cython
  • a built-in profiler for forward/backward (with automatic hints for speeding up the execution?)

AIViz Integration

  • Have an initial attempt, and talk to Allan

  • Serveable via some Python REST API

  • figure out details for images and videos

  • Audio

  • wav2letter

  • DeepSpeech2 for maybe Switchboard or something (Ask Gabriel)

  • Sparse Models (ads?)

Distributed Training

  • simple distributed trainer like torch-distlearn / torch-ipc / torch-thrift

  • Synchronous, asynchronous and Elastic SGD

  • Integrate with Andrew / Yangqing's MPI library when that's ready

  • Port image classification and seq2seq to this

  • error handling

    • create a dict for translating exceptions and adding some pytorch specific info, sort of like in Elm [1,2]
    • make sure there's a clear error message when multiprocessing runs out of fds

Add a keyword argument "device" to torch.cuda.XXXTensor

One of the weird things is that to create a cuda tensor of the same type and size on a different device, you currently have to write:

with torch.cuda.device(3):
    x = type(tensor)(tensor.size())

This does not work, because it places x on tensor's device:

with torch.cuda.device(3):
    x = tensor.new(tensor.size())

We should probably allow explicitly choosing the device in the constructor and new functions:

x = torch.cuda.FloatTensor(foo.size(), device=3)
y = x.new(device=3)
y = x.type('torch.cuda.FloatTensor', device=3)
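
For reference, current factory functions take exactly such a device argument (a minimal sketch; assumes a machine with several GPUs):

import torch

x = torch.empty(3, 4, device="cuda:3")    # allocate directly on GPU 3
y = torch.zeros_like(x, device="cuda:1")  # same shape/dtype on GPU 1
z = x.to("cuda:0")                        # copy across devices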

Containers should allow module assignments

Right now, after you created a Container, you can assign modules at a later time to it like this:

container.add_module('linear', nn.Linear())

Instead, also allow this simpler interface:

container.linear = nn.Linear()

save/load CUDA tensor always puts it on device 0

Always deserializing onto device 0 is dangerous, because often device 0 doesn't have enough memory.

This requires some heuristics to get right (e.g. what if you deserialize onto a machine with a different # of GPUs?). You can look at what we did in cunn. I think this is low-pri.

In [127]: with open('checkpoint4.pt', 'wb') as f:
     ...:     pickle.dump(torch.FloatTensor(10).cuda(3), f)
     ...:

In [128]: with open('checkpoint4.pt', 'rb') as f:
     ...:     obj = pickle.load(f)
     ...:

In [130]: obj.getDevice()
Out[130]: 0
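
For reference, torch.load later grew a map_location argument for exactly this remapping (a minimal sketch; assumes a multi-GPU machine):

import torch

t = torch.randn(10, device="cuda:3")
torch.save(t, "checkpoint4.pt")

# Remap the storage at load time instead of always landing on device 0
on_cpu = torch.load("checkpoint4.pt", map_location="cpu")
on_gpu1 = torch.load("checkpoint4.pt", map_location={"cuda:3": "cuda:1"})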

ImportError: No module named _C

Ubuntu 16.04, anaconda python 2.7, got this when trying to 'import torch'

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-1-c031d3dd82fc> in <module>()
----> 1 import torch

/opt/rocks/pytorch/torch/__init__.py in <module>()
----> 1 from torch._C import *
      2 import sys
      3 import math
      4
      5 _tensor_classes = set()

ImportError: No module named _C

Can't print tensors with inf or nan

In [27]: print(torch.FloatTensor(1).fill_(1).div(0))
---------------------------------------------------------------------------
OverflowError                             Traceback (most recent call last)
<ipython-input-27-811177cb15a4> in <module>()
----> 1 print(torch.FloatTensor(1).fill_(1).div(0))

/home/alerer/anaconda/lib/python2.7/site-packages/torch/Tensor.pyc in __str__(self)
     85
     86     def __str__(self):
---> 87         return TensorPrinting.printTensor(self)
     88
     89     def __iter__(self):

/home/alerer/anaconda/lib/python2.7/site-packages/torch/TensorPrinting.pyc in printTensor(self)
    106         return '[{} with no dimension]\n'.format(torch.typename(self))
    107     elif self.nDimension() == 1:
--> 108         strt = _printVector(self)
    109     elif self.nDimension() == 2:
    110         strt = _printMatrix(self)

/home/alerer/anaconda/lib/python2.7/site-packages/torch/TensorPrinting.pyc in _printVector(tensor)
     96
     97 def _printVector(tensor):
---> 98     fmt, scale, _ = _printformat(tensor.storage())
     99     strt = ''
    100     if scale != 1:

/home/alerer/anaconda/lib/python2.7/site-packages/torch/TensorPrinting.pyc in _printformat(storage)
     25
     26     scale = 1
---> 27     exp_max = int(exp_max)
     28     if int_mode:
     29         if exp_max > 9:

OverflowError: cannot convert float infinity to integer

optim API to incorporate not just a rigid model

One piece of feedback from talking to a few researchers and showing them the design is that the optim API right now doesn't allow one to optimize a non-parameter variable.
For example, when optimizing something like neural-art, but also a bunch of meta-optimization research (like learning to learn).
We should consider making it optional to give a model in the constructor, and allow .step to take in params to optimize.
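
For reference, the optimizers that shipped accept any iterable of tensors with requires_grad=True, which covers this use case; a neural-art-style sketch:

import torch

# Optimize an image directly; no model parameters involved
image = torch.randn(1, 3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(100):
    optimizer.zero_grad()
    loss = (image ** 2).mean()  # stand-in for a real style/content loss
    loss.backward()
    optimizer.step()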
