Comments (4)
This probably needs a design doc
from pytorch.
From internal discussion: there is a preference for avoiding actual serialization of dispatch-related args. We should store them as class properties instead and query them from the `cls` arg within `_make_wrapper_subclass()`. This has a few benefits:
- No need to change the serialization format
- Decreased BC surface; serializing dispatch keys in particular is not something we want to do
We also discussed whether a default impl for subclasses that just works is valuable; the other option is to require a manual `__reduce_ex__()` impl for each new subclass. @albanD's opinion is that a default is valuable and should be present, since there are already quite a few things to think about for subclasses.
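The class-property approach can be sketched in plain Python (the names below are illustrative stand-ins, not the actual pytorch API): dispatch-related configuration hangs off the class, so `__reduce_ex__()` only has to serialize the data payload and the on-disk format is untouched.

```python
import pickle

class WrapperSubclass:
    # Dispatch-related configuration is a class property, never instance
    # state, so it does not enter the serialized payload. (Hypothetical
    # stand-in for the dispatch keys a real tensor subclass would query
    # from the cls arg inside _make_wrapper_subclass().)
    DISPATCH_CONFIG = ("conjugate", "negative")

    def __init__(self, data):
        self.data = data

    def __reduce_ex__(self, protocol):
        # Only the data travels; the class reference carries the dispatch
        # config implicitly, so the serialization format is unchanged.
        return (self.__class__, (self.data,))

t = WrapperSubclass([1.0, 2.0])
restored = pickle.loads(pickle.dumps(t))
print(restored.data)                   # [1.0, 2.0]
print(type(restored).DISPATCH_CONFIG)  # ('conjugate', 'negative')
```

Because the config is recovered from the class object at unpickle time, renaming or retuning the dispatch keys never invalidates old checkpoints.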
What are you using the serialization for? Is it just multiprocessing?
The serialization is used for both multiprocessing and `torch.load()` / `torch.save()`:

Lines 394 to 404 in 13462ec
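Both of those paths bottom out in `pickle`, so a single `__reduce_ex__()` hook covers them. A stdlib-only sketch (the `Payload` class is a hypothetical stand-in for a tensor subclass):

```python
import multiprocessing as mp
import pickle

class Payload:
    def __init__(self, values):
        self.values = values

    def __reduce_ex__(self, protocol):
        # One reduce hook serves both direct pickling (the torch.save()
        # path) and multiprocessing, which pickles arguments before
        # shipping them to a worker process.
        return (self.__class__, (self.values,))

def total(p):
    return sum(p.values)

if __name__ == "__main__":
    p = Payload([1, 2, 3])
    # torch.save()-style path: a plain pickle round-trip.
    assert pickle.loads(pickle.dumps(p)).values == [1, 2, 3]
    # multiprocessing path: the object crosses a process boundary.
    with mp.Pool(1) as pool:
        print(pool.apply(total, (p,)))  # prints 6
```

This is why a broken or missing reduce impl shows up in both `torch.save()` failures and worker-process crashes.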
Related Issues (20)
- A Permute Layer for torch.nn.Sequential
- Forward hooks not called when fast path is used in TransformerEncoderLayer HOT 1
- How to enable XNNPACK instead of NNPACK/MKLDNN in Windows?
- torch._dynamo.exc.Unsupported: call_method GetAttrVariable(UnspecializedNNModuleVariable(CenterCrop), _transformed_types) __iter__ () {}
- AOTriton Cmake error breaking PyTorch nightly binary builds for ROCm
- `torch.compile` with `reduce-overhead`: very long compile time + GPU memory continuously to grow HOT 7
- warnings.warn is super spammy under Dynamo HOT 5
- taking upper triangular of "-inf" matrix results in nan values HOT 1
- Using a warning inside of Dynamo internals is super spammy
- Using PyTorch with Transformers to run inference with 'MPS' backend causes poor results. HOT 3
- [v.2.4.0] Release Tracker HOT 15
- 'torch.compiler.reset()' does not reset 'assume_constant_result' value HOT 1
- [Dynamo] FunctionCtx initilaization warning suppression looks fishy
- HSDP + `set_optimizer_state_dict` errors with monolithic checkpointing HOT 3
- torch.export gives segmentation fault HOT 1
- the bug of torch._export.aot_compile when it is using a _mm_plus_mm operator HOT 1
- xpu: gradient checkpointing wrongly hits cuda path running on non-cuda devices HOT 3
- Negative numbers in the "Self CPU" column in pytorch's profiler HOT 1
- InternalTorchDynamoError on converting llama-2 to onnx using torch.onnx.dynamo_export