Comments (5)
This looks like a reason to me:
from pytorch.
Sorry to be the necromancer here.
The numbers in the charts above are hard to replicate: PyPI download statistics were disabled and later re-enabled in the meantime, and it's not entirely clear to me how reliable they were then or how reliable they are now. So I just looked at the Anaconda Cloud download numbers for 0.4.0 and 0.4.1, and it might be time to re-evaluate this.
PyTorch Version | Python 2.7 | Python 3.X |
---|---|---|
0.4.0 | 20,900 | 165,057 |
0.4.1 | 14,578 | 127,007 |
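The Python 2 share implied by these Anaconda Cloud numbers can be worked out directly; a quick sketch (the figures are just the ones from the table above):

```python
# Anaconda Cloud downloads from the table above: (Python 2.7, Python 3.X) per release
downloads = {
    "0.4.0": (20_900, 165_057),
    "0.4.1": (14_578, 127_007),
}

for version, (py2, py3) in downloads.items():
    share = py2 / (py2 + py3)
    print(f"{version}: Python 2 share = {share:.1%}")
# 0.4.0: 11.2%, 0.4.1: 10.3%
```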
There may be other sources of data that I didn't see or don't have access to, and there may be other reasons to further support legacy python, but I thought I'd put this out here.
One can get consistent PyPI download numbers for the last 1.5 years from this BigQuery dataset: https://bigquery.cloud.google.com/dataset/the-psf:pypi
Running a query to get the download numbers for the last ~3 months (since 2018-06-15):
```sql
SELECT
  file.project,
  file.version,
  file.type,
  file.filename,
  COUNT(*) AS total_downloads
FROM
  TABLE_DATE_RANGE(
    [the-psf:pypi.downloads],
    TIMESTAMP("20180615"),
    CURRENT_TIMESTAMP()
  )
WHERE
  file.project IN ('torch')
GROUP BY
  file.project, file.version, file.filename, file.type
ORDER BY total_downloads DESC
LIMIT 100
```
This gives:
file_project | file_version | file_type | file_filename | total_downloads |
---|---|---|---|---|
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp36-cp36m-manylinux1_x86_64.whl | 107464 |
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp35-cp35m-manylinux1_x86_64.whl | 76897 |
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp27-cp27mu-manylinux1_x86_64.whl | 64493 |
torch | 0.4.0 | bdist_wheel | torch-0.4.0-cp36-cp36m-manylinux1_x86_64.whl | 53572 |
torch | 0.1.2.post1 | sdist | torch-0.1.2.post1.tar.gz | 33083 |
torch | 0.3.1 | bdist_wheel | torch-0.3.1-cp36-cp36m-manylinux1_x86_64.whl | 25778 |
torch | 0.4.0 | bdist_wheel | torch-0.4.0-cp27-cp27mu-manylinux1_x86_64.whl | 23750 |
torch | 0.4.0 | bdist_wheel | torch-0.4.0-cp35-cp35m-manylinux1_x86_64.whl | 17091 |
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp36-cp36m-macosx_10_7_x86_64.whl | 15014 |
torch | 0.4.1.post2 | bdist_wheel | torch-0.4.1.post2-cp37-cp37m-manylinux1_x86_64.whl | 8977 |
torch | 0.3.1 | bdist_wheel | torch-0.3.1-cp27-cp27mu-manylinux1_x86_64.whl | 8479 |
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp37-cp37m-macosx_10_7_x86_64.whl | 6943 |
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp27-none-macosx_10_6_x86_64.whl | 6496 |
torch | 0.3.1 | bdist_wheel | torch-0.3.1-cp35-cp35m-manylinux1_x86_64.whl | 5504 |
torch | 0.4.0 | bdist_wheel | torch-0.4.0-cp36-cp36m-macosx_10_7_x86_64.whl | 4059 |
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp37-cp37m-manylinux1_x86_64.whl | 2852 |
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp35-cp35m-macosx_10_6_x86_64.whl | 2495 |
torch | 0.4.0 | bdist_wheel | torch-0.4.0-cp27-none-macosx_10_6_x86_64.whl | 1976 |
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp27-cp27m-manylinux1_x86_64.whl | 1841 |
torch | 0.3.1 | bdist_wheel | torch-0.3.1-cp36-cp36m-macosx_10_7_x86_64.whl | 1618 |
torch | 0.4.0 | bdist_wheel | torch-0.4.0-cp27-cp27m-manylinux1_x86_64.whl | 1210 |
torch | 0.4.0 | bdist_wheel | torch-0.4.0-cp35-cp35m-macosx_10_6_x86_64.whl | 1160 |
torch | 0.1.2 | sdist | torch-0.1.2.tar.gz | 908 |
torch | 0.3.1 | bdist_wheel | torch-0.3.1-cp27-none-macosx_10_6_x86_64.whl | 697 |
torch | 0.3.1 | bdist_wheel | torch-0.3.1-cp27-cp27m-manylinux1_x86_64.whl | 494 |
torch | 0.3.1 | bdist_wheel | torch-0.3.1-cp35-cp35m-macosx_10_6_x86_64.whl | 402 |
torch | 0.3.0.post4 | bdist_wheel | torch-0.3.0.post4-cp36-cp36m-macosx_10_7_x86_64.whl | 299 |
torch | 0.3.0.post4 | bdist_wheel | torch-0.3.0.post4-cp27-none-macosx_10_6_x86_64.whl | 254 |
torch | 0.3.0.post4 | bdist_wheel | torch-0.3.0.post4-cp35-cp35m-macosx_10_6_x86_64.whl | 210 |
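The Python 2/3 split can also be read straight off the wheel filenames above, since each wheel carries a `cpXY` Python tag (sdists carry no tag and are skipped). A small sketch over the query results:

```python
import re

# (filename, downloads) pairs copied from the query results above
rows = [
    ("torch-0.4.1-cp36-cp36m-manylinux1_x86_64.whl", 107464),
    ("torch-0.4.1-cp35-cp35m-manylinux1_x86_64.whl", 76897),
    ("torch-0.4.1-cp27-cp27mu-manylinux1_x86_64.whl", 64493),
    ("torch-0.4.0-cp36-cp36m-manylinux1_x86_64.whl", 53572),
    ("torch-0.1.2.post1.tar.gz", 33083),
    ("torch-0.3.1-cp36-cp36m-manylinux1_x86_64.whl", 25778),
    ("torch-0.4.0-cp27-cp27mu-manylinux1_x86_64.whl", 23750),
    ("torch-0.4.0-cp35-cp35m-manylinux1_x86_64.whl", 17091),
    ("torch-0.4.1-cp36-cp36m-macosx_10_7_x86_64.whl", 15014),
    ("torch-0.4.1.post2-cp37-cp37m-manylinux1_x86_64.whl", 8977),
    ("torch-0.3.1-cp27-cp27mu-manylinux1_x86_64.whl", 8479),
    ("torch-0.4.1-cp37-cp37m-macosx_10_7_x86_64.whl", 6943),
    ("torch-0.4.1-cp27-none-macosx_10_6_x86_64.whl", 6496),
    ("torch-0.3.1-cp35-cp35m-manylinux1_x86_64.whl", 5504),
    ("torch-0.4.0-cp36-cp36m-macosx_10_7_x86_64.whl", 4059),
    ("torch-0.4.1-cp37-cp37m-manylinux1_x86_64.whl", 2852),
    ("torch-0.4.1-cp35-cp35m-macosx_10_6_x86_64.whl", 2495),
    ("torch-0.4.0-cp27-none-macosx_10_6_x86_64.whl", 1976),
    ("torch-0.4.1-cp27-cp27m-manylinux1_x86_64.whl", 1841),
    ("torch-0.3.1-cp36-cp36m-macosx_10_7_x86_64.whl", 1618),
    ("torch-0.4.0-cp27-cp27m-manylinux1_x86_64.whl", 1210),
    ("torch-0.4.0-cp35-cp35m-macosx_10_6_x86_64.whl", 1160),
    ("torch-0.1.2.tar.gz", 908),
    ("torch-0.3.1-cp27-none-macosx_10_6_x86_64.whl", 697),
    ("torch-0.3.1-cp27-cp27m-manylinux1_x86_64.whl", 494),
    ("torch-0.3.1-cp35-cp35m-macosx_10_6_x86_64.whl", 402),
    ("torch-0.3.0.post4-cp36-cp36m-macosx_10_7_x86_64.whl", 299),
    ("torch-0.3.0.post4-cp27-none-macosx_10_6_x86_64.whl", 254),
    ("torch-0.3.0.post4-cp35-cp35m-macosx_10_6_x86_64.whl", 210),
]

totals = {"2": 0, "3": 0}
for name, count in rows:
    m = re.search(r"-cp(\d)\d+-", name)  # python tag, e.g. cp27 / cp36
    if m:  # sdists have no python tag and are skipped
        totals[m.group(1)] += count

share2 = totals["2"] / (totals["2"] + totals["3"])
print(f"Python 2 share among wheels: {share2:.1%}")  # 24.9%
```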
On PyPI, Python 2.7 is still very much alive and kicking. The Anaconda numbers are skewed because Anaconda brings its own Python, and many users simply start with miniconda3 instead of miniconda2.
Indeed, PyPI gives a different picture.
After figuring out how to use BigQuery (I had to select "legacy" SQL...), I played around with it a bit.
I aggregated the Python/torch version numbers to distinguish only between major Python versions (2/3) and minor PyTorch versions (i.e. ignoring suffixes such as "post1").
```sql
SELECT
  REGEXP_EXTRACT(file.version, r'^([0-9](?:\.[0-9]+)*)') AS version,
  REGEXP_EXTRACT(details.python, r'^([2-3])\.[0-9].') AS python_major,
  COUNT(*) AS total_downloads
FROM
  TABLE_DATE_RANGE(
    [the-psf:pypi.downloads],
    TIMESTAMP("20180615"),
    TIMESTAMP("20181015")
  )
WHERE
  file.project IN ('torch')
  -- AND (file.version = '0.4.0' OR file.version = '0.4.1')
GROUP BY
  version, python_major
ORDER BY version, python_major ASC
```
Ignoring the requests that have no Python version set (assuming they're distributed like the other downloads), that puts Python 2 at about 24% for the 0.4.0 and 0.4.1 releases. So yeah, arguably still relevant. But it seems to be changing, finally. 🎉
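The two `REGEXP_EXTRACT` patterns above can be sanity-checked locally; Python's `re` syntax is close enough to BigQuery's RE2 for these cases. A quick sketch:

```python
import re

# normalize the pytorch version: strip suffixes such as ".post1"
version_pat = re.compile(r'^([0-9](?:\.[0-9]+)*)')
# keep only the major python version (2 or 3)
python_pat = re.compile(r'^([2-3])\.[0-9].')

print(version_pat.match("0.4.1.post2").group(1))  # -> 0.4.1
print(version_pat.match("0.3.0.post4").group(1))  # -> 0.3.0
print(python_pat.match("2.7.15").group(1))        # -> 2
print(python_pat.match("3.6.5").group(1))         # -> 3
```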
@black-puppydog nice SQL skills, I might use that in the future.