
ngc-container-replicator's Introduction

NGC Replicator

Clones nvcr.io using either DGX (compute.nvidia.com) or NGC (ngc.nvidia.com) API keys.

The replicator will make an offline clone of the NGC/DGX container registry. In its current form, the replicator will download every CUDA container image as well as each Deep Learning framework image in the NVIDIA project.

Tarfiles will be saved in /output inside the container, so be sure to volume mount that directory. In the following example, we will collect our images in /tmp on the host.

Use --min-version to limit the number of versions to download. In the example below, we will only clone DL framework images with versions 17.12 and later.

docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/output \
    deepops/replicator --project=nvidia --min-version=17.12 \
                       --api-key=<your-dgx-or-ngc-api-key>

You can also filter on specific images. If you want to filter only on image names containing the strings "tensorflow", "pytorch", and "tensorrt", you would simply add --image for each option, e.g.

docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/output \
    deepops/replicator --project=nvidia --min-version=17.12 \
                       --image=tensorflow --image=pytorch --image=tensorrt \
                       --dry-run \
                       --api-key=<your-dgx-or-ngc-api-key>

Note: the --dry-run option lets you see what will happen without committing to a lengthy download.

By default, the --image flag does a substring match in order to ensure you match all images that may be desired. Sometimes, however, you only want to download a specific image with no substring matching. In this case, you can add the --strict-name-match flag, e.g.

docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/output \
    deepops/replicator --project=nvidia --min-version=17.12 \
                       --image=tensorflow \
                       --strict-name-match \
                       --dry-run \
                       --api-key=<your-dgx-or-ngc-api-key>

Note: a state.yml file will be created in the output directory. This saved state is used to avoid pulling images that were previously pulled. If you wish to re-pull and save an image, just delete the entry in state.yml corresponding to the image_name and tag you wish to refresh.
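The exact layout of state.yml can vary by replicator version, but a plausible fragment (hypothetical, modeled on the image/tag/metadata structure the replicator logs) looks like this:

```yaml
# Hypothetical state.yml fragment -- actual schema may differ by version.
# Deleting the "20.12-py3" entry below would cause that tag to be
# pulled and saved again on the next run.
nvidia/pytorch:
  20.12-py3:
    docker_id: '2020-12-18T03:52:53.213Z'
    registry: nvcr.io
```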

Kubernetes Deployment

If you don't already have a deepops namespace, create one now.

kubectl create namespace deepops

Next, create a secret with your NGC API key:

kubectl -n deepops create secret generic ngc-secret \
    --from-literal=apikey=<your-api-key-goes-here>

Next, create a persistent volume claim that will live outside the lifecycle of the CronJob. If you are using DeepOps, you can use a Rook/Ceph PVC similar to:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ngc-replicator-pvc
  namespace: deepops
  labels:
    app: ngc-replicator
spec:
  storageClassName: rook-raid0-retain  # <== Replace with your StorageClass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 32Mi

Finally, create a CronJob that executes the replicator on a schedule. This example runs the replicator daily at 04:00. Note: this example uses Rook block storage to provide a persistent volume that holds state.yml between executions, ensuring you only download new container images. For more details, see our DeepOps project.

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: replicator-config
  namespace: deepops
data:
  ngc-update.sh: |
    #!/bin/bash
    ngc_replicator                                        \
      --project=nvidia                                    \
      --min-version=$(date +"%y.%m" -d "1 month ago")     \
      --py-version=py3                                    \
      --image=tensorflow --image=pytorch --image=tensorrt \
      --no-exporter                                       \
      --registry-url=registry.local  # <== Replace with your local repo
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: ngc-replicator
  namespace: deepops
  labels:
    app: ngc-replicator
spec:
  schedule: "0 4 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          nodeSelector:
            node-role.kubernetes.io/master: ""
          containers:
            - name: replicator
              image: deepops/replicator
              imagePullPolicy: Always
              command: [ "/bin/sh", "-c", "/ngc-update/ngc-update.sh" ]
              env:
              - name: NGC_REPLICATOR_API_KEY
                valueFrom:
                  secretKeyRef:
                    name: ngc-secret
                    key: apikey
              volumeMounts:
              - name: registry-config
                mountPath: /ngc-update
              - name: docker-socket
                mountPath: /var/run/docker.sock
              - name: ngc-replicator-storage
                mountPath: /output
          volumes:
            - name: registry-config
              configMap:
                name: replicator-config
                defaultMode: 0777
            - name: docker-socket
              hostPath:
                path: /var/run/docker.sock
                type: File
            - name: ngc-replicator-storage
              persistentVolumeClaim:
                claimName: ngc-replicator-pvc
          restartPolicy: Never
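The --min-version value in the ConfigMap above is computed at run time. A quick way to check what it expands to (this relies on GNU date's -d relative-date syntax, as available in the replicator container; BSD/macOS date differs):

```shell
# YY.MM tag for "one month ago", matching NGC's DL framework tag scheme.
# This is the value the CronJob's ngc-update.sh passes to --min-version.
date +"%y.%m" -d "1 month ago"
```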

Developer Quickstart

make dev
py.test

TODOs

  • Save markdown READMEs for each image; these are not version controlled
  • Test the local registry push service (coded, in beta testing)
  • Add templater to workflow

ngc-container-replicator's People

Contributors

ajdecon, dependabot[bot], dholt, ksasagit, ryanolson, samcmill


ngc-container-replicator's Issues

Is this really offline?

I am a bit confused by the word 'offline'. Does that mean I do not have to log into the NGC registry to run the pre-trained models when using this replicator? Recently, I trained an LPR model with pre-trained weights, which performs very well at recognizing car plates. Unfortunately, public networks are not allowed in the place where I want to deploy my model because of security concerns. So, is it possible to use the models without logging into the NGC registry?

Ability to run without Docker?

I tried to convert the replicator into a Singularity image to be able to use it on a Docker-less cluster:

singularity pull docker://deepops/replicator:201015

This worked just fine and generated a replicator_201015.sif. Then off to replicating (note: needed PYTHONNOUSERSITE=1 otherwise stuff from ~/.local/lib/python3.6 was getting in the way... I might suggest defining this variable in the container proactively):

singularity run --env=PYTHONNOUSERSITE=1 -B /tmp:/output \
    replicator_201015.sif --project=nvidia --min-version=17.12 \
                       --image=tensorflow --image=pytorch --image=tensorrt \
                       --singularity \
                       --dry-run \
                       --api-key=`cat ~/.ngc_api_key.txt`

Unfortunately, the run crashes citing the lack of Docker daemon:

2021-03-24 11:35:39,056 - ngc_replicator.ngc_replicator - 30 - INFO - Initializing Replicator
2021-03-24 11:35:40,501 - nvidia_deepops.docker.registry.ngcregistry - 126 - INFO - GET https://api.ngc.nvidia.com/v2/orgs - took 0.5812202040106058 sec
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Traceback (most recent call last):
  File "/usr/local/bin/ngc_replicator", line 33, in <module>
    sys.exit(load_entry_point('ngc-replicator==0.4.0', 'console_scripts', 'ngc_replicator')())
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/ngc_replicator-0.4.0-py3.6.egg/ngc_replicator/ngc_replicator.py", line 344, in main
    replicator = Replicator(**config)
  File "/usr/local/lib/python3.6/site-packages/ngc_replicator-0.4.0-py3.6.egg/ngc_replicator/ngc_replicator.py", line 39, in __init__
    self.nvcr_client.login(username="$oauthtoken", password=api_key, registry="nvcr.io/v2")
  File "/usr/local/lib/python3.6/site-packages/nvidia_deepops-0.4.2-py3.6.egg/nvidia_deepops/docker/client/dockercli.py", line 62, in login
    "docker login -u {} -p {} {}".format(username, password, registry))
  File "/usr/local/lib/python3.6/site-packages/nvidia_deepops-0.4.2-py3.6.egg/nvidia_deepops/docker/client/dockercli.py", line 58, in call
    stderr=stderr)
  File "/usr/local/lib/python3.6/subprocess.py", line 311, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['docker', 'login', '-u', '$oauthtoken', '-p', '<_my_API_key_here_', 'nvcr.io/v2']' returned non-zero exit status 1.

From a naive user's perspective, if I run from Singularity (i.e., outside the Docker ecosystem) and all I want is to dump a bunch of image files, I should not need a functional Docker daemon on the host, right? Would it be possible for the replicator to detect this condition?
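One way the replicator could detect this condition up front (a sketch, not the project's actual code; the function name and default socket path are assumptions) is to check for the Docker control socket before attempting docker login:

```python
import os
import stat


def docker_daemon_available(socket_path="/var/run/docker.sock"):
    """Heuristic check: return True only if the Docker control socket
    exists and is actually a UNIX socket. A False result means the
    docker-CLI code path (docker login/pull/save) cannot work, so a
    Singularity-only export path should be taken instead."""
    try:
        mode = os.stat(socket_path).st_mode
    except OSError:
        # Path missing or unreadable: no usable daemon endpoint.
        return False
    return stat.S_ISSOCK(mode)
```

With a check like this, the replicator could fail fast with a clear message (or skip docker login entirely when --singularity is given) instead of crashing later inside subprocess.check_call.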

Issue with API?

Howdy,

We have a new nightly process that uses this container to check for and pull new versions of NVIDIA containers. It stopped working last Thursday night. I have tried everything, including pulling the container locally, and no matter what I still get the errors shown below:

[appman@hpctest-ngc scripts]$ docker image list
REPOSITORY           TAG      IMAGE ID       CREATED         SIZE
deepops/replicator   latest   ded4e6170335   2 weeks ago     504MB
docker               latest   51453dcdd9bd   5 weeks ago     215MB
ubuntu               latest   1318b700e415   7 weeks ago     72.8MB
centos               latest   831691599b88   15 months ago   215MB

[appman@hpctest-ngc scripts]$ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/output deepops/replicator --image=pytorch --dry-run --strict-name-match --api-key=
2021-09-13 21:05:48,675 - ngc_replicator.ngc_replicator - 30 - INFO - Initializing Replicator
2021-09-13 21:05:49,865 - nvidia_deepops.docker.registry.ngcregistry - 126 - INFO - GET https://api.ngc.nvidia.com/v2/orgs - took 0.6250609308481216 sec
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded
2021-09-13 21:05:51,624 - ngc_replicator.ngc_replicator - 66 - INFO - tarfiles will be saved to /output
2021-09-13 21:05:51,624 - ngc_replicator.ngc_replicator - 70 - INFO - Replicator initialization complete
2021-09-13 21:05:51,624 - ngc_replicator.ngc_replicator - 87 - INFO - Replicator Started
2021-09-13 21:06:22,168 - nvidia_deepops.docker.registry.ngcregistry - 126 - INFO - GET https://api.ngc.nvidia.com/v2/org/ygwdl2o5rmaj/repos?include-teams=true&include-public=true - took 30.543361625634134 sec
Traceback (most recent call last):
  File "/usr/local/bin/ngc_replicator", line 33, in <module>
    sys.exit(load_entry_point('ngc-replicator==0.4.0', 'console_scripts', 'ngc_replicator')())
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/ngc_replicator-0.4.0-py3.6.egg/ngc_replicator/ngc_replicator.py", line 371, in main
    replicator.sync()
  File "/usr/local/lib/python3.6/site-packages/ngc_replicator-0.4.0-py3.6.egg/ngc_replicator/ngc_replicator.py", line 90, in sync
    new_images = {image.name: image.tag for image in self.sync_images(project=project)}
  File "/usr/local/lib/python3.6/site-packages/ngc_replicator-0.4.0-py3.6.egg/ngc_replicator/ngc_replicator.py", line 90, in <dictcomp>
    new_images = {image.name: image.tag for image in self.sync_images(project=project)}
  File "/usr/local/lib/python3.6/site-packages/ngc_replicator-0.4.0-py3.6.egg/ngc_replicator/ngc_replicator.py", line 106, in sync_images
    for image in self.images_to_download(project=project):
  File "/usr/local/lib/python3.6/site-packages/ngc_replicator-0.4.0-py3.6.egg/ngc_replicator/ngc_replicator.py", line 127, in images_to_download
    remote_state = self.nvcr.get_state(project=project, filter_fn=filter_fn)
  File "/usr/local/lib/python3.6/site-packages/nvidia_deepops-0.4.2-py3.6.egg/nvidia_deepops/docker/registry/ngcregistry.py", line 255, in get_state
    names = self.get_image_names(project=project)
  File "/usr/local/lib/python3.6/site-packages/nvidia_deepops-0.4.2-py3.6.egg/nvidia_deepops/docker/registry/ngcregistry.py", line 204, in get_image_names
    for image in cache or self._get_repo_data(project=project)]
  File "/usr/local/lib/python3.6/site-packages/nvidia_deepops-0.4.2-py3.6.egg/nvidia_deepops/docker/registry/ngcregistry.py", line 190, in _get_repo_data
    .format(self.default_org))
  File "/usr/local/lib/python3.6/site-packages/nvidia_deepops-0.4.2-py3.6.egg/nvidia_deepops/docker/registry/ngcregistry.py", line 136, in _get
    req.raise_for_status()
  File "/usr/local/lib/python3.6/site-packages/requests/models.py", line 953, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 502 Server Error: for url: https://api.ngc.nvidia.com/v2/org/ygwdl2o5rmaj/repos?include-teams=true&include-public=true

Current Container and source has issues

Howdy,

After the latest merge, there seems to be an issue running the container (as well as downloading and building locally):

[ngc@hpctest-ngc ~]$ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/output \
    deepops/replicator --project=nvidia --min-version=17.12 \
    --image=tensorflow --image=pytorch --image=tensorrt \
    --dry-run \
    --api-key=
Unable to find image 'deepops/replicator:latest' locally
latest: Pulling from deepops/replicator
33847f680f63: Pull complete
e8124950597e: Pull complete
cfdfe715e2ab: Pull complete
11063ba8ad10: Pull complete
9cca960c0455: Pull complete
59c710625a0b: Pull complete
43c7bdc918aa: Pull complete
719fe337f921: Pull complete
e76245b707c9: Pull complete
1788f284abad: Pull complete
0edde909c898: Pull complete
5ebd9e0bdab6: Pull complete
0da1f63a5b3b: Pull complete
205b32bbfc59: Pull complete
0de6974af104: Pull complete
afe609d6ba8e: Pull complete
5ad296e03acc: Pull complete
1107645e1064: Pull complete
df77e4a2892d: Pull complete
79c69c4aa875: Pull complete
714ec4dbe631: Pull complete
232daeae6c76: Pull complete
e02dcc83a9f8: Pull complete
b89af75cf6e0: Pull complete
d329b23c4357: Pull complete
7803980f99c7: Pull complete
2a229df2f624: Pull complete
b185763ce13b: Pull complete
Digest: sha256:f1a71af92e6332f9b2be718cc309103d9fe359fb1d0aec04f4924e5030c283b8
Status: Downloaded newer image for deepops/replicator:latest
2021-08-24 19:02:11,356 - ngc_replicator.ngc_replicator - 30 - INFO - Initializing Replicator
2021-08-24 19:02:12,495 - nvidia_deepops.docker.registry.ngcregistry - 126 - INFO - GET https://api.ngc.nvidia.com/v2/orgs - took 0.6229122190270573 sec
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded
/usr/local/lib/python3.6/site-packages/ngc_replicator-0.4.0-py3.6.egg/ngc_replicator/ngc_replicator.py:57: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
tmp = yaml.load(file)
Traceback (most recent call last):

Make it easier for users to replicate a small number of the most common containers

Since this project's original creation, the number of containers available on NGC has grown exponentially. Downloading and copying with the default settings takes terabytes of data, and naming individual common containers is somewhat of a hassle, requiring subject-matter expertise that the admin running the replicator might not have.

I would suggest we create a --common flag that, by default, downloads only the last N versions of the most common containers (TensorFlow, PyTorch, RAPIDS, CUDA, Clara, Triton, TensorRT, ...) and still allows specification of other containers with the --image flag.

--image and --min-version filters may not work on irregular NGC tags

I am using the following configuration within my CronJob yaml file:

data:
  ngc-update.sh: |
    #!/bin/bash
    ngc_replicator                                        \
      --project=nvidia                                    \
      --min-version=$(date +"%y.%m" -d "1 month ago")     \
      --py-version=py3                                    \
      --image=tensorflow --image=pytorch --image=tensorrt --image=mxnet --image=digits --image=cuda --image=nvhpc --image=rapidsai \
      --no-exporter                                       \
      --registry-url=mgmt01.cluster.local:31500

And, based on the logs, these are the images it decided to fetch:

2020-12-28 03:02:01,711 - ngc_replicator.ngc_replicator - 289 - INFO - images to be fetched: defaultdict(<class 'dict'>,
            {   'nvidia/digits': {   '20.11-tensorflow-py3': {   'docker_id': '2020-11-20T02:46:37.875Z',
                                                                 'registry': 'nvcr.io'},
                                     '20.12-tensorflow-py3': {   'docker_id': '2020-12-18T03:42:35.815Z',
                                                                 'registry': 'nvcr.io'}},
                'nvidia/l4t-pytorch': {   'r32.4.2-pth1.2-py3': {   'docker_id': '2020-04-29T23:10:39.028Z',
                                                                    'registry': 'nvcr.io'},
                                          'r32.4.2-pth1.3-py3': {   'docker_id': '2020-04-29T23:11:07.724Z',
                                                                    'registry': 'nvcr.io'},
                                          'r32.4.2-pth1.4-py3': {   'docker_id': '2020-04-29T23:11:35.269Z',
                                                                    'registry': 'nvcr.io'},
                                          'r32.4.2-pth1.5-py3': {   'docker_id': '2020-04-29T23:12:04.055Z',
                                                                    'registry': 'nvcr.io'},
                                          'r32.4.3-pth1.6-py3': {   'docker_id': '2020-07-07T23:55:54.218Z',
                                                                    'registry': 'nvcr.io'},
                                          'r32.4.4-pth1.6-py3': {   'docker_id': '2020-10-21T21:27:22.926Z',
                                                                    'registry': 'nvcr.io'}},
                'nvidia/l4t-tensorflow': {   'r32.4.2-tf1.15-py3': {   'docker_id': '2020-04-29T22:23:48.073Z',
                                                                       'registry': 'nvcr.io'},
                                             'r32.4.3-tf1.15-py3': {   'docker_id': '2020-07-07T22:40:06.178Z',
                                                                       'registry': 'nvcr.io'},
                                             'r32.4.3-tf2.2-py3': {   'docker_id': '2020-07-07T22:40:40.409Z',
                                                                      'registry': 'nvcr.io'},
                                             'r32.4.4-tf1.15-py3': {   'docker_id': '2020-10-21T21:29:06.077Z',
                                                                       'registry': 'nvcr.io'},
                                             'r32.4.4-tf2.3-py3': {   'docker_id': '2020-10-21T22:36:26.793Z',
                                                                      'registry': 'nvcr.io'}},
                'nvidia/mxnet': {   '20.11-py3': {   'docker_id': '2020-11-20T02:47:47.932Z',
                                                     'registry': 'nvcr.io'},
                                    '20.12-py3': {   'docker_id': '2020-12-18T03:42:53.893Z',
                                                     'registry': 'nvcr.io'}},
                'nvidia/pytorch': {   '20.11-py3': {   'docker_id': '2020-11-20T02:46:27.312Z',
                                                       'registry': 'nvcr.io'},
                                      '20.12-py3': {   'docker_id': '2020-12-18T03:52:53.213Z',
                                                       'registry': 'nvcr.io'}},
                'nvidia/tensorflow': {   '20.11-tf1-py3': {   'docker_id': '2020-11-20T02:49:23.047Z',
                                                              'registry': 'nvcr.io'},
                                         '20.11-tf2-py3': {   'docker_id': '2020-11-20T02:51:56.543Z',
                                                              'registry': 'nvcr.io'},
                                         '20.12-tf1-py3': {   'docker_id': '2020-12-18T03:54:53.111Z',
                                                              'registry': 'nvcr.io'},
                                         '20.12-tf2-py3': {   'docker_id': '2020-12-18T03:45:48.862Z',
                                                              'registry': 'nvcr.io'}},
                'nvidia/tensorrt': {   '20.11-py3': {   'docker_id': '2020-11-20T02:47:41.008Z',
                                                        'registry': 'nvcr.io'},
                                       '20.12-py3': {   'docker_id': '2020-12-18T03:44:24.218Z',
                                                        'registry': 'nvcr.io'}}})

There are a few things that I noticed didn't work well:

  1. It also fetched l4t-pytorch and l4t-tensorflow, which I didn't specify in the YAML above
  2. It didn't fetch the cuda, nvhpc, and rapidsai images
  3. Even though I set --min-version to one month ago, it also captured l4t-pytorch and l4t-tensorflow tags from much older releases (i.e. 2020-04, 2020-07, and 2020-10)

For item 2 above, I suspect this is because the cuda and rapidsai images on NGC don't follow the usual tag naming convention (e.g. 20.11-xx or 20.12-xx).
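A stricter filter would reject tags that don't begin with the NGC YY.MM scheme before doing the version comparison, so irregular tags (like the l4t-* r32.x tags above) are excluded explicitly instead of slipping through. A sketch (hypothetical helper, not the replicator's actual filter_fn):

```python
import re

# NGC DL framework tags start with a two-digit year and month, e.g. "20.12-py3".
_YYMM = re.compile(r"^(\d{2})\.(\d{2})\b")


def tag_at_least(tag, min_version):
    """Return True only for tags that begin with YY.MM >= min_version.
    Irregular tags (e.g. "r32.4.4-tf2.3-py3") never match the pattern
    and are therefore filtered out rather than silently included."""
    m = _YYMM.match(tag)
    if not m:
        return False
    want = tuple(int(p) for p in min_version.split("."))
    return (int(m.group(1)), int(m.group(2))) >= want
```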

Always create state.yaml

As it stands right now, the replicator seems to create state.yaml only if it completes successfully; on any failure, it doesn't create the file. That's not a big deal if I only want a small subset, but if I'm doing a major pull (hundreds to thousands of images), it's a huge risk. It would be awesome to have it append to state.yaml after each successful download, so you don't have to restart the entire process if it fails at 750/1000.
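Incremental saving could look roughly like this (a sketch under assumptions: the function name and entry fields are hypothetical, and JSON is used to keep the example dependency-free, whereas the replicator's real state file is YAML):

```python
import json
import os
import tempfile


def record_pull(state_path, image_name, tag, entry):
    """Record one successfully pulled image immediately, so an
    interrupted run keeps the progress made so far."""
    state = {}
    if os.path.exists(state_path):
        with open(state_path) as f:
            state = json.load(f)
    state.setdefault(image_name, {})[tag] = entry
    # Write atomically: dump to a temp file in the same directory,
    # then rename over the old state so a crash never leaves a
    # half-written file behind.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(state_path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f, indent=2)
    os.replace(tmp, state_path)
```

Called once per completed download, a failure at image 750/1000 would leave a state file covering the first 749, and the next run would resume from there.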

API Key is not recognized even though I can use it to log in via Docker

Error:
local$ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/output deepops/replicator --project=nvidia --min-version=17.12 --image=tensorflow --image=pytorch --image=tensorrt --dry-run --api-key=
2020-08-28 02:46:19,202 - ngc_replicator.ngc_replicator - 30 - INFO - Initializing Replicator
Traceback (most recent call last):
  File "/usr/local/bin/ngc_replicator", line 11, in <module>
    load_entry_point('ngc-replicator==0.4.0', 'console_scripts', 'ngc_replicator')()
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/ngc_replicator-0.4.0-py3.6.egg/ngc_replicator/ngc_replicator.py", line 346, in main
    replicator = Replicator(**config)
  File "/usr/local/lib/python3.6/site-packages/ngc_replicator-0.4.0-py3.6.egg/ngc_replicator/ngc_replicator.py", line 39, in __init__
    raise RuntimeError("Unable to recognize the API key")
RuntimeError: Unable to recognize the API key

Steps to reproduce:

  1. Log into NGC and get new API Key
  2. Test key by using docker to login:
    a) docker login nvcr.io
    b) username: $oauthtoken
    c) Password: (my new API key from 1)
  3. Login was successful and API key was accepted
  4. Run sample command from the github site using same exact API key in 1 and 2:
    docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/output deepops/replicator --project=nvidia --min-version=17.12 --image=tensorflow --image=pytorch --image=tensorrt --dry-run --api-key=
