
neuro-extras's Introduction

Neu.ro Extras

A set of tools and commands to extend the functionality of Neu.ro platform CLI client.

Usage

Check out the neuro-extras CLI reference for the syntax and use-cases of the main commands.

Downloading and extracting an archive from a bucket

To platform storage: or disk:

  • From Google bucket storage:

    • neuro-extras data cp -x -t -v secret:gcp-creds:/gcp-creds.txt -e GOOGLE_APPLICATION_CREDENTIALS=/gcp-creds.txt gs://BUCKET_NAME/dataset.tar.gz storage:/project/dataset
    • secret:gcp-creds is a secret containing the authentication credentials file used to configure gsutil
  • From AWS-compatible object storage:

    • neuro-extras data cp -x -t -v secret:s3-creds:/s3-creds.txt -e AWS_SHARED_CREDENTIALS_FILE=/s3-creds.txt s3://BUCKET_NAME/dataset.tar.gz disk:disk-name-or-id:/project/dataset
    • secret:s3-creds is a secret containing the credentials file for the aws utility.
  • From Azure blob object storage:

    • neuro-extras data cp -x -t -e AZURE_SAS_TOKEN=secret:azure-sas-token azure+https://BUCKET_NAME/dataset.tar.gz storage:/project/dataset
    • secret:azure-sas-token is a secret containing the SAS token for accessing the needed blob.
  • From HTTP/HTTPS server:

    • neuro-extras data cp -x -t https://example.org/dataset.tar.gz disk:disk-name-or-id:/project/dataset

To local machine

  • From GCP bucket storage:

    • neuro-extras data cp -x gs://BUCKET_NAME/dataset.tar.gz /project/dataset
    • The gsutil utility should be installed on the local machine and authenticated to read the needed bucket
    • Supported Python versions are 3 (3.5 to 3.8, 3.7 recommended) and 2 (2.7.9 or higher)
  • From AWS-compatible object storage:

    • neuro-extras data cp -x s3://BUCKET_NAME/dataset.tar.gz /project/dataset
    • The aws utility should be installed on the local machine and authenticated to read the needed bucket
    • If needed, install it with pipx install awscli to avoid conflicts with neuro-cli
  • From Azure blob object storage:

    • AZURE_SAS_TOKEN=$TOKEN neuro-extras data cp -x azure+https://BUCKET_NAME/dataset.tar.gz /project/dataset
    • rclone should be installed on the local machine
  • From HTTP/HTTPS server:

    • neuro-extras data cp -x -t https://example.org/dataset.tar.gz /project/dataset
    • rclone should be installed on the local machine
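
All of the examples above follow the same general shape; schematically (a rough sketch derived only from the variants shown above, not a full synopsis of the command):

neuro-extras data cp [-x] [-t] \
    [-v secret:SECRET_NAME:/path/inside/job] \
    [-e ENV_VAR=value] \
    SOURCE_URI DESTINATION_URI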

neuro-extras's People

Contributors

anayden, andriihomiak, asvetlov, atemate, dalazx, dependabot-preview[bot], dependabot[bot], neu-ro-github-bot[bot], pre-commit-ci[bot], romasku, yevheniisemendiak


Forkers

asvetlov

neuro-extras's Issues

Add a command to copy data between clusters

neuro cluster-cp storage:neuro-ai-public/* storage:neuro-public/*

Under the hood this will run something like this:

neuro config switch-cluster neuro-public
neuro mkdir -p storage:
neuro run -s cpu-small --pass-config --tty -v storage::/storage -e NEURO_CLUSTER=neuro-ai-public neuromation/neuro-extras:latest cp -r -u -T storage: /storage

Better UX for secret management in data ingestion

see discussion.

  1. Have a command to create a secret of one of the supported kinds (aws, gcp, github, ...), where you need to specify the secret name (so that the user can have multiple secrets of the same kind). The mapping "secret_name -> secret_kind" is saved to a local file ~/.neuro/neuro-extras/secrets.yaml.
    Example: neuro-extras secret add aws-key-1 $SECRET_VALUE --kind aws.
  2. When starting a job for data ingestion, the user needs to specify the secret name and its kind is taken from the file secrets.yaml (so that neuro-extras knows how to mount this secret).
    Example: neuro-extras data copy --secret aws-key-1 s3://my-aws-bucket storage:somewhere
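
A possible shape of the proposed ~/.neuro/neuro-extras/secrets.yaml (a sketch only: the exact schema is not specified here, and gcp-key-2 is a hypothetical name):

# mapping "secret_name -> secret_kind", as described in item 1 above
aws-key-1: aws
gcp-key-2: gcp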

Flaky test: test_data_cp_from_cloud_to_local_compress

https://github.com/neuro-inc/neuro-extras/runs/1288165964?check_suite_focus=true

2020-10-21T17:22:32.0314310Z =================================== FAILURES ===================================
2020-10-21T17:22:32.0316150Z ___ test_data_cp_from_cloud_to_local_compress[tar.gz-gs://***] ____
2020-10-21T17:22:32.0726780Z [gw1] darwin -- Python 3.6.12 /Users/runner/hostedtoolcache/Python/3.6.12/x64/bin/python
2020-10-21T17:22:32.0727280Z 
2020-10-21T17:22:32.0728560Z project_dir = PosixPath('/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/tmp41bu98m8')
2020-10-21T17:22:32.0729760Z remote_project_dir = PosixPath('e2e-test-remote-dir')
2020-10-21T17:22:32.0730480Z cli_runner = <function cli_runner.<locals>._run_cli at 0x1040ace18>
2020-10-21T17:22:32.0731230Z args_data_cp_from_cloud = <function args_data_cp_from_cloud.<locals>._f at 0x1040acf28>
2020-10-21T17:22:32.0733110Z bucket = 'gs://***', archive_extension = 'tar.gz'
2020-10-21T17:22:32.0733550Z 
2020-10-21T17:22:32.0734130Z     @pytest.mark.parametrize("bucket", [GCP_BUCKET, AWS_BUCKET])
2020-10-21T17:22:32.0735000Z     @pytest.mark.parametrize("archive_extension", ["tar.gz", "tgz", "zip", "tar"])
2020-10-21T17:22:32.0735710Z     @pytest.mark.skipif(
2020-10-21T17:22:32.0736220Z         sys.platform == "win32",
2020-10-21T17:22:32.0736870Z         reason="Windows path are not supported yet + no utilities on windows",
2020-10-21T17:22:32.0737420Z     )
2020-10-21T17:22:32.0737890Z     def test_data_cp_from_cloud_to_local_compress(
2020-10-21T17:22:32.0738400Z         project_dir: Path,
2020-10-21T17:22:32.0738870Z         remote_project_dir: Path,
2020-10-21T17:22:32.0739350Z         cli_runner: CLIRunner,
2020-10-21T17:22:32.0739890Z         args_data_cp_from_cloud: Callable[..., List[str]],
2020-10-21T17:22:32.0740390Z         bucket: str,
2020-10-21T17:22:32.0740840Z         archive_extension: str,
2020-10-21T17:22:32.0741980Z     ) -> None:
2020-10-21T17:22:32.0742510Z         TEMP_UNPACK_DIR.mkdir(parents=True, exist_ok=True)
2020-10-21T17:22:32.0743270Z         with TemporaryDirectory(dir=TEMP_UNPACK_DIR.expanduser()) as tmp_dir:
2020-10-21T17:22:32.0744290Z             src = f"{bucket}/hello.{archive_extension}"
2020-10-21T17:22:32.0744850Z             res = cli_runner(
2020-10-21T17:22:32.0745320Z                 args_data_cp_from_cloud(
2020-10-21T17:22:32.0746010Z                 bucket, src, f"{tmp_dir}/hello.{archive_extension}", False, True
2020-10-21T17:22:32.0746570Z                 )
2020-10-21T17:22:32.0746930Z             )
2020-10-21T17:22:32.0747380Z >           assert res.returncode == 0, res
2020-10-21T17:22:32.0749000Z E           AssertionError: CompletedProcess(args=['neuro-extras', 'data', 'cp', 'gs://***/hello.tar.gz', '/Users/runner/.neuro-tmp/t...elp)
2020-10-21T17:22:32.0749850Z E             Elapsed time:         0.0s
2020-10-21T17:22:32.0750250Z E             
2020-10-21T17:22:32.0750730Z E             2020/10/21 17:16:33 Failed to copy: directory not found
2020-10-21T17:22:32.0751650Z E             Error: Cloud copy failed')
2020-10-21T17:22:32.0752110Z E           assert 1 == 0
2020-10-21T17:22:32.0752490Z E             +1
2020-10-21T17:22:32.0753160Z E             -0

Prevent alias overwriting

If a user has a personal alias that conflicts with one installed by neuro-extras, the user's alias gets overwritten. We should probably do one of the following:

  1. Warn the user about the alias conflict, but overwrite anyway
  2. Warn the user about the alias conflict and not overwrite the conflicting alias, but still create the other aliases
  3. Warn the user about the alias conflict and reject alias creation altogether

Add command for copying images between registries

Suggested usage:

neuro copy_image SOURCE DESTINATION

Suggested implementation: create a new Dockerfile that is a copy of the original one plus a new line LABEL original_url=SOURCE, then use kaniko to build this image and push it to the DESTINATION registry.
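
A rough sketch of what the command could do under the hood (a simplification: it bases the generated Dockerfile on FROM SOURCE rather than on a copy of the original Dockerfile, and reuses the existing image build command; directory and variable names are illustrative):

mkdir image-copy && cd image-copy
cat > Dockerfile <<EOF
FROM $SOURCE
LABEL original_url=$SOURCE
EOF
neuro-extras image build . $DESTINATION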

[B] gcloud pre-authentication does not work

Summary

It is impossible to use neuro-extras as-is to ingest data from Google Cloud Storage, since gcloud auth activate-service-account --key-file=... is not executed:
https://github.com/neuro-inc/neuro-extras/blob/master/neuro_extras/main.py#L249
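
For reference, the missing pre-authentication step looks roughly like this (assuming GOOGLE_APPLICATION_CREDENTIALS points at the mounted key file, as in the job definition below):

gcloud auth activate-service-account --key-file="$GOOGLE_APPLICATION_CREDENTIALS"
gsutil ls gs://BUCKET_NAME/   # should succeed once the service account is activated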

Steps to reproduce

  1. Use neuro-flow to run the ingestion; define the job (replacing GCP_FACE_API_DEV_NEURO_SERVICE_KEY with your secret's name):
volumes:
  data:
    remote: storage:cloudinary
    mount: /data

jobs:
  download:
    image: neuromation/neuro-extras:v20.9.30.2
    preset: cpu-small
    detach: False
    multi: True
    volumes:
      - ${{ volumes.data.ref_rw }}
      - secret:GCP_FACE_API_DEV_NEURO_SERVICE_KEY:/var/secrets/gcp.json
    env:
      GOOGLE_APPLICATION_CREDENTIALS: /var/secrets/gcp.json
    bash: |
      neuro-extras data cp -x ${{ multi.args }}
  2. Execute neuro-flow run download -- gs://datasets-synthesis/hw-1600.105k-test.tar.gz /storage
  3. Wait for the job to finish.
  4. Check the job logs and see that the ingestion failed.

Expected result

Ingestion succeeds.

Environment

neuro-extras image: neuromation/neuro-extras:v20.9.30.2

Data ingestion and extraction

While there are a lot of use-cases related to data ingestion and extraction, we would like to start with those we need for the client use-case. To be precise, we need a bunch of commands like "neuro-flow data upload / download" which support the following functionality:

  • From a given S3 bucket / GS bucket / local machine path upload a folder / ZIP / TAR / TAR.GZ (and unpack) to a given storage URI (ingestion).
  • From a given storage URI download a folder as folder / ZIP / TAR / TAR.GZ (pack if necessary) to a given S3 bucket / GS bucket / local machine path (extraction).

There are several caveats here:

  • For S3 and GS buckets we need to pass credentials (use secrets in some way).
  • Unpacking right on storage is very slow, so we need to download archives to ephemeral storage, unpack them there, and use something like rclone to copy the result to storage (see the default README in a Jupyter Notebooks instance as an example); a rough sketch of this flow follows the list.
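
Sketch of that flow inside a job (all paths are illustrative; storage: is assumed to be mounted at /var/storage):

# 1. download the archive to ephemeral (job-local) storage
gsutil cp gs://BUCKET_NAME/dataset.tar.gz /tmp/dataset.tar.gz
# 2. unpack it locally
mkdir -p /tmp/dataset && tar -xzf /tmp/dataset.tar.gz -C /tmp/dataset
# 3. copy the unpacked tree to the mounted storage with rclone
rclone copy /tmp/dataset /var/storage/project/dataset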

Allow copying tags in `neuro-extras image copy`

The current implementation of image copy copies only the latest tag. We should support:

  • Copying arbitrary tag (e.g. neuro-extras image copy SOURCE_IMG:tagname DESTINATION)
  • Copying all tags at once (e.g. neuro-extras image copy SOURCE_IMG:* DESTINATION)

[B] Can't run data ingestion in neuro-flow

Summary

Can't run data ingestion in neuro-flow

Steps to reproduce

Use a target in the neuro-flow YAML with neuromation/neuro-extras as the image and the following command:

    bash:
      neuro-extras data cp -x gs://datasets-synthesis/hw-1600.105k-test.tar.gz /data/hw-1600.105k-test

The target fails because bash is not found in neuromation/neuro-extras.

Expected result

I can download data.

Error: cp --progress: not found

neuro-extras cp storage://neuro-public/mvasilkovsky/mask-detector storage://neuro-compute/mvasilkovsky/
Executing 'neuro run -s cpu-small --pass-config -v storage:://storage -e NEURO_CLUSTER=neuro-public neuromation/neuro-extras:latest "cp --progress -r -u -T storage:mask-detector /storage/"'
Temporary config file created on storage: storage://neuro-compute/mvasilkovsky/.neuro/9d53c561-7147-4f40-8156-ece5ab6917fa-cfg.
Inside container it will be available at: /var/storage/.neuro/9d53c561-7147-4f40-8156-ece5ab6917fa-cfg.
√ Job ID: job-53f1aadf-5389-41e5-b7e2-de1bd25423ab
- Status: pending Creating
- Status: pending Scheduling
× Status: failed Error (cp: unrecognized option '--progress'
Try 'cp --help' for more information.
)
Traceback (most recent call last):
  File "/Users/starlight/miniconda3/bin/neuro-extras", line 8, in <module>
    sys.exit(main())
  File "/Users/starlight/miniconda3/lib/python3.7/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/Users/starlight/miniconda3/lib/python3.7/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/Users/starlight/miniconda3/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/starlight/miniconda3/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/starlight/miniconda3/lib/python3.7/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/Users/starlight/miniconda3/lib/python3.7/site-packages/neuro_extras/main.py", line 237, in cluster_copy
    run_async(_copy_storage(source, destination))
  File "/Users/starlight/miniconda3/lib/python3.7/site-packages/neuromation/cli/asyncio_utils.py", line 122, in run
    return runner.run(main)
  File "/Users/starlight/miniconda3/lib/python3.7/site-packages/neuromation/cli/asyncio_utils.py", line 54, in run
    return self._loop.run_until_complete(main_task)
  File "/Users/starlight/miniconda3/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
    return future.result()
  File "/Users/starlight/miniconda3/lib/python3.7/site-packages/neuro_extras/main.py", line 254, in _copy_storage
    await _run_copy_container(src_cluster, "/".join(src_path), "/".join(dst_path))
  File "/Users/starlight/miniconda3/lib/python3.7/site-packages/neuro_extras/main.py", line 276, in _run_copy_container
    raise Exception("Unable to copy storage")
Exception: Unable to copy storage

[B] data transfer cannot copy data from and to folder not inside user's home directory

Summary

If you want to copy data from a folder shared by another user and/or to a folder shared with you, neuro-extras parses the paths incorrectly and assumes the src/dst folder is located in the current user's home directory.

Steps to reproduce

  1. Get the shared folder from which you want to fetch the data; it should be in another user's home dir, for instance storage://neuro-public/mvasilkovsky/mask-detector/data. Make sure the folder mask-detector/data does not exist in your storage home dir in the neuro-public cluster.

  2. Try to copy this data into your home dir on another cluster (assuming the current user is yevheniisemendiak and exists on the neuro-compute cluster): neuro-extras data transfer storage://neuro-public/mvasilkovsky/mask-detector/data storage://neuro-compute/yevheniisemendiak/mask-detector/data

  3. See that the copy job crashed since the source data cannot be found in the current user's home dir.


Expected result

neuro-extras should pass the full src/dst storage URIs to the copy job to avoid such problems.

Environment

  • neuro-extras version: v20.10.16
  • neuro CLI version: 20.9.24

Use versioned 'neuromation/neuro-extras' image instead of latest

Discussion: https://neuromation.slack.com/archives/C0185V3TMJN/p1601299471007900:

guys we need to move neuro-extras' Dockerfile away from the repo itself, they both depend on each other. Now, I change dockerfile and my tests fail:
https://github.com/neuromation/neuro-extras/pull/81

Mariya Davydova
I’m not sure if moving it will help…

Artem Yushkovskiy
so we need to release it somehow without testing.
Note, alpha releases won't work as once we release an alpha, the latest image will be pushed to dockerhub and break the whole neuro-extras functionality. To fix this, we should link neuro-extras to the specific version of its image, not to :latest. This will solve everything IMO

Yevhenii Semendiak
Could we release alpha images?

Artem Yushkovskiy
what's the point if neuro-extras's code looks at :latest

Artem Yushkovskiy
it's hard-coded

Artem Yushkovskiy
and should be

Yevhenii Semendiak
For the tests we may parametrise it, but it will only overcomplicate the things..

Neuro-extras image build exits with 0 even if failed

$ ls 
Dockerfile

$ cat Dockerfile 
FROM ubuntu
COPY nonexistent /

$ neuro-extras image build . image:test
...
INFO: Submitting a builder job
INFO: The builder job ID: job-caefd98f-9255-4ae5-8e3c-258c57888b72
...
DEBU[0011] Resolved nonexistent to nonexistent          
DEBU[0011] Resolved / to /                              
error building image: error building stage: failed to optimize instructions: failed to get files used from context: failed to get fileinfo for /workspace/nonexistent: lstat /workspace/nonexistent: no such file or directory
INFO: Successfully built image:test

$ echo $?
0

This can be a problem in CI/CD or other pipelines.
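
Until the exit code is fixed, a possible CI workaround (a sketch only, keyed off the error text visible in the log above):

neuro-extras image build . image:test 2>&1 | tee build.log
if grep -q "error building image" build.log; then
    echo "image build failed" >&2
    exit 1
fi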

[B] Source archive is deleted if we extract data from local dir

Summary

If we run neuro-extras in a job with storage: attached, where a blob archive is available, we cannot simply extract the archive content to a local dir, because neuro-extras removes the source archive.

Steps to reproduce

  1. Set up a job with storage: attached, where some tar.gz archive is available; use neuromation/neuro-extras:v20.9.30.2 as the base image.
  2. Run ingestion with extraction: neuro-extras data cp -x /path/to/archive.tar.gz /path/to/unarchived/target.
  3. Check the source folder (/path/to/ in this case) and see that the archive was removed.

Expected result

  1. The archive is preserved if we are not loading it from the cloud.

Environment

neuromation/neuro-extras:v20.9.30.2
synthesis cluster (job-81adaddb-fdcc-4961-9f28-0df7edd4560b)

Provide `upload` and `download` commands

Both commands should search for a project folder with a .neuro.toml file inside, read the config, figure out which local and remote paths should be synchronized, and sync the data.

neuro-extras upload path/to should copy all files and folders from <local-project-root>/path/to to storage:remote-project-root/path/to. If the user's current directory is not <local-project-root>, please calculate the path the way git does.

Let's forbid empty paths (a bare neuro-extras download) for the sake of safety. We can easily add support for them on demand.

To copy files please call neuro cp --recursive in a subprocess using asyncio.

load_user_config() https://github.com/neuromation/platform-client-python/blob/master/neuromation/api/config.py#L216 can be used for reading the config file. Please note that _check_sections raises an error for unexpected sections; this check should be removed in a separate PR.
There is no public API for figuring out the project root; please create a function for this in the neuromation.api package. I expect that we'll reuse this function in other commands as well.
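
A minimal sketch of the copy call (the helper name is an assumption; it only illustrates running neuro cp --recursive via asyncio and failing loudly on a non-zero exit code):

import asyncio

async def copy_recursive(src: str, dst: str) -> None:
    # Run `neuro cp --recursive SRC DST` as a subprocess, as suggested above.
    proc = await asyncio.create_subprocess_exec("neuro", "cp", "--recursive", src, dst)
    code = await proc.wait()
    if code != 0:
        raise RuntimeError(f"'neuro cp --recursive {src} {dst}' exited with code {code}")

# e.g. asyncio.run(copy_recursive("path/to", "storage:remote-project-root/path/to"))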

Add --version flag

$ neuro-extras --version
Usage: neuro-extras [OPTIONS] COMMAND [ARGS]...
Try 'neuro-extras --help' for help.

Error: no such option: --version
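
Since neuro-extras is built on click (see the tracebacks elsewhere on this page), the flag could be wired up with click's built-in decorator; a minimal sketch (the group name and docstring are illustrative):

import click

@click.group()
@click.version_option()  # adds --version; pass version="..." explicitly if auto-detection fails
def main() -> None:
    """neuro-extras command-line tool."""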

Support secrets in `build` command

We need a way to pass secrets to a build job. To be precise, the following should be possible:

neuro-flow build \
    --env VAR=secret:key \
    --volume secret:key:/mount/path/file.txt \
    Dockerfile

Support data transfer between clusters for other users

Use-cases:

  1. Team member A asks team member B to transfer directory storage://cluster-1/A/a/b/c to storage://cluster-2/B/a/b/c (B's home dir)
  2. Team member A asks team member B to transfer directory storage://cluster-1/A/a/b/c to storage://cluster-2/A/a/b/c (A's home dir)

Flaky test: database locked

see https://github.com/neuro-inc/neuro-extras/runs/1186452796?check_suite_focus=true


=================================== FAILURES ===================================
_____________________ test_config_save_docker_json_locally _____________________
[gw0] linux -- Python 3.6.12 /opt/hostedtoolcache/Python/3.6.12/x64/bin/python

cli_runner = <function cli_runner.<locals>._run_cli at 0x7f58a7c27488>

    def test_config_save_docker_json_locally(cli_runner: CLIRunner) -> None:
>       result = cli_runner(["neuro-extras", "config", "save-docker-json", ".docker.json"])

/home/runner/work/neuro-extras/neuro-extras/tests/e2e/test_main.py:433: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/home/runner/work/neuro-extras/neuro-extras/tests/e2e/test_main.py:70: in _run_cli
    main(args)
/opt/hostedtoolcache/Python/3.6.12/x64/lib/python3.6/site-packages/click/core.py:829: in __call__
    return self.main(*args, **kwargs)
/opt/hostedtoolcache/Python/3.6.12/x64/lib/python3.6/site-packages/click/core.py:782: in main
    rv = self.invoke(ctx)
/opt/hostedtoolcache/Python/3.6.12/x64/lib/python3.6/site-packages/click/core.py:1259: in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
/opt/hostedtoolcache/Python/3.6.12/x64/lib/python3.6/site-packages/click/core.py:1259: in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
/opt/hostedtoolcache/Python/3.6.12/x64/lib/python3.6/site-packages/click/core.py:1066: in invoke
    return ctx.invoke(self.callback, **ctx.params)
/opt/hostedtoolcache/Python/3.6.12/x64/lib/python3.6/site-packages/click/core.py:610: in invoke
    return callback(*args, **kwargs)
/home/runner/work/neuro-extras/neuro-extras/neuro_extras/main.py:825: in config_save_docker_json
    run_async(_save_docker_json(path))
/opt/hostedtoolcache/Python/3.6.12/x64/lib/python3.6/site-packages/neuromation/cli/asyncio_utils.py:122: in run
    return runner.run(main)
/opt/hostedtoolcache/Python/3.6.12/x64/lib/python3.6/site-packages/neuromation/cli/asyncio_utils.py:54: in run
    return self._loop.run_until_complete(main_task)
/opt/hostedtoolcache/Python/3.6.12/x64/lib/python3.6/asyncio/base_events.py:488: in run_until_complete
    return future.result()
/home/runner/work/neuro-extras/neuro-extras/neuro_extras/main.py:843: in _save_docker_json
    await builder.save_docker_config(docker_config, uri)
/opt/hostedtoolcache/Python/3.6.12/x64/lib/python3.6/site-packages/neuromation/api/utils.py:86: in __aexit__
    await self._ret.close()  # type: ignore
/opt/hostedtoolcache/Python/3.6.12/x64/lib/python3.6/site-packages/neuromation/api/client.py:60: in close
    self._core._save_cookies(db)
/opt/hostedtoolcache/Python/3.6.12/x64/lib/python3.6/site-packages/neuromation/api/core.py:89: in _save_cookies
    _save_cookies(db, to_save)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

db = <sqlite3.Connection object at 0x7f58a7be73b0>
cookies = [<Morsel: NEURO_MONITORINGAPI_SESSION="http://10.1.10.217:8080"; Domain=neuro-public.org.neu.ro; Max-Age=300; Path=/>]

    def _save_cookies(
        db: sqlite3.Connection,
        cookies: Sequence["Morsel[str]"],
        *,
        now: Optional[float] = None,
    ) -> None:
        if now is None:
            now = time.time()
        _ensure_schema(db, update=True)
        cur = db.cursor()
        for cookie in cookies:
            cur.execute(
                """\
                    INSERT OR REPLACE INTO cookie_session
                    (name, domain, path, cookie, timestamp)
                    VALUES (?, ?, ?, ?, ?)""",
>               (cookie.key, cookie["domain"], cookie["path"], cookie.value, now),
            )
E           sqlite3.OperationalError: database is locked

/opt/hostedtoolcache/Python/3.6.12/x64/lib/python3.6/site-packages/neuromation/api/core.py:229: OperationalError
----------------------------- Captured stdout call -----------------------------
Saving Docker config.json as file:///tmp/tmpzd1w8cq7/.docker.json
------------------------------ Captured log call -------------------------------
INFO     e2e.test_main:test_main.py:55 Run 'neuro-extras config save-docker-json .docker.json'

Cleanup test images

On each commit, we now build and push the image neuromation/neuro-extras:$SHA. We need to clean these up. See the GH action posttest in our ci.yml.
