
amazon-sagemaker-developer-guide's Issues

Balancing while using BlazingText (supervised)

I have highly imbalanced text data that I want to classify using the BlazingText algorithm.
So far I cannot get any good results, and in my opinion it's because my data is so imbalanced (98% to 2%).
Since I only have raw text and no access to the already-embedded word vectors, I cannot use sampling algorithms like SMOTE.

Is there anything within SageMaker that supports oversampling or undersampling the data, or is one of the hyperparameters important for imbalanced data?
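(Not an official feature, just a workaround sketch: since BlazingText consumes plain text files with one `__label__X`-prefixed sentence per line, you can oversample the minority class at the file level before uploading the training data to S3. The label prefix and target ratio below are illustrative assumptions.)

```python
import random

# Rough sketch: oversample minority-class lines in a BlazingText training
# file before uploading it to S3. The label prefix and target ratio are
# illustrative assumptions, not anything SageMaker-specific.
def oversample_lines(lines, minority_label="__label__rare", target_ratio=0.3):
    minority = [l for l in lines if l.startswith(minority_label)]
    majority = [l for l in lines if not l.startswith(minority_label)]
    if not minority:
        return list(lines)
    # Number of minority lines needed so they make up target_ratio overall.
    needed = round(target_ratio * len(majority) / (1 - target_ratio))
    resampled = [random.choice(minority) for _ in range(max(needed, len(minority)))]
    combined = majority + resampled
    random.shuffle(combined)
    return combined

# 98%/2% split, as in the question.
lines = ["__label__common ok text"] * 98 + ["__label__rare bad text"] * 2
balanced = oversample_lines(lines)
```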

Built-in algorithm pages should list which hyperparameters are tunable

Algorithms packaged in SageMaker can exclude certain hyperparameters from eligibility for hyperparameter tuning via the IsTunable property in the CreateAlgorithm API.

At least some first-party algorithms (like Semantic Segmentation) seem to have quite restrictive settings: according to an error message I just got, several parameters you might expect to be tunable, like gamma1/gamma2 and weight_decay, are not.

I can't find any documentation for this on the Semantic Segmentation algorithm in particular, but it also doesn't seem to be obviously present on many others.

(Would also like if the range was less restrictive for this particular algorithm, but that doesn't seem like a docs issue!)

Create an EI Endpoint with Boto3 example needs enhancement

I see 2 problems in the section "Create an EI Endpoint with Boto 3"

https://github.com/awsdocs/amazon-sagemaker-developer-guide/blob/master/doc_source/ei-endpoints.md#L79

  1. In L91, endpoint_config_name is created and printed, but never used again.

  2. In L110, sagemaker.create_endpoint(...) is used incorrectly. The function requires EndpointName and EndpointConfigName as kwargs. See:
    https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_endpoint

If I am not mistaken, the fixed example would be as follows:

# Create Endpoint Configuration
import boto3
from time import gmtime, strftime

sagemaker = boto3.client('sagemaker')
endpoint_config_name = 'ImageClassificationEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response=sagemaker.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[{
        'InstanceType':'ml.m4.xlarge',
        'InitialInstanceCount':1,
        'ModelName':model_name,
        'VariantName':'AllTraffic',
        'AcceleratorType':'ml.eia1.medium'}])

print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])

and

endpoint_name = 'ImageClassificationEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
endpoint_response = sagemaker.create_endpoint(
    EndpointName=endpoint_name,
    EndpointConfigName=endpoint_config_name)

[Question] how to access the output of Crowd-entity-annotation

I have created a HIT using crowd-entity-annotation and am trying to add an onsubmit validation function to make sure every word in the text block is labeled. However, I can't seem to access the output using
document.querySelector('crowd-entity-annotation').value,
document.querySelector('crowd-entity-annotation').entities, or

const output = document.querySelector('crowd-entity-annotation')
    .shadowRoot
    .querySelector('crowd-form').form;
const formData = new FormData(output)
const formProps = Object.fromEntries(formData)
...

How can I get the endOffset and startOffset of the entities?

sagemaker studio tour missing/out of order steps

regarding this section:
https://github.com/awsdocs/amazon-sagemaker-developer-guide/blob/master/doc_source/gs-studio-end-to-end.md#keep-track-of-machine-learning-experiments

  • step 1 "Run the following cell..." refers to the 8th code cell in the notebook. The previous 7 code cells need to be run first for the notebook to work, but are never referenced in the tour walkthrough doc. The doc goes from having the user clone the repo:

git clone https://github.com/awslabs/amazon-sagemaker-examples.git

straight to having the user run the 8th code cell in the notebook, skipping the first 7 code cells.

  • step 2 "Create trials and associate..." refers to code cell 11 in the notebook, but again jumps straight from cell 8 without ever running cells 9 or 10.

[Question] Getting KeyError: 'SM_CHANNEL_TRAINING' when using the document example

I followed the example "Extend a Prebuilt Container".
However, I am getting a KeyError in Step 5.3. How can I solve this?

UnexpectedStatusException: Error for Training job pytorch-extended-container-test-2022-10-01-08-48-41-014: Failed. Reason: AlgorithmError: ExecuteUserScriptError:
Command "/opt/conda/bin/python3.6 cifar10.py"
Traceback (most recent call last):
  File "cifar10.py", line 158, in <module>
    parser.add_argument('--data-dir', type=str, default=os.environ['SM_CHANNEL_TRAINING'])
  File "/opt/conda/lib/python3.6/os.py", line 669, in __getitem__
    raise KeyError(key) from None
KeyError: 'SM_CHANNEL_TRAINING', exit code: 1
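In my experience (an assumption about your setup), this KeyError usually means the container was started without a channel named "training", so SageMaker never set the SM_CHANNEL_TRAINING variable. One way to make the script robust is to read the variable defensively:

```python
import argparse
import os

parser = argparse.ArgumentParser()
# Fall back to SageMaker's default path for a channel named "training"
# when SM_CHANNEL_TRAINING is absent (e.g. the channel was named
# differently, or omitted, in the fit() call).
parser.add_argument(
    '--data-dir', type=str,
    default=os.environ.get('SM_CHANNEL_TRAINING', '/opt/ml/input/data/training'))
args, _ = parser.parse_known_args([])  # empty argv here, just to show the default
print(args.data_dir)
```

It is still worth checking that estimator.fit() is called with an input channel actually named "training", since the env var name is derived from the channel name.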

is there a way to have a studio instance pull code from a private repo?

I was reading https://github.com/awsdocs/amazon-sagemaker-developer-guide/blob/master/doc_source/studio-tasks-git.md and was wondering if there is a way to have a studio instance pull code from a private repo, e.g. a way to add the necessary access tokens.

I know that we can do it for notebook git repos a la https://aws.amazon.com/blogs/machine-learning/amazon-sagemaker-notebooks-now-support-git-integration-for-increased-persistence-collaboration-and-reproducibility/ but wondering about studio git instances

and/or is there a way to get the git URL for the git instance that is automatically associated with a studio instance?

feature group arn format seems incorrect

Please see FeatureStoreBatchIngestion.py section on page https://docs.aws.amazon.com/sagemaker/latest/dg/batch-ingestion-spark-connector-setup.html. The following line has the string "Amazon SageMaker" in the ARN which should be just "sagemaker":

feature_group_arn = "arn:aws:Amazon SageMaker:us-west-2:<your-account-id>:feature-group/<your-feature-group-name>"

should probably be something like:

region = "replace-with-sagemaker-region"
fg_name = "replace-with-your-feature-group-name"
account_id = "replace-with-your-account-id"

feature_group_arn = f"arn:aws:sagemaker:{region}:{account_id}:feature-group/{fg_name}"

A mention of SageMaker ProcessingJob in automating-sagemaker-with-eventbridge.md would be useful

This page https://github.com/awsdocs/amazon-sagemaker-developer-guide/blob/master/doc_source/automating-sagemaker-with-eventbridge.md does not mention SageMaker ProcessingJob in the list under "SageMaker events monitored by EventBridge", giving the impression that capturing SageMaker ProcessingJob events is not supported (it is). It would be very helpful if this list were updated to include SageMaker Processing and an example were provided.

Further, it would also be helpful to provide a sample JSON for the EventBridge rule pattern, because anyone creating a rule programmatically needs a reference:

{
  "source": ["aws.sagemaker"],
  "detail-type": ["SageMaker Processing Job State Change"],
  "detail": {
    "ProcessingJobStatus": ["Failed", "Completed", "Stopped"]
  }
}
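A pattern like the one above can be built programmatically and then attached to a rule with boto3; a sketch (the rule name is made up, and the actual put_rule call is shown only in a comment since it needs AWS credentials):

```python
import json

# The same rule pattern, built as a dict so it can be serialized and
# reused. Statuses are the terminal states we care about.
pattern = {
    "source": ["aws.sagemaker"],
    "detail-type": ["SageMaker Processing Job State Change"],
    "detail": {"ProcessingJobStatus": ["Failed", "Completed", "Stopped"]},
}
event_pattern = json.dumps(pattern)

# With boto3 this would be attached roughly like (rule name is made up):
# boto3.client("events").put_rule(
#     Name="sagemaker-processing-state-change", EventPattern=event_pattern)
print(event_pattern)
```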

"Deploy Endpoints from Model Data" link does not exist

in page
https://docs.aws.amazon.com/sagemaker/latest/dg/pytorch.html
corresponding to github page https://github.com/awsdocs/amazon-sagemaker-developer-guide/tree/master/doc_source/pytorch.md
there is a broken link:

I have a PyTorch model that I trained outside of SageMaker, and I want to deploy it to a SageMaker endpoint For more information, see [Deploy Endpoints from Model Data](https://sagemaker.readthedocs.io/en/stable/using_pytorch.html#deploy-endpoints-from-model-data).

the bookmark
https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html#deploy-endpoints-from-model-data
is not available

Change num_classes hyperparameter for SageMaker Incremental training

I am performing incremental training on a model I already trained in SageMaker. I want to add data to the existing classes as well as create new classes. The first model had 4 classes (num_classes = 4) but I want to keep those classes as well as add 3 additional classes.

The documentation says that the num_classes hyperparameter must be the same when doing incremental training. But if that is the case, I cannot add classes to my existing model; I would have to start from scratch each time I want to change the number of classes. Is this accurate? Or is there a way to update an existing model and change the number of classes it is trained on?

Thanks in advance!

Here is the example notebook I am using for the incremental training job:
https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/imageclassification_caltech/Image-classification-incremental-training-highlevel.ipynb

How can I treat my python scripts as separate services?

In every ML problem you will almost always have the following three steps:

  1. Prepare dataset.
  2. Train the model.
  3. Generate predictions.

So my project will most probably look like this:

ml-project
|
|--- data.py        // prepare and save the dataset ready to be fed into the model.
|
|--- train.py       // train and save the model.
|
|--- predict.py     // generate predictions.

Because I have three scripts to run, I might have to create three separate Docker images and publish them. Is there a mechanism like docker-compose where I can treat each script as a service and run them through SageMaker? If not, what is the best option for handling these cases other than publishing three different Docker images?
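One pattern that avoids three images (not an official SageMaker mechanism, just a sketch): publish a single image whose entrypoint dispatches to the right script based on an argument or environment variable. The service names and the STEP variable below are illustrative assumptions:

```python
# entrypoint.py -- single-image dispatcher (a sketch; the step names and
# STEP variable are illustrative, not a SageMaker convention).
import os
import subprocess
import sys

SERVICES = {
    "prepare": "data.py",     # prepare and save the dataset
    "train": "train.py",      # train and save the model
    "predict": "predict.py",  # generate predictions
}

def resolve(step):
    """Map a step name to the script this image should run."""
    try:
        return SERVICES[step]
    except KeyError:
        raise SystemExit(f"unknown step {step!r}; choose from {sorted(SERVICES)}")

if __name__ == "__main__":
    # Pick the step from argv or an env var set on the job definition.
    step = sys.argv[1] if len(sys.argv) > 1 else os.environ.get("STEP", "train")
    subprocess.run([sys.executable, resolve(step)], check=True)
```

Each SageMaker job (processing, training, batch transform) then runs the same image with a different step argument.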

IAM Policy incomplete

On the page https://docs.aws.amazon.com/sagemaker/latest/dg/security_iam_id-based-policy-examples.html#nbi-ip-filter the example policy isn't valid JSON and is incomplete. The Deny block is cut off.

{

"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "",
"Resource": "
"
},
{
"Effect": "Deny",
"Action": "sagemaker:CreateEndpoint",
"Resource": [
"arn:aws:sagemaker:::endpoint/"
]
{
"Effect": "Allow",
"Action": "sagemaker:CreateEndpoint",
"Resource": [
"arn:aws:sagemaker:
::endpoint/"
],
"Condition": {
"StringEquals": {
"aws:RequestTag/environment": "dev"
}
}
}
]
}
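For what it's worth, a well-formed policy achieving what the example seems to intend (only allow CreateEndpoint when the request is tagged environment=dev) would usually be written with a conditional Deny, since an unconditional Deny overrides any Allow. This is a sketch with assumed wildcards, not a reconstruction of the exact docs page:

```python
import json

# Sketch of a valid policy shape (the "*" wildcards and the conditional-
# Deny structure are assumptions; the quoted page is too garbled to
# recover the original exactly). Deny CreateEndpoint unless the request
# carries the tag environment=dev.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "sagemaker:CreateEndpoint",
            "Resource": "arn:aws:sagemaker:*:*:endpoint/*",
            "Condition": {
                "StringNotEquals": {"aws:RequestTag/environment": "dev"}
            },
        },
        {
            "Effect": "Allow",
            "Action": "sagemaker:CreateEndpoint",
            "Resource": "arn:aws:sagemaker:*:*:endpoint/*",
        },
    ],
}

print(json.dumps(policy, indent=2))  # serializes cleanly, unlike the quoted snippet
```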

Failed endpoint deployment from model registry: specify entry_point

Issue Summary: Failed endpoint deployment from the model registry.

CloudWatch Error Logs: AttributeError: 'NoneType' object has no attribute 'startswith'

Background: I have successfully trained & deployed an SKLearn model from a Notebooks script using the sagemaker.sklearn module. When creating the estimator, I pass a .py script with the entry point argument:

sklearn_estimator = SKLearn(
    entry_point='sklearn-train.py',
    instance_type='ml.m5.large',
    framework_version='0.23-1',
    role=role,
    sagemaker_session=sagemaker_session)

The sklearn-train.py script contains a function for loading a joblib file from the model directory. Using the estimator.deploy() method in the sagemaker.SKLearn library will fail without this function.

Based on my research, the AttributeError I'm getting when I deploy from the registry is related to a missing entry_point (the one I used to create the SKLearn estimator). The container is looking for a function that will load the tar file as a joblib object from the specified model_uri. After carefully reviewing the create_model() & create_endpoint_config() functions from the SM client library, I don't see any way to specify an entry_point script when saving or deploying a model to/from the registry.

Full stack trace:
Traceback (most recent call last):
File "/miniconda3/lib/python3.7/site-packages/gunicorn/workers/base_async.py", line 55, in handle
self.handle_request(listener_name, req, client, addr)
File "/miniconda3/lib/python3.7/site-packages/gunicorn/workers/ggevent.py", line 143, in handle_request
super().handle_request(listener_name, req, sock, addr)
File "/miniconda3/lib/python3.7/site-packages/gunicorn/workers/base_async.py", line 106, in handle_request
respiter = self.wsgi(environ, resp.start_response)
File "/miniconda3/lib/python3.7/site-packages/sagemaker_sklearn_container/serving.py", line 128, in main
serving_env.module_dir)
File "/miniconda3/lib/python3.7/site-packages/sagemaker_sklearn_container/serving.py", line 105, in import_module
user_module = importlib.import_module(module_name)
File "/miniconda3/lib/python3.7/importlib/__init__.py", line 118, in import_module
if name.startswith('.'):
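A workaround that may apply (hedged: this mirrors what the SKLearn estimator sets under the hood, via the SAGEMAKER_PROGRAM and SAGEMAKER_SUBMIT_DIRECTORY environment variables, rather than a documented registry feature): set the serving container's Environment so the sklearn container can find your entry point. The image URI and S3 paths below are placeholders:

```python
# Sketch: the env vars the SKLearn serving container uses to locate the
# inference script. The variable names are real SageMaker Python SDK
# conventions, but whether your registry flow accepts them is exactly the
# open question; the image URI and S3 paths are placeholders.
container_def = {
    "Image": "<sklearn-serving-image-uri>",
    "ModelDataUrl": "s3://my-bucket/model/model.tar.gz",
    "Environment": {
        # Script that defines model_fn() (your sklearn-train.py).
        "SAGEMAKER_PROGRAM": "sklearn-train.py",
        # tar.gz containing that script (can be the model.tar.gz itself
        # if the script is packed alongside the model).
        "SAGEMAKER_SUBMIT_DIRECTORY": "s3://my-bucket/model/sourcedir.tar.gz",
    },
}

# This dict would then be passed as the PrimaryContainer argument of
# sm_client.create_model(...), alongside ModelName and ExecutionRoleArn.
print(container_def["Environment"]["SAGEMAKER_PROGRAM"])
```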

test issue

Mohan fix this topic in this .md file

"Exception: Java gateway process exited before sending its port number" when using pyspark and sagemaker feature store

Following the instructions provided here https://docs.aws.amazon.com/sagemaker/latest/dg/batch-ingestion-spark-connector-setup.html, I get this exception:
"Exception: Java gateway process exited before sending its port number"
when running the following code:

from pyspark.sql import SparkSession
from feature_store_pyspark.FeatureStoreManager import FeatureStoreManager
import feature_store_pyspark

extra_jars = ",".join(feature_store_pyspark.classpath_jars())
spark = SparkSession.builder \
.config("spark.jars", extra_jars) \
.getOrCreate()

Steps followed:

  1. Create a new SageMaker Notebook (not SageMaker Studio, just regular SageMaker notebook).
  2. Create a new notebook using the conda_python3 kernel.
  3. Install the packages as provided in the instructions here https://docs.aws.amazon.com/sagemaker/latest/dg/batch-ingestion-spark-connector-setup.html. Specifically, copy paste the following in a notebook cell and run.
import os
    
original_spark_version = "2.4.0"
os.environ['SPARK_HOME'] = '/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/pyspark'
    
# Install a newer version of Spark which is compatible with the spark library
!pip3 install pyspark==3.1.1
!pip3 install sagemaker-feature-store-pyspark --no-binary :all:

See attached screenshots (sc1, sc2).

I created a custom conda environment with Python 3.7 (not on this notebook, on a completely different SageMaker notebook), then installed pyspark, sagemaker, and friends. I did not get this error, but then started getting other errors related to missing Java packages. I'm not including those here to avoid confusion, just mentioning it as a data point: part of the problem could be that the built-in python3 conda environment is Python 3.6 and it may need to be Python 3.7. There is also a conda_mxnet_latest_python3.7 environment, but that runs into other problems.

CreateModelPackage API - InferenceSpecification "Environment" property not accepted

I'm currently trying to use the SageMaker API's create_model_package() method in a manner similar to the example shown here. However, I receive the following error when including the Environment parameter.

ParamValidationError: Parameter validation failed: Unknown parameter in InferenceSpecification.Containers[0]: "Environment", must be one of: ContainerHostname, Image, ImageDigest, ModelDataUrl, ProductId

Code sample:

import boto3

sm_client = boto3.client('sagemaker')

model_package_group_name = 'my-group-name'
submit_dir = 's3://our_bucket/path'
entry_point_script = 'our_inference_code.py'
model_url='s3://our_bucket/model/path'

modelpackage_inference_specification =  {
    "InferenceSpecification": {
      "Containers": [
         {
            "Image": "246618743249.dkr.ecr.us-west-2.amazonaws.com/sagemaker-scikit-learn:0.23-1-cpu-py3",
            "Environment": {
                "SAGEMAKER_SUBMIT_DIRECTORY": submit_dir,
                "SAGEMAKER_PROGRAM": entry_point_script
            }
         }
      ],
      "SupportedContentTypes": [ "text/csv" ],
      "SupportedResponseMIMETypes": [ "text/csv" ],
   }
 }


modelpackage_inference_specification["InferenceSpecification"]["Containers"][0]["ModelDataUrl"]=model_url

create_model_package_input_dict = {
    "ModelPackageGroupName" : model_package_group_name,
    "ModelPackageDescription" : "Model to detect 3 different types of irises (Setosa, Versicolour, and Virginica)",
    "ModelApprovalStatus" : "PendingManualApproval"
}
create_model_package_input_dict.update(modelpackage_inference_specification)

create_model_package_response = sm_client.create_model_package(**create_model_package_input_dict)
model_package_arn = create_model_package_response["ModelPackageArn"]
print('ModelPackage Version ARN : {}'.format(model_package_arn))

I would expect this to be valid code, as Environment is listed as a property in the API spec here. I'd appreciate any suggestions. Thanks!

Dropdown label for crowd-entity-annotation

Hello,

Not necessarily an issue, but more of a question.

I have a use case where I have many entity labels, so I want to group them into broad types.
Then I want a dropdown menu which, when clicked, shows the labels for that group. Is that possible with crowd-entity-annotation?
https://github.com/awsdocs/amazon-sagemaker-developer-guide/blob/master/doc_source/sms-ui-template-crowd-entity-annotation.md

For example, I have 20 entity labels for group 1 and 20 entity labels for group 2.

I want the turkers to see group 1 and group 2 and click on them to show the fine-grained entity labels.

Any help would be appreciated. Thank you

ResourceLimitExceeded for ml.m4.xlarge when running SageMaker studio demo in a new AWS account

When walking through the SageMaker Studio tour :

https://docs.aws.amazon.com/sagemaker/latest/dg/gs-studio-end-to-end.html

for the first time in a new AWS account, the usual service limit issue is hit when running code cell [17] to create an endpoint to host the model.

ResourceLimitExceeded: An error occurred (ResourceLimitExceeded) when calling the CreateEndpoint operation: The account-level service limit 'ml.m4.xlarge for endpoint usage' is 0 Instances, with current utilization of 0 Instances and a request delta of 1 Instances. Please contact AWS support to request an increase for this limit.

Suggestions:

  • The "Prerequisites" section could address this proactively, with a link to the service limit increase page, or...
  • the notebook could be changed to use an endpoint instance type that does not have a default service limit of 0.

Please LMK which is preferable and I will submit a PR

description for the image_shape parameter seems incorrect.

When I specify image_shape = '3,416,416', the API errors out as follows:

Customer Error: The value '3,416,416' is not valid for the 'image_shape' hyperparameter which expects one of the following: a string which matches the pattern '^[1-9][0-9]*$'; or an integer between 300 and 10000 (caused by ValidationError)

It appears that it wants a single integer value, which I assume is the size of the square image.

If it matters, I'm training a ResNet-50 network.
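Based on that error message, the hyperparameter apparently takes just the edge length as a single integer string; a minimal check of what the stated pattern accepts (416 taken from your example):

```python
import re

# The validation pattern quoted in the error message: a single positive
# integer string (no commas, no channel dimension).
pattern = re.compile(r"^[1-9][0-9]*$")

hyperparameters = {"image_shape": "416"}   # single edge length, not 'C,H,W'

assert pattern.match(hyperparameters["image_shape"])
assert not pattern.match("3,416,416")      # the rejected 'C,H,W' form
print("image_shape =", hyperparameters["image_shape"])
```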

[Question] Get Sagemaker Neo Compilation Job Logs

Hey, I'm trying to get some logs out of SageMaker Neo (for example: model size, parameters that have been reduced, FPS, etc.), but I don't see anything documented on the topic. It would be great if you could help me out here. Is there a provision to get such metrics, or some logs that would give insight into how the model has been optimized?

SageMaker image tags ambiguity

IHAC (I have a customer) who encountered an issue with the documentation: https://github.com/awsdocs/amazon-sagemaker-developer-guide/blob/main/doc_source/ecr-us-east-1.md#xgboost-us-east-1.title because of an ambiguous SageMaker XGBoost image tag. From the documentation, the customer was using the tag 0.90-1, giving the complete image URI:
683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-xgboost:0.90-1

But further investigation shows that the tag should rather be 0.90-1-cpu-py3, for the complete image URI 683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-xgboost:0.90-1-cpu-py3

Suggestion:
I would suggest that another column (see table below) for Tag should be added to specify all the image tags for all the available SageMaker images, unless there is some sort of consistency in the tagging.

Registry path | Version | Package version | Job types (image scope) | Tag
683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-xgboost: | 1.5-1 | 1.5.2 | inference, training | 1.5.2
683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-xgboost: | 0.90-1 | 0.90 | inference, training | 0.90-1-cpu-py3

S3 CORS policy looks wrong

Hi team,
I followed this document and set the S3 CORS policy below.

Give your users the ability to upload local files
https://docs.aws.amazon.com/sagemaker/latest/dg/canvas-set-up-local-upload.html
Add the following CORS policy:
[ { "AllowedHeaders": [ "*" ], "AllowedMethods": [ "POST" ], "AllowedOrigins": [ "*" ], "ExposedHeaders": [] } ]

So, S3 returned this error.

Unexpected key 'ExposedHeaders' found in params.CORSConfiguration.CORSRules[0]

It seems the key should be "ExposeHeaders", not "ExposedHeaders".
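Agreed, the S3 CORS schema calls this key ExposeHeaders (as the error message indicates). The docs' policy with only that key corrected:

```python
import json

# The policy from the docs page with the key corrected from
# "ExposedHeaders" to "ExposeHeaders", the name S3's CORS schema expects
# per the error above.
cors_rules = [
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["POST"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": [],
    }
]

print(json.dumps(cors_rules, indent=2))
```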

How to open deployed endpoint on notebook?

I had deployed an endpoint using
predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
and it is in service.

I found no mention in the official docs of reopening an endpoint from a notebook. Is there any way to reconnect to a deployed endpoint in a notebook and predict again, like this?
predictor.predict(test_data)

Thank you for help.
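In case it helps: with the SageMaker Python SDK (v2) you can usually reattach to an in-service endpoint by name instead of keeping the object returned by deploy(). A sketch; the endpoint name and serializer choice are assumptions about your setup, and the imports are deferred into the function so the snippet only needs the SDK when actually called:

```python
ENDPOINT_NAME = "my-endpoint-name"  # placeholder: your endpoint's name

def reattach(endpoint_name):
    """Recreate a Predictor for an existing in-service endpoint.

    Assumes the SageMaker Python SDK v2 is installed and AWS credentials
    and region are configured.
    """
    from sagemaker.predictor import Predictor
    from sagemaker.serializers import CSVSerializer
    return Predictor(
        endpoint_name=endpoint_name,
        serializer=CSVSerializer(),  # match whatever format your model expects
    )

# predictor = reattach(ENDPOINT_NAME)
# predictor.predict(test_data)     # same call as before
```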

Provide supported versions of frameworks

Currently there isn't a great place to find versioning support for the frameworks supported by SageMaker Neo. Since these docs and the notebooks are a great starting point, we should provide some info here.

Wrong dynamic_feat array length in DeepAR Inference Formats -document

Document deepar-in-formats.md has a sub chapter "DeepAR JSON Request Formats", which includes an example of what the inference format should look like, for example this snippet:
{ "start": "2012-01-30", "target": [1.0], "cat": [2, 1], "dynamic_feat": [[2.0, 3.1, 4.5, 1.5, 1.8, 3.2, 0.1, 3.0, ...]] }

Doesn't "dynamic_feat" have an invalid shape here? Each "dynamic_feat" inner array should have the same length as "target", so in this case it should be, for example:

{ "start": "2012-01-30", "target": [1.0], "cat": [2, 1], "dynamic_feat": [[2.0]] }
This is what https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html#deepar-inputoutput is saying about "dynamic_feat":

"...each inner array must have the same length as the associated target value."

Async Inference not able to process later requests

Hi there, hope all of you are fine.

I am trying to deploy a train-on-inference type model. I am done with BYOC, and it works completely fine with real-time inference endpoints. I am also able to make it work with async inference, and concurrent requests on the same instance are handled.
But later requests never get processed, without any logical error. Also, once the endpoint scales down to 0 instances, it fails to scale up again.

These are some of the error and warning messages I get intermittently:



data-log:
2022-03-23T11:23:17.723:[sagemaker logs] [5ea751c9-9271-4533-bc09-c117791e1372] Received server error (500) from primary with message "<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">



warnings:
/usr/local/lib/python3.8/dist-packages/numpy/core/getlimits.py:499: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  setattr(self, word, getattr(machar, word).flat[0])

Kindly help me with this.
Thanks.
