
fabric8-analytics-stack-analysis's Introduction

Stack Analysis

Build Status

List of models currently present in the analytics platform

To Deploy Locally

Set up a .env file with the environment variables below (see docker-compose.yml for possible values):

cat > .env <<-EOF
# Amazon AWS S3 credentials
AWS_S3_ACCESS_KEY_ID=
AWS_S3_SECRET_ACCESS_KEY=

# Kronos environment
KRONOS_SCORING_REGION=
DEPLOYMENT_PREFIX=
GREMLIN_REST_URL=

#Set Post Filtering
USE_FILTERS=
EOF

NOTES:
Do not use any # comments or quote characters (' or ") in the .env file.
For the GREMLIN_REST_URL, you can take a look at our data-model repository and use its local-setup services:

git clone https://github.com/fabric8-analytics/fabric8-analytics-data-model.git
cp -r fabric8-analytics-data-model/local-setup/scripts .
cp fabric8-analytics-data-model/local-setup/docker-compose.yml docker-compose-data-model.yml

# and in the .env file (no quotes, per the notes above)
GREMLIN_REST_URL=http://localhost:8182  # note that this port is accessed from within the container

Otherwise, you can use a custom Gremlin service.

Deploy with docker-compose:

docker-compose build
docker-compose -f docker-compose.yml -f docker-compose-data-model.yml up

To Test Locally

python -m unittest discover tests -v

To Run Evaluation Script Locally

PYTHONPATH=`pwd` python evaluation_platform/uranus/src/kronos_offline_evaluation.py

To Run Training Locally

PYTHONPATH=`pwd` python analytics_platform/kronos/src/kronos_offline_training.py

Deploy to an OpenShift cluster

  • Create project
oc new-project fabric8-analytics-stack-analysis
oc apply -f secret.yaml
oc apply -f config.yaml
  • Deploy app using oc
oc process -f openshift/template.yaml | oc apply -f -

Sample Evaluation Request Input

Request Type: POST
ENDPOINT: /api/v1/schemas/kronos_evaluation
BODY: JSON data
{
    "training_data_url":"s3://dev-stack-analysis-clean-data/maven/github/"
}
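
For example, assuming a local deployment where the service listens on port 6006 (the port referenced in the issues below; adjust the host and port to your setup), the request can be sent with Python's requests library:

import requests

# Hypothetical local endpoint; the host and port depend on your deployment.
ENDPOINT = "http://localhost:6006/api/v1/schemas/kronos_evaluation"

payload = {
    "training_data_url": "s3://dev-stack-analysis-clean-data/maven/github/"
}

# Trigger an offline evaluation run against the given training data.
response = requests.post(ENDPOINT, json=payload)
response.raise_for_status()
print(response.json())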

Sample Scoring Request Input

Request Type: POST 
ENDPOINT: /api/v1/schemas/kronos_scoring
BODY: JSON data
[
    {
        "ecosystem": "maven",
        "comp_package_count_threshold": 5,
        "alt_package_count_threshold": 2,
        "outlier_probability_threshold": 0.88,
        "unknown_packages_ratio_threshold": 0.3,
        "package_list": [
            "io.vertx:vertx-core",
            "io.vertx:vertx-web"
        ]
    }
]
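
A minimal Python client for this endpoint might look as follows (again assuming the service is reachable on localhost:6006; the field names come from the sample response below):

import requests

# Hypothetical local endpoint; adjust the host and port to your deployment.
ENDPOINT = "http://localhost:6006/api/v1/schemas/kronos_scoring"

payload = [{
    "ecosystem": "maven",
    "comp_package_count_threshold": 5,
    "alt_package_count_threshold": 2,
    "outlier_probability_threshold": 0.88,
    "unknown_packages_ratio_threshold": 0.3,
    "package_list": [
        "io.vertx:vertx-core",
        "io.vertx:vertx-web",
    ],
}]

response = requests.post(ENDPOINT, json=payload)
response.raise_for_status()

# Print each companion recommendation with its co-occurrence probability.
for result in response.json():
    for companion in result["companion_packages"]:
        print(companion["package_name"], companion["cooccurrence_probability"])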

Sample Response

[
    {
        "alternate_packages": {
            "io.vertx:vertx-core": [
                {
                    "package_name": "io.netty:netty-codec-http",
                    "similarity_score": 1,
                    "topic_list": [
                        "http",
                        "network",
                        "netty",
                        "socket"
                    ]
                }
            ],
            "io.vertx:vertx-web": [
                {
                    "package_name": "org.jspare:jspare-core",
                    "similarity_score": 1,
                    "topic_list": [
                        "framework",
                        "webapp"
                    ]
                }
            ]
        },
        "companion_packages": [
            {
                "cooccurrence_count": 219,
                "cooccurrence_probability": 83.26996197718631,
                "package_name": "org.slf4j:slf4j-api",
                "topic_list": [
                    "logging",
                    "dependency-injection",
                    "api"
                ]
            },
            {
                "cooccurrence_count": 205,
                "cooccurrence_probability": 77.9467680608365,
                "package_name": "org.apache.logging.log4j:log4j-core",
                "topic_list": [
                    "logging",
                    "java"
                ]
            },
            {
                "cooccurrence_count": 208,
                "cooccurrence_probability": 79.08745247148289,
                "package_name": "io.vertx:vertx-web-client",
                "topic_list": [
                    "http",
                    "http-request",
                    "vertx-web-client",
                    "http-response"
                ]
            }
        ],
        "ecosystem": "maven",
        "missing_packages": [],
        "outlier_package_list": [
            {
                "frequency_count": 100,
                "package_name": "io.vertx:vertx-core",
                "topic_list": [
                    "http",
                    "socket",
                    "tcp",
                    "reactive"
                ]
            },
            {
                "frequency_count": 90,
                "package_name": "io.vertx:vertx-web",
                "topic_list": [
                    "vertx-web",
                    "webapp",
                    "auth",
                    "routing"
                ]
            }
        ],
        "package_to_topic_dict": {
            "io.vertx:vertx-core": [
                "http",
                "socket",
                "tcp",
                "reactive"
            ],
            "io.vertx:vertx-web": [
                "vertx-web",
                "webapp",
                "auth",
                "routing"
            ]
        },
        "user_persona": "1"
    }
]

Latest Deployment

  • Maven
    • Retrained on: 2018-04-11 5:43 PM (IST) with hyper-parameters:
      • fp_min_support_count = 300
      • fp_intent_topic_count_threshold = 2
      • FP_TAG_INTENT_LIMIT = 4
    • Pomegranate version used: 0.7.3

Footnotes

Check for all possible issues

The script check-all.sh checks the sources for all detectable errors and issues. It can be run without any arguments:

./check-all.sh

Expected script output:

Running all tests and checkers
  Check all BASH scripts
    OK
  Check documentation strings in all Python source file
    OK
  Detect common errors in all Python source file
    OK
  Detect dead code in all Python source file
    OK
  Run Python linter for Python source file
    OK
  Unit tests for this project
    OK
Done

Overall result
  OK

An example of script output when one error is detected:

Running all tests and checkers
  Check all BASH scripts
    Error: please look into files check-bashscripts.log and check-bashscripts.err for possible causes
  Check documentation strings in all Python source file
    OK
  Detect common errors in all Python source file
    OK
  Detect dead code in all Python source file
    OK
  Run Python linter for Python source file
    OK
  Unit tests for this project
    OK
Done

Overal result
  One error detected!

Please note that the script creates a bunch of *.log and *.err files that are temporary and won't be committed into the project repository.

Coding standards

  • You can use the scripts run-linter.sh and check-docstyle.sh to check whether the code follows the PEP 8 and PEP 257 coding standards. These scripts can be run without any arguments:
./run-linter.sh
./check-docstyle.sh

The first script checks indentation, line lengths, variable names, whitespace around operators, etc. The second script checks all documentation strings for presence and format. Please fix any warnings and errors reported by these scripts.

The list of directories containing the source code to be checked is stored in the file directories.txt.

Code complexity measurement

The scripts measure-cyclomatic-complexity.sh and measure-maintainability-index.sh are used to measure code complexity. They can be run without any arguments:

./measure-cyclomatic-complexity.sh
./measure-maintainability-index.sh

The first script measures the cyclomatic complexity of all Python sources found in the repository. Please see this table for an explanation of how to interpret the results.

The second script measures the maintainability index of all Python sources found in the repository. Please see the following link for an explanation of this measurement.

You can specify the command-line option --fail-on-error if you need to check the exit code in your workflow. In this case, the script returns 0 when no failures have been found and a non-zero value otherwise.

Dead code detection

The script detect-dead-code.sh can be used to detect dead code in the repository. It can be run without any arguments:

./detect-dead-code.sh

Please note that due to Python's dynamic nature, static code analyzers are likely to miss some dead code. Also, code that is only called implicitly may be reported as unused.

Because of these potential problems, only code detected with more than 90% confidence is reported.
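
To illustrate the second caveat, here is a minimal, self-contained example of code that is only called implicitly; a static dead-code checker (the tool behind the script is not named in this README) may flag the handler functions as unused even though they are reachable at runtime:

# A static dead-code checker may flag these handlers as unused,
# because they are only ever resolved dynamically by name.

def handle_train(payload):
    return {"status": "training started"}


def handle_score(payload):
    return {"status": "scoring started"}


def dispatch(action, payload):
    # Implicit call site: the handler is looked up at runtime,
    # so no direct reference to handle_train/handle_score exists.
    handler = globals()["handle_" + action]
    return handler(payload)


print(dispatch("train", {}))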

The list of directories containing the source code to be checked is stored in the file directories.txt.

Common issues detection

The script detect-common-errors.sh can be used to detect common errors in the repository. It can be run without any arguments:

./detect-common-errors.sh

Please note that only semantic problems are reported.

The list of directories containing the source code to be checked is stored in the file directories.txt.

Check for scripts written in BASH

The script check-bashscripts.sh can be used to check all BASH scripts (in fact, all files with the .sh extension) for various possible issues, incompatibilities, and caveats. It can be run without any arguments:

./check-bashscripts.sh

Please see the following link for an explanation of how ShellCheck works and which issues it can detect.

Code coverage report

Code coverage is reported via codecov.io. The results can be seen at the following address:

code coverage report

fabric8-analytics-stack-analysis's People

Contributors

abs51295, fridex, harjinder-hari, jmelis, jpopelka, krishnapaparaju, miteshvp, msrb, rootavish, sara-02, shaded-enmity, surajssd, tisnik, tuhinsharma, tuhinsharma121, tuxdna


fabric8-analytics-stack-analysis's Issues

Maven scoring takes 6.02 minutes to return results (via docker-compose)

When the http://0.0.0.0:6006/api/v1/schemas/kronos_scoring request is fired with the following:

[
    {
        "ecosystem": "maven",
        "comp_package_count_threshold": 2,
        "alt_package_count_threshold": 4,
        "outlier_probability_threshold": 0.8,
        "unknown_packages_ratio_threshold": 0.3,
        "package_list": [
            "org.mongodb:mongodb-driver-async",
            "io.vertx:vertx-core",
            "io.vertx:vertx-web"
        ]
    }
]

Then a successful response is returned in 6.02 minutes.
Support threshold for training: 27

Update README for local setup and testing.

Add steps to:

  • Bring up the localhost API endpoints.
  • Obtain a local copy of the data.
  • Train and score the model locally.
  • Run the tests locally.
  • Add guidelines for writing more unit tests.

Issue with pip install pomegranate

If pomegranate==0.7.3 is added as a dependency in requirements.txt, the Docker build fails. It works fine if we do RUN pip install pomegranate==0.7.3 in the Dockerfile after the RUN pip install -r requirements.txt step:

Collecting pomegranate==0.7.3 (from -r /requirements.txt (line 18))
  Downloading pomegranate-0.7.3.tar.gz (7.5MB)
    Complete output from command python setup.py egg_info:
    /tmp/easy_install-FLc9jp/scipy-1.0.0b1/setup.py:323: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates
      warnings.warn("Unrecognized setuptools command, proceeding with "
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-ryz4MN/pomegranate/setup.py", line 68, in <module>
        test_suite = 'nose.collector'
      File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
        _setup_distribution = dist = klass(attrs)
      File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 265, in __init__
        self.fetch_build_eggs(attrs.pop('setup_requires'))
      File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 289, in fetch_build_eggs
        parse_requirements(requires), installer=self.fetch_build_egg
      File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 618, in resolve
        dist = best[req.key] = env.best_match(req, self, installer)
      File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 862, in best_match
        return self.obtain(req, installer) # try and download/install
      File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 874, in obtain
        return installer(requirement)
      File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 339, in fetch_build_egg
        return cmd.easy_install(req)
      File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 623, in easy_install
        return self.install_item(spec, dist.location, tmpdir, deps)
      File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 653, in install_item
        dists = self.install_eggs(spec, download, tmpdir)
      File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 849, in install_eggs
        return self.build_and_install(setup_script, setup_base)
      File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1130, in build_and_install
        self.run_setup(setup_script, setup_base, args)
      File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1115, in run_setup
        run_setup(setup_script, args)
      File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 69, in run_setup
        lambda: execfile(
      File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 120, in run
        return func()
      File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 71, in <lambda>
        {'__file__':setup_script, '__name__':'__main__'}
      File "setup.py", line 418, in <module>
    
      File "setup.py", line 398, in setup_package
    
    ImportError: No module named numpy.distutils.core
    
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-ryz4MN/pomegranate/
You are using pip version 8.1.2, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
The command '/bin/sh -c pip install -r /requirements.txt && rm /requirements.txt' returned a non-zero code: 1

Empty Response when user input has duplicate packages

When a request is sent to PGM with duplicate packages, the response received is empty. For example:

Request:

[
    {
        "ecosystem": "maven",
        "comp_package_count_threshold": 5,
        "alt_package_count_threshold": 2,
        "outlier_probability_threshold": 0.88,
        "unknown_packages_ratio_threshold": 0.3,
        "package_list": [
          "io.vertx:vertx-core",
          "io.vertx:vertx-core",
          "io.vertx:vertx-web"
       ]
    }
]

Response:

[
    {
        "alternate_packages": {},
        "companion_packages": [],
        "ecosystem": "maven",
        "missing_packages": [],
        "outlier_package_list": [],
        "package_to_topic_dict": {},
        "user_persona": "1"
    }
]
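
Until this is fixed server-side, a simple client-side workaround is to deduplicate the package list before sending the request. A minimal sketch (the function name is illustrative):

def dedupe_package_lists(request_payload):
    """Remove duplicate packages from each request, preserving order."""
    for request in request_payload:
        seen = set()
        unique = []
        for package in request["package_list"]:
            if package not in seen:
                seen.add(package)
                unique.append(package)
        request["package_list"] = unique
    return request_payload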

Add logging

Logging is currently in place only for the test cases; it needs to be added to the training and scoring components as well.

EMR training failing due to pomegranate versioning.

Installing collected packages: cython, uuid, botocore, s3transfer, boto3, pandas, scikit-learn, sklearn, psutil
  Running setup.py install for cython
  Running setup.py install for uuid
  Found existing installation: botocore 1.4.86
    Uninstalling botocore-1.4.86:
      Successfully uninstalled botocore-1.4.86
  Running setup.py install for pandas
  Running setup.py install for scikit-learn
  Running setup.py install for sklearn
  Running setup.py install for psutil
Successfully installed boto3-1.4.7 botocore-1.7.36 cython-0.27.2 pandas-0.21.0 psutil-5.4.0 s3transfer-0.1.11 scikit-learn-0.19.1 sklearn-0.0 uuid-1.30
Requirement already satisfied (use --upgrade to upgrade): nltk in /usr/local/lib/python2.7/site-packages
Collecting pomegranate
  Downloading pomegranate-0.8.1.tar.gz (2.0MB)
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 20, in <module>
      File "/mnt/tmp/pip-build-DKRWWw/pomegranate/setup.py", line 34, in <module>
        extensions = cythonize( extensions )
      File "/usr/local/lib64/python2.7/site-packages/Cython/Build/Dependencies.py", line 920, in cythonize
        aliases=aliases)
      File "/usr/local/lib64/python2.7/site-packages/Cython/Build/Dependencies.py", line 800, in create_extension_list
        for file in nonempty(sorted(extended_iglob(filepattern)), "'%s' doesn't match any files" % filepattern):
      File "/usr/local/lib64/python2.7/site-packages/Cython/Build/Dependencies.py", line 125, in nonempty
        raise ValueError(error_msg)
    ValueError: 'pomegranate/*.pyx' doesn't match any files
    
    ----------------------------------------

[PRODUCTION] Getting only one companion recommendation for an empty manifest

Problem with the PGM filters.

Pre-filtering companion list:

{
    "alternate_packages": {}, 
    "companion_packages": [
        {
            "cooccurrence_probability": 0.77149620186967327, 
            "package_name": "io.vertx:vertx-core", 
            "topic_list": [
                "core", 
                "io", 
                "applicative", 
                "vert.x"
            ]
        }, 
        {
            "cooccurrence_probability": 0.16905001671193653, 
            "package_name": "io.vertx:vertx-rx-java", 
            "topic_list": [
                "java", 
                "io", 
                "rx-java", 
                "project-settings"
            ]
        }, 
        {
            "cooccurrence_probability": 0.16291217385057, 
            "package_name": "io.vertx:vertx-web", 
            "topic_list": [
                "http", 
                "repository", 
                "sonatype", 
                "project-settings"
            ]
        }, 
        {
            "cooccurrence_probability": 0.12553259735065525, 
            "package_name": "io.vertx:vertx-lang-kotlin", 
            "topic_list": [
                "repository", 
                "set", 
                "sonatype", 
                "project-settings"
            ]
        }, 
        {
            "cooccurrence_probability": 0.12441931622099325, 
            "package_name": "com.englishtown.vertx:vertx-zookeeper", 
            "topic_list": [
                "project", 
                "maven", 
                "sonatype", 
                "project-settings"
            ]
        }
    ], 
    "missing_packages": [], 
    "outlier_package_list": [], 
    "package_to_topic_dict": {}
}

Post-filtering companion list:

[
    {
        "alternate_packages": {},
        "companion_packages": [
            {
                "cooccurrence_probability": 154,
                "package_name": "io.vertx:vertx-core",
                "topic_list": [
                    "core",
                    "io",
                    "applicative",
                    "vert.x"
                ]
            }
        ],
        "ecosystem": "maven",
        "missing_packages": [],
        "outlier_package_list": [],
        "package_to_topic_dict": {},
        "user_persona": "1"
    }
]

Expected behavior:

We should get more than one companion package recommendation when sending an empty manifest.

[Production] PGM code fails when entire stack is missing packages

Encountered this during training today. Here's a sample input manifest:

[
    "io.vertx:vertx-core1",
    "io.vertx:vertx-web1"
]

Basically, this happens for any input where none of the packages were seen by the PGM.

Expected result:

Both packages should be tagged as missing, and the code should not fail.

Current result:

The PGM fails to give a prediction and throws the error below:

---------------------------------------------------------------------------
UnboundLocalError                         Traceback (most recent call last)
<ipython-input-37-401564f07580> in <module>()
----> 1 print(json.dumps(validate_recommendations(quickstart), indent=4))

<ipython-input-29-9eecc30d02d8> in validate_recommendations(package_lists)
      2         quickstart_recommendations = []
      3         for package_list in package_lists:
----> 4             quickstart_recommendations.append(predict_and_score(create_input_dict(package_list)))
      5         return quickstart_recommendations

<ipython-input-36-af0015473f01> in predict_and_score(input_json)
      5                                                user_eco_kronos_dict=user_eco_kronos_dict,
      6                                                eco_to_kronos_dependency_dict=eco_to_kronos_dependency_dict,
----> 7                                                all_package_list_obj=all_package_list_obj)
      8     else:
      9         response = {"message": "Failed to load model, Kronos Region not available"}

/Users/avgupta/PycharmProjects/fabric8-analytics-stack-analysis/analytics_platform/kronos/src/kronos_online_scoring.py in score_eco_user_package_dict(user_request, user_eco_kronos_dict, eco_to_kronos_dependency_dict, all_package_list_obj)
    306             outlier_probability_threshold=outlier_probability_threshold,
    307             unknown_package_ratio_threshold=unknown_package_ratio_threshold,
--> 308             outlier_package_count_threshold=outlier_package_count_threshold)
    309         prediction_result_dict[KRONOS_SCORE_USER_PERSONA] = user_category
    310         prediction_result_dict[KRONOS_SCORE_ECOSYSTEM] = ecosystem

/Users/avgupta/PycharmProjects/fabric8-analytics-stack-analysis/analytics_platform/kronos/src/kronos_online_scoring.py in score_kronos(kronos, requested_package_list, kronos_dependency, comp_package_count_threshold, alt_package_count_threshold, outlier_probability_threshold, unknown_package_ratio_threshold, outlier_package_count_threshold)
    238     result = dict()
    239     result[KRONOS_ALTERNATE_PACKAGES] = alternate_package_dict
--> 240     result[KRONOS_COMPANION_PACKAGES] = companion_package_dict_same_name_pruned
    241     result[KRONOS_OUTLIER_PACKAGES] = outlier_package_dict_list
    242     result[KRONOS_MISSING_PACKAGES] = missing_package_list

UnboundLocalError: local variable 'companion_package_dict_same_name_pruned' referenced before assignment
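
The traceback suggests that the companion/alternate containers are only assigned on the code path where at least one package is known. A likely fix, sketched below, is to initialize the result containers up front so the all-packages-missing path still returns a well-formed (empty) result; this is a paraphrase of the code in the traceback, not the actual source:

def score_kronos_sketch(requested_package_list, known_packages):
    # Initialize all result containers up front, instead of only on the
    # branch where the PGM actually runs, so nothing is unbound later.
    alternate_package_dict = {}
    companion_package_dict_same_name_pruned = []
    outlier_package_dict_list = []
    missing_package_list = [p for p in requested_package_list
                            if p not in known_packages]

    known_input = [p for p in requested_package_list if p in known_packages]
    if known_input:
        # ... run the PGM prediction and populate the containers ...
        pass

    return {
        "alternate_packages": alternate_package_dict,
        "companion_packages": companion_package_dict_same_name_pruned,
        "outlier_package_list": outlier_package_dict_list,
        "missing_packages": missing_package_list,
    }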

Analytics training timing issue.

Running kronos_offline_training.py locally took approx. 20 minutes to complete. The same training process on staging took 5 hours 54 minutes to complete.
Local: Thinkpad T460s with Fedora 25
Staging: Amazon EMR r 4.2

Input data:
No. of manifest files: 710
No. of packages: 1869

Restructure the analytics_platform modules.

The current structure of the analytics module gives each module its own code and test folders. But with the code folders unified under this PR, we now have a redundant src folder layer. That can be restructured for simplicity.

Sample of current structure

analytics_platform
└── Kronos
    └── module
        └── src
            └── file.txt

Desired structure

analytics_platform
└── Kronos
    └── module
        └── file.txt

Improve logging message in Kronos offline training.

TypeError: not all arguments converted during string formatting
Logged from file kronos_offline_training.py, line 29

TypeError: not all arguments converted during string formatting
Logged from file kronos_offline_training.py, line 44
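
This TypeError is the usual symptom of passing an argument to a logger call whose format string has no matching placeholder; the logging module formats lazily with msg % args and reports the failure without crashing the program. A minimal reproduction (the message text is illustrative):

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("kronos_offline_training")

duration = 20

# Buggy: no %-placeholder for the extra argument, so logging's lazy
# "msg % args" formatting raises:
#   TypeError: not all arguments converted during string formatting
logger.info("Training completed in", duration)

# Fixed: one placeholder per argument.
logger.info("Training completed in %s minutes", duration)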

Unit tests for checking the constant values of various modules.

All analytics modules under kronos have a module_constants.py file. These values are used for tuning the model and may be changed during re-training, but we are currently not testing the validity of these values when they change.
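
A minimal sketch of such a test, using unittest as the rest of the project does; the constant names and bounds below are illustrative stand-ins for the values in each module's module_constants.py:

import unittest

# Illustrative stand-ins for values imported from a module_constants.py file.
OUTLIER_PROBABILITY_THRESHOLD = 0.88
UNKNOWN_PACKAGES_RATIO_THRESHOLD = 0.3
FP_MIN_SUPPORT_COUNT = 300


class TestModuleConstants(unittest.TestCase):
    """Sanity-check tunable constants after a re-training change."""

    def test_probability_thresholds_are_valid(self):
        self.assertTrue(0.0 <= OUTLIER_PROBABILITY_THRESHOLD <= 1.0)
        self.assertTrue(0.0 <= UNKNOWN_PACKAGES_RATIO_THRESHOLD <= 1.0)

    def test_support_count_is_positive(self):
        self.assertGreater(FP_MIN_SUPPORT_COUNT, 0)


if __name__ == "__main__":
    unittest.main()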

Improve recommendation for the OSIO example stacks

  • Improve tags for the current Maven packages (limited to quick-starts and their recommended packages)
  • Prepare ground truth for the 5 example stacks
  • Turn off the outlier flag in the UI (not needed as of now)
  • Filter out same-name alternate and companion packages (done during online scoring)
  • Determine anchor packages for the vertx and springboot ecosystems
  • Retrain the model and check for accuracy against the ground truth and the current recommendations

Assigned to: @samuzzal-choudhury @arunkumars08 @miteshvp @sara-02
Related issues #34 and #35

Add a description for this repository.

We should add a summary description here; it would be useful if we want other people to understand what this code is all about. I would do it myself, but I can't since I'm not an owner 🙂

Error Deploying Kronos on dev-cluster as part of e2e deployment script

As per this housekeeping issue: https://gitlab.cee.redhat.com/dtsd/housekeeping/issues/1151
@jchevret's observation:
Looks like the problem is that the liveness check is failing, and thus the container is killed and restarted.
The liveness check is: http-get http://:6006/
Could it be that the kronos/gunicorn service is failing to start properly?
Doing a curl to :6006 from the node the pod is running on, one sees:
curl: (56) Recv failure: Connection reset by peer

So the process listening on port 6006 receives the connection but crashes for some reason and never returns a response.
So:
  • network from the nodes to the pod works
  • network from the container out works
  • the node can't complete the liveness check for the kronos container, and thus the container is restarted X times before the deployment times out completely
IMO the next step would be to understand why the kronos service (gunicorn) won't respond to requests sent to http://:6006.

Unified test folder

Currently the unit tests for each component live in that component's folder; we need to restructure the folders and gather all tests under one test folder.

Recommendation Validation

To do for PR #46

  • Pick the top 3 recommendations by companion count (see the sketch after this list)

  • Documentation for recommendation_validator class

  • Add unit test for the filtering functions

  • Test the new recommendations for quick starter apps and compare results with existing solution

  • Retrain without repetition of quick starter in manifest.json

  • Enable filtering for validations with zero count
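
A minimal sketch of the first item, keeping the top 3 recommendations by co-occurrence count (the field name matches the sample scoring response above):

def top_companions(companion_packages, k=3):
    """Keep the k companion recommendations with the highest co-occurrence count."""
    return sorted(companion_packages,
                  key=lambda pkg: pkg["cooccurrence_count"],
                  reverse=True)[:k]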

Automate testing and evaluation after training.

Currently, we wait for training to complete on EMR (approx. 20 minutes) and then manually trigger a REST call for testing, which takes approx. 2 hours. By automating the second step, each training iteration will be evaluated automatically.

Split into multiple repositories

There seem to be 3 services in this repository. There is nothing wrong with that in itself, but it means that a PR or commit affecting just one of the services will still trigger the build, test, and deployment of all 3 of them.

Having each service in a separate repository would enable us to do more fine-grained CI/CD.

Add sample schema for raw manifest.json

As the raw input manifest contains the GitHub stats and will require pre-processing before it can be fed to the PGM, it would be good to define the expected raw input format.

Recommendations based on key/anchor packages

Understanding the intent of the application requires identifying the few critical/anchor packages present in the user's application stack. All recommendations (usage outliers, companion packages, and alternative packages) would need to be based on the assumption that the user is 'not going to give up' on any of these anchor packages, and every recommendation would need to fit into the context of these anchor/key packages.

Add a threshold for strength of filtered recommendation.

The threshold can act as a goodness measure to further validate a recommendation.
If a recommendation is valid, i.e. the frequency count for the recommended set is > 0, then a threshold can be used to classify it: if the frequency count for the recommended set is above the threshold, it is a strong recommendation; otherwise it is a weak one.
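
A minimal sketch of such a classification; the threshold value is illustrative and would be tuned per ecosystem:

def classify_recommendation(frequency_count, strength_threshold=10):
    """Classify a filtered recommendation by its observed frequency count."""
    if frequency_count <= 0:
        return "invalid"  # the recommended set was never observed together
    if frequency_count > strength_threshold:
        return "strong"
    return "weak"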
