
Kubernetes-based Event Driven Autoscaling - HTTP Add-on

The KEDA HTTP Add-on allows Kubernetes users to automatically scale their HTTP servers up and down (including to/from zero) based on incoming HTTP traffic. Please see our use cases document to learn more about how and why you would use this project.

🚧 Project status: beta 🚧
📢 KEDA relies on community contributions to help grow & maintain the add-on. The KEDA maintainers assist the community in evolving the add-on but are not directly responsible for it. Feel free to open a new discussion in case of questions.

⚠ The HTTP Add-on is currently in beta. We can't yet recommend it for production use because we are still developing and testing it. It may have "rough edges", including missing documentation, bugs, and other issues. It is currently provided as-is without support.

HTTP Autoscaling Made Simple

KEDA provides a reliable and well-tested solution for scaling your workloads based on external events. The project supports a wide variety of scalers, that is, sources of these events. These scalers are systems that produce precisely measurable events via an API.

KEDA does not, however, include an HTTP-based scaler out of the box for several reasons:

  • The concept of an HTTP "event" is not well defined.
  • There's no single out-of-the-box system that can provide an API to measure the current number of incoming HTTP events or requests.
  • The infrastructure required to achieve these measurements is more complex and, in some cases, needs to be integrated into the HTTP routing system in the cluster (e.g. the ingress controller).

For these reasons, the KEDA core project has purposely not built generic HTTP-based scaling into the core.

This project, often called KEDA-HTTP, exists to provide that scaling. It is composed of simple, isolated components and includes an opinionated way to put them together.
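Concretely, a user declares the workload to scale with an HTTPScaledObject custom resource. The sketch below mirrors the v1alpha1 example that appears later in this document; the names, port, and replica bounds are illustrative:

```yaml
kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
    name: xkcd
spec:
    scaleTargetRef:
        deployment: xkcd   # the Deployment to scale
        service: xkcd      # the Service that fronts it
        port: 8080         # the port traffic is forwarded to
    replicas:
        min: 0             # scale to zero when idle
        max: 10
```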

Adopters - Become a listed KEDA user!

We are always happy to list users who run KEDA's HTTP Add-on in production or are evaluating it; learn more about it here.

We welcome pull requests to list new adopters.

Walkthrough

Although this project is currently in beta, we have prepared a walkthrough document with instructions for getting started with basic usage.

See that document at docs/walkthrough.md

Design

The HTTP Add-on is composed of multiple mostly independent components. This design was chosen to allow for highly customizable installations while allowing us to ship reasonable defaults.

  • We have written a complete design document. Please see it at docs/design.md.
  • For more context on the design, please see our scope document.
  • If you have further questions about the project, please see our FAQ document.

Installation

Please see the complete installation instructions.

Roadmap

We use GitHub issues to build our backlog; they provide a complete overview of all open items and our planning.

Learn more about our roadmap.

Contributing

This project follows the KEDA contributing guidelines, which are outlined in CONTRIBUTING.md.

If you would like to contribute code to this project, please see docs/developing.md.


We are a Cloud Native Computing Foundation (CNCF) graduated project.

Code of Conduct

Please refer to the organization-wide Code of Conduct document.

http-add-on's People

Contributors

ajanth97, arschles, asw101, dependabot[bot], devopsdynamo, embano1, iompo, jocelynthode, joeyc-dev, jorturfer, khaosdoctor, leska-j, lucakuendig, luckymrwang, maxmoeschinger, mend-bolt-for-github[bot], nikhilchintawar, ritikaa96, rohithzr, similark, someshkoli, t0rr3sp3dr0, tomkerkhove, tommy351, tpiperatgod, v-shenoy, worldspawn, xinydev, zorocloud, zroubalik


http-add-on's Issues

Add some color coding to the architecture diagram

The architecture diagram - including the new one in #93 - is a single color, and it's difficult to follow everything. It's also difficult to tell which components are part of the add-on and which are not (as pointed out by @tomkerkhove).

We should add some color to it, along with any other indicators that help make the distinction clear.

Provide design doc

Provide design doc elaborating how it currently works, what our dependencies are and how people can plug in.

Ideally this is in a DESIGN.md file.

Interceptor doesn't always "hold" request when there are no replicas available

When there's no app available, we shouldn't get a connection refused. The request should hang until the app scales up from 0 to 1. Sometimes the interceptor doesn't do this.

We should also expose a timeout to the user via the HTTPScaledObject so that they can configure how long they'd like the interceptor to hold the request.

Add configuration parameters for operator-created services

After #78 (review), the operator will create the interceptor and external scaler, but not target application resources.

Use-Case

As part of that PR, the operator will create the Service that routes to the interceptor, and users will need to point traffic to that service. Since the Service is part of the critical request path and it's where users will primarily interact with the system, it would be nice to be able to customize it.

Specification

  • Users should be able to say at least the type of the Service (i.e. LoadBalancer, ClusterIP)
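One hypothetical shape for this knob, sketched against the example HTTPScaledObject used elsewhere in this document (the interceptorService field is invented for illustration and is not part of the current API):

```yaml
kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
    name: xkcd
spec:
    scaleTargetRef:
        deployment: xkcd
        service: xkcd
        port: 8080
    # Hypothetical field: how the operator-created interceptor Service
    # should be exposed. Not part of the current API.
    interceptorService:
        type: ClusterIP   # or LoadBalancer
```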

Walkthrough not working on k8s 1.19.3

I'm trying to recreate the walkthrough, but without success. The operator was installed, but after creating the HTTPScaledObject, the keda-http-addon-controller-manager could not create the proxy service.

Thanks a lot for this project, it is exactly what I was looking for.

Sorry if this error was already discussed.

Expected Behavior

keda-http-addon-controller-manager should have created the *-interceptor-proxy service

Actual Behavior

After creating the following HTTPScaledObject from the example:

kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
    name: xkcd
spec:
    scaleTargetRef:
        deployment: xkcd
        service: xkcd
        port: 8080
    replicas:
        min: 5
        max: 10

The keda-http-addon-controller-manager is returning the following error:

2021-03-25T23:30:27.611Z	INFO	controllers.HTTPScaledObject	Reconciliation start	{"HTTPScaledObject.Namespace": "keda", "HTTPScaledObject.Name": "xkcd"}
2021-03-25T23:30:27.611Z	INFO	controllers.HTTPScaledObject	Adding Finalizer for the ScaledObject	{"HTTPScaledObject.Namespace": "keda", "HTTPScaledObject.Name": "xkcd"}
2021-03-25T23:30:27.619Z	ERROR	controllers.HTTPScaledObject	Failed to update HTTPScaledObject with a finalizer	{"HTTPScaledObject.Namespace": "keda", "HTTPScaledObject.Name": "xkcd", "finalizer": "httpscaledobject.http.keda.sh", "error": "HTTPScaledObject.http.keda.sh \"xkcd\" is invalid: spec.scaleTargetRef: Required value"}
github.com/go-logr/zapr.(*zapLogger).Error
	/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132
github.com/kedacore/http-add-on/operator/controllers.ensureFinalizer
	/go/src/github.com/kedahttp/http-add-on/operator/controllers/finalizer.go:29
github.com/kedacore/http-add-on/operator/controllers.(*HTTPScaledObjectReconciler).Reconcile
	/go/src/github.com/kedahttp/http-add-on/operator/controllers/httpscaledobject_controller.go:112
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:297
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:252
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:215
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185
k8s.io/apimachinery/pkg/util/wait.UntilWithContext
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99

Steps to Reproduce the Problem

  1. Just followed the walkthrough

Specifications

  • KEDA Version: 2.0.0
  • Platform & Version: Azure AKS
  • Kubernetes Version: 1.19.3
  • Scaler(s): v0.0.1

Create 2 services for the interceptors

Each interceptor has two endpoints, which are treated differently:

  • The public proxy port, to which public traffic is directed
  • The "admin" port, which hosts the /queue endpoint that the external scaler uses

Use-Case

The former endpoint should be exposed to the public internet - via LoadBalancer as I write this, but with an Ingress forthcoming; see #10 - but the latter should not. It should be behind a ClusterIP service.
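A sketch of what the two Services could look like (names, selectors, and port numbers are illustrative):

```yaml
# Public proxy Service: exposed externally (LoadBalancer today; Ingress forthcoming, see #10)
apiVersion: v1
kind: Service
metadata:
    name: interceptor-proxy
spec:
    type: LoadBalancer
    selector:
        app: interceptor
    ports:
        - name: proxy
          port: 80
          targetPort: 8080
---
# Admin Service: cluster-internal only; serves the /queue endpoint for the external scaler
apiVersion: v1
kind: Service
metadata:
    name: interceptor-admin
spec:
    type: ClusterIP
    selector:
        app: interceptor
    ports:
        - name: admin
          port: 9090
          targetPort: 9090
```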

Specification

N/A

Use the external push gRPC protocol for external scaler => interceptor communication

We currently use a simple HTTP/JSON protocol for the scaler to communicate with the interceptor. Because this protocol is just request/response, it has latency limitations that can cause very slow scale-up times, which are especially noticeable when scaling up from 0 replicas.

It would be helpful to have some kind of two-way protocol rather than what we have now. The external push scaler provides just this. We should adopt it in the scaler.

Use-Case

As mentioned above, this would be useful to speed up scale-up (and down) latencies. This may help with #60 as well, but that's not the primary goal here.

Specification

Cannot pull ghcr.io images

After https://github.com/kedacore/http-add-on/runs/1841953328?check_suite_focus=true, docker images should have been pushed to GH container registry. It appears from https://github.com/orgs/kedacore/packages that the images didn't get pushed at all, but running docker pull tells another story:

~ docker pull ghcr.io/kedacore/http-add-on-operator:sha-8c5a4c8
Error response from daemon: pull access denied for ghcr.io/kedacore/http-add-on-operator, repository does not exist or may require 'docker login': denied: permission_denied: read_package

I don't know how reliable that error message is, but I presume that it implies that anonymous read access is disabled.

Expected Behavior

docker pull ghcr.io/kedacore/http-add-on-operator:sha-8c5a4c8 should succeed, as should docker pull ghcr.io/kedacore/http-add-on-operator:canary

Actual Behavior

See failure above

Steps to Reproduce the Problem

  1. Run docker pull ghcr.io/kedacore/http-add-on-operator:sha-8c5a4c8

Scale up the interceptor

Use-Case

When a lot of traffic comes in, the interceptor should scale up in addition to the app. This will require:

  • A second trigger in the ScaledObject
  • Another metric in the external scaler
  • Possibly another queue endpoint on the interceptors (but I think the standard queue size should do it)
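As a rough sketch, the second trigger could be another external entry on a ScaledObject targeting the interceptor itself (the metric name and scaler address below are invented for illustration):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
    name: interceptor          # illustrative
spec:
    scaleTargetRef:
        name: interceptor      # the interceptor Deployment itself
    triggers:
        - type: external
          metadata:
              scalerAddress: external-scaler:9090     # illustrative address
              metricName: interceptor-queue-length    # invented metric name
```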

Specification

  • When more traffic enters the system, interceptors should scale up

Document use cases

We need a document that outlines use cases for the HTTP add-on.

Use-Case

The document should be used by users (prospective and current) of the HTTP add-on to guide their decision to use the project, and by developers to decide if/how a feature will fit.

Specification

  • A new document with use cases on it
  • A link from the README to this new document

Make the operator "maintain" all the app resources it creates

Currently if you create an HTTPScaledObject, the operator will create a bunch of Kubernetes resources to encompass the "app" - things like Services, Deployments, ScaledObjects...

If you delete one of those resources, though, the app won't work properly, and the operator isn't aware of that. When you go to delete or edit (#7) the app, the resource just won't be there, and the operator will fail silently (on deletes, it just logs the missing resource and moves on).

It would be most helpful for the operator to somehow ensure the resources don't get deleted. I think this could be done with a finalizer on each of them.

Use-Case

I'd like to more tightly "tie" the app resources to the HTTPScaledObject that caused them to be created, so that:

  • Resources are more difficult to delete without also deleting the HTTPScaledObject
  • If a resource is deleted, the operator will know about it and can (theoretically) re-create it

Specification

  • Demand #1

Timing issue when a new app is created

In some cases, KEDA will attempt to ping the external scaler before it is ready to serve. When that happens, KEDA doesn't seem to keep checking, at least for a while (longer than the 250ms the ScaledObject indicates it should).

We should either figure out why this happens and fix it, or, as a backup plan, add logic in the operator to start the scaler, ensure it's responsive, and only then create the ScaledObject.

This is possibly related to #19

Add branch protection rule

Add a branch protection rule for the main branch to ensure:

  • the branch is up to date before merging
  • contributors can't push directly to the main branch
  • status checks all pass before merge
  • PR reviews should be done before merge

Use-Case

The above list is important for the health of all pull requests, because it ensures that the code in them is tested against the up-to-date main branch (to which it's going to be merged) and is otherwise of good quality.

Specification

See attached screenshot. The checked boxes represent almost everything in the list herein.

(Screenshot: branch protection rule settings, 2021-02-11)

@tomkerkhove can you do this?

Change base docker image to scratch

The operator is using a distroless image, but the interceptor and the scaler are not. We should change the final base image to further reduce the image size

Set up CI & Push Images to GitHub Container Registry

Use-Case

Images for the interceptor, scaler, and operator need to be pushed somewhere. Right now, the Makefile hard-codes pushes to arschles/$WHATEVER. Change that to push to the new GitHub container registry.

Specification

  • Have GitHub Actions automatically run on pushes to main
  • Have each run automatically push to the GH container registry

Add service mesh / GAMMA project based scaling

The interceptor works with the scaler to provide KEDA with the HTTP queue size, which KEDA in turn uses to scale the app. That arrangement lets a user submit a single HTTPScaledObject and have the operator create all the moving pieces required (interceptor, scaler, etc...). We should, however, support service meshes, or really any HTTP server that can (a) route traffic from outside the cluster into a pod (the app) and (b) emit SMI-compatible metrics.

Note: kedacore/keda has discussed consuming SMI metrics as well in kedacore/keda#615. This is a specific use case for consuming those metrics, but if KEDA core doesn't support SMI when we implement SMI support in the external scaler, it could be a "test drive" for building the same functionality into core

Create a dev container

VS Code dev containers are a technology that let us define and use a docker container for the development environment.

Use-Case

Setting up the development environment for the HTTP add-on has a few steps, including setting up Go and kubebuilder, so it would be really useful to pre-configure this with a Dockerfile and define a dev container using it. It would save contributors a non-trivial amount of setup time.

Specification

  • Create a .devcontainer directory with a Dockerfile and devcontainer.json file in it. These should be set up properly for a dev container

Way to modify annotations and labels for created resources

The operator creates all the resources with default labels and specs; it'd be good to add a section to the CRD spec that allows users to add their own labels and annotations.

A simple use case would be a user who wants to use the HTTP Application Routing add-on from AKS; it needs an annotation on the Ingress resource so it can update DNS records.

Improve contrib experience with magefiles

Today there are a lot of makefiles to run scripts in the project. They're fine, but they take a while: the docker-push-all task, for example, runs all the builds sequentially when they could run in parallel.

I propose replacing the makefiles with magefiles, as @arschles pointed out in one of our discussions, and porting the scripts to these files. This will give us better handling of files and allow us to do more complex tasks in the dev environment.

Plus, to improve the contributor experience, we can also version and ship the mage binary along with the repository to make usage easier.

[Meta] Testing

Use-Case

We have almost no automated testing currently. That's terrible! We need to (a) figure out how to test and (b) write unit/integration/e2e tests.

Specification

  • Automate as much of the testing as possible
  • Make it run automatically inside GitHub actions

Outdated API groups in chart

In #2, the helm chart has outdated API groups. When installing on a modern microk8s cluster:

W0125 16:46:55.554047   72890 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0125 16:46:57.569877   72890 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0125 16:46:58.994128   72890 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0125 16:46:59.152679   72890 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole

The chart should be updated to use the new groups/versions.

Document the scope of functionality

The HTTP add-on needs to document the functionality it provides.

Use-Case

This document will set expectations for new users and developers of the project, and also help guide new feature development, including deciding whether to implement a feature at all.

Specification

  • A docs/scope.md document that outlines the scope of functionality, including clear delineation on where the project starts and stops
  • Link to the new document from the README

Use an ingress controller

Instead of creating a Service with type: LoadBalancer on it for the interceptors, use a single ingress controller (it should be installed by the helm chart) and create an Ingress for each new app.

Use-Case

Users should be able to use their existing ingress controller and have the operator integrate with it. Three use cases:

  • A user has a brand-new, vanilla k8s cluster and wants to install an ingress controller. They helm install the chart, and the operator spits out Ingress resources.
  • A user has already installed the chart on their cluster and wants to reuse their existing ingress controller, so they helm install the chart and turn off the ingress controller option. The operator still spits out Ingress resources, and the existing ingress controller picks them up properly.
  • A user has a new k8s cluster on their cloud provider, which has a built-in ingress controller, so they helm install the chart, the operator spits out Ingress objects, and the cloud provider's ingress picks them up properly.

Specification

  • The helm chart should deploy an ingress controller
    • Have the helm chart take a dependency on the nginx ingress controller chart
    • Allow users to override that dependency, or turn it off completely if they already have one
  • The operator should deploy Ingress objects
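For illustration, the operator could emit an Ingress per app along these lines (host, names, and port are placeholders; the API group shown is the current networking.k8s.io/v1, not necessarily the one available at the time of the issue):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
    name: xkcd                  # one Ingress per app
spec:
    rules:
        - host: xkcd.example.com
          http:
              paths:
                  - path: /
                    pathType: Prefix
                    backend:
                        service:
                            name: xkcd-interceptor-proxy   # routes through the interceptor
                            port:
                                number: 8080
```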

Deploy helm charts to a repository

Currently, we just instruct people to install from source. It would be nicer and more convenient to deploy charts to a helm repo so that we can instruct people to install without making them check out the code first.

Use-Case

Instead of requiring folks to git clone https://github.com/kedacore/http-add-on.git before they run a helm install, I'd like folks to be able to run a helm repo add and then helm install. This allows users and operators to use the add on without interacting with the code at all.

We may consider putting charts inside of https://github.com/kedacore/charts, or hosting them with artifact hub

Specification

  • Install instructions should look similar to the ones in the keda core ones

The scaler needs code to ping the interceptors for the queue size

  • The scaler needs to look for the applicable Service that fronts the interceptor pods, then reach out to each interceptor pod to get its queue size
  • Interceptors need to have an "admin" endpoint that scalers can ping to get the queue size. This endpoint should run on a different port than the public "forwarder" endpoint and be behind a ClusterIP Service

Provide support for scaling from 0 -> n and vice versa

Provide support for scaling from 0 -> n and vice versa so that KEDA handles everything rather than just activation.

This should follow the KEDA core approach where it can create an HPA for the user or fully rely on external push.

Make image pull policy configurable on helm chart

Right now, it's hard-coded to Always. We should make that configurable.

Use-Case

Some folks might rather not always re-pull images, especially for the scaler and interceptor, which might spin up and down repeatedly as users add more apps or, in the case of the interceptor, as traffic increases.

Specification

  • A new value is in values.yaml called operator.pullPolicy that parameterizes the operator's pull policy
  • The operator has a new env var called INTERCEPTOR_PULL_POLICY that controls the interceptor's pull policy when the operator creates it
  • Similarly, the operator has a SCALER_PULL_POLICY that controls the scaler's pull policy
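Putting the proposed names together, a sketch of the chart and operator configuration (the values layout and env var wiring are illustrative, reusing the names from the spec above):

```yaml
# values.yaml sketch
operator:
    pullPolicy: IfNotPresent    # was hard-coded to Always
---
# Env vars the operator would read when it creates the other components
# (rendered into the operator Deployment by the chart)
env:
    - name: INTERCEPTOR_PULL_POLICY
      value: IfNotPresent
    - name: SCALER_PULL_POLICY
      value: IfNotPresent
```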

Support Kubernetes Gateway APIs for Ingress

The network SIG has been hard at work developing the new Gateway API, aka Ingress V2. A Gateway is responsible for routing a request to a service. There's already an HTTPRoute implementation. The API has several integration points and I think the current Interceptor could be integrated or evolved to integrate with the existing HTTP implementation. Alternatively, a new GatewayClass can be designed to work specifically with KEDA and still make use of the standardized API.

Might relate to #10 and possibly #6. /cc folks from Services API who know way more than I do :) @hbagdi @jpeach @robscott

Edit @tomkerkhove: Updated link given it was renamed to Gateway API.

make docker-build-operator fails

This is the error that it fails with:

Step 9/14 : RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -a -o manager main.go
 ---> Running in 92894df450ac
go: finding github.com/kedacore/http-add-on/pkg latest
go: finding github.com/kedacore/http-add-on/pkg/k8s latest
go: finding github.com/kedacore/http-add-on/operator/controllers latest
go: finding github.com/kedacore/http-add-on/operator latest
go: finding github.com/kedacore/http-add-on/operator/api/v1alpha1 latest
go: finding github.com/kedacore/http-add-on/operator/api latest
build command-line-arguments: cannot load github.com/kedacore/http-add-on/operator/api/v1alpha1: no matching versions for query "latest"
The command '/bin/sh -c CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -a -o manager main.go' returned a non-zero code: 1
make: *** [Makefile:28: docker-build-operator] Error 1

Improve the status field of HTTPScaledObjects

Currently, the Status field is not updated

Expected Behavior

  • The status field needs to be updated properly after an HTTPScaledObject gets created
  • The status field should have a conditions list, similar to the one in ScaledObjects. That way, there is an auditable log of mutations of the HTTPScaledObject
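A hypothetical status shape, modeled loosely on the conditions list that ScaledObjects carry (all field values below are illustrative):

```yaml
status:
    conditions:
        - type: Ready
          status: "True"
          reason: AppResourcesCreated        # illustrative reason
          message: interceptor, scaler, and app resources created
        - type: Error
          status: "False"
```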

Actual Behavior

The status field remains blank

Steps to Reproduce the Problem

  1. kubectl create -f examples/httpscaledobject.yaml
  2. Observe application components like Service and Deployment get created
  3. kubectl get -n kedahttp xkcd -o yaml
  4. Observe that there is no status field

Specifications

N/A


Provide walkthrough for scaling an app

Provide a walkthrough of how to install (done) and configure an app to scale based on HTTP traffic.

Would help us as maintainers to also keep track of where we are and what the UX is.

Allow users to scale an existing Deployment

Use-Case

Currently you can't start scaling an existing Deployment; you can only create a new one by submitting a new HTTPScaledObject. It would be helpful to allow someone to submit a new HTTPScaledObject to begin scaling an already-existing Deployment.

This idea came from #31 in this comment

Specification

  • User should be able to specify an existing Deployment in an HTTPScaledObject
    • They will also need to specify the Service name and port to which traffic should be forwarded
  • KEDA and the HTTP add on should begin routing traffic to their app and scaling based on HTTP traffic
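Under this proposal, the HTTPScaledObject would point at resources that already exist rather than causing new ones to be created; a sketch, reusing the example shape from this document (names are illustrative):

```yaml
kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
    name: my-existing-app
spec:
    scaleTargetRef:
        deployment: my-existing-app   # a Deployment that already exists
        service: my-existing-app      # its existing Service
        port: 8080                    # port traffic should be forwarded to
```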

Autoscaling reaction time is slow

If you create a new app and run this command with hey:

hey -n 2000000 -c 1000 ${APP_IP_OR_DOMAIN}

You'll see that it takes a while for KEDA to make a request to the GetMetrics external scaler endpoint, and so it takes a bit of time to scale up.

Expected Behavior

Application pods should scale up much faster.

Actual Behavior

Application pods can take 10s of seconds to scale up.

Steps to Reproduce the Problem

  1. Create a new HTTPScaledObject
  2. Get the IP of your new app (kubectl get svc -n $NAMESPACE, and copy the public IP from the LoadBalancer service)
  3. Use the hey command above
  4. Watch the app pods as they very slowly scale up

Note: you can speed this process way up if you set the pollingInterval to 1 on the ScaledObject. In most cases, you can get sub-second scale-up times. The cost is that KEDA polls the external scaler every second. Surely there's a better way - maybe using StreamIsActive more intelligently from the external scaler.
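The workaround described above amounts to tightening the polling loop on the generated ScaledObject; a sketch (the scaler address is illustrative):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
    name: xkcd                 # illustrative
spec:
    pollingInterval: 1         # seconds; the default is 30
    scaleTargetRef:
        name: xkcd
    triggers:
        - type: external
          metadata:
              scalerAddress: external-scaler:9090   # illustrative
```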

Specifications

N/A

Add documentation on using the interceptor or scaler without other components

The operator exists for convenience, and you can use the interceptor or scaler without the other pieces of the puzzle. It would be nice to have documentation on how to do so.

Use-Case

Some folks might want to use these components in isolation, without the rest of the system, so we should document how to do so
