cluster-network-addons-operator's Introduction

Cluster Network Addons Operator

This operator can be used to deploy additional networking components on top of Kubernetes cluster.

Configuration

Configuration of desired network addons is done using NetworkAddonsConfig object:

apiVersion: networkaddonsoperator.network.kubevirt.io/v1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  multus: {}
  multusDynamicNetworks: {}
  linuxBridge: {}
  kubeMacPool: {}
  ovs: {}
  macvtap: {}
  kubeSecondaryDNS: {}
  kubevirtIpamController: {}
  imagePullPolicy: Always

Multus

The operator allows the administrator to deploy the multi-network Multus plugin. This is done using the multus attribute.

apiVersion: networkaddonsoperator.network.kubevirt.io/v1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  multus: {}

Additionally, the container image used to deliver this plugin can be set using the MULTUS_IMAGE environment variable in the operator deployment manifest.
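
Once Multus is deployed, secondary networks are described using NetworkAttachmentDefinition objects. A minimal sketch follows; the network name and the CNI config are illustrative, and the referenced CNI plugin (here bridge) must be available on the nodes. Pods then request the network through the k8s.v1.cni.cncf.io/networks annotation.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: example-net
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "br10"
    }'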

Multus Dynamic Networks Controller

This controller allows hot-plugging and hot-unplugging of additional Pod interfaces. This is done using the multusDynamicNetworks attribute.

apiVersion: networkaddonsoperator.network.kubevirt.io/v1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  multus: {}
  multusDynamicNetworks: {}

Additionally, the container image used to deliver this plugin can be set using the MULTUS_DYNAMIC_NETWORKS_CONTROLLER_IMAGE environment variable in the operator deployment manifest.
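
With the controller deployed, an interface can be hot-plugged into a running pod by updating its networks annotation. The commands below are a sketch and assume a NetworkAttachmentDefinition named example-net already exists.

# hot-plug: request the additional network on a running pod
kubectl annotate pod samplepod --overwrite k8s.v1.cni.cncf.io/networks=example-net

# hot-unplug: drop the entry from the annotation again
kubectl annotate pod samplepod --overwrite k8s.v1.cni.cncf.io/networks=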

Linux Bridge

The operator allows the administrator to deploy the Linux Bridge CNI plugin simply by adding the linuxBridge attribute to NetworkAddonsConfig.

apiVersion: networkaddonsoperator.network.kubevirt.io/v1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  linuxBridge: {}

Additionally, the container image used to deliver this plugin can be set using the LINUX_BRIDGE_IMAGE environment variable in the operator deployment manifest.

The image of the bridge marker, which detects the availability of Linux bridges on nodes, can be set using the LINUX_BRIDGE_MARKER_IMAGE environment variable in the operator deployment manifest.

Configure bridge on node

The following snippets can be used to configure a Linux bridge on your node.

# create the bridge using NetworkManager
nmcli con add type bridge ifname br10

# allow traffic to go through the bridge between pods
iptables -I FORWARD 1 -i br10 -j ACCEPT
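
If the bridge should also provide connectivity outside the node, a physical uplink can be attached to it; the interface name eth1 below is only an example.

# attach a physical NIC to the bridge (interface name is illustrative)
nmcli con add type bridge-slave ifname eth1 master br10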

Kubemacpool

The operator allows the administrator to deploy Kubemacpool. This project allows allocating MAC addresses from a pool to secondary interfaces defined using the Network Plumbing Working Group de-facto standard.

Note: The administrator can specify a requested range; if no range is requested, a random range will be provided. This random range spans from 02:XX:XX:00:00:00 to 02:XX:XX:FF:FF:FF, where 02 makes the address a locally administered unicast address and XX:XX is a random prefix.

apiVersion: networkaddonsoperator.network.kubevirt.io/v1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  kubeMacPool:
    rangeStart: "02:00:00:00:00:00"
    rangeEnd: "FD:FF:FF:FF:FF:FF"

Open vSwitch

The operator allows the administrator to deploy the OVS CNI plugin simply by adding the ovs attribute to NetworkAddonsConfig. Please note that in order to use this plugin, Open vSwitch has to be up and running on the nodes.

apiVersion: networkaddonsoperator.network.kubevirt.io/v1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  ovs: {}
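
For illustration only, an Open vSwitch bridge can be created on the nodes and then referenced from a NetworkAttachmentDefinition that uses the ovs CNI plugin; the bridge and network names below are examples.

# on each node: create an OVS bridge (requires a running Open vSwitch)
ovs-vsctl add-br br1

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ovs-net
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "ovs",
      "bridge": "br1"
    }'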

NMState

Note: The cluster-network-addons-operator no longer installs kubernetes-nmstate; refer to its own operator release notes to install it.

Macvtap

Note: This feature is experimental. Macvtap-cni is unstable and its API may change.

The operator allows the administrator to deploy the macvtap CNI plugin, simply by adding the macvtap attribute to NetworkAddonsConfig.

apiVersion: networkaddonsoperator.network.kubevirt.io/v1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  macvtap: {}

Macvtap-cni must be explicitly configured by the administrator, indicating the interfaces on top of which logical networks can be created.

To do so, the administrator must deploy a ConfigMap, as in the example below. Currently, this configuration is not dynamic.
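
A sketch of such a ConfigMap is shown below. The ConfigMap name, the DP_MACVTAP_CONF key and the fields follow the upstream macvtap-cni example and should be treated as assumptions here; the interface name and capacity are made up, and the namespace (omitted) must match the one the macvtap device plugin runs in.

apiVersion: v1
kind: ConfigMap
metadata:
  name: macvtap-deviceplugin-config
data:
  DP_MACVTAP_CONF: |
    [
      {
        "name": "dataplane",
        "lowerDevice": "eth0",
        "mode": "bridge",
        "capacity": 50
      }
    ]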

KubeSecondaryDNS

This controller adds support for FQDNs for the secondary networks of VMIs.

apiVersion: networkaddonsoperator.network.kubevirt.io/v1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  kubeSecondaryDNS:
    DOMAIN: ""
    NAME_SERVER_IP: ""

Additionally, the container image used to deliver this plugin can be set using the KUBE_SECONDARY_DNS_IMAGE environment variable in the operator deployment manifest.
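
For illustration, a filled-in configuration might look like the following; both values are made up, with DOMAIN roughly being the zone used for the secondary-interface FQDNs and NAME_SERVER_IP the address on which the DNS server is reachable.

apiVersion: networkaddonsoperator.network.kubevirt.io/v1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  kubeSecondaryDNS:
    DOMAIN: "vm.example.com"
    NAME_SERVER_IP: "192.0.2.10"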

kubevirtIpamController

This controller adds support for IPAM for user-defined networks.

apiVersion: networkaddonsoperator.network.kubevirt.io/v1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  multus: {}
  kubevirtIpamController: {}

Additionally, the container image used to deliver this plugin can be set using the KUBEVIRT_IPAM_CONTROLLER_IMAGE environment variable in the operator deployment manifest.

Note: This component requires certificates mounted on the controller pods for the webhook to work. On non-OpenShift clusters, the user should manually install a certificate manager (e.g. cert-manager). For convenience, this is done as part of the helper scripts.
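
For example, on a plain Kubernetes cluster cert-manager can be installed from its static manifest; the version below is only an example, pick a current release.

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml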

Image Pull Policy

The administrator can specify the image pull policy for deployed components. The default is IfNotPresent.

apiVersion: networkaddonsoperator.network.kubevirt.io/v1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  imagePullPolicy: Always

Self Signed Certificates Configuration

The administrator can specify the webhook self-signed certificates configuration for deployed components. The default is caRotateInterval: 168h, caOverlapInterval: 24h, certRotateInterval: 24h, certOverlapInterval: 8h.

apiVersion: networkaddonsoperator.network.kubevirt.io/v1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  selfSignConfiguration:
    caRotateInterval: 168h
    caOverlapInterval: 24h
    certRotateInterval: 24h
    certOverlapInterval: 8h

The selfSignConfiguration parameters must either all be set or all be omitted; setting only some of them fails validation. They have to conform to the Go time.Duration string format. Additionally, the following check is done at validation:

  • caRotateInterval >= caOverlapInterval && caRotateInterval >= certRotateInterval && certRotateInterval >= certOverlapInterval

These parameters are consumed by the Kubemacpool component.

Placement Configuration

CNAO deploys two component categories: infra and workload. Workload components manage node configuration and need to be scheduled on the nodes where the actual user workload is scheduled. Infra components provide a cluster-wide service and do not need to run on the same nodes as the user workload.

The administrator can specify placement preferences for deployed infra and workload components by defining affinity, nodeSelector and tolerations.

By default, infra components are scheduled on control-plane nodes and workload components are scheduled on all nodes. To adjust this behaviour, provide a custom placementConfiguration to the NetworkAddonsConfig.

In the following example, nodeAffinity is used to schedule infra components to control-plane nodes and nodeSelector to schedule workload components on worker nodes. Note that worker nodes need to be labeled with the node-role.kubernetes.io/worker label.

apiVersion: networkaddonsoperator.network.kubevirt.io/v1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  placementConfiguration:
    infra:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
    workloads:
      nodeSelector:
        node-role.kubernetes.io/worker: ""
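
Tolerations are set the same way. For example, the following sketch would let workload components run also on control-plane nodes that carry the default NoSchedule taint.

apiVersion: networkaddonsoperator.network.kubevirt.io/v1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  placementConfiguration:
    workloads:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule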

Deployment

First install the operator itself:

kubectl apply -f https://github.com/kubevirt/cluster-network-addons-operator/releases/download/v0.95.0/namespace.yaml
kubectl apply -f https://github.com/kubevirt/cluster-network-addons-operator/releases/download/v0.95.0/network-addons-config.crd.yaml
kubectl apply -f https://github.com/kubevirt/cluster-network-addons-operator/releases/download/v0.95.0/operator.yaml

Then you need to create a configuration for the operator, for example by using the example CR:

kubectl apply -f https://github.com/kubevirt/cluster-network-addons-operator/releases/download/v0.95.0/network-addons-config-example.cr.yaml

Finally you can wait for the operator to finish deployment:

kubectl wait networkaddonsconfig cluster --for condition=Available

In case something failed, you can find the error in the NetworkAddonsConfig Status field:

kubectl get networkaddonsconfig cluster -o yaml
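
To inspect just the reported conditions, a jsonpath query can be used, for example:

kubectl get networkaddonsconfig cluster -o jsonpath='{.status.conditions}'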

You can follow the deployment state through events produced in the default namespace:

kubectl get events

Events will be produced whenever the deployment is applied, configured, or has failed. The expected event reasons are:

  • Progressing: the operator has started deploying the components
  • Failed: one or more components failed to deploy
  • Available: all components finished deploying
  • Modified: the configuration was modified or applied for the first time
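
To narrow the output down to events related to the configuration object, a field selector can be used, for example:

kubectl get events --field-selector involvedObject.name=cluster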

For more information about the configuration format, check the Configuration section.

Upgrades

Starting with version 0.89.0, this operator supports upgrades to any newer version. If you wish to upgrade, remove the old operator (operator.yaml) and install the new one; the operands will remain available during the operator's downtime.
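
For example, an upgrade could look like the following; the version numbers are illustrative, substitute the release you are running and the one you are upgrading to.

# remove the old operator
kubectl delete -f https://github.com/kubevirt/cluster-network-addons-operator/releases/download/v0.94.0/operator.yaml

# install the new one
kubectl apply -f https://github.com/kubevirt/cluster-network-addons-operator/releases/download/v0.95.0/network-addons-config.crd.yaml
kubectl apply -f https://github.com/kubevirt/cluster-network-addons-operator/releases/download/v0.95.0/operator.yaml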

Development

Make sure you have either Docker >= 17.05 or podman >= 3.1 installed.

# run code validation and unit tests
make check

# perform auto-formatting on the source code (if not done by your IDE)
make fmt

# generate source code for API
make gen-k8s

# build images (uses multi-stage builds and therefore requires Docker >= 17.05 / podman >= 3.1)
make docker-build

# or build only a specific image
make docker-build-operator
make docker-build-registry

# bring up a local cluster with Kubernetes
make cluster-up

# bring up a local cluster with Kubernetes 1.25
export KUBEVIRT_PROVIDER=k8s-1.25
make cluster-up

# build images and push them to the local cluster
make cluster-operator-push

# install operator on the local cluster
make cluster-operator-install

# run workflow e2e tests on the cluster, requires cluster with installed operator,
# workflow covers deployment of operands
make test/e2e/workflow

# run lifecycle e2e tests on the cluster, requires cluster without operator installed,
# lifecycle covers deployment of operator itself and its upgrades
make test/e2e/lifecycle

# access kubernetes API on the cluster
./cluster/kubectl.sh get nodes

# ssh into the cluster's node
./cluster/cli.sh ssh node01

# clean up all resources created by the operator from the cluster
make cluster-clean

# delete the cluster
make cluster-down

For developing against an external cluster:

export KUBEVIRT_PROVIDER=external

export KUBECONFIG=[path to external's cluster kubeconfig]

# This is the registry used to push and pull the dev image
# it has to be accessible by the external cluster
export DEV_IMAGE_REGISTRY=quay.io/$USER

# Then it is possible to follow the normal dev flow

make cluster-operator-push
make cluster-operator-install
make test/e2e/workflow
make test/e2e/lifecycle

Releasing

  1. Checkout a public branch
  2. Call make prepare-patch|minor|major and prepare release notes
  3. Open a new PR
  4. Once the PR is merged, create a new release in GitHub and attach new manifests

cluster-network-addons-operator's People

Contributors

alonakaplan, alonsadan, andreatp, assafad, atul9, avlitman, bcrochet, booxter, davidvossel, dependabot[bot], djzager, eddev, erkanerol, fedepaol, github-actions[bot], irosenzw, kubevirt-bot, machadovilaca, maiqueb, nunnatsa, ormergi, oshoval, phoracek, pitik3u, qinqon, ramlavi, rhrazdil, schseba, sjpotter, tiraboschi

cluster-network-addons-operator's Issues

Add openshift scc

We need to check if we are running under OpenShift and add SCC configuration for every component.

Example

Warning  FailedCreate  4m (x22 over 44m)  daemonset-controller  Error creating: pods "kube-multus-ds-amd64-" is forbidden: unable to validate against any security context constraint: [provider restricted: .spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.containers[0].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used]

KubeMacPool does not stay ready

When enabling the KubeMacPool component, Status reports a Progressing condition, then it successfully turns Ready. However, after some time (sometimes in less than a minute), it triggers the operator to turn into Progressing/Failing again.

tests/e2e/deployment_test.go from #118 should include skipped tests that reproduce the issue.

This issue might be a cause of https://bugzilla.redhat.com/show_bug.cgi?id=1712851, since it caused other components that were deployed at the same time to fail to create their resources.

Support upgrade rollback

In case the system stops working after an upgrade, we should make a rollback possible. This rollback would only be allowed if no changes other than the version bump were made.

Fix kubemacpool RBAC

:latest kubemacpool has extended RBAC needs. That needs to be reflected in this repo.

NetworkAttachmentDefinition CRD and SR-IOV NetworkAttachment CR race

When deploying both the Multus and SR-IOV components at the same time, there is a race where we fail with:

01:58:16  		  conditions:
01:58:16  		  - type: Failing
01:58:16  		    status: "True"
01:58:16  		    lastprobetime: "2019-05-31T23:53:18Z"
01:58:16  		    lasttransitiontime: "2019-05-31T23:47:21Z"
01:58:16  		    reason: FailedToApply
01:58:16  		    message: 'could not apply (k8s.cni.cncf.io/v1, Kind=NetworkAttachmentDefinition)
01:58:16  		      sriov/sriov-network: could not retrieve existing (k8s.cni.cncf.io/v1, Kind=NetworkAttachmentDefinition)
01:58:16  		      sriov/sriov-network: no matches for kind "NetworkAttachmentDefinition" in version
01:58:16  		      "k8s.cni.cncf.io/v1"'

We should improve the code and apply the namespace and CRDs before the rest of the objects.

Apply the same fix to https://github.com/openshift/cluster-network-operator/blob/master/pkg/controller/operconfig/operconfig_controller.go. However, first make sure this is a common problem, not only OCP 3.11 specific.

Support advanced operator upgrade

Improve operator upgrades. One of the missing features (in #65) that comes to my mind is the possibility to remove objects that are not needed anymore, e.g. when a previous version deployed a configmap as a part of component X, but in the new version the configmap is not part of the X manifests anymore.

Support basic operator upgrades

We should be able to upgrade the operator and deployed components to a newer version. In the first implementation, all we should support is upgrading the image name and adding new objects to the Kubernetes API. Removal of old objects won't be supported in the initial implementation.

We may need to resolve #34 to fully support upgrades.

make OLM deployment easier

Currently, the marketplace container image for this operator is not hosted anywhere. The steps that need to be taken in order to deploy the operator are written in the README, but it is a little too complicated.

We should:

  • Keep OperatorGroup, CatalogSource and Subscription generated under deploy/ and in release
  • Build registry image and keep it in quay, for both master and versioned
  • Describe oc way to deploy operator as well as via web UI
  • Deploy released SVC, not master

Rename NetworkAddonsConfig(s) to NetworkAddon(s)

Let's save a little bit of space and time (pronouncing it) and rename NetworkAddonsConfig object to NetworkAddons. That would mimic KubeVirt and CDI operators with their KubeVirt and CDI objects.

On the other hand, it does not follow OpenShift operator naming. But I would not mind that that much, since we don't really follow OpenShift operator concept at all (with service objects, openshift operator state reporting etc).

Opinions @booxter @SchSeba?

correct version, provider and name

I have just tried to deploy cna-operator following ReadMe - Deploy using OLM and using this image.
In the Web UI, I see that

  • There is no 'Operator' in the name in Operators Catalog. I don't know if there is any spec for operators naming and if it should be there or not.
  • version 0.0.0
  • "provided by KubeVirt project". Should it be "Red Hat", maybe?
    Screenshot from 2019-04-30 14-17-31

Add finalizer

A finalizer should prevent the NetworkAddonsConfig object from being removed before all components are successfully removed.

Improve functional tests performance

Functional tests #118 take way too long to finish. Investigate what the bottleneck is. Do we download all needed images every time? Is it the instantiation of daemon sets?

release v0.3.0

Let's use this issue to track what is needed for the next stable release.

I want #29 in there. It will include kubemacpool and sr-iov that were not shipped in v0.2.0. @SchSeba @booxter are you aware of any must-have fixes? The main use of a new release would be to ship addons operator in hyperconverged-cluster-operator.

deployed components should have fixed version

We should use images with a specific version instead of :latest. One reason is that we cannot be sure that our manifests will work with whatever is shipped in the image. Another reason is to make sure all components work well together. Finally, it would make upgrades more obvious - an upgrade from components:x to components:y.

@SchSeba @booxter does it make sense and is it doable?

  • SR-IOV DP
  • SR-IOV CNI
  • Linux bridge CNI
  • Linux bridge marker
  • Multus
  • kubemacpool
  • nmstate

TODO: Keep table of image/component versions matched for image release in README

Catch up on openshift network operator

Apply objects in dry run first

Not sure if it will be possible (due to missing namespaces). It should prevent us from partial updates where half of the components are successfully redeployed and the other half are stuck.

kubemacpool is not completely removed

There is an issue caused by the kubemacpool component after it is deprovisioned.

I started local cluster and deployed kubemacpool there:

make cluster-up cluster-sync
cat <<EOF | ./cluster/kubectl.sh create -f -
apiVersion: networkaddonsoperator.network.kubevirt.io/v1alpha1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  kubeMacPool:
   startPoolRange: "02:00:00:00:00:00"
   endPoolRange: "FD:FF:FF:FF:FF:FF"
EOF

Then I removed the NetworkAddonsConfig object, so kubemacpool was removed:

./cluster/kubectl.sh delete networkaddonsconfig cluster

Finally I wanted to create a Pod:

cat <<EOF | ./cluster/kubectl.sh create -f -
apiVersion: v1
kind: Pod
metadata:
  name: samplepod
spec:
  containers:
  - name: samplepod
    command: ["/bin/sh", "-c", "sleep 99999"]
    image: alpine
EOF

That however failed with:

Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling admission webhook "mutatepods.example.com": Post https://kubemacpool-service.kubemacpool-system.svc:443/mutate-pods?timeout=30s: service "kubemacpool-service" not found                                          

@SchSeba I see kubemacpool creates MutatingWebhookConfiguration "mutating-webhook-configuration". We need to set its owner reference to something that is created by manifests, maybe the Service? Not sure if we can do something like https://github.com/kubevirt/cluster-network-addons-operator/blob/master/pkg/controller/networkaddonsconfig/controller.go#L177 on regular objects (not a controller). Maybe https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/ helps.

BTW we should rename it to something kubemacpool-specific.

Add option to read image registry for components from operator

In order to make it easier to use the operator with a custom registry, make it possible to use the operator's registry as a registry for the components too.

In case USE_OPERATOR_REGISTRY_FOR_COMPONENTS is set to true, read the operator image registry and use it as a base registry for the components. In that case, the components' images should be set to "$OPERATOR_REGISTRY/name-of-the-image:tag_of_the_image".

Update to OpenShift Network object

Currently we check for existing NetworkConfig object (created by cluster-network-operator) for existing Multus configuration. With latest openshift/cluster-network-operator, this object has been renamed to Network. Sync with this change.

nmstate is not working

nmstate DS is full of:

E0516 00:15:47.519995       1 reflector.go:205] github.com/nmstate/kubernetes-nmstate/pkg/client/informers/externalversions/factory.go:117: Failed to list *v1.NodeNetworkState: the server could not find the requested resource (get nodenetworkstates.nmstate.io)                                                                                                                                                                 

controller logging refactoring

Let's try to make logging code and log output cleaner.

We can improve the way we do logging in the main controller https://github.com/kubevirt/cluster-network-addons-operator/blob/master/pkg/controller/networkaddonsconfig/networkaddonsconfig_controller.go.

Lines like:

log.Printf("could not apply (%s) %s/%s: %v", obj.GroupVersionKind(), obj.GetNamespace(), obj.GetName(), err)
err = errors.Wrapf(err, "could not apply (%s) %s/%s", obj.GroupVersionKind(), obj.GetNamespace(), obj.GetName())

Should be:

err = errors.Wrapf(err, "could not apply (%s) %s/%s", obj.GroupVersionKind(), obj.GetNamespace(), obj.GetName())
log.Printf(err.Error())

Or even better, since those errors are all raised to the Reconcile method, we can log them once there (or maybe they are already logged via return reconcile.Result{}, err and we just make a mess in the log output?).

Finally, we don't use Wrapf properly:

// this should not pass err twice
err = errors.Wrapf(err, "failed to retrieve previously applied configuration: %v", err)

development documentation in README.md is incorrect

It says

generate sources (requires operator-sdk installed on your host)
operator-sdk generate k8s

but if one tries that, one gets

$ operator-sdk generate k8s
FATA[0000] Must run command in project root dir: project structure requires ./build/Dockerfile

if one moves ./build/operator/Dockerfile to ./build/ it "seems" to work, but I don't know if that's correct.

Make operator manifests vendorable

It should be possible to import our manifests into a different project in the form of native Kubernetes objects.

We have two options to do that.

A) Generate Go modules from manifests
B) Generate manifests from Go files

fighting over bridge and plugins path on OpenShift 4

The network operator on OpenShift 4 deploys binaries of CNI plugins, including bridge and tuning. Our addons operator is trying to deploy a different version of bridge and tuning to the same path. Therefore, there is a race for the spot: whoever is slower wins.

We have to tackle this issue, maybe by adding a kubevirt- prefix to our binaries.

Status reporting of pods doesn't work

Changes from the OLM integration #42 broke status reporting #29. I am the one at fault for not testing status reporting after the rebase. It seems like the problem is in strict RBAC rules.

Deploy SR-IOV bits

This issue is to track SR-IOV deployment: CRD, CNI, DP. (Maybe also leveraging other components like kubernetes-nmstate for VF enablement / VFIO configuration.)

The implementation should consider that OpenShift has its own plans to deploy SR-IOV components, and those plans may not be compatible with KubeVirt expectations. (Versions shipped are too old for KubeVirt.)

OpenShift patch with SR-IOV network type: openshift/cluster-network-operator#84

kubemacpool validation doesn't work

I applied invalid kubeMacPool attributes, but no error was raised:

apiVersion: networkaddonsoperator.network.kubevirt.io/v1alpha1
kind: NetworkAddonsConfig
metadata:      
  creationTimestamp: 2019-04-19T16:54:04Z
  generation: 1
  name: cluster
  resourceVersion: "1313"
  selfLink: /apis/networkaddonsoperator.network.kubevirt.io/v1alpha1/networkaddonsconfigs/cluster
  uid: bd69b15e-62c3-11e9-9d38-525500d15501
spec:
  imagePullPolicy: Always
  kubeMacPool:
    rangeStart: this:aint:right
  linuxBridge: {}
  multus: {}
  sriov: {}
status:
  conditions:
  - lastProbeTime: 2019-04-19T16:55:50Z
    lastTransitionTime: 2019-04-19T16:55:50Z
    status: "False"
    type: Progressing
  - lastProbeTime: 2019-04-19T16:55:50Z
    lastTransitionTime: 2019-04-19T16:55:50Z
    status: "True"
    type: Ready

Unit test coverage for state_manager

The state manager contains a lot of non-trivial logic. It should get test coverage before we start improving it to better reflect the operator state.
