
monitoring's Introduction

OpenEBS Monitoring add-on


A set of Grafana dashboards and Prometheus alerts for OpenEBS that can be installed as a Helm chart or imported as a jsonnet mixin.

Status

Beta. This repository currently supports dashboards and alerts for the Mayastor, LocalPV LVM, and LocalPV ZFS OpenEBS storage engines. The project is under active development and seeking contributions from the community.

Install

Using helm

Set up the monitoring helm repository.

helm repo add monitoring https://openebs.github.io/monitoring/
helm repo update

You can then run helm search repo monitoring to see the charts.

Install the helm chart.

helm install monitoring monitoring/monitoring --namespace openebs --create-namespace

The detailed chart documentation is available in the charts directory.
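To customize the installation, you can export the chart's default values, edit them, and install with the overrides (standard Helm workflow; the available value keys are documented in the chart):

helm show values monitoring/monitoring > values.yaml
# edit values.yaml as needed, then
helm install monitoring monitoring/monitoring --namespace openebs --create-namespace -f values.yaml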

Using kubectl

You can generate YAMLs and install using kubectl. See detailed steps at ./jsonnet.

Usage

Accessing Grafana

# Check that the Grafana pod is in the Running state
kubectl get pods -n [NAMESPACE] | grep -i grafana
# Note the public IP of any one of the nodes
kubectl get nodes -o wide
# Note the Grafana service's NodePort
kubectl get svc -n [NAMESPACE] | grep -i grafana
# Open a browser and visit http://<NodeIp>:<NodePort>
#  (where <NodeIp> is the public IP address of your node, and <NodePort> is the Grafana service's NodePort)
#  Default Grafana login credentials: username admin, password admin

NOTE: If a public IP is not available, you can access Grafana via port-forwarding instead.

# Perform port-forwarding
# kubectl port-forward --namespace [NAMESPACE] pods/[grafana-pod-name] [grafana-forward-port]:[grafana-cluster-port]
# Open a browser and visit http://127.0.0.1:[grafana-forward-port]
# Default Grafana login credentials: username admin, password admin
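For example, you can forward the Grafana service instead of a specific pod (this assumes the chart was installed as release monitoring in the openebs namespace as shown above; service port 80 is the Grafana chart default):

kubectl port-forward --namespace openebs svc/monitoring-grafana 3000:80
# Open a browser and visit http://127.0.0.1:3000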

Contributing

OpenEBS welcomes your feedback and contributions in any form possible.

Community

Code of conduct

Participation in the OpenEBS community is governed by the CNCF Code of Conduct.

License

FOSSA Status

monitoring's People

Contributors

abhinandan-purkait, avishnu, datacore-vvarakantham, fossabot, kasimon, kmova, niladrih, omeiirr, r1jt, rahul799, rajasahil, rweilg, sanjay1611, survivant, w3aman


monitoring's Issues

rule error

I am getting errors with the 2nd rule for LocalPV. I believe it has to do with multiple pods that use the same PVC. I'm still pretty rough at writing rules, or I would fix it and send a PR.

Rule:

kube_persistentvolumeclaim_info unless (kube_persistentvolumeclaim_info
        * on(persistentvolumeclaim) group_left kube_pod_spec_volumes_persistentvolumeclaims_info)
        == 1

Error:

Error executing query: found duplicate series for the match group {persistentvolumeclaim="media"} on the right hand-side of the operation: [{__name__="kube_pod_spec_volumes_persistentvolumeclaims_info", container="kube-state-metrics", endpoint="http", instance="172.16.0.249:8080", job="kube-state-metrics", namespace="media", persistentvolumeclaim="media", pod="mkvtoolnix-844d557d49-499rx", service="kube-prometheus-stack-kube-state-metrics", volume="data"}, {__name__="kube_pod_spec_volumes_persistentvolumeclaims_info", container="kube-state-metrics", endpoint="http", instance="172.16.0.249:8080", job="kube-state-metrics", namespace="media", persistentvolumeclaim="media", pod="lidarr-6c86fc7f57-rpz95", service="kube-prometheus-stack-kube-state-metrics", volume="data"}];many-to-many matching not allowed: matching labels must be unique on one side
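One possible rewrite (a sketch only, not verified against the shipped rule) is to drop the arithmetic join and use a set operation keyed on the PVC identity, which tolerates multiple pods mounting the same claim:

kube_persistentvolumeclaim_info
  unless on(namespace, persistentvolumeclaim) kube_pod_spec_volumes_persistentvolumeclaims_info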

cStor Volume Replica Dashboard is displaying information about cStor volumes and not its replicas.

cStor volume replica dashboard has a replicas variable which gets its value from the following query:
"label_values(openebs_replica_status{cstor_pool="$Pool"}, vol)". But this query only returns the volume name of the replicas and not exactly the individual replicas. However, the metric: openebs_replica_status contains separate rows for each replica but has the vol label same for the replicas that belong to the same volume.

Currently, there is no cStor metric available that could help us identify the individual replicas. Therefore the dashboard seems inconsistent with its naming and definition. All the panels inside this dashboard, e.g. replica status, replica rebuild count and others, defeat the purpose of visualising each replica as a separate identity.

[BREAKING CHANGE] Removal of dashboards for legacy engines

OpenEBS is deprecating a number of legacy projects. These legacy projects along with their repositories will be moved to a new CNCF-owned GitHub Org on 8 April 2024 (or soon after).

This should not come as a surprise; we began informing the community from Jan 2024 of our work with CNCF TOC to optimize and modernize the project structure. The legacy projects include older data engines, subprojects, unused repositories and build dependencies; and over the last 12 months we have only made minor changes to these projects. The deprecated repositories will move to a new GitHub organization named “openebs-archive”. As the projects are moved, some will be tagged as archived, some will not be marked as archived, but over time they all will be.

Deprecated Engines:

  1. cStor
  2. Jiva
  3. NFS Provisioner
  4. Device LocalPV

Since the openebs/monitoring repo holds the code related to the above engines, the monitoring stack for these engines will also be deprecated as part of the upcoming major release. In the same effort we have also bumped the respective dependencies, which prevents a direct upgrade path. It is expected that upgrades will break.

This issue serves as a notice to inform about the breaking change in the coming major release. Users are requested to plan their upgrade paths accordingly.

StalePersistentVolumeClaim triggers for any ephemeral PVC

StalePersistentVolumeClaim triggers for any ephemeral volume, most likely because ephemeral volumes do not have a matching claimName in the pod spec but rather the volume template itself.

This also manifests in the fact that "Used by" is empty for any ephemeral volume in e.g. kubectl describe pvc.

All this leads to kube_pod_spec_volumes_persistentvolumeclaims_info missing, which then triggers the alert.

No idea where the right place to fix this is yet, to be honest.

Prometheus rules imported as json instead of yaml

I think I found an issue with the monitoring stack when importing Prometheus rules. Here is what I found. I did a helm install --dry-run > all.yaml and checked the content of that file for

kind: PrometheusRule
For the kube-prometheus-stack rules I found this:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: monitoring-kube-prometheus-prometheus
  namespace: default
  labels:
    app: kube-prometheus-stack
    
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: monitoring
    app.kubernetes.io/version: "18.0.5"
    app.kubernetes.io/part-of: kube-prometheus-stack
    chart: kube-prometheus-stack-18.0.5
    release: "monitoring"
    heritage: "Helm"
spec:
  groups:
  - name: prometheus
    rules:
    - alert: PrometheusBadConfig
      annotations:

But for the OpenEBS rules:

# Source: monitoring-stack/charts/openebs-monitoring/templates/prometheusRules.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: openebs-monitoring-cstor-rules
  namespace: default
  labels:
    app: openebs-monitoring
    helm.sh/chart: openebs-monitoring-0.4.8
    app.kubernetes.io/name: openebs-monitoring
    app.kubernetes.io/instance: monitoring
    release: "monitoring"
    app.kubernetes.io/version: "2.12.0"
    app.kubernetes.io/managed-by: Helm
spec:
  {
     "groups": [
        {
           "name": "openebs-cStor-pool",
           "rules": [
              {
                 "alert": "PoolCapacityLow",
                 "annotations": {
                    "componentType": "pool",

For the OpenEBS rules, they are imported in JSON format, not YAML.
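Since JSON is a subset of YAML, the rendered PrometheusRule is still valid and the operator accepts it; it just looks inconsistent next to the kube-prometheus-stack output. If block-style YAML is preferred, a template change along these lines could convert the embedded JSON before rendering (a sketch only; the file path and chart layout used here are assumptions):

# templates/prometheusRules.yaml (sketch, hypothetical file path)
spec:
  {{- toYaml (fromJson (.Files.Get "files/cstor-rules.json")) | nindent 2 }}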

Setting additional labels is not possible

I would like to be able to set additional labels on all or certain components in the templates directory.

The additional labels can, for example, be used in situations where another "kube-prometheus-stack" deployment is in use which requires certain labels in order to pick up a "podMonitor" or "serviceMonitor".
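As an illustration of the request, the values could expose something like the following (hypothetical keys, not present in the current chart):

serviceMonitor:
  additionalLabels:
    release: kube-prometheus-stack
podMonitor:
  additionalLabels:
    release: kube-prometheus-stack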

`StalePersistentVolumeClaim` duplicate time series in Victoria Metrics

Hi,

The StalePersistentVolumeClaim query here breaks Victoria Metrics in some configurations:

"expr": "kube_persistentvolumeclaim_info unless (kube_persistentvolumeclaim_info * on(persistentvolumeclaim) group_left kube_pod_spec_volumes_persistentvolumeclaims_info) == 1",

422: error when executing query="kube_persistentvolumeclaim_info unless (kube_persistentvolumeclaim_info * on(persistentvolumeclaim) group_right kube_pod_spec_volumes_persistentvolumeclaims_info) == 1" on the time range (start=1700931915000, end=1700932215000, step=15000): cannot execute "kube_persistentvolumeclaim_info unless ((kube_persistentvolumeclaim_info * on(persistentvolumeclaim) group_right() kube_pod_spec_volumes_persistentvolumeclaims_info) == 1)": cannot execute "(kube_persistentvolumeclaim_info{persistentvolumeclaim=~\"audit-vault-0|audit-vault-1|audit-vault-2|... duplicate time series on the left side of `* on(persistentvolumeclaim) group_right()`: ...

In our particular case, this happens on any ReadWriteMany PVC that occurs in more than one pod in the same namespace.

Add an IO monitoring dashboard

As an SRE I'd like to be able to monitor my IO performance closely. node_exporter allows for polling and calculating in-depth disk-related metrics, such as queue depth and latency (IOPS don't mean much without latency, after all). Throughput and average request size are also nice-to-haves.

This panel can be used, or similar metrics added to our own panels: https://grafana.com/grafana/dashboards/11801
Here's a guide for building a good comprehensive panel; again, nothing but node_exporter is used: https://devconnected.com/monitoring-disk-i-o-on-linux-with-the-node-exporter/
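For reference, latency and queue depth can be derived from standard node_exporter metrics, e.g. (a sketch of the kind of panel queries meant here):

# Average read latency in seconds per completed read
rate(node_disk_read_time_seconds_total[5m]) / rate(node_disk_reads_completed_total[5m])
# Approximate average queue depth (in-flight request-seconds per second)
rate(node_disk_io_time_weighted_seconds_total[5m])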

impossible to deploy the chart in multiple namespaces

Here is my use case:

I'm on premises (no cloud access). I have a cluster that will be used for dev, qa, preprod...

I will deploy all my applications in different namespaces like dev, qa...

My applications need to pass dev before going to qa, and qa before preprod.

So I'll have to deploy the monitoring stack in dev, qa and preprod.

example :

  • dev : monitoring-0.22
  • qa : monitoring-0.21
  • preprod: monitoring-0.20

But it's not possible right now because some artifacts in prometheus-stack use a ClusterRole.

I tried to fix that by deactivating the ClusterRole and switching to a Role instead, but in the end I get this error:

PS C:\workspace\bidgroup\iep\cicd\dev\monitoring-stack> helm --kube-context cluster109 -n default install monitoring-stack .
Error: template: monitoring-stack/charts/monitoring-stack/charts/kube-state-metrics/templates/rolebinding.yaml:2:22: executing "monitoring-stack/charts/monitoring-stack/charts/kube-state-metrics/templates/rolebinding.yaml" at <$.Values.namespaces>: wrong type for value; expected string; got interface {}
PS C:\workspace\bidgroup\iep\cicd\dev\monitoring-stack>

So if I want to use Roles in kube-state-metrics, I will have to hard-code the names of the namespaces that I'll use. That won't work in my case if I want to use GitOps and CI/CD in my cluster.

Solutions could be :

  • Open a PR in Prometheus-stack to add this possibility,
  • disable the rbac creation and create our own roles
root@test-pcl4014:~# helm install monitoring openebs-monitoring/openebs-monitoring
NAME: monitoring
LAST DEPLOYED: Tue Jul  6 07:25:56 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
OpenEBS monitoring has been installed.
Check its status by running:
$ kubectl get pods -n default -o wide

Use `kubectl get svc -n default` to list all the
services in the `default` namespace.

To access the dashboards, form the Grafana URL and open it in the browser
  export NODE_PORT=32515
  export NODE_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=monitoring" -o jsonpath="{.items[0].spec.nodeName}")
  export NODE_IP=$(kubectl get node $NODE_NAME -o jsonpath='{$.status.addresses[?(@.type=="ExternalIP")].address}')
  echo http://$NODE_IP:$NODE_PORT
  NOTE: The above IP should be a public IP

For more information, visit our Slack at https://openebs.io/community
root@test-pcl4014:~# helm install monitoring openebs-monitoring/openebs-monitoring -n dev
Error: rendered manifests contain a resource that already exists. Unable to continue with install: PodSecurityPolicy "monitoring-grafana" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "dev": current value is "default"
root@test-pcl4014:~#

Here is a little example of what I tried:

grafana:
  rbac:
    namespaced: true
    pspEnabled: false
    pspUseAppArmor: false

prometheus-node-exporter:
  rbac:
    namespaced: true
    pspEnabled: false
    pspUseAppArmor: false

prometheus:
  rbac:
    namespaced: true
    pspEnabled: false
    pspUseAppArmor: false
  server:
    podSecurityPolicy:
      enabled: false

kube-state-metrics:
  rbac:
    useClusterRole: false
  podSecurityPolicy:
    enabled: false

kubeStateMetrics:
  rbac:
    useClusterRole: false
  podSecurityPolicy:
    enabled: false

Received an error message during installation when Grafana Ingress is enabled and hosts are specified

I added this part in values.yaml to enable the Grafana ingress, but I received this error:

PS C:\openebs-monitoring\monitoring\deploy\charts> helm --kube-context cluster114 install monitoring .
Error: INSTALLATION FAILED: template: openebs-monitoring/templates/NOTES.txt:12:12: executing "openebs-monitoring/templates/NOTES.txt" at <.paths>: can't evaluate field paths in type interface {}
PS C:\openebs-monitoring\monitoring\deploy\charts>

Here is my modification in values.yaml:

...
grafana.ini:
      ## If you have Ingress, you need to change the root_url to match ingress path
      server:
        root_url: http://localhost:3000/grafana
....
ingress:
      ## If true, Grafana Ingress will be created
      ##
      enabled: true

      annotations:
        nginx.ingress.kubernetes.io/ssl-redirect: "false"
        nginx.ingress.kubernetes.io/rewrite-target: /$2
      path: /grafana(/|$)(.*)
      hosts:
        - default.dynamic.cluster114.local

But if I don't specify the hosts, it passes:

ingress:
      ## If true, Grafana Ingress will be created
      ##
      enabled: true

      annotations:
        nginx.ingress.kubernetes.io/ssl-redirect: "false"
        nginx.ingress.kubernetes.io/rewrite-target: /$2
      path: /grafana(/|$)(.*)
      #hosts:
#        - default.dynamic.cluster114.local
PS C:\openebs-monitoring\monitoring\deploy\charts> helm --kube-context cluster114 install monitoring .
NAME: monitoring
LAST DEPLOYED: Fri Oct 15 13:38:52 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
OpenEBS monitoring has been installed.
Check its status by running:
$ kubectl get pods -n default -o wide

Use `kubectl get svc -n default` to list all the
services in the `default` namespace.

To access the dashboards, form the Grafana URL and open it in the browser

For more information, visit our Slack at https://openebs.io/community

Here is the generated ingress YAML:

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    meta.helm.sh/release-name: monitoring
    meta.helm.sh/release-namespace: default
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  creationTimestamp: "2021-10-15T17:39:15Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: monitoring
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 8.1.5
    helm.sh/chart: grafana-6.16.9
  name: monitoring-grafana
  namespace: default
  resourceVersion: "40300396"
  uid: 0f2ce36d-1143-4dbc-bd83-e148bad0d247
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: monitoring-grafana
            port:
              number: 80
        path: /grafana(/|$)(.*)
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - ip: 10.1.34.55

Here is an example of an ingress generated with a host:

root@test-pcl4004:~# kubectl get ingress monitoring-grafana -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    meta.helm.sh/release-name: monitoring
    meta.helm.sh/release-namespace: default
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  creationTimestamp: "2021-10-13T12:26:05Z"
  generation: 3
  labels:
    app.kubernetes.io/managed-by: Helm
  name: monitoring-grafana
  namespace: dev
  resourceVersion: "108710984"
  uid: 31a8eaf8-007e-4413-ae30-c47df20a2aa4
spec:
  rules:
  - host: default.dynamic.cluster114.local
    http:
      paths:
      - backend:
          service:
            name: monitoring-grafana
            port:
              number: 80
        path: /grafana(/|$)(.*)
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - ip: 10.1.34.55
root@test-pcl4004:~#

Access control to the Grafana dashboards.

Dashboards can be accessed by admins or application teams.

A few things to consider:

  • Admins should be able to see all dashboards
  • Application teams should only be allowed to access the objects in their namespaces

Unable to update configuration for node-exporter in jsonnet.

Issue

We have the Node FileSystem Stats for LocalPV Stateful Workload dashboard, and the metrics used by the dashboard panels are exported by node-exporter. To export these metrics, we updated the node-exporter configuration, i.e. --collector.filesystem.ignored-mount-points.

In helm, we updated this configuration through values.yaml:

- --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)

The same configuration has to be updated for jsonnet. I tried to update it but was not able to.

Have created an Issue in kube-prometheus repo for support.
Issue: prometheus-operator/kube-prometheus#1374
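A patch along these lines might work for the jsonnet path, appending the flag to the node-exporter container args (a sketch against the usual kube-prometheus nodeExporter layout, not tested here):

// sketch: mix this into the kube-prometheus object (e.g. in example.jsonnet)
nodeExporter+: {
  daemonset+: {
    spec+: { template+: { spec+: {
      containers: std.map(
        function(c)
          if c.name == 'node-exporter'
          then c + { args+: ['--collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)'] }
          else c,
        super.containers
      ),
    } } },
  },
},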

Bug: NDM dashboard doesn't show metrics when ndm is installed via helm

Ref: https://github.com/openebs/monitoring/blob/develop/jsonnet/config.libsonnet#L265

The NDM service monitor is set up with matchLabels name: openebs-ndm-exporter.

Depending on how NDM is installed (kubectl vs helm), the value of the name label is different.

  • when installed via kubectl, the name is openebs-ndm-exporter.
  • when installed via helm the name is openebs-ndm-node-exporter

This results in NDM metrics not being reported when NDM is installed through helm.

To fix the above, use a label that is common to both installations: change the NDM service monitor to select on component.

component: 'openebs-ndm-node-exporter'
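i.e. the ServiceMonitor's selector would become something like this (a sketch of the proposed change):

selector:
  matchLabels:
    component: openebs-ndm-node-exporter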

Metrics for Jiva CSI volumes are not being captured

I installed a mongodb STS with 3 jiva csi pvs. Deleted the STS, but the Jiva volumes are still present.

(Noticed that the service monitor uses the port name exporter while Jiva sets it as m-exporter. Manually edited the Jiva service object to name the port exporter, but the metrics are still not seen.)

Mismatch in prometheusRules mixin for lvmLocalPV

As per the title, the mismatch is due to a case difference in "lvmLocalPV" between these two files:

This prevents compilation, as shown here:

RUNTIME ERROR: Field does not exist: lvmlocalpv
        vendor/openebs-mixin/rules/prometheus-rules.libsonnet:26:113-175        object <anonymous>
        vendor/openebs-mixin/mixin.libsonnet:9:20-62    object <anonymous>
        vendor/kube-prometheus/lib/mixin.libsonnet:24:78-109
        vendor/kube-prometheus/lib/mixin.libsonnet:26:19-20     object <anonymous>
        During manifestation

Changing it to mixed case in jsonnet/openebs-mixin/rules/prometheus-rules.libsonnet#L26 fixes the issue for me, i.e.

- ... + lvmLocalPV(prometheusRules._config).prometheusRules.lvmlocalpv + ...
+ ... + lvmLocalPV(prometheusRules._config).prometheusRules.lvmLocalPV + ...

Grafana Dashboard for settings adjustments

I'd like to have a dashboard that can help us identify which drives can't handle the throughput or which cStor pool doesn't have enough resources to handle the load.

Let me explain in more detail.

It's possible to configure the cStor pool with these settings:

# values for the volumes
    volumes:
      queueDepth: 32
      luWorkers: 16
      zvolWorkers: 16
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
      auxResources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"

But it's possible that my settings are too low. It would be useful to see in a dashboard (and as alerts to Alertmanager) that a cStor pool is overloaded and doesn't have enough resources to handle all the requests. Like a gauge or health bar: RED when it's overloaded.

Same thing when pods are trying to write to a pool and the drives are not able to keep up: a RED health bar, or a gauge of max supported throughput vs. current. Something that tells us the drives are overloaded and we need to reduce the speed at which we write to them, or change the pool to include another disk.

At this point, I change the settings without knowing whether they are too much or not enough.

Deprecation Warning in k8s 1.21

On a fresh 1.21.2 deployment I am getting the following:

W0714 11:03:34.634539 1922149 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+

Versions:
helm version.BuildInfo{Version:"v3.6.0", GitCommit:"7f2df6467771a75f5646b7f12afb408590ed1755", GitTreeState:"clean", GoVersion:"go1.16.3"}
kubernetes: v1.21.2
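Until the charts drop PodSecurityPolicy objects, one way to silence the warning (sketched from the values keys shown elsewhere on this page; the upstream sub-chart keys may differ) is to disable PSP creation in the sub-charts:

grafana:
  rbac:
    pspEnabled: false
prometheus-node-exporter:
  rbac:
    pspEnabled: false
prometheus:
  rbac:
    pspEnabled: false
  server:
    podSecurityPolicy:
      enabled: false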

The URL given in NOTES.txt is blank for the IP if no externalIP is returned

helm install openebs-monitoring openebs-monitoring/openebs-monitoring --namespace openebs --create-namespace

NAME: openebs-monitoring
LAST DEPLOYED: Tue Oct 12 13:15:31 2021
NAMESPACE: openebs
STATUS: deployed
REVISION: 1
NOTES:
OpenEBS monitoring has been installed.
Check its status by running:
$ kubectl get pods -n openebs -o wide

Use `kubectl get svc -n openebs` to list all the
services in the `openebs` namespace.

To access the dashboards, form the Grafana URL and open it in the browser
export NODE_PORT=32515
export NODE_NAME=$(kubectl get pods --namespace openebs -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=openebs-monitoring" -o jsonpath="{.items[0].spec.nodeName}")
export NODE_IP=$(kubectl get node $NODE_NAME -o jsonpath='{$.status.addresses[?(@.type=="ExternalIP")].address}')
echo http://$NODE_IP:$NODE_PORT
NOTE: The above IP should be a public IP

If I enter those commands, I obtain this:


For more information, visit our Slack at https://openebs.io/community
root@test-pcl4014:~#
root@test-pcl4014:~# export NODE_PORT=32515
root@test-pcl4014:~#   export NODE_NAME=$(kubectl get pods --namespace openebs -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=openebs-monitoring" -o jsonpath="{.items[0].spec.nodeName}")
  export NODE_IP=$(kubectl get node $NODE_NAME -o jsonpath='{$.status.addresses[?(@.type=="ExternalIP")].address}')
  echo http://$NODE_IP:$NODE_PORT

root@test-pcl4014:~#   export NODE_IP=$(kubectl get node $NODE_NAME -o jsonpath='{$.status.addresses[?(@.type=="ExternalIP")].address}')
root@test-pcl4014:~#   echo http://$NODE_IP:$NODE_PORT
http://:32515

It's because I don't have an external IP.

root@test-pcl4014:~# kubectl get node $NODE_NAME -o wide
NAME           STATUS   ROLES                  AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
test-pcl4014   Ready    control-plane,master   188d   v1.20.4   10.1.34.14    <none>        Ubuntu 20.04.2 LTS   5.4.0-77-generic   containerd://1.5.4
root@test-pcl4014:~#
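When no ExternalIP is reported, the same jsonpath can fall back to the node's InternalIP (a sketch; the resulting URL is then only reachable from inside the network):

export NODE_IP=$(kubectl get node $NODE_NAME -o jsonpath='{$.status.addresses[?(@.type=="InternalIP")].address}')
echo http://$NODE_IP:$NODE_PORT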
