
nimblearchitect / kubectl-ice

244 stars · 4 watchers · 9 forks · 665 KB

Kubectl-ice is an open-source tool for Kubernetes users to monitor and optimize container resource usage. Features include usage breakdowns for pods and containers, making scaling and optimization easier. The tool is compatible with major cloud providers and is actively developed by a community of contributors.

License: Apache License 2.0

Languages: Go 99.08% · Shell 0.66% · Makefile 0.26%
Topics: kubectl-plugins, kubectl, kubernetes, golang, kubectl-plugin, krew-plugin, krew, sidecar-container, multi-container, cli


kubectl-ice's Issues

ice ip doesn't honor args

Hi @NimbleArchitect,

First issue: trying to find a pod by IP address with -m:

✗  k ice ip -m ip==10.110.5.20 
NAME                                 IP
staging-psql-0               10.110.5.20
staging-psql-port-forward-0  10.110.5.17

The -T flag is likewise ignored:

➜ k ice ip -T
NAME                                 IP
staging-psql-0               10.110.5.20
staging-psql-port-forward-0  10.110.5.17

The other display flags are ignored too:

➜ k ice ip --show-namespace --show-node --show-type
NAME                                 IP
staging-psql-0               10.110.5.20
staging-psql-port-forward-0  10.110.5.17
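Until the matcher is fixed, a rough awk workaround (a sketch against the sample output above, not part of kubectl-ice) can filter the ip listing by exact address:

```shell
# Stand-in for `kubectl ice ip` output, taken from the session above.
sample='NAME                                 IP
staging-psql-0               10.110.5.20
staging-psql-port-forward-0  10.110.5.17'

# Keep the header row plus any row whose IP column matches exactly.
printf '%s\n' "$sample" | awk -v ip='10.110.5.20' 'NR==1 || $2==ip'
```

In real use the same filter would be piped directly: `kubectl ice ip | awk -v ip=10.110.5.20 'NR==1 || $2==ip'`.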

Wrong probe name in `kubectl ice probes` output

When running the kubectl ice probes command, the startup probe is labelled liveness in the PROBE column instead of startup:

$ kubectl ice probes
PODNAME CONTAINER PROBE DELAY PERIOD TIMEOUT SUCCESS FAILURE CHECK ACTION
test-pod test-container liveness 0 10 15 1 3 HTTPGet http://:9003/health/liveness
test-pod test-container readiness 0 10 15 1 3 HTTPGet http://:9003/health/readiness
test-pod test-container liveness 0 10 15 1 3 HTTPGet http://:9003/health/startup

Colorizing output

Currently ice seems to output everything in a single off-yellow colour. Any chance it could be modified to colorize output similar to kubecolor, or at least be made configurable? (A screenshot of the differences was attached to the original issue.)

priority class

It would be nice to include an option to:

  • display all pods with priority classes
  • display all pods with a specific priority class

kubectl get pods --all-namespaces -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priorityClassName,VALUE:.spec.priority

Something like this.

init containers always shown

Hi there,

Report

I installed version 0.2.9 of kubectl-ice, and the help page says that init containers are omitted from cpu and memory output.
But regardless of whether I set the -i, --include-init flag, init containers are always shown in my output.

Expected Behavior

Init containers should be omitted from cpu and memory output by default.
See: -i, --include-init include init container(s) in the output, by default init containers are hidden

Actual Behavior

I get init containers in my output every time, regardless of the flag.

Description of my pod:

Init Containers:
  certs:
    Container ID:  <redacted>
    Image:         <redacted>
    Image ID:      <redacted>
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      cp -r /etc/ssl/certs /transfer

    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 14 Sep 2022 20:28:19 +0200
      Finished:     Wed, 14 Sep 2022 20:28:19 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  256Mi
    Requests:
      cpu:        100m
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /transfer from transfer (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-abcde (ro)
Containers:
  application:
    Container ID:   <redacted>
    Image:          <redacted>
    Image ID:       <redacted>
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running

kubectl ice cpu output:

❯ kubectl ice cpu application-p-0
CONTAINER        USED  REQUEST  LIMIT  %REQ  %LIMIT
certs            0m    100m     100m   -     -
application      7m    200m     500m   3.36  1.34

kubectl ice version

❯ kubectl-ice -v
kubectl-ice version 0.2.9

Kubernetes Version

1.22.13

failed to create clientset

[root@mexo-relay ~]# kubectl-ice cpu -A
failed to create clientset: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
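For context, this looks like the known incompatibility introduced when the client.authentication.k8s.io/v1alpha1 exec-credential API was removed from client-go (Kubernetes 1.24): kubeconfigs generated by older tooling (e.g. an old aws-iam-authenticator or aws eks update-kubeconfig) then fail with exactly this error. Regenerating the kubeconfig, or rewriting the apiVersion, usually resolves it. A sketch of the rewrite, demonstrated on a throwaway file rather than the live ~/.kube/config:

```shell
# Demonstrate the apiVersion rewrite on a temporary file first.
cfg=$(mktemp)
printf 'apiVersion: client.authentication.k8s.io/v1alpha1\n' > "$cfg"

# The same substitution would be applied to the real kubeconfig once verified.
sed -i.bak \
  's|client.authentication.k8s.io/v1alpha1|client.authentication.k8s.io/v1beta1|' \
  "$cfg"

grep apiVersion "$cfg"
rm -f "$cfg" "$cfg.bak"
```

This only helps if the credential plugin itself also speaks v1beta1; if not, regenerating the kubeconfig with current tooling is the safer fix.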

how much I love the tool :)

➜ k ice image -A | grep k8s.gcr.io
insights-agent                   insights-agent-vpa-recommender-7f866c4698-6jw5k                  vpa                                    Always        k8s.gcr.io/autoscaling/vpa-recommender                                          0.10.0
kube-system                      ebs-csi-controller-549546d89f-wg8j9                              csi-provisioner                        IfNotPresent  k8s.gcr.io/sig-storage/csi-provisioner                                          v3.1.0
kube-system                      ebs-csi-controller-549546d89f-wg8j9                              csi-attacher                           IfNotPresent  k8s.gcr.io/sig-storage/csi-attacher                                             v3.4.0
kube-system                      ebs-csi-controller-549546d89f-wg8j9                              csi-resizer                            IfNotPresent  k8s.gcr.io/sig-storage/csi-resizer                                              v1.4.0
kube-system                      ebs-csi-controller-549546d89f-wg8j9                              liveness-probe                         IfNotPresent  k8s.gcr.io/sig-storage/livenessprobe                                            v2.6.0
kube-system                      ebs-csi-node-4kfb5                                               node-driver-registrar                  IfNotPresent  k8s.gcr.io/sig-storage/csi-node-driver-registrar                                v2.5.1
kube-system                      ebs-csi-node-4kfb5                                               liveness-probe                         IfNotPresent  k8s.gcr.io/sig-storage/livenessprobe                                            v2.6.0
kube-system                      ebs-csi-node-59gkf                                               node-driver-registrar                  IfNotPresent  k8s.gcr.io/sig-storage/csi-node-driver-registrar                                v2.5.1
kube-system                      ebs-csi-node-59gkf                                               liveness-probe                         IfNotPresent  k8s.gcr.io/sig-storage/livenessprobe                                            v2.6.0
kube-system                      ebs-csi-node-5gs6s                                               node-driver-registrar                  IfNotPresent  k8s.gcr.io/sig-storage/csi-node-driver-registrar                                v2.5.1
kube-system                      ebs-csi-node-5gs6s                                               liveness-probe                         IfNotPresent  k8s.gcr.io/sig-storage/livenessprobe                                            v2.6.0

Just compare it with:

kubectl get po -A -o jsonpath='{range .items[*]}{"namespace="}{.metadata.namespace}{" "}{"pod="}{.metadata.name}{" "}{range .spec.containers[*]}{"image="}{.image}{" "}{end}{"\n"}{end}' | grep k8s.gcr.io

Thank you for the GREAT Tool

skip type = "I" containers

➜ k ice cpu -l "app.kubernetes.io/name in (backend)"    
T  PODNAME                                   CONTAINER      USED  REQUEST  LIMIT  %REQ  %LIMIT
I  protct-backend-v1-5648c9d8df-nx9nd        install        0m    0m       0m     -     -
S  protct-backend-v1-5648c9d8df-nx9nd        backend        72m   0m       1000m  -     7.13
I  protct-celery-beat-v1-6686dfc9df-lw48m    install        0m    0m       0m     -     -
S  protct-celery-beat-v1-6686dfc9df-lw48m    celery-beat    1m    0m       1000m  -     0.02
I  protct-celery-worker-v1-8664579cd5-xhxqk  install        0m    0m       0m     -     -
S  protct-celery-worker-v1-8664579cd5-xhxqk  celery-worker  2m    0m       1000m  -     0.19
I  protct-django-migrations-v1--1-crcb8      install        0m    0m       0m     -     -
S  protct-django-migrations-v1--1-crcb8      migrations     0m    0m       256m   -     -

I assume "I" means init container? What does the "S" type stand for?

Also, is it possible to hide all init containers and concentrate only on non-terminated containers?
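Until there is a flag for this, the init rows can be dropped by filtering on the T column with awk (a sketch over a trimmed sample of the output above; in real use, pipe `k ice cpu ...` into the same filter):

```shell
# Trimmed stand-in for `kubectl ice cpu` output with a type column.
sample='T  PODNAME                             CONTAINER  USED  REQUEST  LIMIT
I  protct-backend-v1-5648c9d8df-nx9nd  install    0m    0m       0m
S  protct-backend-v1-5648c9d8df-nx9nd  backend    72m   0m       1000m'

# Keep the header and every row that is not an init (type I) container.
printf '%s\n' "$sample" | awk 'NR==1 || $1!="I"'
```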

displaying node name and selected node label(s) the pod is placed on

It would be nice to include the node name the pod is scheduled on, or the value of a selected label of that node.

The reason: when I change affinity or nodeSelector (especially with complex affinity/anti-affinity rules), I have to double-check that the pod was placed on the correct node. Typically by doing:

kg pod NAME -o yaml
kdpo NAME

which is kind of time-consuming. It would be great if we could run:

k ice status --node --nl eks.amazonaws.com/nodegroup

which would output the name of the node and the value of the eks.amazonaws.com/nodegroup label on the node the pod is placed on. So we would get two new columns:

node                                         eks.amazonaws.com/nodegroup
ip-10-110-4-137.ec2.internal                 staging-ops-v2

Current output:

➜ k ice status                                         
T  PODNAME                             CONTAINER              READY  STARTED  RESTARTS  STATE       REASON     EXIT-CODE  SIGNAL  TIMESTAMP                      MESSAGE
S  aws-node-27r6g                      aws-node               true   true     0         Running     -          -          -       2022-06-15 22:27:53 -0500 CDT  -
I  aws-node-27r6g                      aws-vpc-cni-init       true   -        0         Terminated  Completed  0          0       2022-06-15 22:27:50 -0500 CDT  -
S  aws-node-8f6rc                      aws-node               true   true     0         Running     -          -          -       2022-06-16 00:31:56 -0500 CDT  -
I  aws-node-8f6rc                      aws-vpc-cni-init       true   -        0         Terminated  Completed  0          0       2022-06-16 00:31:53 -0500 CDT  -
S  aws-node-hk5l2                      aws-node               true   true     0         Running     -          -          -       2022-06-15 22:27:56 -0500 CDT  -
I  aws-node-hk5l2                      aws-vpc-cni-init       true   -        0         Terminated  Completed  0          0       2022-06-15 22:27:53 -0500 CDT  -
S  aws-node-p8vpt                      aws-node               true   true     0         Running     -          -          -       2022-06-15 22:27:45 -0500 CDT  -
I  aws-node-p8vpt                      aws-vpc-cni-init       true   -        0         Terminated  Completed  0          0       2022-06-15 22:27:42 -0500 CDT  -
S  coredns-84888f77cf-hl85g            coredns                true   true     0         Running     -          -          -       2022-06-16 17:42:10 -0500 CDT  -
S  ebs-csi-controller-cf8d4fd68-9p62b  csi-attacher           true   true     0         Running     -          -          -       2022-06-16 00:33:22 -0500 CDT  -
S  ebs-csi-controller-cf8d4fd68-9p62b  csi-provisioner        true   true     0         Running     -          -          -       2022-06-16 00:33:20 -0500 CDT  -
S  ebs-csi-controller-cf8d4fd68-9p62b  csi-resizer            true   true     0         Running     -          -          -       2022-06-16 00:33:24 -0500 CDT  -
S  ebs-csi-controller-cf8d4fd68-9p62b  ebs-plugin             true   true     0         Running     -          -          -       2022-06-16 00:33:05 -0500 CDT  -
S  ebs-csi-controller-cf8d4fd68-9p62b  liveness-probe         true   true     0         Running     -          -          -       2022-06-16 00:33:24 -0500 CDT  -
S  ebs-csi-node-f47vk                  ebs-plugin             true   true     0         Running     -          -          -       2022-06-16 00:32:12 -0500 CDT  -
S  ebs-csi-node-f47vk                  liveness-probe         true   true     0         Running     -          -          -       2022-06-16 00:32:14 -0500 CDT  -
S  ebs-csi-node-f47vk                  node-driver-registrar  true   true     0         Running     -          -          -       2022-06-16 00:32:13 -0500 CDT  -
S  ebs-csi-node-fdpjl                  ebs-plugin             true   true     0         Running     -          -          -       2022-06-15 22:28:05 -0500 CDT  -
S  ebs-csi-node-fdpjl                  liveness-probe         true   true     0         Running     -          -          -       2022-06-15 22:28:09 -0500 CDT  -
S  ebs-csi-node-fdpjl                  node-driver-registrar  true   true     0         Running     -          -          -       2022-06-15 22:28:07 -0500 CDT  -
S  ebs-csi-node-lzfqh                  ebs-plugin             true   true     0         Running     -          -          -       2022-06-15 22:28:13 -0500 CDT  -
S  ebs-csi-node-lzfqh                  liveness-probe         true   true     0         Running     -          -          -       2022-06-15 22:28:15 -0500 CDT  -
S  ebs-csi-node-lzfqh                  node-driver-registrar  true   true     0         Running     -          -          -       2022-06-15 22:28:14 -0500 CDT  -
S  ebs-csi-node-ntfq8                  ebs-plugin             true   true     0         Running     -          -          -       2022-06-15 22:28:12 -0500 CDT  -
S  ebs-csi-node-ntfq8                  liveness-probe         true   true     0         Running     -          -          -       2022-06-15 22:28:16 -0500 CDT  -
S  ebs-csi-node-ntfq8                  node-driver-registrar  true   true     0         Running     -          -          -       2022-06-15 22:28:13 -0500 CDT  -
S  kube-proxy-9cw8f                    kube-proxy             true   true     0         Running     -          -          -       2022-06-15 22:27:38 -0500 CDT  -
S  kube-proxy-dq7q7                    kube-proxy             true   true     0         Running     -          -          -       2022-06-15 22:27:46 -0500 CDT  -
S  kube-proxy-fkq8q                    kube-proxy             true   true     0         Running     -          -          -       2022-06-16 00:31:53 -0500 CDT  -
S  kube-proxy-z4wvs                    kube-proxy             true   true     0         Running     -          -          -       2022-06-15 22:27:49 -0500 CDT  -

kubectl ice cpu - incorrect value (v0.0.9+)

Hi,

After updating from v0.0.8 to v0.0.10, the command kubectl ice cpu always shows 1000 for requests/limits.
I had to downgrade to v0.0.8, since v0.0.9 has the same issue.

Example:
v0.0.8

T  PODNAME                                               CONTAINER                     USED  REQUEST  LIMIT     %REQ  %LIMIT
S  api-redis-master-0                                    api-redis                     16    125m     500m      15.44 -

v0.0.9+

T  PODNAME                                               CONTAINER                     USED  REQUEST  LIMIT     %REQ  %LIMIT
S  api-redis-master-0                                    api-redis                     16    1000     1000      15.15 -

Thanks for the plugin!

Kind regards,
Sergei

memory used format

It would be nice to have a consistent format in the USED memory column: either all in Gi or all in Mi.

➜ k ice mem -A --match 'used>0' --sort '!USED'
T  PODNAME                                                   CONTAINER                                 USED      REQUEST  LIMIT  %REQ  %LIMIT
S  protct-backend-v1-5648c9d8df-nx9nd                        backend                                   0.60Gi    0        1Gi    -     59.98
S  prometheus-server-6686598cfb-s7c9q                        prometheus-server                         0.54Gi    0        0      -     -
S  argo-cd-argocd-application-controller-0                   application-controller                    0.18Gi    0        0      -     -
S  protct-celery-worker-v1-8664579cd5-xhxqk                  celery-worker                             0.16Gi    0        1Gi    -     15.88
S  loki-0                                                    loki                                      116.70Mi  0        0      -     -
S  protct-celery-beat-v1-6686dfc9df-lw48m                    celery-beat                               0.09Gi    0        1Gi    -     8.96
S  ingress-nginx-controller-v1-68fc5bfbbb-jmrvr              controller                                0.08Gi    256Mi    0      33.06 -
S  promtail-d4gqw                                            promtail                                  54.18Mi   128Mi    512Mi  42.33 10.58
S  grafana-677f589788-frwgp                                  grafana                                   0.05Gi    0        0      -     -
S  promtail-8np94                                            promtail                                  0.05Gi    128Mi    512Mi  37.78 9.45
S  aws-node-zgfmr                                            aws-node                                  0.04Gi    0        0      -     -
S  aws-node-jrns2                                            aws-node                                  0.04Gi    0        0      -     -
S  aws-node-gjfsx                                            aws-node                                  43.57Mi   0        0      -     -
S  promtail-w9dnz                                            promtail                                  0.04Gi    128Mi    512Mi  33.61 8.40
S  aws-node-zn96w                                            aws-node                                  0.04Gi    0        0      -     -
S  argo-cd-argocd-server-9b954f7bb-45wh2                     server                                    0.04Gi    64Mi     512Mi  65.33 8.17
S  argo-cd-argocd-repo-server-f7f6f7b9b-cgmhq                repo-server                               0.04Gi    0        0      -     -
S  karpenter-55cccdf8d5-lsh69                                controller                                0.04Gi    250Mi    250Mi  15.27 15.27
S  teleport-68bc766fc4-wqp4w                                 teleport                                  0.04Gi    0        0      -     -
S  promtail-sgp5z                                            promtail                                  0.03Gi    128Mi    512Mi  24.76 6.19
S  tekton-pipelines-controller-557dbc9d45-g2kwn              tekton-pipelines-controller               31.31Mi   0        0      -     -
S  velero-5b8b75d9f5-trq9s                                   velero                                    31.21Mi   128Mi    1000Mi 24.38 3.12
S  cert-manager-cainjector-cbd7f4d85-g65m2                   cert-manager                              29.68Mi   0        0      -     -
S  cert-manager-7665c9bb9f-w2r48                             cert-manager                              28.18Mi   0        0      -     -
S  tekton-pipelines-webhook-597b975f68-lf885                 webhook                                   0.03Gi    100Mi    500Mi  28.11 5.62
S  argo-cd-argocd-dex-server-fd676fbb9-cptq4                 dex-server                                24.30Mi   0        0      -     -
S  tekton-triggers-controller-6b7df75f4f-hqf48               tekton-triggers-controller                23.78Mi   0        0      -     -
S  argo-cd-argocd-notifications-controller-5ff558bdcd-th2z9  notifications-controller                  0.02Gi    0        0      -     -
S  coredns-657694c6f4-blzqr                                  coredns                                   0.02Gi    70Mi     170Mi  28.39 11.69
S  karpenter-55cccdf8d5-lsh69                                webhook                                   0.02Gi    50Mi     50Mi   39.01 39.01
S  metrics-server-d97f885d8-rrd8h                            metrics-server                            18.61Mi   0        0      -     -
S  external-dns-aws-74b7d94758-8bswq                         external-dns                              0.02Gi    0        0      -     -
S  argo-cd-argocd-applicationset-controller-5848d6bc4-bw4qb  applicationset-controller                 0.02Gi    0        0      -     -
S  argo-cd-argocd-redis-cbbc8d467-bzz46                      argo-cd-argocd-redis                      17.41Mi   0        0      -     -
S  prometheus-alertmanager-55dc7b5f9b-zn7bh                  prometheus-alertmanager                   0.02Gi    0        0      -     -
S  kubernetes-event-exporter-68cfd5b88f-258q9                event-exporter                            0.02Gi    25Mi     100Mi  63.05 15.76
S  kube-proxy-hp6hn                                          kube-proxy                                0.01Gi    0        0      -     -
S  tekton-triggers-webhook-cdb76c444-l6n5n                   webhook                                   0.01Gi    0        0      -     -
S  kube-proxy-mzf5b                                          kube-proxy                                14.85Mi   0        0      -     -
S  kube-proxy-jz8dz                                          kube-proxy                                14.84Mi   0        0      -     -
S  prometheus-kube-state-metrics-8f587d557-8qtf5             kube-state-metrics                        14.34Mi   0        0      -     -
S  kube-proxy-cwn8f                                          kube-proxy                                0.01Gi    0        0      -     -
S  cert-manager-webhook-7ccffb55fb-zszn9                     cert-manager                              0.01Gi    0        0      -     -
S  prometheus-node-exporter-qqtg6                            prometheus-node-exporter                  13.43Mi   0        0      -     -
S  ebs-csi-controller-cf8d4fd68-krmq2                        csi-provisioner                           0.01Gi    0        0      -     -
S  ebs-csi-controller-cf8d4fd68-krmq2                        ebs-plugin                                0.01Gi    0        0      -     -
S  el-build-pipeline-event-listener-794f9f494-md9jt          event-listener                            0.01Gi    0        0      -     -
S  prometheus-node-exporter-6l8tw                            prometheus-node-exporter                  0.01Gi    0        0      -     -
S  ebs-csi-controller-cf8d4fd68-krmq2                        csi-resizer                               12.06Mi   0        0      -     -
S  tekton-triggers-core-interceptors-b4c5f8bb8-cmhgq         tekton-triggers-core-interceptors         0.01Gi    0        0      -     -
S  ebs-csi-controller-cf8d4fd68-krmq2                        csi-attacher                              0.01Gi    0        0      -     -
S  prometheus-node-exporter-sfqz2                            prometheus-node-exporter                  0.01Gi    0        0      -     -
S  ebs-csi-node-c9p62                                        ebs-plugin                                0.01Gi    0        0      -     -
S  prometheus-node-exporter-m6xwl                            prometheus-node-exporter                  0.01Gi    0        0      -     -
S  ebs-csi-node-btk6f                                        ebs-plugin                                0.01Gi    0        0      -     -
S  ebs-csi-node-fkk2l                                        ebs-plugin                                0.01Gi    0        0      -     -
S  ebs-csi-node-s86q2                                        ebs-plugin                                0.01Gi    0        0      -     -
S  tekton-dashboard-5b9c556d64-t4kxt                         tekton-dashboard                          9.20Mi    0        0      -     -
S  ebs-csi-node-c9p62                                        liveness-probe                            0.01Gi    0        0      -     -
S  ebs-csi-controller-cf8d4fd68-krmq2                        liveness-probe                            0.01Gi    0        0      -     -
S  prometheus-postgres-exporter-697995f785-j4dcn             prometheus-postgres-exporter              0.01Gi    0        0      -     -
S  ebs-csi-node-fkk2l                                        liveness-probe                            0.01Gi    0        0      -     -
S  ebs-csi-node-btk6f                                        liveness-probe                            0.01Gi    0        0      -     -
S  ebs-csi-node-s86q2                                        liveness-probe                            0.01Gi    0        0      -     -
S  protct-frontend-v1-6b8dbb985d-w7stb                       frontend                                  5.08Mi    0        128Mi  -     3.97
S  ebs-csi-node-s86q2                                        node-driver-registrar                     3.69Mi    0        0      -     -
S  ebs-csi-node-fkk2l                                        node-driver-registrar                     3.65Mi    0        0      -     -
S  ebs-csi-node-btk6f                                        node-driver-registrar                     3.58Mi    0        0      -     -
S  ebs-csi-node-c9p62                                        node-driver-registrar                     3.46Mi    0        0      -     -
S  prometheus-server-6686598cfb-s7c9q                        prometheus-server-configmap-reload        3.04Mi    0        0      -     -
S  prometheus-alertmanager-55dc7b5f9b-zn7bh                  prometheus-alertmanager-configmap-reload  2.90Mi    0        0      -     -
S  protct-staging-psql-port-forward-6d98779c97-vsmrb         psql-port-forward                         1.10Mi    0        256Mi  -     0.43
S  protct-staging-psql-5dd99d4c9b-kwcnn                      psql                                      1.06Mi    0        256Mi  -     0.42
S  protct-staging-psql-5dd99d4c9b-kwcnn                      aws                                       1.06Mi    0        256Mi  -     0.41
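In the meantime, mixed Mi/Gi values can be normalized by hand; a small sketch (to_mi is a hypothetical helper, not part of kubectl-ice):

```shell
# Convert a value given in Gi to Mi; pass Mi values through unchanged.
to_mi() {
  case "$1" in
    *Gi) awk -v v="${1%Gi}" 'BEGIN { printf "%.2fMi\n", v * 1024 }' ;;
    *Mi) printf '%s\n' "$1" ;;
  esac
}

to_mi 0.60Gi    # → 614.40Mi
to_mi 54.18Mi   # → 54.18Mi
```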

status: revisualize to enhance UX

Thanks for such a great tool! I have some feedback about the status command.


  1. Use a tree structure

Currently, the status command prints a big table that is slightly complicated and hard to scan. It would be great to list all containers in a tree structure, for example:

NAMESPACE  NAME                          READY  RUNNING  REASON  AGE
aa         Pod/foo-6f67dcc579-znb55      True   -                10m
aa         └─Container/first-container   -      True
aa         └─Container/second-container  -      True
  2. Timestamp is too detailed
 TIMESTAMP
 2022-06-17 10:34:57 +0300 +03

Do we really need to see the date and seconds? We could add a new flag called --details to expand the timestamp; otherwise just the time of day is fine: 10:34:57

  3. Is MESSAGE really necessary?

I am not sure how useful the MESSAGE column is, since the terminal has limited width. Currently I see all values set to -.

Any thoughts?
