
thanos-receive-controller's Introduction

Observatorium


Configuration for a Multi-Tenant, Flexible, Scalable Observability Backend

Observatorium allows you to effectively run and operate a multi-tenant, easy-to-operate, scalable, open source observability system on Kubernetes. This system lets you ingest, store, and use common observability signals like metrics, logs, and traces. Observatorium is a "meta project" that allows you to manage, integrate, and combine multiple well-established existing projects such as Thanos, Loki, Tempo/Jaeger, and Open Policy Agent under a single consistent system with well-defined tenancy APIs and signal correlation capabilities.

As active maintainers of and contributors to the underlying projects, we created a reference configuration, with extra software that connects those open source solutions into one unified, easy-to-use service. It fills the gaps between those projects, adding the consistency, multi-tenancy, security, and resiliency pieces that a robust backend needs.

Read more in the High Level Architecture docs.

Context

As the Red Hat Monitoring Team, we have been focused on observability software and concepts since the CoreOS acquisition. From the beginning, one of our main goals was to establish stable in-cluster metric collection, querying, and alerting for OpenShift clusters. With the growth of managed OpenShift (OSD) clusters, the team's scope has extended: we had to develop a scalable, global metric stack that can run in local as well as central locations for monitoring and telemetry purposes. We also worked together with the Red Hat Logging and Tracing teams to implement something similar for logging and tracing. We’re also working on Continuous Profiling aspects.

From the very beginning, our teams have leveraged Open Source to accomplish all those goals. We believe that working with the communities is the best way to build long-term, successful systems, share knowledge, and establish solid APIs. You might not have seen us, but members of our teams actively maintain and contribute to major Open Source standards and projects like Prometheus, Thanos, Loki, Grafana, kube-state-metrics (KSM), prometheus-operator, kube-prometheus, Alertmanager, cluster-monitoring-operator (CMO), OpenMetrics, Jaeger, ConProf, Cortex, CNCF SIG Observability, Kubernetes SIG Instrumentation, and more.

What's Included

  • Observatorium is primarily defined in Jsonnet, which allows great flexibility and reusability. The main configuration resources are stored in the components directory, and they import further official resources like kube-thanos.

  • We are aware that not everybody speaks Jsonnet, and not everybody has their own GitOps pipeline, so we designed alternative deployments based on the main Jsonnet resources. The Operator project delivers a plain Kubernetes Operator that operates Observatorium.

NOTE: Observatorium is a set of cloud-native, mostly stateless components that for the most part do not require special operating logic. For those operations that do require automation, specialized controllers were designed. Use the Operator only if it is your primary installation mechanism or if you don't have a CI pipeline.

NOTE 2: The Operator is under heavy development. There are already plans to streamline its usage and redesign the current CustomResourceDefinition in the next version. Still, it's currently used in production by several bigger users, so any changes will be made with care.

  • The Thanos Receive Controller is a Kubernetes controller written in Go that distributes essential tenancy configuration to the desired pods.

  • The API is the entry point of the Observatorium service. It's a lightweight proxy written in Go that helps with multi-tenancy (isolation, cross-tenant requests, rate-limiting, roles, tracing). This proxy should be used for all external traffic to Observatorium.

  • OPA-AMS is our Go library for integrating Open Policy Agent with the Red Hat authorization service, for a smooth OpenShift experience.

  • up is a useful Go service that periodically queries Observatorium and outputs vital metrics on the Observatorium read path's health and performance over time.

  • token-refresher is a simple Go CLI for performing the OIDC token refresh flow.

Getting Started

Status: Work In Progress

While the metrics and logging parts, based on Thanos and Loki, are used in production at Red Hat, documentation, the full design, user guides, and support for different configurations are still in progress.

Stay Tuned!

Missing something or not sure?

Let us know! Visit our Slack channel or open a GitHub issue!

thanos-receive-controller's People

Contributors

abohne, alex1989hu, andreassko, brancz, bwplotka, c10l, clyang82, dependabot[bot], douglascamata, hayk96, jacobbaungard, jmichalek132, kakkoyun, matej-g, metalmatze, morvencao, onprem, philipgough, r0mdau, songleo, squat, tekicode, vanugrah, yeya24


thanos-receive-controller's Issues

--allow-dynamic-scaling does not respond to pod disruptions

In the README, regarding --allow-dynamic-scaling:

By default, the controller does not react to voluntary/involuntary disruptions to receiver replicas in the StatefulSet. This flag allows the user to enable this behavior. When enabled, the controller will react to voluntary/involuntary disruptions to receiver replicas in the StatefulSet. When a Pod is marked for termination, the controller will remove it from the hashring and the replica essentially becomes a "router" for the hashring. When a Pod is deleted, the controller will remove it from the hashring. When a Pod becomes unready, the controller will remove it from the hashring. This behaviour can be considered for use alongside the Ketama hashing algorithm.

Two of the quoted claims are incorrect, namely that the controller removes a Pod from the hashring when it is deleted and when it becomes unready: the controller does not have a podInformer subscribed to receive updates from the pods associated with the hashring.

As such, the allow-dynamic-scaling flag only responds to changes in the replica count of the StatefulSet. That only happens when the StatefulSet itself is updated, which is separate from the health of its pods.

I've explored adding a podInformer, updating the configmapInformer, and reworking the logic around how pods are chosen while keeping backwards compatibility.

But I've seen a lot of previous discussion and issues about this and related problems. Is this seen as a problem? (It is to me.) If so, what opinions do others have on how the controller should behave in this situation?
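For illustration, here is a rough sketch of what such a podInformer could look like; the enqueue callback, function names, and wiring are hypothetical, not the controller's actual code:

package controller

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// podReady reports whether the Pod's Ready condition is true.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// watchReceivePods triggers a reconciliation whenever a receive pod is
// deleted or changes readiness, instead of waiting for a StatefulSet update.
func watchReceivePods(client kubernetes.Interface, namespace string, enqueue func()) cache.SharedIndexInformer {
	factory := informers.NewSharedInformerFactoryWithOptions(client, 0, informers.WithNamespace(namespace))
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			oldPod, ok1 := oldObj.(*corev1.Pod)
			newPod, ok2 := newObj.(*corev1.Pod)
			if ok1 && ok2 && podReady(oldPod) != podReady(newPod) {
				enqueue() // readiness changed: hashring membership may need updating
			}
		},
		DeleteFunc: func(obj interface{}) { enqueue() },
	})
	return podInformer
}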

Data race in Controller, sending on closed channel

Running go test -v -race ./... throws an error:

=== RUN   TestController/OneHashringNoStatefulSet
==================
WARNING: DATA RACE
Write at 0x00c0001ec730 by goroutine 20:
  runtime.closechan()
      /usr/lib/go/src/runtime/chan.go:334 +0x0
  github.com/observatorium/thanos-receive-controller.(*controller).run()
      /home/metalmatze/src/github.com/observatorium/thanos-receive-controller/main.go:186 +0x550
  github.com/observatorium/thanos-receive-controller.TestController.func1.1()
      /home/metalmatze/src/github.com/observatorium/thanos-receive-controller/main_test.go:154 +0x46

Previous read at 0x00c0001ec730 by goroutine 42:
  runtime.chansend()
      /usr/lib/go/src/runtime/chan.go:142 +0x0
  github.com/observatorium/thanos-receive-controller.(*controller).addWorkItem()
      /home/metalmatze/src/github.com/observatorium/thanos-receive-controller/main.go:194 +0x5a
  github.com/observatorium/thanos-receive-controller.(*controller).run.func3()
      /home/metalmatze/src/github.com/observatorium/thanos-receive-controller/main.go:177 +0x41
  k8s.io/client-go/tools/cache.(*ResourceEventHandlerFuncs).OnUpdate()
      /home/metalmatze/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:202 +0xa6
  k8s.io/client-go/tools/cache.(*processorListener).run.func1.1()
      /home/metalmatze/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:605 +0x31d
  k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff()
      /home/metalmatze/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:284 +0x5e
  k8s.io/client-go/tools/cache.(*processorListener).run.func1()
      /home/metalmatze/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:601 +0xdb
  k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1()
      /home/metalmatze/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152 +0x61
  k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      /home/metalmatze/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153 +0x108
  k8s.io/client-go/tools/cache.(*processorListener).run()
      /home/metalmatze/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88 +0xa9
  k8s.io/client-go/tools/cache.(*processorListener).run-fm()
      /home/metalmatze/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:593 +0x41
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
      /home/metalmatze/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x5c

Goroutine 20 (running) created at:
  github.com/observatorium/thanos-receive-controller.TestController.func1()
      /home/metalmatze/src/github.com/observatorium/thanos-receive-controller/main_test.go:153 +0x39a
  testing.tRunner()
      /usr/lib/go/src/testing/testing.go:865 +0x163

Goroutine 42 (running) created at:
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start()
      /home/metalmatze/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:69 +0x6f
  k8s.io/client-go/tools/cache.(*sharedProcessor).addListener()
      /home/metalmatze/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:443 +0x2dd
  k8s.io/client-go/tools/cache.(*sharedIndexInformer).AddEventHandlerWithResyncPeriod()
      /home/metalmatze/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:391 +0x280
  k8s.io/client-go/tools/cache.(*sharedIndexInformer).AddEventHandler()
      /home/metalmatze/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:327 +0x69
  github.com/observatorium/thanos-receive-controller.(*controller).run()
      /home/metalmatze/src/github.com/observatorium/thanos-receive-controller/main.go:174 +0x365
  github.com/observatorium/thanos-receive-controller.TestController.func1.1()
      /home/metalmatze/src/github.com/observatorium/thanos-receive-controller/main_test.go:154 +0x46

Something we should fix and then always run go test with -race. 😊

/cc @brancz @squat
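For reference, one minimal pattern that avoids this class of race, sketched with hypothetical names rather than as the repo's actual fix: only ever close a dedicated stop channel and guard every send with a select, so addWorkItem can never send on a channel that is being closed.

package controller

type controller struct {
	work chan struct{} // work items for the reconciliation loop; never closed
	stop chan struct{} // closed exactly once on shutdown
}

func (c *controller) addWorkItem() {
	select {
	case c.work <- struct{}{}:
	case <-c.stop:
		// Shutting down: drop the event instead of racing a close().
	}
}

func (c *controller) shutdown() {
	close(c.stop) // signals all senders; c.work itself is never closed
}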

Issues updating Pod annotation during reconciliation

Having rolled out 02aec09ce44b2f26ec9364469c2c6396f58702eb and configured the boolean flag --annotate-pods-on-change, I see the following log:

level=error caller=main.go:744 ts=2023-03-30T14:55:18.653695011Z msg="failed to update pod" err="Operation cannot be fulfilled on pods \"observatorium-thanos-receive-default-0\": the object has been modified; please apply your changes to the latest version and try again"
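That message is the apiserver's optimistic-concurrency conflict error. A common remedy, sketched here with assumed names rather than the controller's actual code, is to wrap the update in client-go's RetryOnConflict so the pod is re-read at the latest resourceVersion before the annotation is re-applied:

package controller

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// annotatePod sets key=value on the pod, retrying on update conflicts.
func annotatePod(ctx context.Context, client kubernetes.Interface, namespace, name, key, value string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read the pod on every attempt to pick up the latest resourceVersion.
		pod, err := client.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Annotations == nil {
			pod.Annotations = map[string]string{}
		}
		pod.Annotations[key] = value
		_, err = client.CoreV1().Pods(namespace).Update(ctx, pod, metav1.UpdateOptions{})
		return err // conflict errors are retried by RetryOnConflict
	})
}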

RFE: Thanos operator

I was wondering if it makes sense to build a Kubernetes operator that would help with maintenance and configuration of the controller and other components such as receive, querier, and compactor. This will require a lot of resources, and I'm willing to help as much as I can.

I'm just throwing it out here to stir up a conversation to see if what I'm proposing/requesting actually makes sense.

Image runs as root

Using thanos-receive-controller in a security-enhanced Kubernetes (or OpenShift) cluster (e.g. with an active PodSecurityPolicy) requires non-root containers. It is also common to use a scratch/distroless base image to reduce the attack surface and produce a smaller final image.

Vulnerabilities in latest docker image

# grype --only-fixed quay.io/observatorium/thanos-receive-controller:main-2023-11-06-c57219e
 βœ” Vulnerability DB                [no update available]
 βœ” Pulled image
 βœ” Loaded image                    quay.io/observatorium/thanos-receive-controller:main-2023-11-06-c57219e
 βœ” Parsed image                    sha256:3788f75bd36ad57a71cc8f547ada4ccd9c3eed7d9f6185d2f0082521eb5aee5f
 βœ” Cataloged contents              63a03728a2f951929d49eee395b4914c8ddcd2bc31ed93be7bafc2f129656751
   β”œβ”€β”€ βœ” Packages                        [96 packages]
   └── βœ” Executables                     [1 executables]
 βœ” Scanned for vulnerabilities     [22 vulnerability matches]
   β”œβ”€β”€ by severity: 0 critical, 7 high, 9 medium, 0 low, 0 negligible (6 unknown)
   └── by status:   7 fixed, 15 not-fixed, 0 ignored
NAME                        INSTALLED  FIXED-IN  TYPE       VULNERABILITY        SEVERITY
golang.org/x/crypto         v0.1.0     0.17.0    go-module  GHSA-45x7-px36-x8w8  Medium
golang.org/x/net            v0.7.0     0.17.0    go-module  GHSA-4374-p667-p6c8  High
golang.org/x/net            v0.7.0     0.17.0    go-module  GHSA-qppj-fm5r-hxr3  Medium
golang.org/x/net            v0.7.0     0.13.0    go-module  GHSA-2wrh-6pvc-2jm9  Medium
google.golang.org/grpc      v1.40.0    1.56.3    go-module  GHSA-m425-mq94-257g  High
google.golang.org/grpc      v1.40.0    1.56.3    go-module  GHSA-qppj-fm5r-hxr3  Medium
google.golang.org/protobuf  v1.28.1    1.33.0    go-module  GHSA-8r3f-844c-mc37  Medium

Move to Different CI

No one has access to DroneCI, it seems deprecated, and it is heavily rate-limited. We can't easily retrigger builds, etc. Let's fix this.

I don't see any magic requirements, so GH actions would do (or CircleCI).

Help wanted (:

No Warn Message on Missing Statefulset Labels

There is an improvement to logging that I would recommend implementing.

The StatefulSet for the thanos-receiver has to have the label key controller.receive.thanos.io/hashring in order for the controller to work correctly. However, if no StatefulSet has a matching label, the controller does not log a warning. There should be a logging message that covers this.
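A sketch of what such a warning could look like, assuming go-kit logging as the controller's log output suggests (the function and its wiring are hypothetical):

package controller

import (
	"context"

	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// warnIfNoHashringStatefulSets logs a warning when no StatefulSet in the
// namespace carries the label key the controller requires.
func warnIfNoHashringStatefulSets(ctx context.Context, logger log.Logger, client kubernetes.Interface, namespace string) error {
	sts, err := client.AppsV1().StatefulSets(namespace).List(ctx, metav1.ListOptions{
		// A bare key is an existence selector: it matches any label value.
		LabelSelector: "controller.receive.thanos.io/hashring",
	})
	if err != nil {
		return err
	}
	if len(sts.Items) == 0 {
		level.Warn(logger).Log(
			"msg", "no StatefulSet found with the required hashring label",
			"label", "controller.receive.thanos.io/hashring",
			"namespace", namespace,
		)
	}
	return nil
}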

Multiple receivers in one controller

Hey, I'm currently in the process of setting up a multi-tenant Thanos deployment.

Our plan is to have a unique object store per tenant, both to easily see how much each customer costs and to keep data separated.
My understanding is that I can't have multiple object stores in one receiver, even though I can have multiple hashrings.

So my next idea was to have multiple receiver StatefulSets. But taking a deeper look at the controller, my understanding is that it only supports one receiver per controller due to how the ConfigMap is managed.
Is this correct?

Any interest in supporting multiple statefulsets in one controller?

Got receive error

I got this error from receive. How can I solve it?

level=error ts=2021-04-15T07:31:53.789289648Z caller=handler.go:331 component=receive component=receive-handler err="forwarding request to endpoint http://receive-dev.thanos.svc.cluster.local:19291/api/v1/receive: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: address http://receive-dev.thanos.svc.cluster.local:19291/api/v1/receive: too many colons in address\"" msg="internal server error"

Do the endpoints in hashring.json need a protocol?
ref: https://github.com/dsayan154/thanos-receiver-demo

The generated hashrings.json content is:

apiVersion: v1
data:
  hashrings.json: '[{"hashring":"receive-dev","tenants":["receive-dev"],"endpoints":["http://receive-dev.thanos.svc.cluster.local:19291/api/v1/receive"]}]'
kind: ConfigMap
...
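"too many colons in address" is the error Go's net.SplitHostPort returns when it is handed a URL instead of a bare host:port, which suggests the hashring endpoints are dialed as gRPC addresses and must carry no scheme or path. A small demonstration (the gRPC port 10901 is an assumption based on typical kube-thanos setups; check your own receive configuration):

package main

import (
	"fmt"
	"net"
)

func main() {
	// The endpoint from the generated ConfigMap above: a full URL.
	bad := "http://receive-dev.thanos.svc.cluster.local:19291/api/v1/receive"
	// A plain gRPC host:port, as hashring endpoints are typically written
	// (port 10901 is an assumption; check your receive gRPC port).
	good := "receive-dev.thanos.svc.cluster.local:10901"

	for _, ep := range []string{bad, good} {
		if _, _, err := net.SplitHostPort(ep); err != nil {
			fmt.Printf("%s -> %v\n", ep, err) // "too many colons in address"
			continue
		}
		fmt.Printf("%s -> ok\n", ep)
	}
}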

Support annotating pods on change for thanos receiver router and ingestor setup

Currently, thanos-receive-controller uses the same label to watch receiver pods and update them on hashring changes.

However, in a Thanos receiver router and ingestor setup (https://thanos.io/tip/proposals-accepted/202012-receive-split.md/), while we need to watch the ingestor pods to update the hashring configmap, the hashring configmap gets used in the router pods. In this scenario, the router pods need to be annotated on hashring change instead of the ingestor pods.

For thanos-receive-controller to support this Thanos router+ingestor setup, it needs an option to specify a separate pod label (different from the pod label used to watch the receiver ingestor pods) to use for annotating pods on hashring changes.
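As a sketch of the shape this option could take, here is a hypothetical flag pair (the names are made up, not existing controller flags): one selector for discovering the ingestor pods, and an optional second selector for the pods to annotate.

package main

import (
	"flag"
	"fmt"
)

var (
	// Today, discovery and annotation targets come from the same label;
	// this pair is a hypothetical split.
	watchLabel = flag.String("pod-label",
		"controller.receive.thanos.io=thanos-receive-controller",
		"Label selector for the receive ingestor pods to watch.")
	annotateLabel = flag.String("annotate-pods-label", "",
		"Optional label selector for the pods to annotate on hashring changes (e.g. the routers).")
)

func main() {
	flag.Parse()
	if *annotateLabel == "" {
		*annotateLabel = *watchLabel // preserve the current single-label behavior
	}
	fmt.Printf("watching %q, annotating %q\n", *watchLabel, *annotateLabel)
}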

Change the namespace the controller listens to

Hi,
I have the issue that my Thanos is running in a different namespace, not the default one configured in main.go.
Is there a flag with which I can change it from "default" to something else?

Vulnerabilities in latest docker image

# grype --only-fixed quay.io/observatorium/thanos-receive-controller
 βœ” Vulnerability DB        [no update available]
 βœ” Loaded image
 βœ” Parsed image
 βœ” Cataloged packages      [88 packages]
 βœ” Scanned image           [12 vulnerabilities]

NAME                              INSTALLED                             FIXED-IN        TYPE       VULNERABILITY        SEVERITY
github.com/prometheus/prometheus  v1.8.2-0.20211119115433-692a54649ed7                  go-module  CVE-2019-3826        Medium (suppressed)
k8s.io/kubernetes                 v1.13.0                                               go-module  GHSA-74j8-88mm-7496  Medium (suppressed)
k8s.io/kubernetes                 v1.13.0                                               go-module  GHSA-j9wf-vvm6-4r9w  Medium (suppressed)
k8s.io/kubernetes                 v1.13.0                                               go-module  GHSA-vw47-mr44-3jf9  Low (suppressed)
k8s.io/kubernetes                 v1.13.0                               1.16.11         go-module  GHSA-wqv3-8cm6-h6wg  High
k8s.io/kubernetes                 v1.13.0                               1.18.18         go-module  GHSA-g42g-737j-qx6j  Medium
k8s.io/kubernetes                 v1.13.0                               1.18.19         go-module  GHSA-qh36-44jv-c8xj  Low
k8s.io/kubernetes                 v1.13.0                               1.19.15         go-module  GHSA-f5f7-6478-qm6p  High
k8s.io/kubernetes                 v1.13.0                               1.20.0-alpha.1  go-module  GHSA-8mjg-8c8g-6h85  Medium
k8s.io/kubernetes                 v1.13.0                               1.20.0-alpha.2  go-module  GHSA-8cfg-vx93-jvxw  Medium

Allow configuration of waitForPod timeout

Hello!

Is it possible to allow the configuration of the waitForPod timeout?

return wait.PollImmediate(time.Second, time.Minute, func() (bool, error) {

It's currently hardcoded to 1 minute, but pods in my cluster can take longer than that to start up (e.g. when spinning up a new node), resulting in this warning: level=warn caller=main.go:573 ts=2024-01-11T09:24:33.383554728Z msg="failed polling until pod is ready" pod=thanos-receive-3 duration=1m0.010513876s err="timed out waiting for the condition". This results in the hashring ConfigMap being updated before all pods are ready.

FYI we have --allow-only-ready-replicas enabled and are on image version main-2023-11-06-c57219e.
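A sketch of the requested knob, with a made-up flag name (--pod-ready-timeout); only the PollImmediate call mirrors the line quoted above:

package main

import (
	"flag"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

var podReadyTimeout = flag.Duration("pod-ready-timeout", time.Minute,
	"How long to wait for a receive pod to become ready before giving up.")

// waitForPod polls once per second until isReady reports done, using the
// flag-provided timeout instead of the hardcoded time.Minute.
func waitForPod(isReady wait.ConditionFunc) error {
	return wait.PollImmediate(time.Second, *podReadyTimeout, isReady)
}

func main() {
	flag.Parse()
	fmt.Println(waitForPod(func() (bool, error) { return true, nil }))
}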

Question: Labelling the Thanos receivers StatefulSet

At the end of the README, it says:

Finally, deploy StatefulSets of Thanos receivers labeled with controller.receive.thanos.io=thanos-receive-controller.

Is the value of the label controller.receive.thanos.io set to thanos-receive-controller because it's hardcoded somewhere?
Or is it because it's the name of the Deployment declared earlier in the documentation?

Extract of the Deployment declaration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: thanos-receive-controller

Thanks for your help

Docker - Support Arm Architecture

We use ARM-based Graviton instances in AWS and need ARM support for running thanos-receive-controller.
It would be nice if the project supported building images for multiple architectures in the GitHub Actions publish pipeline, to support use cases like ours.

Dependency version warning

Dependency line:

github.com/observatorium/thanos-receive-controller --> github.com/thanos-io/thanos --> github.com/bradfitz/gomemcache

github.com/thanos-io/thanos v0.25.2 --> github.com/bradfitz/gomemcache 24332e2

https://github.com/thanos-io/thanos/blob/v0.25.2/go.mod#L210

Background

Repo github.com/thanos-io/thanos at version v0.25.2 uses a replace directive to pin the dependency github.com/bradfitz/gomemcache to version 24332e2.

According to the Go Modules documentation, replace directives in modules other than the main module are ignored when building the main module.
This means such replace usage in a dependency's go.mod is not inherited when building the main module. As a result, observatorium/thanos-receive-controller indirectly relies on bradfitz/gomemcache@a41fca8, which differs from the pinned version that thanos-io/thanos needs.

https://github.com/observatorium/thanos-receive-controller/blob/master/go.mod (line 26)

github.com/bradfitz/gomemcache v0.0.0-20190913173617-a41fca850d0b // indirect

https://github.com/thanos-io/thanos/blob/v0.25.2/go.mod (lines 18 & 210)

github.com/bradfitz/gomemcache v0.0.0-20190913173617-a41fca850d0b
github.com/bradfitz/gomemcache => github.com/themihai/gomemcache v0.0.0-20180902122335-24332e2d58ab

So this is just a reminder, in the hope that you notice this inconsistency.

Solution

1. Bump the version of dependency github.com/thanos-io/thanos

You may try upgrading dependency github.com/thanos-io/thanos to a newer version, which may have eliminated the use of this directive.

2. Add the same replace rule to your go.mod

replace github.com/bradfitz/gomemcache => github.com/themihai/gomemcache v0.0.0-20180902122335-24332e2d58ab

Reconciliation loop fails on save hashring

In an OpenShift 4.12.x environment, we see the following log with the latest commit built from main:

level=error caller=main.go:572 ts=2023-03-30T11:14:16.976340006Z msg="failed to save hashrings" err="configmaps \"observatorium-thanos-receive-controller-tenants-generated\" is forbidden: cannot set an ownerRef on a resource you can't delete: , <nil>"

It appears the creation of an owner ref has been in place since #62, but it looks like we are missing some permissions in the Role created via jsonnet.

Proposal: Move to Thanos-community or Thanos orgs.

I feel this controller is a must-have for any Thanos deployment with receivers (it works only on Kubernetes; something like it is needed for other systems). This means we might want to give it more love as Thanos maintainers. WDYT?

Make lint fails on master

commit 11e63ca

main.go:90:2: assignments should only be cuddled with other assignments (wsl)
main.go:173:2: if statements should only be cuddled with assignments (wsl)
main.go:244:2: if statements should only be cuddled with assignments used in the if statement itself (wsl)
main.go:255:2: return statements should not be cuddled if block has more than two lines (wsl)
main.go:411:2: go statements can only invoke functions assigned on line above (wsl)
main.go:412:2: if statements should only be cuddled with assignments (wsl)
main.go:415:2: expressions should not be cuddled with blocks (wsl)
main.go:425:2: only one cuddle assignment allowed before go statement (wsl)
main.go:428:2: return statements should not be cuddled if block has more than two lines (wsl)
main.go:444:4: assignments should only be cuddled with other assignments (wsl)
main.go:441:2: only one cuddle assignment allowed before range statement (wsl)
main.go:449:2: if statements should only be cuddled with assignments (wsl)
main.go:452:2: expressions should not be cuddled with blocks (wsl)
main.go:453:2: return statements should not be cuddled if block has more than two lines (wsl)
main.go:465:2: only one cuddle assignment allowed before if statement (wsl)
main.go:470:2: assignments should only be cuddled with other assignments (wsl)
main.go:483:3: only one cuddle assignment allowed before if statement (wsl)
main.go:480:2: ranges should only be cuddled with assignments used in the iteration (wsl)
main.go:545:3: expressions should not be cuddled with blocks (wsl)
main.go:539:2: only one cuddle assignment allowed before if statement (wsl)
main.go:572:2: declarations should never be cuddled (wsl)
main.go:573:2: expressions should not be cuddled with declarations or returns (wsl)
main_test.go:194:3: only cuddled expressions if assigning variable or using from line above (wsl)
main_test.go:271:3: only cuddled expressions if assigning variable or using from line above (wsl)
