
clickhouse_exporter's Introduction

Clickhouse Exporter for Prometheus (old clickhouse-server versions)

This is a simple server that periodically scrapes ClickHouse stats and exports them via HTTP for Prometheus consumption.

The exporter is intended only for old ClickHouse versions; modern versions have an embedded Prometheus endpoint. See https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings#server_configuration_parameters-prometheus for details.

To run it:

./clickhouse_exporter [flags]

Help on flags:

./clickhouse_exporter --help

Credentials (if not the default) are supplied via environment variables:

CLICKHOUSE_USER
CLICKHOUSE_PASSWORD

Build Docker image

docker build . -t clickhouse-exporter

Using Docker

docker run -d -p 9116:9116 clickhouse-exporter -scrape_uri=http://clickhouse-url:8123/

Sample dashboard

The Grafana dashboard at https://grafana.com/grafana/dashboards/882-clickhouse can be a starting point for inspiration.

clickhouse_exporter's People

Contributors

aleksi, alexey-milovidov, askomorokhov, bobrik, cherts, dependabot[bot], f1yegor, fuxingzhang, mstrzele, nvartolomei, nyantechnolog, percona-csalguero, slach, tfronn, tkroman, yareach, zhangguanzhang


clickhouse_exporter's Issues

Can't compile

I ran git clone https://github.com/f1yegor/clickhouse_exporter and then ran go get, which gave me the following error:
go get

clickhouse_exporter

./clickhouse_exporter.go:32:25: cannot use e (type *exporter.Exporter) as type "clickhouse_exporter/vendor/github.com/prometheus/client_golang/prometheus".Collector in argument to "clickhouse_exporter/vendor/github.com/prometheus/client_golang/prometheus".MustRegister:
*exporter.Exporter does not implement "clickhouse_exporter/vendor/github.com/prometheus/client_golang/prometheus".Collector (wrong type for Collect method)
have Collect(chan<- "github.com/f1yegor/clickhouse_exporter/vendor/github.com/prometheus/client_golang/prometheus".Metric)
want Collect(chan<- "clickhouse_exporter/vendor/github.com/prometheus/client_golang/prometheus".Metric)
Am I doing something wrong here?

Publishing releases

Hi, Yegor

I wanted to propose the automatic releasing of clickhouse_exporter as it's what people (including me) really want (see, for example, #32). This will make installation of clickhouse_exporter via config management tools very simple, e.g. I could finally add to my Ansible role downloading of clickhouse_exporter from GitHub.

This can be achieved with promu release, but it requires creating the release on GitHub, which in turn can be done by invoking github-release; all of that seems overly complicated.

So instead I want to rework the building and CI a little bit, namely:

  • Remove the promu dependency, because I don't see why it's needed. It's used only with the build command in the Makefile, but the project is already built in the previous line of the Makefile with go install.
  • Fix the Makefile to test all packages with ./.... Currently it doesn't invoke any tests, because the main package doesn't have any.
  • Add goreleaser support to automate releases.
  • Rework Travis configuration:
    • Invoke make test to actually run tests on PRs
    • Create a release with goreleaser when a tag is pushed. This will create downloadable artifacts for a multitude of operating systems and architectures. Even deb and rpm packages can be built!

Let me know what you think because maybe I'm missing something and I'm completely wrong and shouldn't do this.

Thanks!

Create a counter/gauge metric that exposes scrape errors

Right now it's not easy to monitor whether clickhouse_exporter can reach its target.

For example: To solve that problem Prometheus' jmx_exporter exposes a gauge:

# HELP jmx_scrape_error Non-zero if this scrape failed.
# TYPE jmx_scrape_error gauge
jmx_scrape_error 0.0

that goes to 1 when there is a problem scraping its target.

It would be great to have something like this for clickhouse_exporter.

exec /usr/local/bin/clickhouse_exporter: no such file or directory

I built container

docker build --tag myrepo/clickhouse-exporter:latest .

But got an error on running

docker run -ti myrepo/clickhouse-exporter:latest
exec /usr/local/bin/clickhouse_exporter: no such file or directory

After some digging, I found the binary is dynamically linked:

~ $ ldd /usr/local/bin/clickhouse_exporter
        /lib64/ld-linux-x86-64.so.2 (0x7f2b1739c000)
        libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f2b1739c000)

and Alpine has a non-standard libc: https://stackoverflow.com/questions/66963068/docker-alpine-executable-binary-not-found-even-if-in-path

So my solution:

apk add libc6-compat 

cannot parse Float type

Hello, I found a bug in the code.
Values in system.asynchronous_metrics are not always of Int type; they may also be of Float type.
If the code is not modified, the following problem occurs when a Float value appears in the metrics:

Error scraping clickhouse url http://xxx:8123/?query=select+metric%2C+value+from+system.asynchronous_metrics: strconv.Atoi: parsing "2199.998": invalid syntax file=exporter.go line=292

After modifying the code in exporter.go as follows, the problem is solved:
q.Set("query", "select metric, value from system.asynchronous_metrics") -> q.Set("query", "select metric, toInt64(value) from system.asynchronous_metrics")

Dependencies packages are outdated

Can't install the package as described in the documentation.
After downloading and running make, these are the errors I get. I tried downloading packages from prometheus-junkyard, but that still didn't work.

clickhouse_exporter.go:16:2: cannot find package "github.com/prometheus/client_golang/prometheus" in any of:
	/usr/src/github.com/prometheus/client_golang/prometheus (from $GOROOT)
	/home/user/go/src/github.com/prometheus/client_golang/prometheus (from $GOPATH)

not working for clickhouse-server ver. 18.12.5 Changed format

clickhouse-server ver. 18.12.5

clickhouse_exporter error:
clickhouse_exporter[7274]: time="2018-09-07T13:44:19Z" level=info msg="Error scraping clickhouse: Error scraping clickhouse url https://localhost:8443/?query=select+%2A+from+system.metrics: parseKeyValueResponse: unexpected 0 line: Query 1 Number of executing queries" file=exporter.go line=299

Changed format!

old_ver 1.1.54388:

root@clickhouse1 ~ # curl -k  https://localhost:8443/?query=select+%2A+from+system.metrics
Query	1
Merge	0
PartMutation	0
ReplicatedFetch	0
ReplicatedSend	0
ReplicatedChecks	0
BackgroundPoolTask	0
BackgroundSchedulePoolTask	0
DiskSpaceReservedForMerge	0
DistributedSend	0
QueryPreempted	0
TCPConnection	126
HTTPConnection	1
InterserverConnection	0
OpenFileForRead	0
OpenFileForWrite	0
Read	1
Write	0
SendExternalTables	0
QueryThread	0
ReadonlyReplica	0
LeaderReplica	116
MemoryTracking	8704
MemoryTrackingInBackgroundProcessingPool	29216
MemoryTrackingInBackgroundSchedulePool	0
MemoryTrackingForMerges	0
LeaderElection	116
EphemeralNode	232
ZooKeeperSession	1
ZooKeeperWatch	349
ZooKeeperRequest	0
DelayedInserts	0
ContextLockWait	0
StorageBufferRows	0
StorageBufferBytes	0
DictCacheRequests	0
Revision	54388
RWLockWaitingReaders	0
RWLockWaitingWriters	0
RWLockActiveReaders	1
RWLockActiveWriters	0

new_ver 18.12.5:

root@ch1 ~ # curl -k   https://localhost:8443/?query=select+%2A+from+system.metrics
Query	1	Number of executing queries
Merge	0	Number of executing background merges
PartMutation	0	Number of mutations (ALTER DELETE/UPDATE)
ReplicatedFetch	0	Number of data parts fetching from replica
ReplicatedSend	0	Number of data parts sending to replicas
ReplicatedChecks	0	Number of data parts checking for consistency
BackgroundPoolTask	0	Number of active tasks in BackgroundProcessingPool (merges, mutations, fetches or replication queue bookkeeping)
BackgroundSchedulePoolTask	0	Number of active tasks in BackgroundSchedulePool. This pool is used for periodic tasks of ReplicatedMergeTree like cleaning old data parts, altering data parts, replica re-initialization, etc.
DiskSpaceReservedForMerge	0	Disk space reserved for currently running background merges. It is slightly more than total size of currently merging parts.
DistributedSend	0	Number of connections sending data, that was INSERTed to Distributed tables, to remote servers. Both synchronous and asynchronous mode.
QueryPreempted	0	Number of queries that are stopped and waiting due to 'priority' setting.
TCPConnection	0	Number of connections to TCP server (clients with native interface)
HTTPConnection	1	Number of connections to HTTP server
InterserverConnection	0	Number of connections from other replicas to fetch parts
OpenFileForRead	0	Number of files open for reading
OpenFileForWrite	0	Number of files open for writing
Read	1	Number of read (read, pread, io_getevents, etc.) syscalls in fly
Write	0	Number of write (write, pwrite, io_getevents, etc.) syscalls in fly
SendExternalTables	0	Number of connections that are sending data for external tables to remote servers. External tables are used to implement GLOBAL IN and GLOBAL JOIN operators with distributed subqueries.
QueryThread	0	Number of query processing threads
ReadonlyReplica	0	Number of Replicated tables that are currently in readonly state due to re-initialization after ZooKeeper session loss or due to startup without ZooKeeper configured.
LeaderReplica	21	Number of Replicated tables that are leaders. Leader replica is responsible for assigning merges, cleaning old blocks for deduplications and a few more bookkeeping tasks. There may be no more than one leader across all replicas at one moment of time. If there is no leader it will be elected soon or it indicate an issue.
MemoryTracking	69847347	Total amount of memory (bytes) allocated in currently executing queries. Note that some memory allocations may not be accounted.
MemoryTrackingInBackgroundProcessingPool	0	Total amount of memory (bytes) allocated in background processing pool (that is dedicated for backround merges, mutations and fetches). Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. This happens naturally due to caches for tables indexes and doesn't indicate memory leaks.
MemoryTrackingInBackgroundSchedulePool	0	Total amount of memory (bytes) allocated in background schedule pool (that is dedicated for bookkeeping tasks of Replicated tables).
MemoryTrackingForMerges	0	Total amount of memory (bytes) allocated for background merges. Included in MemoryTrackingInBackgroundProcessingPool. Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. This happens naturally due to caches for tables indexes and doesn't indicate memory leaks.
LeaderElection	21	Number of Replicas participating in leader election. Equals to total number of replicas in usual cases.
EphemeralNode	42	Number of ephemeral nodes hold in ZooKeeper.
ZooKeeperSession	1	Number of sessions (connections) to ZooKeeper. Should be no more than one, because using more than one connection to ZooKeeper may lead to bugs due to lack of linearizability (stale reads) that ZooKeeper consistency model allows.
ZooKeeperWatch	64	Number of watches (event subscriptions) in ZooKeeper.
ZooKeeperRequest	0	Number of requests to ZooKeeper in fly.
DelayedInserts	0	Number of INSERT queries that are throttled due to high number of active data parts for partition in a MergeTree table.
ContextLockWait	0	Number of threads waiting for lock in Context. This is global lock.
StorageBufferRows	0	Number of rows in buffers of Buffer tables
StorageBufferBytes	0	Number of bytes in buffers of Buffer tables
DictCacheRequests	0	Number of requests in fly to data sources of dictionaries of cache type.
Revision	54407	Revision of the server. It is a number incremented for every release or release candidate.
RWLockWaitingReaders	0	Number of threads waiting for read on a table RWLock.
RWLockWaitingWriters	0	Number of threads waiting for write on a table RWLock.
RWLockActiveReaders	1	Number of threads holding read lock in a table RWLock.
RWLockActiveWriters	0	Number of threads holding write lock in a table RWLock.

Add customization for name conversion of metrics

User level query stats

Internally we have the following view:

CREATE MATERIALIZED VIEW IF NOT EXISTS system.clickhouse_query_stats_by_user ENGINE = SummingMergeTree(dummy_date, (type, user), 8192) POPULATE AS
SELECT
    toDate('1970-01-01') AS dummy_date,
    type,
    initial_user AS user,
    count(initial_user) AS count,
    sum(query_duration_ms) AS query_duration_ms,
    sum(read_rows) AS read_rows,
    sum(read_bytes) AS read_bytes
FROM system.query_log
GROUP BY
    type,
    user;

This can be used to retrieve query stats on user granularity to find offenders:

-- SELECT
--     type,
--     user,
--     sum(count) AS count,
--     sum(query_duration_ms) AS query_duration_ms,
--     sum(read_rows) AS read_rows,
--     sum(read_bytes) AS read_bytes
-- FROM system.clickhouse_query_stats_by_user
-- GROUP BY
--     type,
--     user
--
-- Example output:
--
-- ┌─type─┬─user─────┬─count─┬─query_duration_ms─┬───read_rows─┬───read_bytes─┐
-- │    2 │ mvavrusa │    78 │            104566 │ 75332316665 │ 422028872602 │
-- │    1 │ mvavrusa │    78 │                 0 │           0 │            0 │
-- │    3 │ mvavrusa │     8 │                 0 │           0 │            0 │
-- └──────┴──────────┴───────┴───────────────────┴─────────────┴──────────────┘
--
-- The column `type` has the following meaning:
--
-- 1 - successful start of query execution
-- 2 - successful end of query execution
-- 3 - exception before start of query execution
-- 4 - exception while query execution

We have a tool that scrapes ClickHouse instances and exports these metrics:

clickhouse_user_query_read_rows{cluster="http",type="end_success",user="mvavrusa"} 660593724120

This seems quite useful to have in the Prometheus exporter itself, but I'm not sure what the best way to have it is. Expect users to create the table themselves? Install the table automatically?

cc @vavrusa

I found a bug where rancher would run and go would return an error

The specific error is as follows:
panic: descriptor Desc{fqName: "clickhouse_block_write_time_dm-0", help: "Number of BlockWriteTime_dm-0 async processed", constLabels: {}, variableLabels: []} is invalid: "clickhouse_block_write_time_dm-0" is not a valid metric name

It seems to be caused by a hyphen in a metric name coming from ClickHouse. Was there a problem with the last commit? It looks like the hyphen in one metric name was not replaced.

When running under plain Docker all metric names are sanitized correctly, but under k8s they seem not to be, which is strange.

Allow using credentials to connect to servers

the ClickHouse HTTP API allows you to pass credentials as query params

curl 'http://localhost:8123?user=bob&password=secret'

or as headers

curl -H "X-ClickHouse-User: user" -H "X-ClickHouse-Key: password"  'http://localhost:8123/'

this exporter doesn't offer any way for its users to pass these credentials.

If you try to run the binary like

clickhouse_exporter -scrape_uri='http://localhost:8123?user=bob&password=secret'

it will try to request the target with invalid URLs like

http://localhost:8123?user=bob&password=secret?query=select%20*%20from%20system.metrics

because of this
https://github.com/f1yegor/clickhouse_exporter/blob/master/clickhouse_exporter.go#L48

Connection refused

Hi! I tried to use the Docker container but got a 'Connection refused' error:

time="2019-01-23T08:10:50Z" level=info msg="Error scraping clickhouse: Error scraping clickhouse url http://localhost:8123?query=select+metric%2C+value+from+system.metrics: Error scraping clickhouse: Get http://localhost:8123?query=select+metric%2C+value+from+system.metrics: dial tcp 127.0.0.1:8123: connect: connection refused" file=exporter.go line=299
time="2019-01-23T08:10:50Z" level=info msg="Starting Server: :9116" file="clickhouse_exporter.go" line=34
time="2019-01-23T08:11:08Z" level=info msg="Error scraping clickhouse: Error scraping clickhouse url http://localhost:8123?query=select+metric%2C+value+from+system.metrics: Error scraping clickhouse: Get http://localhost:8123?query=select+metric%2C+value+from+system.metrics: dial tcp 127.0.0.1:8123: connect: connection refused" file=exporter.go line=299

The curl query works fine:

curl http://localhost:8123\?query\=select+metric%2C+value+from+system.metrics
Query 1
Merge 0
PartMutation 0
ReplicatedFetch 0
ReplicatedSend 0
ReplicatedChecks 0
BackgroundPoolTask 0

hyphens in metric names causing scrape failures

In the latest ClickHouse version, ClickHouse exports metrics about devices, which generates metric names with hyphens in them; these are not valid for Prometheus, so scraping fails:

# HELP clickhouse_block_active_time_dm-0 Number of BlockActiveTime_dm-0 async processed
# TYPE clickhouse_block_active_time_dm-0 gauge

Docker image build is not working

Hello, I am trying to build a Docker image and it fails with the following error message:

Step 5/11 : RUN make init && make
 ---> Running in 408022b2de85
go get -u github.com/AlekSi/gocoverutil
go: downloading github.com/AlekSi/gocoverutil v0.2.0
go: downloading golang.org/x/tools v0.1.9
go get: added github.com/AlekSi/gocoverutil v0.2.0
go get: upgraded golang.org/x/sys v0.0.0-20170710161658-abf9c25f5445 => v0.0.0-20211019181941-9d821ace8654
go get: added golang.org/x/tools v0.1.9
go install -v
go: inconsistent vendoring in /go/src/github.com/ClickHouse/clickhouse_exporter:
        github.com/AlekSi/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
        golang.org/x/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
        golang.org/x/[email protected]: is marked as explicit in vendor/modules.txt, but not explicitly required in go.mod

        To ignore the vendor directory, use -mod=readonly or -mod=mod.
        To sync the vendor directory, run:
                go mod vendor
make: *** [Makefile:7: build] Error 1
The command '/bin/sh -c make init && make' returned a non-zero code: 2

Could you please help?

Please expose number of rows in parts

In addition to clickhouse_table_parts_bytes and clickhouse_table_parts_count, it would be helpful to have clickhouse_table_parts_rows. ClickHouse returns rows in system.parts.

is not a valid metric name

panic: descriptor Desc{fqName: "clickhouse_block_discard_ops_dm-0", help: "Number of BlockDiscardOps_dm-0 async processed", constLabels: {}, variableLabels: []} is invalid: "clickhouse_block_discard_ops_dm-0" is not a valid metric name

clickhouse-server 22.2.2.1

Dangling sockets in a CLOSE_WAIT state

Prometheus can't scrape metrics from the exporter due to lots of open sockets in CLOSE_WAIT state, so the exporter can't accept any further HTTP requests.

Dec 13 19:00:35 localhost prometheus-clickhouse-exporter[61675]: 2018/12/13 19:00:35 http: Accept error: accept tcp 127.0.0.1:3000: accept4: too many open files; retrying in 5ms
Dec 13 19:00:35 localhost prometheus-clickhouse-exporter[61675]: 2018/12/13 19:00:35 http: Accept error: accept tcp 127.0.0.1:3000: accept4: too many open files; retrying in 10ms
Dec 13 19:00:35 localhost prometheus-clickhouse-exporter[61675]: 2018/12/13 19:00:35 http: Accept error: accept tcp 127.0.0.1:3000: accept4: too many open files; retrying in 20ms
Dec 13 19:00:35 localhost prometheus-clickhouse-exporter[61675]: 2018/12/13 19:00:35 http: Accept error: accept tcp 127.0.0.1:3000: accept4: too many open files; retrying in 40ms
Dec 13 19:00:35 localhost prometheus-clickhouse-exporter[61675]: 2018/12/13 19:00:35 http: Accept error: accept tcp 127.0.0.1:3000: accept4: too many open files; retrying in 80ms
Dec 13 19:00:35 localhost prometheus-clickhouse-exporter[61675]: 2018/12/13 19:00:35 http: Accept error: accept tcp 127.0.0.1:3000: accept4: too many open files; retrying in 160ms
Dec 13 19:00:36 localhost prometheus-clickhouse-exporter[61675]: 2018/12/13 19:00:36 http: Accept error: accept tcp 127.0.0.1:3000: accept4: too many open files; retrying in 320ms
Dec 13 19:00:36 localhost prometheus-clickhouse-exporter[61675]: 2018/12/13 19:00:36 http: Accept error: accept tcp 127.0.0.1:3000: accept4: too many open files; retrying in 640ms
Dec 13 19:00:37 localhost prometheus-clickhouse-exporter[61675]: 2018/12/13 19:00:37 http: Accept error: accept tcp 127.0.0.1:3000: accept4: too many open files; retrying in 1s
:~$ sudo netstat -ntp | grep 3010 | grep CLOSE_WAIT | wc -l
3912

prometheus/prometheus#2388

Error scraping clickhouse: Error scraping clickhouse url ***

step 1:
mkdir -p $GOPATH/src/github.com/Percona-Lab
cd $GOPATH/src/github.com/Percona-Lab
git clone https://github.com/Percona-Lab/clickhouse_exporter
step 2: build locally on macOS
cd $GOPATH/src/github.com/Percona-Lab/clickhouse_exporter
GO111MODULE=off CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build clickhouse_exporter.go
step 3: upload to a CentOS 6 server
step 4: run
./clickhouse_exporter -scrape_uri=http://clickhouse_server_ip:8123/ -log.level=info

The error is shown in the attached screenshot (image omitted).
I applied the changes from #54 and #44, but neither worked.


Looking forward to a reply!

disk metrics error

An error occurred when getting metrics (screenshot omitted).

Related SQL (screenshot omitted).

Related merges
added disk metrics

Executing the above SQL in ClickHouse (screenshots omitted), I found that the problem was caused by the query returning 2 rows of data.

solution

  1. Sum across disks:
select
	sum(free_space) as free_space_in_bytes,
	sum(total_space) as total_space_in_bytes
from
	system.disks


  2. Or use the disk name to distinguish the data:
select
	name,
	sum(free_space) as free_space_in_bytes,
	sum(total_space) as total_space_in_bytes
from
	system.disks
group by
	name


Vendor dependencies

It'd be nice to have reproducible builds, so vendored dependencies are needed.

ReadonlyReplica metric

Sometimes the ClickHouseMetrics_ReadonlyReplica metric takes the value -1. What does this mean?

go install problems on CentOS 7

[root@iznpqe8ynbddhpz clickhouse_exporter-master]# go install
clickhouse_exporter.go:9:2: cannot find package "github.com/f1yegor/clickhouse_exporter/exporter" in any of:
/opt/soft/gopath/src/github.com/clickhouse_exporter-master/vendor/github.com/f1yegor/clickhouse_exporter/exporter (vendor tree)
/opt/soft/go/src/github.com/f1yegor/clickhouse_exporter/exporter (from $GOROOT)
/opt/soft/gopath/src/github.com/f1yegor/clickhouse_exporter/exporter (from $GOPATH)
