
pg_easy_replicate

pg_easy_replicate is a CLI orchestrator tool that simplifies setting up logical replication between two PostgreSQL databases. pg_easy_replicate also supports switchover: once the source database (the primary) is fully replicated, pg_easy_replicate puts it into read-only mode and, via logical replication, flushes all remaining data to the new target database. This ensures zero data loss and minimal downtime for the application. This method is useful for minimal-downtime (often under a minute, depending on the workload) major version upgrades in a Blue/Green PostgreSQL database setup, load testing, and other similar use cases.

Battle tested in production at Tines πŸš€

Installation

Add this line to your application's Gemfile:

gem "pg_easy_replicate"

And then execute:

$ bundle install

Or install it yourself as:

$ gem install pg_easy_replicate

This installs all dependencies as well. Make sure the following requirements are satisfied.

Or via Docker:

docker pull shayonj/pg_easy_replicate:latest

https://hub.docker.com/r/shayonj/pg_easy_replicate

Requirements

  • PostgreSQL 10 and later
  • Ruby 3.0 and later
  • Database users should have SUPERUSER permissions, or you can pass in a special user with privileges to create the needed role, schema, publication and subscription on both databases. See the --special-user-role section below.
  • See more on FAQ below

Limits

All Logical Replication Restrictions apply.

Usage

Ensure SOURCE_DB_URL and TARGET_DB_URL are present as environment variables in the runtime environment. The URLs use the PostgreSQL connection string format. Example:

$ export SOURCE_DB_URL="postgres://USERNAME:PASSWORD@localhost:5432/DATABASE_NAME"
$ export TARGET_DB_URL="postgres://USERNAME:PASSWORD@localhost:5433/DATABASE_NAME"

Optional

You can extend the default timeout by setting the following environment variable:

$ export PG_EASY_REPLICATE_STATEMENT_TIMEOUT="10s" # default 5s

Any pg_easy_replicate command can be run the same way with the Docker image, as long as the container is running in an environment where it has access to both databases. Example:

docker run -e SOURCE_DB_URL="postgres://USERNAME:PASSWORD@localhost:5432/DATABASE_NAME"  \
  -e TARGET_DB_URL="postgres://USERNAME:PASSWORD@localhost:5433/DATABASE_NAME" \
  -it --rm shayonj/pg_easy_replicate:latest \
  pg_easy_replicate config_check

CLI

$  pg_easy_replicate
pg_easy_replicate commands:
  pg_easy_replicate bootstrap -g, --group-name=GROUP_NAME    # Sets up temporary tables for information required during runtime
  pg_easy_replicate cleanup -g, --group-name=GROUP_NAME      # Cleans up all bootstrapped data for the respective group
  pg_easy_replicate config_check                             # Prints if source and target database have the required config
  pg_easy_replicate help [COMMAND]                           # Describe available commands or one specific command
  pg_easy_replicate start_sync -g, --group-name=GROUP_NAME   # Starts the logical replication from source database to target database provisioned in the group
  pg_easy_replicate stats  -g, --group-name=GROUP_NAME       # Prints the statistics in JSON for the group
  pg_easy_replicate stop_sync -g, --group-name=GROUP_NAME    # Stop the logical replication from source database to target database provisioned in the group
  pg_easy_replicate switchover  -g, --group-name=GROUP_NAME  # Puts the source database in read only mode after all the data is flushed and written
  pg_easy_replicate version                                  # Prints the version

Replicating all tables with a single group

You can create as many groups as you want for a single database. A group is just a logical isolation of a single replication.

Config check

$ pg_easy_replicate config_check

βœ… Config is looking good.

Bootstrap

Every sync needs to be bootstrapped before you can set up replication between the two databases. Bootstrap creates a new superuser to perform the orchestration required during the rest of the process. It also creates some internal metadata tables for record keeping.

$ pg_easy_replicate bootstrap --group-name database-cluster-1 --copy-schema

{"name":"pg_easy_replicate","hostname":"PKHXQVK6DW","pid":21485,"level":30,"time":"2023-06-19T15:51:11.015-04:00","v":0,"msg":"Setting up schema","version":"0.1.0"}
...

Bootstrap and Config Check with special user role in AWS or GCP

If you don't want your primary login user to have superuser privileges, or you are on AWS or GCP, you will need to pass in a special user role that has the privileges to create roles, schemas, publications and subscriptions. This is required so that pg_easy_replicate can create a dedicated user for replication, which is granted the respective special user role to carry out its functionality.

For AWS the special user role is rds_superuser, and for GCP it is cloudsqlsuperuser. Please refer to your provider's docs for the most up-to-date information.

Note: The user in the connection URL must be a member of the special user role being supplied.
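
For example, on AWS you could grant the special role to your login user ahead of time (a minimal sketch; app_user is a hypothetical login user):

GRANT rds_superuser TO app_user;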

Config Check

$ pg_easy_replicate config_check --special-user-role="rds_superuser" --copy-schema

βœ… Config is looking good.

Bootstrap

$ pg_easy_replicate bootstrap --group-name database-cluster-1 --special-user-role="rds_superuser" --copy-schema

{"name":"pg_easy_replicate","hostname":"PKHXQVK6DW","pid":21485,"level":30,"time":"2023-06-19T15:51:11.015-04:00","v":0,"msg":"Setting up schema","version":"0.1.0"}
...

Start sync

Once the bootstrap is complete, you can start the sync. Starting the sync sets up the publication and subscription, and performs other minor housekeeping.

NOTE: start_sync drops all indices in the target database for performance reasons and automatically re-adds them during switchover. This behavior is on by default; you can opt out with --no-recreate-indices-post-copy.

$ pg_easy_replicate start_sync --group-name database-cluster-1

{"name":"pg_easy_replicate","hostname":"PKHXQVK6DW","pid":22113,"level":30,"time":"2023-06-19T15:54:54.874-04:00","v":0,"msg":"Setting up publication","publication_name":"pger_publication_database_cluster_1","version":"0.1.0"}
...

Stats

You can inspect or watch the stats at any time during the sync process. The stats tell you when the sync started, the current flush/write lag, how many tables are in the replicating, copying or other stages, and more.

You can poll these stats to trigger any follow-up actions once the switchover is done. The stats include a switchover_completed_at field, which is updated once the switchover is complete.

$ pg_easy_replicate stats --group-name database-cluster-1

{
  "lag_stats": [
    {
      "pid": 66,
      "client_addr": "192.168.128.2",
      "user_name": "jamesbond",
      "application_name": "pger_subscription_database_cluster_1",
      "state": "streaming",
      "sync_state": "async",
      "write_lag": "0.0",
      "flush_lag": "0.0",
      "replay_lag": "0.0"
    }
  ],
  "message_lsn_receipts": [
    {
      "received_lsn": "0/1674688",
      "last_msg_send_time": "2023-06-19 19:56:35 UTC",
      "last_msg_receipt_time": "2023-06-19 19:56:35 UTC",
      "latest_end_lsn": "0/1674688",
      "latest_end_time": "2023-06-19 19:56:35 UTC"
    }
  ],
  "sync_started_at": "2023-06-19 19:54:54 UTC",
  "sync_failed_at": null,
  "switchover_completed_at": null

  ....
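
For example, a small wrapper can poll the stats until switchover_completed_at is set (a sketch assuming jq is installed and that the stats output is valid JSON):

until pg_easy_replicate stats --group-name database-cluster-1 \
  | jq -e '.switchover_completed_at != null' > /dev/null; do
  sleep 5
done
echo "switchover complete"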

Performing switchover

pg_easy_replicate doesn't kick off the switchover on its own. When you start the sync via start_sync, it starts the replication between the two databases. Once you have had time to monitor the stats and any other key metrics, you can kick off the switchover.

switchover will wait until all tables in the group are replicating and the lag delta is below 200KB (calculated as the pg_wal_lsn_diff between sent_lsn and write_lsn), and then perform the switch.
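
The underlying lag check is roughly equivalent to this query on the source database (a sketch; the application_name matches the subscription created by start_sync):

SELECT pg_wal_lsn_diff(sent_lsn, write_lsn) AS lag_bytes
FROM pg_stat_replication
WHERE application_name = 'pger_subscription_database_cluster_1';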

Additionally, switchover takes care of re-adding the indices (the ones it removed in start_sync) in the target database beforehand. Depending on the size of the tables, the recreation of indexes (which happens CONCURRENTLY) may take a while. See start_sync for more details.

The switch is made by putting the user on the source database into READ ONLY mode, so that it no longer accepts writes, and then waiting for the flush lag to reach 0. It's up to the user to kick off a rolling restart of their application containers or a DNS failover (more on these strategies below) after the switchover is complete, so that the application isn't sending any read + write requests to the old/source database.
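
One common way to enforce read-only mode at the user level looks like this (a sketch of the general technique; pg_easy_replicate's exact statements may differ, and app_user is a hypothetical application user):

ALTER USER app_user SET default_transaction_read_only = TRUE;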

$ pg_easy_replicate switchover  --group-name database-cluster-1

{"name":"pg_easy_replicate","hostname":"PKHXQVK6DW","pid":24192,"level":30,"time":"2023-06-19T16:05:23.033-04:00","v":0,"msg":"Watching lag stats","version":"0.1.0"}
...

Replicating single database with custom tables

By default all tables are added for replication, but you can create multiple groups with custom tables for the same database. Example:

$ pg_easy_replicate bootstrap --group-name database-cluster-1 --copy-schema
$ pg_easy_replicate start_sync --group-name database-cluster-1 --schema-name public --tables "users, posts, events"

...

$ pg_easy_replicate bootstrap --group-name database-cluster-2 --copy-schema
$ pg_easy_replicate start_sync --group-name database-cluster-2 --schema-name public --tables "comments, views"

...
$ pg_easy_replicate switchover  --group-name database-cluster-1
$ pg_easy_replicate switchover  --group-name database-cluster-2
...

Switchover strategies with minimal downtime

For minimal downtime, it's best to watch/tail the stats and wait until switchover_completed_at is updated with a timestamp. Once that happens, you can perform any of the following strategies. Note: these are just suggestions; pg_easy_replicate doesn't provide any functionality for them.

Rolling restart strategy

In this strategy, you have a change ready to go that instructs your application to start connecting to the new database, for example via an environment variable. Depending on the application type, it may or may not require a rolling restart.

Next, you can set up a program that watches the stats and waits until switchover_completed_at is set. Once that happens, it kicks off a rolling restart of your application containers so they start making connections to the DNS of the new database.
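
As a sketch, building on the polling loop from the Stats section (the kubectl usage and deployment name are assumptions about your environment):

until pg_easy_replicate stats --group-name database-cluster-1 \
  | jq -e '.switchover_completed_at != null' > /dev/null; do sleep 5; done
kubectl rollout restart deployment/my-app   # deployment name is illustrative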

DNS Failover strategy

In this strategy, you have a weighted DNS setup (for example, AWS Route 53 weighted records) where 100% of traffic goes to a primary origin and 0% to a secondary origin. The primary origin here is the DNS host for your source database, and the secondary origin is the DNS host for your target database. You can set up your application ahead of time to interact with the database using the DNS record from the weighted group.

Next, you can set up a program that watches the stats and waits until switchover_completed_at is set. Once that happens, it updates the weights in the DNS weighted group so that 100% of requests go to the new/target database. Note: keeping a low TTL is recommended.
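
As a sketch, flipping the weights with the AWS CLI could look like this (hosted zone ID, record names and hosts are hypothetical):

aws route53 change-resource-record-sets --hosted-zone-id Z0000000000000 \
  --change-batch '{
    "Changes": [
      {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "db.example.com", "Type": "CNAME", "SetIdentifier": "target",
        "Weight": 100, "TTL": 30,
        "ResourceRecords": [{"Value": "target-db.internal.example.com"}]}},
      {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "db.example.com", "Type": "CNAME", "SetIdentifier": "source",
        "Weight": 0, "TTL": 30,
        "ResourceRecords": [{"Value": "source-db.internal.example.com"}]}}
    ]
  }'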

FAQ

Adding internal user to pg_hba or pgBouncer userlist

pg_easy_replicate sets up a designated user for managing the replication process. If you manage user access through pg_hba, you'll need to modify that list to permit sessions from pger_su_h1a4fb. Similarly, with pgBouncer, you'll need to authorize pger_su_h1a4fb for login access by including it in the userlist.
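
For illustration, the corresponding entries might look like this (the address range and auth method are placeholders for your setup):

# pg_hba.conf
host    all    pger_su_h1a4fb    10.0.0.0/8    scram-sha-256

and in the pgBouncer userlist.txt (with the user's actual password hash):

"pger_su_h1a4fb" "SCRAM-SHA-256$..."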

Contributing

PRs are most welcome. You can get started locally by:

  • docker compose down -v && docker compose up --remove-orphans --build
  • Install Ruby 3.1.4 using RVM (see instructions)
  • bundle exec rspec for specs

pg_easy_replicate's Issues

support ddl sync

Since native logical replication does not support DDL replication, could DDL replication be supported before switching over? Many thanks.

stop_sync should pause replication, not drop it

Currently, stop_sync drops the pub/sub rather than pausing it. It would be preferable for this command to pause replication (disable the pub/sub) rather than reset it. Perhaps dropping the pub/sub could be moved to the cleanup command.

Then, start_sync can resume the existing pub/sub or create it.
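
For reference, pausing instead of dropping could use subscription-level commands like these (the subscription name follows the naming pattern from the examples above):

ALTER SUBSCRIPTION pger_subscription_database_cluster_1 DISABLE;
-- ...and later, to resume:
ALTER SUBSCRIPTION pger_subscription_database_cluster_1 ENABLE;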

syntax error at or near "-"

Hi

My username is in the following format: my-user.

Bootstrapping fails with:

{"name":"pg_easy_replicate","hostname":"3d3214eab576","pid":1,"level":50,"time":"2023-08-01T05:13:31.277+00:00","v":0,"msg":"PG::SyntaxError: ERROR:  syntax error at or near \"-\"\nLINE 2: grant usage on schema pger to my-user;\n                                            ^: create schema if not exists pger;\ngrant usage on schema pger to my-user;\ngrant create on schema pger to my-user;\n","version":"0.1.8"}
Unable to bootstrap: Unable to setup schema: PG::SyntaxError: ERROR:  syntax error at or near "-"

Protecting the username with quotes should fix this :)
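
That is, quoting the identifier in the generated SQL:

grant usage on schema pger to "my-user";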

Drop and re-create indices for faster catchup on large databases

On larger databases, the initial COPY can be slow; depending on the database engine and storage type, this behavior may not be acceptable. Part of the reason COPY can be slow is that each batched write is updating the indexes. So, before creating the subscription, we can capture all indices on the group's tables and drop them, then create the subscription. Once all tables have reached the replicating stage, we can re-add the indices.
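
A rough sketch of that capture/drop/recreate cycle (table and index names are illustrative; primary key indexes would need to be kept, since logical replication relies on the replica identity):

-- Capture index definitions before the initial COPY
SELECT indexdef FROM pg_indexes WHERE schemaname = 'public' AND tablename = 'users';

-- Drop a captured index
DROP INDEX public.index_users_on_email;

-- Re-create it once all tables reach the replicating stage
CREATE INDEX CONCURRENTLY index_users_on_email ON public.users (email);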

Getting "Unable to check config: Unable to check superuser conditions: PG::ConnectionBad: FATAL: password authentication failed for user" in config_check

Hi! I am attempting to use this tool on a replication setup. It has worked fine for one database, but another has been nothing but trouble. I have created new superusers on each side, and I have confirmed that I can log in with psql to each of the URLs on the same system as the pg_easy_replicate install. But this particular configuration always returns: Unable to check config: Unable to check superuser conditions: PG::ConnectionBad: FATAL: password authentication failed for user $USERNAME

Is there something I'm doing wrong in the setup? Thanks for your time.

stop_sync does not work as expected

(sorry, pressed enter too fast)

$ pg_easy_replicate stop_sync --group-name=dedic-migrate
/var/lib/gems/3.0.0/gems/pg_easy_replicate-0.1.4/lib/pg_easy_replicate/orchestrate.rb:172:in `stop_sync': wrong number of arguments (given 1, expected 0; required keywords: target_conn_string, source_conn_string, group_name) (ArgumentError)
        from /var/lib/gems/3.0.0/gems/pg_easy_replicate-0.1.4/lib/pg_easy_replicate/cli.rb:84:in `stop_sync'
        from /var/lib/gems/3.0.0/gems/thor-1.2.2/lib/thor/command.rb:27:in `run'
        from /var/lib/gems/3.0.0/gems/thor-1.2.2/lib/thor/invocation.rb:127:in `invoke_command'
        from /var/lib/gems/3.0.0/gems/thor-1.2.2/lib/thor.rb:392:in `dispatch'
        from /var/lib/gems/3.0.0/gems/thor-1.2.2/lib/thor/base.rb:485:in `start'
        from /var/lib/gems/3.0.0/gems/pg_easy_replicate-0.1.4/bin/pg_easy_replicate:6:in `<top (required)>'
        from /usr/local/bin/pg_easy_replicate:25:in `load'
        from /usr/local/bin/pg_easy_replicate:25:in `<main>'

and, of course,

$ pg_easy_replicate stop_sync
No value provided for required options '--group-name'

"connection failed" trying to use account not specified in config

During the bootstrap phase, pg_easy_replicate fails with

Unable to bootstrap: PG::ConnectionBad: connection to server at "10.7.0.10", port 25432 failed: SSL error: certificate verify failed
connection to server at "10.7.0.10", port 25432 failed: FATAL:  password authentication failed for user "pger_su"

message. That is strange, because the postgres://postgres:PASSWORD@10.7.0.10:25432/database credentials are used.

How can this error be fixed?

Switchover fails because of timeout

After I ran pg_easy_replicate switchover, it failed on the full vacuum because of timeouts. I manually removed the vacuum from the code and the switchover completed successfully. I think adding a flag that allows skipping the vacuum, or increasing the timeouts, could be helpful.

Add support for Azure

Hi, I was wondering if support for azure postgres (Flexible Server) could also be added?

Smoke tests in CI

This work ideally involves setting up databases, populating them and running load with pgbench, and performing a switchover. In the end we should expect all the data to be present in the pgbench tables. Perhaps we could even track the total number of dropped requests; the client should retry when connections are dropped.

start_sync failing

I am upgrading from PG12 and I initially was trying to upgrade to PG15.

The error message I am seeing:

ERROR: CREATE SUBSCRIPTION ... WITH (create_slot = true) cannot run inside a transaction block

I enabled extra logging with the DEBUG=1 environment setting, and I can see that we are not running the query within a transaction.

Things I tried

I tried upgrading to PG 14 and PG 13 to see if there was an issue with postgres 15 creating the slot on postgres 12.

I tried using the deferred slot approach from the postgres docs:

  1. Change CREATE SUBSCRIPTION to have connect = false.
  2. Run pg_create_logical_replication_slot on the source db.
  3. Run the ALTER SUBSCRIPTION ... ENABLE.
  4. Run the ALTER SUBSCRIPTION ... REFRESH PUBLICATION.

The REFRESH PUBLICATION failed with the same error message.
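
For reference, that deferred-slot sequence corresponds to SQL along these lines (subscription, publication and connection details are illustrative):

-- On the target:
CREATE SUBSCRIPTION mysub
  CONNECTION 'host=source-db dbname=mydb user=replicator'
  PUBLICATION mypub
  WITH (connect = false);

-- On the source:
SELECT pg_create_logical_replication_slot('mysub', 'pgoutput');

-- Back on the target:
ALTER SUBSCRIPTION mysub ENABLE;
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;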

Questions:

  1. Is this an issue where my source database version is too old? If so, what is the minimum supported version of Postgres?
  2. Is this an issue of a long-outstanding transaction on the source database? I don't think so, since I would expect an issue about locks expiring if so...
  3. Any other ideas to consider or try?

pg_easy_replicate.rb:32:in `config': undefined method `success?' for nil:NilClass (NoMethodError)

I'm attempting to get it working on a local macOS build. On my machine, the following NoMethodError is raised when running pg_easy_replicate config_check:

❯ pg_easy_replicate config_check
/usr/local/bin/pg_dump
/Users/thomas.mcloughlinlifelenz.com/.rvm/gems/ruby-3.1.1@apidev/gems/pg_easy_replicate-0.1.8/lib/pg_easy_replicate.rb:32:in `config': undefined method `success?' for nil:NilClass (NoMethodError)

      pg_dump_exists = $CHILD_STATUS.success?
                                    ^^^^^^^^^
        from /Users/thomas.mcloughlinlifelenz.com/.rvm/gems/ruby-3.1.1@apidev/gems/pg_easy_replicate-0.1.8/lib/pg_easy_replicate.rb:65:in `assert_config'
        from /Users/thomas.mcloughlinlifelenz.com/.rvm/gems/ruby-3.1.1@apidev/gems/pg_easy_replicate-0.1.8/lib/pg_easy_replicate/cli.rb:20:in `config_check'
        from /Users/thomas.mcloughlinlifelenz.com/.rvm/gems/ruby-3.1.1@apidev/gems/thor-1.2.2/lib/thor/command.rb:27:in `run'
        from /Users/thomas.mcloughlinlifelenz.com/.rvm/gems/ruby-3.1.1@apidev/gems/thor-1.2.2/lib/thor/invocation.rb:127:in `invoke_command'
        from /Users/thomas.mcloughlinlifelenz.com/.rvm/gems/ruby-3.1.1@apidev/gems/thor-1.2.2/lib/thor.rb:392:in `dispatch'
        from /Users/thomas.mcloughlinlifelenz.com/.rvm/gems/ruby-3.1.1@apidev/gems/thor-1.2.2/lib/thor/base.rb:485:in `start'
        from /Users/thomas.mcloughlinlifelenz.com/.rvm/gems/ruby-3.1.1@apidev/gems/pg_easy_replicate-0.1.8/bin/pg_easy_replicate:6:in `<top (required)>'
        from /Users/thomas.mcloughlinlifelenz.com/.rvm/gems/ruby-3.1.1@apidev/bin/pg_easy_replicate:25:in `load'
        from /Users/thomas.mcloughlinlifelenz.com/.rvm/gems/ruby-3.1.1@apidev/bin/pg_easy_replicate:25:in `<main>'
        from /Users/thomas.mcloughlinlifelenz.com/.rvm/gems/ruby-3.1.1@apidev/bin/ruby_executable_hooks:22:in `eval'
        from /Users/thomas.mcloughlinlifelenz.com/.rvm/gems/ruby-3.1.1@apidev/bin/ruby_executable_hooks:22:in `<main>'

It seems lib/pg_easy_replicate.rb is using the global variable $CHILD_STATUS, which is not defined in my environment.

Adding require 'english' manually near the top of lib/pg_easy_replicate.rb fixes the issue.
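
A minimal sketch of the fix (abridged; only the require line is new):

# lib/pg_easy_replicate.rb
require "English" # provides $CHILD_STATUS as an alias for $?

# ...
pg_dump_exists = $CHILD_STATUS.success?

With that change: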

❯ pg_easy_replicate config_check
/usr/local/bin/pg_dump
βœ… Config is looking good.

I'm using:
pg_easy_replicate version: 0.1.8
ruby version: ruby 3.1.1p18 (2022-02-18 revision 53f5fc4236) [x86_64-darwin22]

Please fix your gemspec file to keep the local bin directory out of the global $PATH

Your gemspec file puts the local bin directory into the global $PATH. This makes your generic console and setup scripts conflict with those of all the other gems whose developers don't properly configure their gemspec.

If it is your intent to place the local bin directory in the global $PATH, then please choose different names for console and setup.

config_check crashes with NoMethodError

Hi,

running via Docker :latest and a basic pg_easy_replicate config_check:

/usr/local/bundle/gems/pg_easy_replicate-0.2.2/lib/pg_easy_replicate.rb:106:in `assert_config': undefined method `split' for nil:NilClass (NoMethodError)

      if tables.split(",").size > 0 && (schema_name.nil? || schema_name == "")
                ^^^^^^
        from /usr/local/bundle/gems/pg_easy_replicate-0.2.2/lib/pg_easy_replicate/cli.rb:28:in `config_check'

Support bi-directional replication

It'd be great to start feeding all the writes from the target database back to the source database after switchover, in case the application requires a rollback. Replication can then be maintained post-switchover with bi-directional replication.

This will require re-plumbing the concept of source and target databases in the code, and always passing the connection strings as input to the various operations.

Add handling of sequences which are not associated with a table column

I ran across this project in the ever-expanding ecosystem of Postgres tools. Given it's a replication/sync tool, I took a look at the sequence handling code, as this is an area that's often handled improperly.

I noticed in your refresh_sequences method that you're only "refreshing" sequences which are directly associated with a table column, and doing so by implicitly using the max() function on the underlying table.column.

This does not cover cases where bare sequences are referenced by application code and are thus not associated with a table at all. A safer approach would be to obtain the current values from the upstream publishing system and adjust all sequences based on those values. You can do this by leveraging the last_value column of the sequence itself.
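
A sketch of that approach (PostgreSQL 10+; the sequence name is illustrative):

-- On the source: capture the current value of every sequence
SELECT schemaname, sequencename, last_value FROM pg_sequences;

-- On the target: advance each sequence to the captured value
SELECT setval('public.orders_id_seq', 42000, true);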

More meaningful config_check

It'd be good if config_check:

  • can create and drop the pger_su user successfully
  • continues to ensure wal_level and the other required parameters are present
  • can create the groups tables using the new user
  • verifies the new user has replication privileges

It'd be really nice to support blob sync

I'd love for pg_easy_replicate to support blob sync. I feel like that'd really bring it home when it comes to supporting full replication. While it's hard to do that while replication is ongoing, I wonder if we can do something where the blobs are synced over post-switchover.

The benefit of post-switchover is that the source database would be in read-only mode. The trade-off is that the blob sync would be delayed, so as long as your application can tolerate the delay, it's fine. Not the perfect solution, but at least you won't have to write custom tooling to perform the sync yourself.

It could work something like this:

  • pg_easy_replicate can take an additional flag --copy-blob-post-sync
  • Make use of lo_import and lo_export to export large objects from the source DB onto the file system
  • Create a temporary table to store OID mappings, and update this table for the exported data
  • Import the large object file, get the new OID, and update it in the temp table
  • Update the referencing tables as needed based on the mapping (see the sketch after this list)
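
A minimal illustration of the export/import step (the OID and file path are hypothetical):

-- On the source: write the large object to a file
SELECT lo_export(16391, '/tmp/lo_16391.bin');

-- On the target: import it; the returned OID goes into the mapping table
SELECT lo_import('/tmp/lo_16391.bin');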

Open questions

  • Will we need to handle bytea?
  • Is there an easy way to figure out which OIDs map to which tables?

Docker image does not seem to work on linux/amd64

$ docker run -it --rm shayonj/pg_easy_replicate:latest pg_easy_replicate
WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v4) and no specific platform was requested
exec /usr/local/bundle/bin/pg_easy_replicate: exec format error

Seems like it's trying to run an ARM version.

Version:
Docker version 24.0.2, build cb74dfcd85

I'm on latest Arch Linux on an intel i7 machine.
