
pisanix's Introduction


In short: if an application needs MySQL, it can simply connect to localhost:3306.

Introduction

Pisanix [Pi-sanics] is a modern database governance framework for Kubernetes. Pisanix adds SQL-aware traffic control, audit, security and extension abilities to help manage various databases in the Database Mesh way.

Pisanix has 4 major features:

  1. Local Service as Database: Pisanix provides a local database service, that is, applications can access a MySQL service at localhost:3306 without any knowledge of the real data source.
  2. Unified Config Management: Pisanix provides a centralized management of Database Mesh configurations, including traffic strategies like read-write splitting, sharding, encryption and concurrency control.
  3. Multi Protocol Support: Pisanix has a bunch of different plugins to help build a Glue Layer for any database protocols.
  4. Cloud Native Architecture: Pisanix takes advantage of the classic control plane and data plane pattern, using Infrastructure as Code to make it a versioned database access behavior.

Current Status

Pisanix now supports the TrafficStrategy of the Database Mesh Specification, as well as VirtualDatabase and DatabaseEndpoint; other features such as AuditRequest and AccessControl are also on the way:

  • TrafficStrategy
    • Load Balance
      • Simple LoadBalance
      • Read Write Splitting
        • Static
        • Dynamic
          • Master-Slave Replication
          • MHA
    • Plugins
      • Circuit Break
      • Concurrency Control
  • DataStrategy
    • Sharding
      • Sharding with keys
        • Single Database Sharding Tables
        • Sharding Databases
        • Sharding Databases with Sharding Tables
  • AuditRequest
    • Audit with AWS
  • AccessControl
    • Fine-Grained Access Control
  • QoSClaim
    • TrafficQoS
  • DatabaseClass
    • AWSRdsInstance

Getting Started

Highlights

Pisanix has 3 components:

  • Pisa-Controller: a Golang control plane designed for sidecar injection and configuration transformation.
  • Pisa-Proxy: a high-performance Rust data plane used as a SQL traffic proxy, supporting various traffic governance capabilities.
  • Pisa-Daemon (coming soon): an optional data plane that runs on every node and provides programmable runtime management such as TrafficQoS.

Goals

Pisanix has the following goals:

  1. SQL-Aware Traffic Control: supports SQL traffic load balancing, access control and observability.
  2. Runtime Resource-oriented Programming: supports extensible resource control abilities.
  3. Database Reliability Engineering: makes DBAs' lives easier with Kubernetes.

Database traffic governance

Applications access databases with SQL, so Pisanix hijacks all SQL traffic. This is a great opportunity to do many things around traffic, such as load balancing and SQL firewalling.

Observability

In the past, metrics could be retrieved from database instances and displayed in various charts. With Pisanix, DBAs have more opportunities to achieve better observability.

Programmable

For DBAs who can and would like to solve problems with programming, Pisanix supports several plugin mechanisms, such as Lua and Wasm, giving people the chance to 'reshape' the expected behavior of databases.

Documentation

Full documentation will be available on the Pisanix website.

Contribution

Please follow the Contributing Guide.

Community & Support

  • Mailing List: https://groups.google.com/g/database-mesh
  • Dev Meetings (starting Feb 16th, 2022), bi-weekly Wednesday 9:00AM PST: https://meet.google.com/yhv-zrby-pyt
  • Dev Meetings APAC Friendly (starting April 27th, 2022), bi-weekly Wednesday 9:00PM GMT+8: https://meeting.tencent.com/dm/6UXDMNsHBVQO
  • Wechat Broker: pisanix
  • Slack: https://join.slack.com/t/databasemesh/shared_invite/zt-19rhvnxkz-USjZ~am~ghd_Q0q~8bAJXA
  • Meeting Notes: https://bit.ly/39Fqt3x
  • Wechat User Group: ask the broker wechat to add you into the user group.

Roadmap

The Pisanix project is still at an early stage. Upcoming work will focus on enhancing traffic governance capabilities such as data sharding, application data access auditing, and runtime resource QoS. It will also continuously improve performance, provide an easier user experience, and support plugin extensions to fit different business scenarios.

License

Pisanix is licensed under the Apache License 2.0.

pisanix's People

Contributors

arthur-zhang, dongzl, fossabot, mlycore, teslacn, tuichenchuxin, wbtlb, windghoul, xuanyuan300, zhang-arvin


pisanix's Issues

Connect to database cause Pisa-Proxy panic.

Bug Report

What version of Pisanix are you using?

v0.1.1

Steps to reproduce

Config a Pisa-Proxy to proxy to tidb, connect Pisa-Proxy by MySQL client.

What did you expect?

I can query data correctly through Pisa-Proxy when it proxies TiDB.

What happened?

When I connect to TiDB through Pisa-Proxy I get a panic; the error log is as follows:
thread 'pisa-proxy' panicked at 'no entry found for key', protocol/mysql/src/client/auth.rs:167:33
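
The panic message comes from an indexing-style HashMap access, which panics with exactly this text when the key is missing. A minimal sketch of the defensive pattern that avoids this class of panic, with hypothetical names (auth_data, the "auth_plugin" key), not the project's actual code:

use std::collections::HashMap;

// `map[key]` (the Index impl) panics with 'no entry found for key';
// `get` returns an Option, so an unexpected server (e.g. TiDB sending
// different auth data) yields a handleable error instead of a crash.
fn pick_auth_data<'a>(
    auth_data: &'a HashMap<String, Vec<u8>>,
    key: &str,
) -> Result<&'a Vec<u8>, String> {
    auth_data
        .get(key)
        .ok_or_else(|| format!("unsupported auth data, no entry for key `{}`", key))
}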

Define an authentication mechanism based on HashiCorp Vault

Development Task

Pisa-Proxy shall support an authentication mechanism based on HashiCorp Vault; the vault might be on-premise or HashiCorp's managed offering.
For the moment we'd like to keep the keys in the vault and use paseto_token as the authentication mechanism (https://github.com/rrrodzilla/rusty_paseto). The process will be the following (a sketch follows the list):

  1. The user registers in the vault as id:key, where key is a JSON { database_type:"", url:"", key:"" }.
  2. The user provides the id to Pisa.
  3. Pisa looks up the id in the vault microservice to fetch the key.
  4. Add random data to the key.
  5. Create a paseto token, give it to the client, and store it inside a local cache (hash table).
  6. In each proxy request, the client adds the paseto token.
    @mlycore for feedback.
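
A minimal sketch of steps 1-6, with hypothetical stand-ins for the Vault client and token creation (the real implementation would use rusty_paseto, whose API is not reproduced here):

use std::collections::HashMap;

// Step 1: the key record stored in the vault under `id`.
struct KeyRecord { database_type: String, url: String, key: String }

struct Authenticator {
    // Step 5: local cache of issued tokens.
    token_cache: HashMap<String, String>,
}

impl Authenticator {
    fn login(&mut self, id: &str) -> Result<String, String> {
        let record = vault_lookup(id)?;            // step 3: fetch the key by id
        let salted = add_random_data(&record.key); // step 4: add random data
        let token = make_paseto_token(&salted)?;   // step 5: create a paseto token
        self.token_cache.insert(id.to_string(), token.clone());
        Ok(token)                                  // step 5: hand the token to the client
    }

    // Step 6: each proxied request carries the token.
    fn authorize(&self, id: &str, token: &str) -> bool {
        self.token_cache.get(id).map_or(false, |t| t == token)
    }
}

// Hypothetical stubs; a real implementation would call the Vault API and rusty_paseto.
fn vault_lookup(_id: &str) -> Result<KeyRecord, String> { todo!() }
fn add_random_data(key: &str) -> String { format!("{key}-random") }
fn make_paseto_token(_key: &str) -> Result<String, String> { todo!() }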

Thanks to @giorgiozoppi, who proposed this issue.

Can't parse `select * from test.test limit 1`

Bug Report

What version of Pisanix are you using?

v0.1.1

What operating system and CPU are you using?

Steps to reproduce

What did you expect?

Parsing select * from test.test limit 1 succeeds.

What happened?

[ParseError { details: "Parsing error at line 1 column 31. No repair sequences found." }]

unit test build error for mac

Bug Report

error: This macro cannot be used on the current target.
You can prevent it from being used in other architectures by
guarding it behind a cfg(any(target_arch = "x86", target_arch = "x86_64")).
--> parser/mysql/src/lex.rs:675:16
|
675 | if is_x86_feature_detected!("sse2") {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: this error originates in the macro is_x86_feature_detected (in Nightly builds, run with -Z macro-backtrace for more info)
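
The fix the compiler suggests is to gate the SIMD fast path by target architecture. A minimal sketch (the function name is hypothetical):

// Only compile the SSE2 detection on x86/x86_64; other targets
// (e.g. aarch64 Macs, where this build fails) take the portable fallback.
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
fn sse2_available() -> bool {
    is_x86_feature_detected!("sse2")
}

#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
fn sse2_available() -> bool {
    false
}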

What version of Pisanix are you using?

What operating system and CPU are you using?


Steps to reproduce

What did you expect?

What happened?

mysql connection pool: calling getString for a specific column after a select * query throws a 'Column not found' exception

Bug Report

java.sql.SQLException: Column 'order_code' not found.

What version of Pisanix are you using?

0.1.0

What operating system and CPU are you using?

Linux, Kubernetes 1.19.4

Steps to reproduce

Use multiple threads to trigger the query method.
Use Hikari 3.3.0 to init the connection pool and execute select *.
Use a PreparedStatement and call getString on a specific column.

What did you expect?

return the value of the specific column

What happened?

java.sql.SQLException: Column 'order_code' not found.

Modify the sidecar name

Feature Request

Describe the feature you'd like:

In this version, the name of the sidecar is pisanix-proxy; I hope it will become pisa-proxy.

Add imagePullSecrets separately for sidecar

Feature Request

Is your feature request related to a problem? Please describe:

If the sidecar needs separate imagePullSecrets, that cannot be configured at this stage.

Describe the feature you'd like:

In the injection phase, inject imagePullSecrets for the sidecar

Add a running mode for Pisa-Proxy

Feature Request

Is your feature request related to a problem? Please describe:

Currently Pisa-Proxy uses environment variables like LOCAL_CONFIG to read local configurations. This is not very clear and clean. Pisa-Proxy is supposed to have a running mode, with a subcommand and arguments to handle this.

Describe the feature you'd like:

Use the sidecar and daemon subcommands to select the running mode, e.g.:

# sidecar mode
pisa-proxy sidecar --pisa-controller-host pisa-controller.pisanix-system:8080 --pisa-deployed-namespace default --pisa-deployed-name test

# daemon mode
pisa-proxy daemon -c etc/default.toml

How to run pisa-proxy with the master branch code.

Question


  1. I create a config file at pisa-proxy/etc/.
  2. Cargo build the code.
  3. cd to pisanix/pisa-proxy/target/debug and execute ./proxy.
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: reqwest::Error { kind: Request, url: Url { scheme: "http", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("pisa-controller.pisa-system")), port: Some(8080), path: "/apis/configs.database-mesh.io/v1alpha1/namespaces/default/proxyconfigs/default", query: None, fragment: None }, source: hyper::Error(Connect, ConnectError("dns error", Custom { kind: Uncategorized, error: "failed to lookup address information: nodename nor servname provided, or not known" })) }', app/config/src/config.rs:132:46
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Why do the MySQL EOF and OK packet length checks add 4?

Question

#[inline]
pub fn is_eof(data: &[u8]) -> bool {
    data.len() < 9 + 4 && *unsafe { data.get_unchecked(4) } == EOF_HEADER
}

#[inline]
pub fn is_ok(data: &[u8]) -> bool {
    data.len() > 7 + 4 && *unsafe { data.get_unchecked(4) } == OK_HEADER
}

https://dev.mysql.com/doc/dev/mysql-server/latest/page_protocol_basic_ok_packet.html

These rules distinguish whether the packet represents OK or EOF:

OK: header = 0x00 and length of packet > 7
EOF: header = 0xfe and length of packet < 9

Note that the packet lengths in the documentation refer to the payload only, while data in the code above includes the 4-byte packet header (3-byte payload length plus 1-byte sequence id), which is why the checks add 4.
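
A sketch of the same checks with the arithmetic spelled out (hypothetical constants, not the project's code):

const HEADER_LEN: usize = 4; // 3-byte payload length + 1-byte sequence id
const OK_HEADER: u8 = 0x00;
const EOF_HEADER: u8 = 0xfe;

pub fn is_eof(data: &[u8]) -> bool {
    // doc: payload < 9  <=>  total length < 9 + HEADER_LEN
    data.len() < 9 + HEADER_LEN && data.get(4) == Some(&EOF_HEADER)
}

pub fn is_ok(data: &[u8]) -> bool {
    // doc: payload > 7  <=>  total length > 7 + HEADER_LEN
    data.len() > 7 + HEADER_LEN && data.get(4) == Some(&OK_HEADER)
}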

Can't connect using the MySQL 5.7 client

Bug Report

What version of Pisanix are you using?

master branch

What operating system and CPU are you using?

system: CentOS 8
mysql client: 5.7.37

Steps to reproduce

What did you expect?

connect success

What happened?

The connection hangs.

set transaction level sql parser failed

Bug Report

What version of Pisanix are you using?

pisa-proxy master branch

What operating system and CPU are you using?

Steps to reproduce

Execute the SQL set session transaction isolation level read committed; through pisa-proxy.

What did you expect?

What happened?

Parsing set session transaction isolation level read committed; returns an error:
ERROR runtime_mysql::server::server: err: ParseError { details: "Parsing error at line 1 column 41. No repair sequences found." }

feat(strategy): dynamic read write splitting

Development Task

Description

Pisa-Proxy now supports a static read-write splitting strategy. The static strategy depends on the config: once the datasource status changes, Pisa-Proxy can't route SQL correctly. With the dynamic read-write splitting strategy, Pisa-Proxy will probe the datasource status and reconcile the load balance strategy dynamically.

Implement

In this version, Pisa-Proxy will support the MHA high availability strategy. Pisa-Proxy will spawn four kinds of monitors to probe the status of the datasource. The rules match module will fork a thread to receive the load balance strategy from the dynamic read-write splitting module through a channel. There are four kinds of monitors (a sketch of a shared monitor interface follows the list):

  • Connect Monitor: probes the connectivity of the datasource.
  • Ping Monitor: probes the health status of the datasource.
  • Lag Monitor: probes the replication lag between the master node and slave nodes.
  • ReadOnly Monitor: probes the role of the datasource.
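
A minimal sketch of one shared interface the four monitors could implement (hypothetical names, not the project's actual API):

use std::time::Duration;

enum MonitorKind { Connect, Ping, Lag, ReadOnly }

// Each monitor runs its own probe loop and reports a status that the
// reconcile task folds into the final read-write splitting strategy.
trait Monitor {
    fn kind(&self) -> MonitorKind;
    fn period(&self) -> Duration;
    // One probe round: Ok(()) on healthy, Err(reason) on failure.
    fn probe(&mut self, timeout: Duration) -> Result<(), String>;
}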

Future Design Chart

(design chart image omitted)

Glossary

  • Monitor Reconcile: gets status from the monitors and computes the final read-write splitting strategy.
  • Discovery: the kind of discovery, like MHA, RDS, MGR, etc.
  • Monitor: the kind of monitor; includes Connect Monitor, Ping Monitor, ReadOnly Monitor, and Lag Monitor.

Probe Flow

  1. Start the monitors to probe the datasource.
  2. Probe the connectivity of the master node and slave nodes.
  3. If connectivity is ok:
    3.1. Probe the role of the datasource.
    3.2. Probe the replication lag between the master node and slave nodes.
    3.2.1. If the slave is not lagging behind the master, enter the next probe round.
    3.2.2. If the slave is lagging behind the master, update the load balance list and enter the next probe round.
  4. If connectivity is not ok:
    4.1. If a slave is probed to have changed to master, start the lag probe.
    4.1.1. If the slave is not lagging behind the master, enter the next probe round.
    4.1.2. If the slave is lagging behind the master, update the load balance list and enter the next probe round.
    4.2. If no slave has changed to master, enter the next probe round.
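
A condensed sketch of this decision flow, with booleans standing in for the probe results (hypothetical types, not the project's code):

enum Outcome { NextRound, UpdateLoadBalanceList }

fn probe_round(connectivity_ok: bool, slave_promoted: bool, slave_lagging: bool) -> Outcome {
    if connectivity_ok || slave_promoted {
        // steps 3.2 / 4.1: run the lag probe
        if slave_lagging { Outcome::UpdateLoadBalanceList } else { Outcome::NextRound }
    } else {
        // step 4.2: no promotion detected, just wait for the next round
        Outcome::NextRound
    }
}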


Configuration

| param | type | required | default | description |
| --- | --- | --- | --- | --- |
| user | string | yes | None | monitor user name |
| password | string | yes | None | monitor password |
| monitor_period | u64 | yes | 1000 | interval at which the Reconcile Monitor updates the strategy (ms) |
| connect_period | u64 | yes | 1000 | interval of the Connect Monitor probe (ms) |
| connect_timeout | u64 | yes | 6000 | timeout of the Connect Monitor probe (ms) |
| connect_failure_threshold | u64 | yes | 3 | max failures of the Connect Monitor probe |
| ping_period | u64 | yes | 1000 | interval of the Ping Monitor probe (ms) |
| ping_timeout | u64 | yes | 6000 | timeout of the Ping Monitor probe (ms) |
| ping_failure_threshold | u64 | yes | 3 | max failures of the Ping Monitor probe |
| replication_lag_period | u64 | yes | 1000 | interval of the Lag Monitor probe (ms) |
| replication_lag_timeout | u64 | yes | 6000 | timeout of the Lag Monitor probe (ms) |
| replication_lag_failure_threshold | u64 | yes | 3 | max failures of the Lag Monitor probe |
| max_replication_lag | u64 | yes | 10000 | lag threshold of the Lag Monitor probe (ms) |
| read_only_period | u64 | yes | 1000 | interval of the ReadOnly Monitor probe (ms) |
| read_only_timeout | u64 | yes | 6000 | timeout of the ReadOnly Monitor probe (ms) |
| read_only_failure_threshold | u64 | yes | 3 | max failures of the ReadOnly Monitor probe |

Configuration Structure
#[derive(Debug, Serialize, Deserialize, Clone, Default)]
pub struct ReadWriteSplitting {
    #[serde(rename = "static")]
    pub statics: Option<ReadWriteSplittingStatic>,
    pub dynamic: Option<ReadWriteSplittingDynamic>,
}

#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ReadWriteSplittingDynamic {
    pub default_target: TargetRole,
    #[serde(rename = "rule")]
    pub rules: Vec<ReadWriteSplittingRule>,
    pub discovery: Discovery,
}

#[derive(Debug, Serialize, Deserialize, Clone)]
#[serde(rename_all = "lowercase", tag="type")]
pub enum Discovery {
    Mha(MasterHighAvailability),
}

#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Default)]
pub struct MasterHighAvailability {
    pub user: String,
    pub password: String,
    pub pool_size: Option<u8>,
    pub monitor_period: u64,
    pub connect_period: u64,
    pub connect_timeout: u64,
    pub connect_failure_threshold: u64,
    pub ping_period: u64,
    pub ping_timeout: u64,
    pub ping_failure_threshold: u64,
    pub replication_lag_period: u64,
    pub replication_lag_timeout: u64,
    pub replication_lag_failure_threshold: u64,
    pub max_replication_lag: u64,
    pub read_only_period: u64,
    pub read_only_timeout: u64,
    pub read_only_failure_threshold: u64,
}
Configuration Example
[proxy.config.read_write_splitting]

[proxy.config.read_write_splitting.dynamic]
default_target = "readwrite"

[proxy.config.read_write_splitting.dynamic.discovery]
type = "mha"
user = "monitor"
password = "monitor"
pool_size = 16
monitor_period = 1000
connect_period = 2000
connect_timeout = 200
connect_failure_threshold = 3
ping_period = 1000
ping_timeout = 100
ping_failure_threshold = 3
replication_lag_period = 1000
replication_lag_timeout = 3
replication_lag_failure_threshold = 3
max_replication_lag = 3
read_only_period = 1000
read_only_timeout = 3
read_only_failure_threshold = 3

[[proxy.config.read_write_splitting.dynamic.rule]]
name = "write-rule"
type = "regex"
regex = ["^insert"]
target = "readwrite"
algorithm_name = "roundrobin"

[[proxy.config.read_write_splitting.dynamic.rule]]
name = "read-rule"
type = "regex"
regex = ["^select"]
target = "read"
algorithm_name = "roundrobin"

Check List

  • Monitor Reconcile.
  • configuration.
  • Pisa-Controller. @mlycore
  • Rules Match dynamic update.
  • Connect Monitor.
  • Ping Monitor.
  • Lag Monitor.
  • ReadOnly Monitor.
  • com query raw parse. @xuanyuan300

Associated issue: #88

When using sysbench, pisanix's memory grows slowly

Bug Report

What version of Pisanix are you using?

branch: master

What operating system and CPU are you using?

system: fedora35

Steps to reproduce

Run a sysbench stress test.

What did you expect?

Pisanix's memory does not slowly grow.

What happened?

Pisanix's memory slowly grows.

Pisa-Proxy supports generic route rules in read-write splitting

Feature Request

Background

Pisa-Proxy supports static and dynamic read-write splitting. The route rules are based on regex. Pisa-Proxy should provide a generic route rule to route SQL without regex.

Implement

Description

In Pisa-Proxy we can configure regex rules to match and route to different backends. Now the user can also configure a generic rule to route SQL. If the user configures a regex rule and a generic rule at the same time, Pisa-Proxy should match with regex first. If no rule is hit, the SQL will be routed to the default backend, as sketched below.
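
A minimal sketch of that precedence (hypothetical function, using the regex crate):

use regex::Regex;

// Regex rules are tried first; the generic rule catches the rest;
// if neither is configured or hit, fall back to the default target.
fn route<'a>(
    sql: &str,
    regex_rules: &'a [(Regex, String)], // (pattern, target)
    generic_target: Option<&'a str>,    // target of the generic rule, if any
    default_target: &'a str,
) -> &'a str {
    for (re, target) in regex_rules {
        if re.is_match(sql) {
            return target.as_str();
        }
    }
    generic_target.unwrap_or(default_target)
}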

Configuration

TOML configuration example

[proxy.config.read_write_splitting]

[proxy.config.read_write_splitting.static]
default_target = "read"

# generic rule match
[[proxy.config.read_write_splitting.static.rule]]
name = "generic-rule"
type = "generic"
algorithm_name = "random"

# regex-based rule match
[[proxy.config.read_write_splitting.static.rule]]
name = "read-rule"
type = "regex"
regex = ["^select"]
target = "read"
algorithm_name = "random"

CRD configuration example

apiVersion: core.database-mesh.io/v1alpha1
kind: TrafficStrategy
metadata:
  name: test 
  namespace: default 
spec:
  selector:
    matchLabels:
      source: test
  loadBalance:
    readWriteSplitting:
      static:  
        defaultTarget: read # or readwrite
        rules:
          - name: generic-rule
            type: generic
            algorithmName: random # lb algorithm

Development Task

  • support generic configuration
  • add generic rule
  • Pisa-Controller supports generic read write splitting

Using Pisa to select on MySQL 5.6 throws an Unknown system variable 'transaction_isolation' SQL exception

Bug Report

What version of Pisanix are you using?

0.1.0

What operating system and CPU are you using?

Linux, Kubernetes 1.19.4

Steps to reproduce

Use Hikari 3.3.0 to init the connection pool and execute select *.
Use a PreparedStatement and call getString on a specific column.
JDBC version: 5.1.43
MySQL version: 5.6

What did you expect?

get the column value

What happened?

An error is thrown: Unknown system variable 'transaction_isolation'. (This variable was only introduced in MySQL 5.7.20; the proxy reports a 5.7 server version while the backend is 5.6, see the server_version feature request below.)

When connecting to MySQL, an "InvalidPacket" error occurred.

Question

config.toml

# API config block, mapped to command-line arguments and environment variables
[admin]
# API address
host = "0.0.0.0"
# API port
port = 8082
# log level
log_level = "INFO"

# pisa-proxy proxy config block
[proxy]
# config a proxy
[[proxy.config]]
# proxy listen address
listen_addr = "0.0.0.0:9088"
# proxy auth username
user = "root"
# proxy auth password
password = "123456"
# proxy schema
db = "test"
# backend datasource type
backend_type = "mysql"
# connection pool size between proxy and backend database, range: 1 ~ 255, default: 64
pool_size = 3
# server version
server_version = "5.7.37"

# backend load balancing config
[proxy.config.simple_loadbalance]
# load balancing algorithm: [random/roundrobin], default: random
balance_type = "random"
# backend nodes to mount
nodes = ["ds001"]

[proxy.config.read_write_splitting]

[proxy.config.read_write_splitting.static]
default_target = "read"

[[proxy.config.read_write_splitting.static.rule]]
name = "read-rule"
type = "regex"
regex = [".*"]
target = "read"
algorithm_name = "random"

[[proxy.config.read_write_splitting.static.rule]]
name = "write-rule"
type = "regex"
regex = [".*"]
target = "readwrite"
algorithm_name = "roundrobin"

[[proxy.config.plugin.concurrency_control]]
regex = ["aaa"]
max_concurrency = 5
duration = 333

[[proxy.config.plugin.concurrency_control]]
regex = ["bbb"]
max_concurrency = 5
duration = 333

[[proxy.config.plugin.circuit_break]]
regex = ["111"]

[[proxy.config.plugin.circuit_break]]
regex = ["222"]

# backend datasource config
[mysql]
[[mysql.node]]
# datasource name
name = "ds001"
# database name
db = "employees"
# database user
user = "root"
# database password
password = "123456"
# database host
host = "127.0.0.1"
# database port
port = 3307
# load balancing node weight
weight = 1
role = "read"

error msg:

Jul 05 16:08:35.864 ERROR runtime_mysql::server::server: exec command err: Error { kind: Protocol(InvalidPacket { method: "handle_auth_data", data: [0, 7, 4, 71, 42, 86, 92, 37, 19, 6, 56, 1, 125, 47, 111, 120, 24, 104, 3, 37, 61, 0] }) }


When compiling the proxy from master branch code, client connections to the proxy cause "Connection refused".

Bug Report

What version of Pisanix are you using?

The latest master code.

What operating system and CPU are you using?

Steps to reproduce

What did you expect?

What happened?

Server:

Jun 27 10:05:26.206 ERROR runtime_mysql::server::server: err:Error { kind: Protocol(Io(Os { code: 61, kind: ConnectionRefused, message: "Connection refused" })) }
thread 'pisa-proxy' panicked at 'called `Option::unwrap()` on a `None` value', runtime/mysql/src/transaction_fsm.rs:314:43
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Jun 27 10:05:43.582  INFO proxy::listener: pisa-proxy client_ip: 127.0.0.1 - backend_type: "mysql"
Jun 27 10:05:43.667 ERROR runtime_mysql::server::server: err:Error { kind: Protocol(Io(Os { code: 61, kind: ConnectionRefused, message: "Connection refused" })) }
thread 'pisa-proxy' panicked at 'called `Option::unwrap()` on a `None` value', runtime/mysql/src/transaction_fsm.rs:314:43

Client:

mysql> show databases;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    2
Current database: *** NONE ***

ERROR 2013 (HY000): Lost connection to MySQL server during query
mysql>

Add server_version field for proxy config

Feature Request

Is your feature request related to a problem? Please describe:

ref issue #109

Describe the feature you'd like:

Currently, the default protocol version returned by pisa-proxy is 5.7.37, which may be inconsistent with the version of the backend database, so a server_version field should be added to the proxy config; pisa-proxy will then return server_version to prevent issue #109.

Feature(strategy): support mysql read_write_splitting

Read-write splitting is an important part of Pisa-Proxy traffic management; it can improve query performance and reduce server load in practical scenarios.

The internal design pipeline is RuleMatch → TargetGroup → LoadBalance → TargetInstance (internal design diagram omitted):

RuleMatch matches SQL statements against rules. Currently only regex is supported; in the future, rego will be supported.
TargetGroup is the backend target group after a successful match; the target group corresponds to the role attribute, which needs to be defined in the annotations field of the DatabaseEndpoint CRD.
LoadBalance selects an instance in the target group using the load balance algorithm.
TargetInstance is the instance that executes the SQL statement.

A complete config of TrafficStrategy is as follows:

apiVersion: core.database-mesh.io/v1alpha1
kind: TrafficStrategy
metadata:
  name: test 
  namespace: default 
spec:
  selector:
    matchLabels:
      source: test
  loadBalance:
    readWriteSplitting:
      static:  
        defaultTarget: read # or read_write
        rules:
          - name: read-rule
            type: regex
            regex: ".*"
            target: read # or read_write
            algorithmName: random # lb algorithm
          - name: write-rule
            type: regex
            regex: ".*"
            target: read_write
            algorithmName: roundrobin

A complete config of DatabaseEndpoint is as follows:

apiVersion: core.database-mesh.io/v1alpha1
kind: DatabaseEndpoint
metadata:
  annotations:
    database-mesh.io/role: read # or read_write
    database-mesh.io/weight: "1"
  labels:
    source: test 
  name: mysql 
  namespace: default 
spec:
  database:
    MySQL:
      db: test 
      host: mysql.default 
      password: root 
      port: 3306
      user: root

Development Task

pisa-proxy

  • Add config for read_write_splitting.
  • Add RuleMatch engine.
  • Update build_loadbalance logic for RuleMatch engine.
  • Implement RuleMatch engine for mysql server runtime.
  • Implement new LoadBalancer to FSM.
  • Add unit test.

pisa-controller

  • Add config for read_write_splitting.
  • Update DatabaseEndpoint crd for role.

integration test

  • pisa-proxy and pisa-controller integration test.

Add gRPC support

Feature Request

Is your feature request related to a problem? Please describe:

For better communication performance, Pisanix should support gRPC.

Describe the feature you'd like:

First, Pisa-Controller and Pisa-Proxy will support gRPC in the same pattern as the current HTTP.
Then an xDS-style protocol will be considered.

Describe alternatives you've considered:

n/a

Teachability, Documentation, Adoption, Migration Strategy:

n/a

how to run with a netcore api program

How to run with a .NET Core API program? I have used a Kubernetes Deployment, with the other steps according to the document, but the startup of the sidecar reported an error. Can I only start through Helm?

The full error info:
thread 'main' panicked at 'called Result::unwrap() on an Err value: reqwest::Error { kind: Decode, source: Error("missing field admin", line: 1, column: 301) }', app/config/src/config.rs:132:46 note: run with RUST_BACKTRACE=1 environment variable to display a backtrace

[enhancement] Add a session management module

Feature Request

Is your feature request related to a problem? Please describe:

Describe the feature you'd like:

After the readwritesplitting feature was added, processing of ConnAttr was introduced. Currently the attrs processed are charset and autocommit, but the processing logic is integrated in Pool, and Pool is a public component; different databases may have different session attrs, so ConnAttr should be separated from Pool.

Describe alternatives you've considered:

Add a dedicated session management module, as sketched below.
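
A minimal sketch of the separation (hypothetical names): each database runtime carries its own session-attribute handling, while Pool stays a generic, database-agnostic component:

// Session attributes move out of Pool into a per-database trait.
trait SessionAttrs {
    // SQL statements that apply the attributes to a fresh connection.
    fn init_sql(&self) -> Vec<String>;
}

struct MySqlSessionAttrs { charset: String, autocommit: bool }

impl SessionAttrs for MySqlSessionAttrs {
    fn init_sql(&self) -> Vec<String> {
        vec![
            format!("SET NAMES {}", self.charset),
            format!("SET autocommit = {}", if self.autocommit { 1 } else { 0 }),
        ]
    }
}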

Teachability, Documentation, Adoption, Migration Strategy:

cargo build error caused by no authentication available

Bug Report

What version of Pisanix are you using?

master - 277161b

What operating system and CPU are you using?

No matter.

Steps to reproduce

cargo build

What did you expect?

Build succeeded.

What did happened?

➜  pisa-proxy git:(master) cargo build
    Updating git repository `ssh://git@github.com/database-mesh/lrpar.git`
error: failed to resolve patches for `https://github.com/rust-lang/crates.io-index`

Caused by:
  failed to load source for dependency `lrpar`

Caused by:
  Unable to update ssh://git@github.com/database-mesh/lrpar.git?rev=12c5175#12c5175f

Caused by:
  failed to fetch into: /Users/wuweijie/.cargo/git/db/lrpar-45e6e4e3f7532881

Caused by:
  failed to authenticate when downloading repository

  * attempted ssh-agent authentication, but no usernames succeeded: `git`

  if the git CLI succeeds then `net.git-fetch-with-cli` may help here
  https://doc.rust-lang.org/cargo/reference/config.html#netgit-fetch-with-cli

Caused by:
  no authentication available
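
As the error output suggests, if the plain git CLI can fetch the repository, telling cargo to shell out to it usually works around the ssh-agent problem. Add this to .cargo/config.toml (cargo's documented net.git-fetch-with-cli option):

[net]
git-fetch-with-cli = true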

kubectl api-resources finds an error

Bug Report

Running kubectl api-resources produces an error:

error: unable to retrieve the complete list of server APIs: admission.database-mesh.io/v1alpha1: couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }

This needs a fix.


pisa-proxy fails to get endpoint

Bug Report

What version of Pisanix are you using?

v0.1.1

Steps to reproduce

What did you expect?

Get the backend endpoint correctly.

What happened?

The error log is as follows:
thread 'pisa-proxy' panicked at 'called Option::unwrap() on a None value', runtime/mysql/src/transaction_fsm.rs:340:43

How to view observability page

Is there an integrated metrics dashboard, or do Grafana and Prometheus need to be installed to display the observability interface?

Support AWS CloudWatch for SQL audit sinking

Feature Request

Is your feature request related to a problem? Please describe:

n/a

Describe the feature you'd like:

Support AWS CloudWatch as a sink for SQL auditing, storing the data in CloudWatch log groups or AWS S3.

Describe alternatives you've considered:

Using Kinesis, or EventBridge to send the audit data to other systems.

Teachability, Documentation, Adoption, Migration Strategy:

n/a

Introduce kind as integration environment in action workflow

Development Task

This issue focuses on integration testing and plans to introduce kind as the integration environment.

The procedure is:

  • Setup Kind
  • Setup Helm
  • Setup kubectl
  • Build Pisa-Controller
  • Build Pisa-Proxy
  • Install Pisa-Controller
  • Install MySQL-CLI
  • Run some tests

Start a separate project for CRD and configure the struct dependencies in the form of gomod

Feature Request

Describe the feature you'd like:

Start a separate project for CRD and configure the struct dependencies in the form of gomod

Describe alternatives you've considered:

Start a separate project for CRD and configure the struct dependencies in the form of gomod.
In order to:

  1. Use tooling to quickly update the CRD after modifying the structure definition.
  2. Make it easy to use the SDK for secondary development.

Add docs automation workflow

Development Task

For better documentation, there is a need for a docs automation workflow. We are trying to build a GitHub Action implementing the following steps:

  1. Run npm run build under docs
  2. Copy all generated files under docs/build to cloned pisanix.io
  3. Push the changes of pisanix.io to its branch gh-pages

Test Improvement

Development Task

Currently Pisanix has low code coverage and no integration tests. For better maintainability, more test cases, including unit and integration tests, should be added.

Dynamic read-write splitting monitors should support a switch

Feature Request

Describe the feature you'd like:

In dynamic read-write splitting there are 4 kinds of monitors, and they all start with Pisa-Proxy startup. Sometimes a user doesn't need all of them, so we could add a switch per monitor; users can then start only the monitors they need.

Feature(strategy): Pisa-Proxy mysql read_write_splitting config

Development Task

In Pisa-Proxy, the config of read_write_splitting might look like this:

[proxy.config.read_write_splitting]

[proxy.config.read_write_splitting.static]
default_target = "read"

[[proxy.config.read_write_splitting.static.rule]]
name = "read-rule"
type = "regex"
regex = [".*"]
target = "read"
algorithm_name = "random"

[[proxy.config.read_write_splitting.static.rule]]
name = "write-rule"
type = "regex"
regex = [".*"]
target = "read_write"
algorithm_name = "roundrobin"

The structure of the config is as follows:

#[derive(Debug, Serialize, Deserialize, Clone, Default)]
pub struct ReadWriteSplitting {
    #[serde(rename = "static")]
    pub undynamic: Option<ReadWriteSplittingStatic>,
}

#[derive(Debug, Serialize, Deserialize, Clone, Default)]
pub struct ReadWriteSplittingStatic {
    pub default_target: TargetRole,
    #[serde(rename = "rule")]
    pub rules: Vec<ReadWriteSplittingRule>,
}

#[derive(Debug, Serialize, Deserialize, Clone)]
#[serde(untagged)]
pub enum ReadWriteSplittingRule {
    Regex(RegexRule),
}

#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct RegexRule {
    pub name: String,
    #[serde(rename = "type")]
    pub rule_type: String,
    pub regex: Vec<String>,
    pub target: TargetRole,
    pub algorithm_name: AlgorithmName,
}

#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
#[serde(rename_all = "lowercase")]
pub enum TargetRole {
    Read,
    ReadWrite,
}

After TOML parsing, the data we get is as follows:

read_write_splitting: Some(
                ReadWriteSplitting {
                    undynamic: Some(
                        ReadWriteSplittingStatic {
                            default_target: Read,
                            rules: [
                                Regex(
                                    RegexRule {
                                        name: "read-rule",
                                        rule_type: "regex",
                                        regex: [
                                            ".*",
                                        ],
                                        target: Read,
                                        algorithm_name: Random,
                                    },
                                ),
                                Regex(
                                    RegexRule {
                                        name: "write-rule",
                                        rule_type: "regex",
                                        regex: [
                                            ".*",
                                        ],
                                        target: ReadWrite,
                                        algorithm_name: RoundRobin,
                                    },
                                ),
                            ],
                        },
                    ),
                },
            ),

dbep matching method is wrong

Bug Report

What version of Pisanix are you using?

master

What did happened?

Under certain conditions, the following code does not work correctly

pisa-controller/pkg/proxy/http.go:77

		for _, dbep := range dbeplist.Items {
			if reflect.DeepEqual(dbep.Labels, tsobj.Spec.Selector.MatchLabels) {
				dbeps.Items = append(dbeps.Items, dbep)
			}
		}

If DeepEqual is used and the two label sets are in a containment relationship (the TrafficStrategy's matchLabels are a subset of the DatabaseEndpoint's labels), the code will not handle the relationship correctly; a subset check is needed instead of exact equality.

A better way to inject pods

Feature Request

fmt.Sprintf(podsSidecarPatch,
		pisaProxyImage,
		SidecarNamePisaProxy,
		pisaProxyAdminListenPort,
		pisaControllerService,
		pisaControllerNamespace,
		ar.Request.Namespace,
		strings.Join(podSlice, "-"),
		pisaProxyAdminListenHost,
		pisaProxyAdminListenPort,
		pisaProxyLoglevel,
	)

Using fmt.Sprintf for variable injection is not robust enough; a better way of writing the values is needed.
