
servicecomb-kie's Introduction

Apache-ServiceComb-Kie

A service for configuration management in distributed systems.

Concepts

Key

A key can name a single configuration item, such as "timeout" with a value of "3s", or a whole file, such as "app.properties" with the file's content as its value.

Labels

Each key can carry labels, and the key together with its labels forms a unique identifier. A key "log_level" with the label "env=production" might hold the value "INFO", setting the log level for every application in the production environment, while the same key with the labels "env=production, component=payment" might hold "DEBUG" for the payment service in production.

In that case the payment service prints debug logs while every other service prints info logs.

So you can control your application's runtime behavior by attaching different labels to the same key.
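
As an illustration, the two "log_level" entries above can be created with the Go client shown in the issue example further down this page. This is a minimal sketch based on that example; the client API and field names may have changed since it was written.

package main

import (
	"context"
	"fmt"

	"github.com/apache/servicecomb-kie/client"
	"github.com/apache/servicecomb-kie/pkg/model"
)

func main() {
	// Sketch only: client usage copied from the issue example below; not
	// verified against the current client API.
	c, err := client.New(client.Config{Endpoint: "http://127.0.0.1:30110"})
	if err != nil {
		panic(err)
	}
	// "INFO" for every application in production ...
	global := model.KVDoc{Key: "log_level", Value: "INFO", ValueType: "string",
		Labels: map[string]string{"env": "production"}}
	// ... but "DEBUG" for the payment service in production.
	payment := model.KVDoc{Key: "log_level", Value: "DEBUG", ValueType: "string",
		Labels: map[string]string{"env": "production", "component": "payment"}}
	for _, kv := range []model.KVDoc{global, payment} {
		if _, err := c.Put(context.TODO(), kv, client.WithProject("default")); err != nil {
			fmt.Println("put failed:", err)
		}
	}
}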

Why use kie

kie is a highly flexible config server. Operations teams today face different "x-centralized" systems. In a classic application-centralized system, an operator changes configuration by application name and version, so the labels could be "app,version" to locate an application's configuration. Other teams manage applications in a data center where each application instance runs on a VM, so the labels could be "farm,role,server,component". Thanks to its label design, kie fits these different scenarios for configuration management.

Components

It includes one component:

  • server: a REST API service for managing kv

Features

  • kv management: manage configuration items by key and labels
  • kv revision management: browse the full change history of a kv
  • kv change event: watch kv changes via long polling, which greatly reduces network cost (see the sketch below)
  • polling detail tracking: whenever a client polls config from the server, the details are recorded
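
For the kv change event feature, the sketch below shows a minimal long-polling loop. It assumes the wait, revision and label query parameters referenced in the issues later on this page; the parameter names and timeout semantics are taken from those reports and are not verified against the current API.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption: when "wait" is set and nothing has changed, the server holds
	// the request open until the long-polling limit is reached.
	cli := &http.Client{Timeout: 60 * time.Second}
	url := "http://127.0.0.1:30110/v1/default/kie/kv?label=env:production&wait=30s"
	for {
		resp, err := cli.Get(url)
		if err != nil {
			time.Sleep(5 * time.Second) // back off and retry on network errors
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("kv changed:", string(body)) // re-apply configuration here
		}
	}
}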

Quick Start

Run locally with Docker compose

git clone git@github.com:apache/servicecomb-kie.git
cd servicecomb-kie/deployments/docker
sudo docker-compose up

This launches three containers: mongodb, mongo-express, and the kie server.

Development

To see how to build a local dev environment, check here

Build

This builds the service image and binary locally.

cd build
export VERSION=0.0.1 #optional, it is latest by default
./build_docker.sh

This generates "servicecomb-kie-0.0.1-linux-amd64.tar" in the "release" folder and a Docker image "servicecomb/kie:0.0.1".

API Doc

After you launch the kie server, you can browse the API doc at http://127.0.0.1:30110/apidocs.json and paste it into http://editor.swagger.io/ to view it.
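
A quick way to check that the server is up is to fetch that document programmatically, for example with a few lines of Go (a sketch, assuming the default port from the docker-compose setup):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Fetch the swagger document from a locally running kie server.
	resp, err := http.Get("http://127.0.0.1:30110/apidocs.json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	doc, _ := io.ReadAll(resp.Body)
	fmt.Println(string(doc)) // paste the output into http://editor.swagger.io/
}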

Documentations

https://kie.readthedocs.io/en/latest/

or follow the guide here to generate the documentation locally

Clients

Contact

Bugs: issues

Contributing

See Contribution guide for details on submitting patches and the contribution workflow.

Reporting Issues

See reporting bugs for details about reporting any issues.

servicecomb-kie's People

Contributors

alec-z, angli2, asifdxtreme, chinx, develpoerx, guoyl123, humingcheng, kkf1, little-cui, liuqi04, mabingo, popozy, robotljw, ryaninvoker, sphairis, surechen, threeq, tianxiaoliang, tornado-ssy, wangqj, willemjiang, zhulijian1


servicecomb-kie's Issues

The API paths in swagger are inconsistent with the actual paths when I start the service with docker

When I start the service with Docker, the following API paths are registered, and requests to them succeed:

"Add route path: [/v1/kv/{key}] Method: [PUT] Func: [Put]. 
"Add route path: [/v1/kv/{key}] Method: [GET] Func: [GetByKey].
"Add route path: [/v1/kv] Method: [GET] Func: [SearchByLabels]. 
"Add route path: [/v1/kv/{kvID}] Method: [DELETE] Func: [Delete].

But the Swagger doc lists the API paths as follows, and requests to these paths fail:

"Add route path: [/v1/{project}/kie/kv/{key}] Method: [PUT] Func: [Put]. 
"Add route path: [/v1/{project}/kie/kv/{key}] Method: [GET] Func: [GetByKey].
"Add route path: [/v1/{project}/kie/kv] Method: [GET] Func: [SearchByLabels]. 
"Add route path: [/v1/{project}/kie/kv/{kvID}] Method: [DELETE] Func: [Delete].

I'm a bit confused. I think the two sets of API paths should be the same. Which one is correct?

Deployment on a fresh environment fails when following the documentation

When starting with docker-compose, the mongo-express service prints the following warning:

(node:7) [MONGODB DRIVER] Warning: Current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.

and the kie service prints the following errors:

{"level":"ERROR","timestamp":"2022-12-21 07:43:50.350 +00:00","file":"servicecomb/transfer.go:181","msg":"Parse services from config failed: input must be an ptr"}
{"level":"ERROR","timestamp":"2022-12-21 07:43:50.350 +00:00","file":"servicecomb/transfer.go:186","msg":"Parse services from config failed: input must be an ptr"}
{"level":"ERROR","timestamp":"2022-12-21 07:43:50.350 +00:00","file":"servicecomb/transfer.go:181","msg":"Parse services from config failed: input must be an ptr"}
{"level":"ERROR","timestamp":"2022-12-21 07:43:50.350 +00:00","file":"servicecomb/transfer.go:186","msg":"Parse services from config failed: input must be an ptr"}
{"level":"ERROR","timestamp":"2022-12-21 07:43:50.350 +00:00","file":"servicecomb/transfer.go:181","msg":"Parse services from config failed: input must be an ptr"}
{"level":"ERROR","timestamp":"2022-12-21 07:43:50.350 +00:00","file":"servicecomb/transfer.go:186","msg":"Parse services from config failed: input must be an ptr"}
{"level":"ERROR","timestamp":"2022-12-21 07:43:50.350 +00:00","file":"servicecomb/transfer.go:181","msg":"Parse services from config failed: input must be an ptr"}
{"level":"ERROR","timestamp":"2022-12-21 07:43:50.350 +00:00","file":"servicecomb/transfer.go:186","msg":"Parse services from config failed: input must be an ptr"}

Questions about sessions when deploying kie

One more question: does kie have any session issues? If I deploy kie with a lightweight Swarm setup and run three kie instances, will there be any problem? The setup is roughly the same as a Kubernetes deployment: clients connect through a single node IP.

kv collection should not refer to label collection

KV records should support updating their label content.
Scenario:
A key named "timeout" is created with labels cluster=A and service=shopping-cart. Later, the user wants to add a new "version" label to this configuration.

Existing behavior to keep:

  • When creating a kv, a unique label id is generated from the label content and recorded in the label collection.
  • When updating a kv, the key id may be omitted; the unique record is found by key name plus label content and then updated.

Backend changes:

  • When updating a kv, allow a key id to be passed in. Look up the existing key record by key id and overwrite its labels. Then check whether the new label set needs a new label doc (and a new label id), and update the kv's label id accordingly (see the sketch below).
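
A rough sketch of that update flow, using hypothetical types and a hypothetical store interface (none of these names come from the actual servicecomb-kie codebase):

package kvsvc

import "context"

// Hypothetical types used only to illustrate the flow described above.
type KV struct {
	ID      string
	Key     string
	LabelID string
	Labels  map[string]string
}

type Store interface {
	FindKV(ctx context.Context, id string) (*KV, error)
	FindLabelID(ctx context.Context, labels map[string]string) (id string, found bool, err error)
	CreateLabelDoc(ctx context.Context, labels map[string]string) (string, error)
	SaveKV(ctx context.Context, kv *KV) error
}

// UpdateKVLabels overwrites the labels of an existing kv record: it reuses an
// existing label doc when the new label set already exists, creates a new
// label doc (and label id) otherwise, then updates the kv's label id.
func UpdateKVLabels(ctx context.Context, s Store, keyID string, newLabels map[string]string) error {
	kv, err := s.FindKV(ctx, keyID)
	if err != nil {
		return err
	}
	labelID, found, err := s.FindLabelID(ctx, newLabels)
	if err != nil {
		return err
	}
	if !found {
		if labelID, err = s.CreateLabelDoc(ctx, newLabels); err != nil {
			return err
		}
	}
	kv.LabelID = labelID
	kv.Labels = newLabels
	return s.SaveKV(ctx, kv)
}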

Kie client can't get value.#28

I run the following program. The result is kvList &{ObjectID("000000000000000000000000") map[] 0 }.
package main

import (
	"context"
	"fmt"

	"github.com/apache/servicecomb-kie/client"
	"github.com/apache/servicecomb-kie/pkg/model"
)

func main() {
	config := client.Config{
		Endpoint:   "http://127.0.0.1:30110",
		VerifyPeer: false, //TODO make it works, now just keep it false
	}
	clients, err := client.New(config)
	if err != nil {
		fmt.Println(" new err", err.Error())
	}
	kvBody := model.KVDoc{}
	kvBody.Key = "hellomesher"
	kvBody.Value = "100s"
	kvBody.ValueType = "string"
	kvBody.Project = "test"
	kvBody.Labels = make(map[string]string)
	kvBody.Labels["evn"] = "test"

	kvInfo, err := clients.Put(context.TODO(), kvBody, client.WithProject("test"))
	if err != nil {
		fmt.Println(" put err", err.Error(), kvInfo)
	}

	kvList, err := clients.Get(context.TODO(), "hellomesher", client.WithGetProject("test"))
	if err != nil {
		fmt.Println(" get err", err.Error())
	}
	for _, info := range kvList {
		fmt.Println("kvList", info.Key, info.Value, info)
		for key, value := range info.Labels {
			fmt.Println("key value ", key, value)
		}
	}
}

This may be a format mismatch. The HTTP response body of client.Get is in the KVResponse format:
[
    {
        "label": {
            "label_id": "5d678340460ce839af7e5edd",
            "labels": {
                "evn": "test"
            }
        },
        "data": [
            {
                "_id": "5d678ebb460ce839af7e5ef4",
                "key": "hellomesher",
                "value": "100s",
                "value_type": "string"
            }
        ]
    }
]

while the client unmarshals it into kvs []*model.KVDoc.
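
A sketch of unmarshalling into structures that match the response shown above (the field tags are inferred from that JSON, not taken from the real model package):

package main

import (
	"encoding/json"
	"fmt"
)

// Structures inferred from the JSON above; the real model.KVResponse may differ.
type kvDoc struct {
	ID        string `json:"_id"`
	Key       string `json:"key"`
	Value     string `json:"value"`
	ValueType string `json:"value_type"`
}

type labelGroup struct {
	LabelID string            `json:"label_id"`
	Labels  map[string]string `json:"labels"`
}

type kvResponse struct {
	Label labelGroup `json:"label"`
	Data  []kvDoc    `json:"data"`
}

func main() {
	body := []byte(`[{"label":{"label_id":"5d678340460ce839af7e5edd","labels":{"evn":"test"}},"data":[{"_id":"5d678ebb460ce839af7e5ef4","key":"hellomesher","value":"100s","value_type":"string"}]}]`)
	var groups []kvResponse
	if err := json.Unmarshal(body, &groups); err != nil {
		panic(err)
	}
	for _, g := range groups {
		for _, kv := range g.Data {
			fmt.Println(kv.Key, "=", kv.Value, "labels:", g.Label.Labels)
		}
	}
}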

When the match=exact query condition is added, the total value may be incorrect

(Screenshot 1 and screenshot 2 omitted.)

Without match=exact, 2 records should match (screenshot 1); with match=exact, 1 record matches (screenshot 2). In the match=exact case the data array in the response body contains only 1 element, which is correct, but total is 2, i.e. the count from the query without match=exact.

Label data consistency problem

With the current implementation, concurrently creating two keys with exactly the same labels (when that label set has never existed before) produces duplicate label records.

One approach is to format the label map in sorted order and derive a unique id from it, which avoids duplicate label records (see the sketch below).

We could also think about whether there is a better approach.
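
A sketch of the sorted-map idea; the hash and separator choices here are arbitrary:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
	"strings"
)

// labelID derives a deterministic id from a label map by sorting the keys
// first, so identical label sets always produce the same id regardless of
// map iteration order.
func labelID(labels map[string]string) string {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var b strings.Builder
	for _, k := range keys {
		b.WriteString(k)
		b.WriteString("=")
		b.WriteString(labels[k])
		b.WriteString(";")
	}
	sum := sha256.Sum256([]byte(b.String()))
	return hex.EncodeToString(sum[:])
}

func main() {
	a := map[string]string{"env": "production", "component": "payment"}
	b := map[string]string{"component": "payment", "env": "production"}
	fmt.Println(labelID(a) == labelID(b)) // true: same labels, same id
}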

wait does not support requests without labels

If a GET request carries wait and revision but no labels, parsing fails when an update event arrives, and the GET request also does not return immediately.

Should enable access log to help diagnose problems like request timeouts

Polling kie hit a timeout exception, but we cannot tell whether kie received the request or how long it took. Recommend that kie enable an access log.

2021-08-28 15:01:37,678 [ERROR] get configurations from KieConfigCenter failed, and will try again. org.apache.servicecomb.config.kie.client.KieConfigManager$PollConfigurationTask.execute(KieConfigManager.java:136)
org.apache.servicecomb.config.kie.client.exception.OperationException: read response failed.
        at org.apache.servicecomb.config.kie.client.KieClient.queryConfigurations(KieClient.java:110)
        at org.apache.servicecomb.config.kie.client.KieConfigManager$PollConfigurationTask.execute(KieConfigManager.java:124)
        at org.apache.servicecomb.http.client.task.AbstractTask.lambda$startTask$1(AbstractTask.java:89)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
        at java.net.SocketInputStream.read(SocketInputStream.java:171)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
        at sun.security.ssl.InputRecord.read(InputRecord.java:503)
        at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
        at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
        at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
        at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
        at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
        at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280)
        at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
        at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
        at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
        at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
        at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157)
        at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
        at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
        at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
        at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
        at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
        at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
        at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
        at org.apache.servicecomb.http.client.common.HttpTransportImpl.doRequest(HttpTransportImpl.java:93)
        at org.apache.servicecomb.config.kie.client.KieClient.queryConfigurations(KieClient.java:87)
        ... 5 more

An API for modifying a label's alias field

Label records are hard to read as raw data, so they need to be managed to make them human-readable; together with domain and project they act as a unique id.

Requirement: the alias value can be updated by label id; nothing else may be modified.

Should incremental configuration updates be considered?

  1. The current Java client implementation is long polling plus full pulls.
    In microservice scenarios a client usually watches all kvs under a given app; as soon as one kv changes, the server pushes the full set again.
    The problem with this scheme: if there are many or large kvs under the app and there are many client instances, a full push puts heavy pressure on client and server buffers and memory, and is unfriendly to the network.

  2. The client could carry a header such as increment=true in its request to decide whether to use an incremental update mode.
    With incremental mode enabled, kie could push only the kvs that changed when data changes (a client-side sketch follows).
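
A client-side sketch of that proposal; the increment header and the incremental response behavior are purely hypothetical:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet,
		"http://127.0.0.1:30110/v1/default/kie/kv?label=app:cart&wait=30s", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("increment", "true") // proposed opt-in to incremental mode; not implemented
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// With incremental mode, the body would contain only the kvs that changed.
	fmt.Println(resp.StatusCode, string(body))
}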

sync task

Background: synchronization needs to be supported. Once sync is enabled, every create, update, or delete of a configuration generates a task record that is used for synchronization.

Tracking configuration delivery

Scenario:
Users care about the current state of clients pulling configuration, and this information helps diagnose why a configuration has not taken effect.

Requirement:
For every client, track the client itself (record its IP and User-Agent), its polling conditions (e.g. key, label, revision, wait), and the response (body and headers).

Do not keep historical records; just use the user agent plus IP as a unique id and store that client's latest polling information.

collection: polling_detail
columns:
id,polling date,IP,user agent,url path,response body,response header
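
A sketch of what such a record could look like; the field names simply mirror the column list above and the actual schema may differ:

package track

import (
	"net/http"
	"time"
)

// PollingDetail mirrors the column list above: one record per (user agent, IP)
// pair, always overwritten with the latest poll. Field names are illustrative.
type PollingDetail struct {
	ID             string      `json:"id"`
	PollingDate    time.Time   `json:"polling_date"`
	IP             string      `json:"ip"`
	UserAgent      string      `json:"user_agent"`
	URLPath        string      `json:"url_path"`
	ResponseBody   []byte      `json:"response_body"`
	ResponseHeader http.Header `json:"response_header"`
}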

How is the quota design intended to work, and how can quotas be changed?

func PreCreate(service, domain, project, resource string, number int64) error {
	if defaultManager == nil {
		openlog.Debug("quota management not available")
		return nil
	}
	qs, err := defaultManager.GetQuotas(service, domain, project)
	if err != nil {
		openlog.Error(err.Error())
		return ErrGetFailed
	}
	var resourceQuota *Quota
	for _, q := range qs {
		if q.ResourceName == resource {
			resourceQuota = q
			break
		}
	}
	if resourceQuota == nil {
		//no limits
		openlog.Debug("no limits for " + resource)
		return nil
	}
	if number > resourceQuota.Limit-resourceQuota.Used {
		return ErrReached
	}
	return nil
}

The quota dimensions are service, domain, and project.
Domain and project are read from the request parameters,
but service, judging by its name, is tied to a service; how is this dimension expressed in a request?
During performance testing we found the default maximum is 10000; how can this quota be configured?

Event push should support the match parameter

Currently, when an event is pushed out, a partial label match is enough for the event to be delivered; the match parameter is not handled at all. Exact mode should be supported so that only complete matches trigger delivery.
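
A sketch of the exact-match check being asked for here (plain map comparison, independent of the actual event code):

package event

// exactMatch reports whether a watcher's labels are exactly equal to an
// event's labels, rather than merely a subset, as proposed above.
func exactMatch(watch, event map[string]string) bool {
	if len(watch) != len(event) {
		return false
	}
	for k, v := range watch {
		if event[k] != v {
			return false
		}
	}
	return true
}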

Why is the maximum long-polling duration 5 minutes?

What considerations led to this value?
Also, during the long-polling hold phase, is there any way to detect a dropped connection or network jitter?

docker-compose startup of servicecomb-kie reports an error

Starting kie with docker-compose up reports an error.
Commands:

cd servicecomb-kie/deployments/docker
sudo docker-compose up

Partial logs:

mongo-express_1    | /node_modules/mongodb/lib/server.js:265
mongo-express_1    |         process.nextTick(function() { throw err; })
mongo-express_1    |                                       ^
mongo-express_1    | MongoError: failed to connect to server [mongo:27017] on first connect
mongo-express_1    |     at Pool.<anonymous> (/node_modules/mongodb-core/lib/topologies/server.js:326:35)
mongo-express_1    |     at emitOne (events.js:116:13)
mongo-express_1    |     at Pool.emit (events.js:211:7)
mongo-express_1    |     at Connection.<anonymous> (/node_modules/mongodb-core/lib/connection/pool.js:270:12)
mongo-express_1    |     at Object.onceWrapper (events.js:317:30)
mongo-express_1    |     at emitTwo (events.js:126:13)
mongo-express_1    |     at Connection.emit (events.js:214:7)
mongo-express_1    |     at Socket.<anonymous> (/node_modules/mongodb-core/lib/connection/connection.js:175:49)
mongo-express_1    |     at Object.onceWrapper (events.js:315:30)
mongo-express_1    |     at emitOne (events.js:116:13)
docker_mongo-express_1 exited with code 1
mongo-express_1    | /docker-entrypoint.sh: line 14: mongo: Try again
mongo-express_1    | /docker-entrypoint.sh: line 14: /dev/tcp/mongo/27017: Invalid argument
mongo-express_1    | Mon Jul 20 02:54:58 UTC 2020 retrying to connect to mongo:27017 (2/5)
mongo-express_1    | /docker-entrypoint.sh: line 14: mongo: Try again
mongo-express_1    | /docker-entrypoint.sh: line 14: /dev/tcp/mongo/27017: Invalid argument
mongo-express_1    | Mon Jul 20 02:55:04 UTC 2020 retrying to connect to mongo:27017 (3/5)
mongo-express_1    | /docker-entrypoint.sh: line 14: mongo: Try again
mongo-express_1    | /docker-entrypoint.sh: line 14: /dev/tcp/mongo/27017: Invalid argument
mongo-express_1    | Mon Jul 20 02:55:10 UTC 2020 retrying to connect to mongo:27017 (4/5)
mongo-express_1    | /docker-entrypoint.sh: line 14: mongo: Try again
mongo-express_1    | /docker-entrypoint.sh: line 14: /dev/tcp/mongo/27017: Invalid argument
mongo-express_1    | Mon Jul 20 02:55:16 UTC 2020 retrying to connect to mongo:27017 (5/5)

kie's configuration query API has no notion of hierarchy and can only query by label

Problem description:

  1. kie's configuration query API currently has no hierarchy concept; configuration can only be queried by label.
  2. When labels form a hierarchy, we would like a query at one level to also return the configuration of the levels below it. For example, with label=service and label=version, the business defines version as a child level of service. To fetch both the service-level and version-level configuration, the current design requires two requests; it would be better to have a mechanism where querying the service level alone also returns the version-level configuration.

Some APIs do not follow REST conventions

Background:
A RESTful API path should identify a unique resource. The current design uses a key, which is not a unique resource, as the path, which does not fit the design style.

Requirement:
Make the APIs conform to REST conventions.

Apollo ecosystem compatibility

Background:
Apollo is very popular today, and a large number of legacy systems use it.

Requirement:

Our labels are very flexible. Try to find a scheme that is compatible with the Apollo API so that clients and agents from the Apollo ecosystem can be supported seamlessly. Support may be only partial; this needs investigation.

Inconsistent script database name in Quick Start causes docker-compose service startup to fail

[Scenario]
According to the quick start document, the service is started with the docker-compose command, but database authentication fails.

[Reason]
db.js

  • the mongodb init database name is servicecomb
  • the db.js init database name is kie
    {
        user: "kie",
        pwd: "123",
        roles: [
            {
                role: "readWrite",
                db: "kie"
            }
        ]
    }

[Error Log]
can not dial db:server returned error on SASL authentication step: Authentication failed

Choosing between long polling and persistent connections

The current kie implementation watches configuration changes via long polling. Has a persistent connection been considered instead?
In theory a persistent connection consumes fewer resources.
What are the advantages of long polling?

If the service center enables RBAC, kie cannot register the way the Java version does; the header has no token

Describe the bug
I enabled service-center registration in kie, and the service center has RBAC enabled. kie cannot register with the service center because the HTTP header lacks a token. The Java version can add the following to its yaml:
credentials:
rbac:
enabled: true
account:
name: # a user name the service center accepts
password: # the password for that user
cipher: default # the cipher implementation used to encrypt/decrypt the credentials
and then automatically supports RBAC registration with the service center. But the go-chassis used by kie does not support this; after adding the configuration above to the parameter file, it still reports that the header contains no token.

Version of go chassis

To Reproduce
Steps to reproduce the behavior:

Logs

about api : /v1/{project}/kie/summary

I noticed that the /v1/{project}/kie/summary API uses q as its query parameter,
but /v1/{project}/kie/kv/{key} and /v1/{project}/kie/kv use label as the query parameter.
q and label seem to mean the same thing; should we unify them?
