
incubator-seata-k8s's Introduction

seata-k8s

Chinese Documentation

Associated Projects:

Method 1: Using Operator

Usage

To deploy Seata Server using the Operator method, follow these steps:

  1. Clone this repository:

    git clone https://github.com/apache/incubator-seata-k8s.git
  2. Deploy Controller, CRD, RBAC, and other resources to the Kubernetes cluster:

    make deploy
    kubectl get deployment -n seata-k8s-controller-manager  # verify that the controller deployment exists
  3. You can now deploy your CR to the cluster. An example can be found in seata-server-cluster.yaml:

    apiVersion: operator.seata.apache.org/v1alpha1
    kind: SeataServer
    metadata:
      name: seata-server
      namespace: default
    spec:
      serviceName: seata-server-cluster
      replicas: 3
      image: seataio/seata-server:latest
      store:
        resources:
          requests:
            storage: 5Gi

    For the example above, if everything is correct, the controller will deploy a StatefulSet with 3 replicas and a headless Service to the cluster. You can then access the Seata Server cluster from within the cluster through seata-server-0.seata-server-cluster.default.svc.
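
    A quick way to sanity-check the result, sketched below; the busybox image tag is an arbitrary choice and the lookup assumes the default cluster.local domain:

    kubectl apply -f seata-server-cluster.yaml
    kubectl get statefulset,svc -n default
    # Verify that the headless-service DNS record resolves from inside the cluster
    kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
      nslookup seata-server-0.seata-server-cluster.default.svc.cluster.local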

Reference

For CRD details, see operator.seata.apache.org_seataservers.yaml. Here are some important configuration fields:

  1. serviceName: Used to define the name of the Headless Service deployed by the controller. This will affect how you access the server cluster. In the example above, you can access the Seata Server cluster through seata-server-0.seata-server-cluster.default.svc.

  2. replicas: Defines the number of Seata Server replicas. Adjusting this field achieves scaling without the need for additional HTTP requests to change the Seata raft cluster list.

  3. image: Defines the Seata Server image name.

  4. ports: Three ports need to be set under the ports property: consolePort, servicePort, and raftPort, with default values of 7091, 8091, and 9091, respectively (a combined sketch appears after the env example below).

  5. resources: Used to define container resource requirements.

  6. store.resources: Used to define mounted storage resource requirements.

  7. env: Environment variables passed to the container. You can use this field to define Seata Server configuration. For example:

    apiVersion: operator.seata.apache.org/v1alpha1
    kind: SeataServer
    metadata:
      name: seata-server
      namespace: default
    spec:
      image: seataio/seata-server:latest
      store:
        resources:
          requests:
            storage: 5Gi
      env:
      - name: console.user.username
        value: seata
      - name: console.user.password
        valueFrom:
          secretKeyRef:
            name: seata
            key: password
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: seata
    type: Opaque
    stringData:  # plain-text values; Kubernetes stores them base64-encoded under data
      password: seata
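
As a recap of the fields above, the sketch below combines serviceName, replicas, image, ports, resources, and store.resources in a single CR. The port numbers are the documented defaults; the CPU, memory, and storage values are placeholder assumptions, and the exact nesting should be checked against the CRD:

    apiVersion: operator.seata.apache.org/v1alpha1
    kind: SeataServer
    metadata:
      name: seata-server
      namespace: default
    spec:
      serviceName: seata-server-cluster
      replicas: 3
      image: seataio/seata-server:latest
      ports:
        consolePort: 7091
        servicePort: 8091
        raftPort: 9091
      resources:
        requests:
          cpu: 500m
          memory: 1Gi
      store:
        resources:
          requests:
            storage: 5Gi

Scaling then only requires changing replicas on the CR, for example: kubectl patch seataservers.operator.seata.apache.org seata-server --type merge -p '{"spec":{"replicas":5}}'.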

For Developer

To debug this operator locally, we suggest you use a test k8s environment like minikube.

  1. Method 1. Modify code and build the controller image:

    Assuming you are using minikube for testing:

    eval $(minikube docker-env)
    make docker-build deploy
  2. Method 2. Locally debug without building images

    You need to use Telepresence to proxy traffic to the k8s cluster; see the Telepresence tutorial to install its CLI tool and Traffic Manager. After installing Telepresence, you can connect to minikube with the following commands:

    telepresence connect
    # Check if traffic manager connected
    telepresence status

    After executing the above commands, you can use in-cluster DNS resolution and have your requests proxied to the cluster. You can then run or debug the operator locally from your IDE:

    # Make sure to generate the required resources first
    make manifests generate fmt vet
    
    go run .
    # Or run it from your IDE instead

Method 2: Example without Using Operator

Due to current limitations, the Seata Docker image does not support calls from outside the container, so the example projects must also be kept in link mode with the Seata image inside the container.

# Start Seata deployment (nacos,seata,mysql)
kubectl create -f deploy/seata-deploy.yaml
# Start Seata service (nacos,seata,mysql)
kubectl create -f deploy/seata-service.yaml
# Get a NodePort IP (kubectl get service; see the sketch after this block)
# Modify the IP in examples/examples-deploy for DNS addressing
# Connect to MySQL and import table structure
# Start example deployment (samples-account,samples-storage)
kubectl create -f example/example-deploy.yaml
# Start example service (samples-account,samples-storage)
kubectl create -f example/example-service.yaml
# Start order deployment (samples-order)
kubectl create -f example/example-deploy.yaml
# Start order service (samples-order)
kubectl create -f example/example-service.yaml
# Start business deployment (samples-dubbo-business-call)
kubectl create -f example/business-deploy.yaml
# Start business deployment (samples-dubbo-service-call)
kubectl create -f example/business-service.yaml
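
A hedged sketch of how to look up the NodePort address referenced in the comments above (the actual service names come from deploy/seata-service.yaml, so adjust accordingly):

# List services and note the NodePort assigned to the Seata/Nacos service
kubectl get service -o wide
# Pick a node address to reach the NodePort from outside the cluster
kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'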

Open the Nacos console in your browser (http://localhost:8848/nacos/) to check if all instances are registered successfully.

Testing

# Account service - Deduct amount
curl -H "Content-Type: application/json" -X POST --data "{\"id\":1,\"userId\":\"1\",\"amount\":100}" cluster-ip:8102/account/dec_account
# Storage service - Deduct stock
curl -H "Content-Type: application/json" -X POST --data "{\"commodityCode\":\"C201901140001\",\"count\":100}" cluster-ip:8100/storage/dec_storage
# Order service - Add order and deduct amount
curl -H "Content-Type: application/json" -X POST --data "{\"userId\":\"1\",\"commodityCode\":\"C201901140001\",\"orderCount\":10,\"orderAmount\":100}" cluster-ip:8101/order/create_order
# Business service - Client Seata version too low
curl -H "Content-Type: application/json" -X POST --data "{\"userId\":\"1\",\"commodityCode\":\"C201901140001\",\"count\":10,\"amount\":100}" cluster-ip:8104/business/dubbo/buy

incubator-seata-k8s's People

Contributors

funky-eyes, iceber, niaoshuai, ptyin, slievrly, zhangthen

incubator-seata-k8s's Issues

Occasional Java UnknownHostException at pod startup

Pods managed by the seata-server StatefulSet sometimes fail to start. The detailed error stack is as follows:

java.lang.RuntimeException: java.net.UnknownHostException: seata-server-0.seata-server-cluster: Name or service not known
	at io.seata.common.util.NetUtil.convertIpIfNecessary(NetUtil.java:330)
	at io.seata.common.util.NetUtil.isValidIp(NetUtil.java:309)
	at io.seata.server.Server.start(Server.java:68)
	at io.seata.server.ServerRunner.run(ServerRunner.java:60)
	at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:765)
	at org.springframework.boot.SpringApplication.lambda$callRunners$2(SpringApplication.java:749)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
	at java.util.stream.SortedOps$SizedRefSortingSink.end(SortedOps.java:357)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:483)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
	at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
	at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:744)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:315)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1300)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1289)
	at io.seata.server.ServerApplication.main(ServerApplication.java:30)
Caused by: java.net.UnknownHostException: seata-server-0.seata-server-cluster: Name or service not known
	at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
	at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
	at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1330)
	at java.net.InetAddress.getAllByName0(InetAddress.java:1283)
	at java.net.InetAddress.getAllByName(InetAddress.java:1199)
	at java.net.InetAddress.getAllByName(InetAddress.java:1127)
	at java.net.InetAddress.getByName(InetAddress.java:1077)
	at io.seata.common.util.NetUtil.convertIpIfNecessary(NetUtil.java:328)
	... 18 common frames omitted

The underlying reason is that seata-server calls the NetUtil.convertIpIfNecessary method to resolve the hostname of the headless-service subdomain, e.g., "seata-server-0.seata-server-cluster". However, CoreDNS may not have registered that DNS record yet, leading to the occasional UnknownHostException.
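
A common Kubernetes-side mitigation for this kind of startup race, sketched here purely as an illustration (the selector label is an assumption, and this is not necessarily how the operator addresses the issue), is to set publishNotReadyAddresses on the headless Service so that per-Pod DNS records are published before the Pods report ready:

apiVersion: v1
kind: Service
metadata:
  name: seata-server-cluster
  namespace: default
spec:
  clusterIP: None                 # headless Service backing the StatefulSet
  publishNotReadyAddresses: true  # publish DNS records before Pods become ready
  selector:
    app: seata-server             # assumed label; must match what the operator sets
  ports:
  - name: service
    port: 8091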

CI workflow required

A CI workflow is required to ensure that PR code compiles correctly.
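
A minimal sketch of what such a workflow could look like (the file path, triggers, and Go version are assumptions, not an existing workflow in this repository):

# .github/workflows/build.yml (hypothetical)
name: build
on:
  pull_request:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.21'
      - run: go build ./...
      - run: go vet ./...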

Add UTs to test functionalities

Add UTs to ensure the correctness of the operator's functionality. As in seata-go, test sources can be named '*_test.go' and placed in the same directory as the target sources.

Question about deployment in Kubernetes

In Kubernetes, is it possible to use the service discovery provided by spring-cloud-kubernetes to connect the services, instead of using service discovery components such as Eureka or Nacos?

Allow control of volume reclaim behaviour for SeataServer

The current behaviour when deleting a SeataServer CR is to preserve the volumes claimed by the associated StatefulSet. It would be nice to add an option for users to control the volume reclaim behaviour.

My idea is to add a new volumeReclaimPolicy property inside the store structure, with valid values "Delete" and "Retain". "Retain" corresponds to the current behaviour, whereas "Delete" means the associated volumes are deleted when the SeataServer CR is deleted. It is recommended to use a Kubernetes finalizer to implement the "Delete" policy.
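
A sketch of what the proposed field might look like (the field name and placement follow the proposal above; it is not implemented yet):

apiVersion: operator.seata.apache.org/v1alpha1
kind: SeataServer
metadata:
  name: seata-server
spec:
  store:
    volumeReclaimPolicy: Delete   # proposed; "Retain" would keep today's behaviour
    resources:
      requests:
        storage: 5Gi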

Add ConfigMap and Secret support

The present way to configure SeataServer is to use the env property, like the following:

apiVersion: operator.seata.apache.org/v1alpha1
kind: SeataServer
# ...
spec:
  # ...
  env:
    console.user.username: seata
    console.user.password: seata

However, the env property is just a map; it does not support valueFrom like the Kubernetes EnvVar type. We would like to refactor the current env implementation from a map to EnvVar so as to support ConfigMap and Secret. The expected form is something like the following (a ConfigMap variant is sketched after it):

apiVersion: operator.seata.apache.org/v1alpha1
kind: SeataServer
# ...
spec:
  # ...
  env:
  - name: console.user.username
    value: seata
  - name: console.user.password
    valueFrom:
       secretKeyRef:
         name: seatapwd
         key: console
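
For the ConfigMap half, the same EnvVar shape would allow configMapKeyRef as well; a sketch, with a made-up ConfigMap name and key:

spec:
  env:
  - name: console.user.username
    valueFrom:
      configMapKeyRef:
        name: seata-config   # hypothetical ConfigMap
        key: console-username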

Adjusted frame length exceeds 8388608: 791339008 - discarded

2021-06-12 02:28:33.318 INFO --- [NIOWorker_1_1_2] i.s.c.r.n.AbstractNettyRemotingServer : channel:[id: 0xd5e9c670, L:/10.244.2.50:8091 - R:/10.244.2.1:55830] read idle.
2021-06-12 02:28:33.318 INFO --- [NIOWorker_1_1_2] i.s.c.r.n.AbstractNettyRemotingServer : 10.244.2.1:55830 to server channel inactive.
2021-06-12 02:28:33.318 INFO --- [NIOWorker_1_1_2] i.s.c.r.n.AbstractNettyRemotingServer : remove unused channel:[id: 0xd5e9c670, L:/10.244.2.50:8091 - R:/10.244.2.1:55830]
2021-06-12 02:28:33.318 INFO --- [NIOWorker_1_1_2] i.s.c.r.n.AbstractNettyRemotingServer : closeChannelHandlerContext channel:[id: 0xd5e9c670, L:/10.244.2.50:8091 - R:/10.244.2.1:55830]
2021-06-12 02:28:33.318 INFO --- [NIOWorker_1_1_2] i.s.c.r.n.AbstractNettyRemotingServer : 10.244.2.1:55830 to server channel inactive.
2021-06-12 02:28:33.318 INFO --- [NIOWorker_1_1_2] i.s.c.r.n.AbstractNettyRemotingServer : remove unused channel:[id: 0xd5e9c670, L:/10.244.2.50:8091 ! R:/10.244.2.1:55830]
2021-06-13 09:33:11.742 INFO --- [NIOWorker_1_2_2] i.s.c.r.n.AbstractNettyRemotingServer : channel exx:Adjusted frame length exceeds 8388608: 791339008 - discarded,channel:[id: 0xc71255bf, L:/10.244.2.50:8091 - R:/10.244.0.0:50741]
2021-06-13 09:33:11.743 WARN --- [NIOWorker_1_2_2] io.netty.channel.DefaultChannelPipeline : An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
==>
io.netty.handler.codec.TooLongFrameException: Adjusted frame length exceeds 8388608: 791339008 - discarded
at io.netty.handler.codec.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:522) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:500) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.LengthFieldBasedFrameDecoder.exceededFrameLength(LengthFieldBasedFrameDecoder.java:387) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:430) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.seata.core.rpc.netty.v1.ProtocolV1Decoder.decode(ProtocolV1Decoder.java:82) ~[seata-core-1.3.0.jar:na]
at io.netty.handler.codec.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:343) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_212]
<==

2021-06-13 09:33:26.755 INFO --- [NIOWorker_1_2_2] i.s.c.r.n.AbstractNettyRemotingServer : channel:[id: 0xc71255bf, L:/10.244.2.50:8091 - R:/10.244.0.0:50741] read idle.
2021-06-13 09:33:26.756 INFO --- [NIOWorker_1_2_2] i.s.c.r.n.AbstractNettyRemotingServer : 10.244.0.0:50741 to server channel inactive.
2021-06-13 09:33:26.756 INFO --- [NIOWorker_1_2_2] i.s.c.r.n.AbstractNettyRemotingServer : remove unused channel:[id: 0xc71255bf, L:/10.244.2.50:8091 - R:/10.244.0.0:50741]
2021-06-13 09:33:26.756 INFO --- [NIOWorker_1_2_2] i.s.c.r.n.AbstractNettyRemotingServer : closeChannelHandlerContext channel:[id: 0xc71255bf, L:/10.244.2.50:8091 - R:/10.244.0.0:50741]
2021-06-13 09:33:26.756 INFO --- [NIOWorker_1_2_2] i.s.c.r.n.AbstractNettyRemotingServer : 10.244.0.0:50741 to server channel inactive.
2021-06-13 09:33:26.756 INFO --- [NIOWorker_1_2_2] i.s.c.r.n.AbstractNettyRemotingServer : remove unused channel:[id: 0xc71255bf, L:/10.244.2.50:8091 ! R:/10.244.0.0:50741]
